I just read this book review of Egan's The Educated Mind. Here are some thoughts I had, written for a general audience but grammatically directed at the review's anonymous author (like a typical comment!).
I'd love to go to that middle school, and that high school. It would set a lower bound on how many different areas someone has passing knowledge of. (This sounds like faint praise, but Gell-Mann Amnesia partly depends on people not having this.)
I don't know if this would really help all the students, even if I mostly-agree with the Bruner(/HPMOR) statement that basically anyone can learn anything, after the age where pouring-water-into-a-taller-container-looks-like-adding-water no longer applies. But aside from the credential/signaling model of education (which, itself, could explicitly select against anything remotely pleasant or fun or effective), I think a lot of school's function is just daycare. The school does its "job" of keeping a kid off the streets and out of their parents' hair for 8 hours, but adding this curriculum to the existing system doesn't e.g. solve a kid's learning problems from being in an abusive home. (This probably isn't for the curriculum itself to have to solve, but I will say that The Bottleneck could just be on mental/physical health. E.g. schools fuck up teens' sleep.)
This sorta flatters my bias in favor of my own idea for how to teach math, so that's something. Ditto for my ADHD, as you discussed; that "curiosity about everything" is indeed quite fun. (No wonder TVTropes and social media feeds are both addictive!)
I got most of the Really Cool Stuff (taught in these Cool Intuitive Ways!) from a combination of pop-science books, YouTube videos, and blog posts on LessWrong and elsewhere. I was lucky that some of the core parts were already thrown into my middle/high school (e.g. how Bayes' formula can be derived from looking at a Venn Diagram). Now that I'm trying to specialize and amp-up my skills (for AI alignment), I can go into more depth using textbooks etc.
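For concreteness, the Venn-diagram derivation mentioned above is just reading areas off the overlap region: the probability of A given B is the share of B's region that A occupies, and symmetrizing gives Bayes' formula.

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
P(B \mid A) = \frac{P(A \cap B)}{P(A)}
\;\;\Longrightarrow\;\;
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.
```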
Some of the ending parts of this post feel like they were written by ChatGPT according to a stereotypical formula, even if that "stereotype" is only known to those of us who're nerdy about this sort of thing, and even if that formula is pretty easy to write while still technically transferring 100% of the important stuff from one's notes to one's essay.
Also: Dammit, I was hoping the "unveiling" would be something closer to the earlier mention of "everything in the world can be made interesting". This is such a core part of intellectual curiosity, and it's even a big part of why I love Yudkowsky's writing so much! He shows where the complicated things could be simple, and where the simple things turn out to be complicated. He uses concrete examples, stories, and so on. (Add-on after getting to the end: Actually, "imagination" is a better descriptor than the 5-revolutions haiku! Still not great though, for reasons discussed below. (Same with your suggestion of "human", but IMHO with worse connotations of unseriousness, actually!))
(This deserves further remark: When I, and maybe lots of others, think about "teaching rationality to the masses", I think we make the mistake of aiming at the elementary-school level (simple stories, slogans, vibes), when we should aim a bit higher. For instance, the biggest YouTubers often aim for the middle-school level (gossip, heroes, extremes, hobbies, collections, emotions + some nontrivial level of articulation). A speedrunner probably gets more kids into the hacker mindset, in the top of the funnel, than a simple fable about Mr. Unseeing And His Sidekick Dog Neo And Their Mutual Friends Alice And Simplicio would. So, in a way that could either be a "developmental stage" or "historical contingency"... Not only are Rationalist ideas getting outcompeted by "Egan-izing is eating the world", but at least part of it is due to our aesthetic being more "religious/spiritual/simple" than "exciting/complicated", even though anyone who's already in the space knows the second description is the accurate one.)
Another point: I hate the project-template that goes "Let's take this (not-identity-or-group-related!) technical word with existing cultural meanings and connotations, and let's all coordinate to bring it back to its subtler philosophical meaning!". It won't work; all it does is make you harder to interact with, period (which is bad for, uh, coordination). Do not call that phase Irony.
I do think LessWrong-style rationalism is better than (the naive part of) the Philosophic Stage, and part of that is probably some of the skepticism. But (again going back to Yudkowsky) the key is that I have object-level reasons to believe the meta parts!
I don't think "ends don't justify means", I think "I am running on untrusted hardware". I don't think "there's no absolute truth", I think "absolute certainty requires literally infinite evidence". I don't think "science is just one tool, but it's the best we've got", I think (as sorta-implied by the Twelfth Virtue as mentioned) "Bayesian math is the laws of thought that work well, until/unless we come up with better ones, so the other stuff is either approximating it or (if a better framework is found) contained by it". I don't think "smile warmly and wink for deep knowledge and we all go off into the sunset", I say "I was about to write something depressing here, perhaps I should talk to a friend or seek medical help".
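As a minimal sketch of the "infinite evidence" point (my own toy example, not from the review, using the standard odds form of Bayes' rule): each observation contributes only a finite log-likelihood ratio, so the posterior creeps toward 1 but never reaches it.

```python
import math

def prob(log_odds: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

log_odds = 0.0  # a 50/50 prior is log-odds of zero
for _ in range(10):
    # Each datum is 10x likelier if the hypothesis is true,
    # so it adds log(10) to the log-odds.
    log_odds += math.log(10.0)

print(prob(log_odds))  # very close to, but strictly less than, 1.0
```

Working in log-odds (rather than multiplying probabilities directly) also sidesteps floating-point underflow, which is its own small lesson about approximating the ideal math.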
I get seriously concerned when someone can write a pretty fair summary of how "postrats" (post-Rationalists) differ from rationalists and then still be a postrat! It's like people don't even see the part of This All-Encompassing Lens that looks at their specific pet peeve. Like consequentialism, rationality is not about "acting like a dumb stereotype of that kind of person" (see 12.4.4 here), it's about "doing the thing, getting the truth".
Forgot to mention, but HPMOR is a good example of using heroes and adolescent-type story elements to teach people rationality. Perhaps this is related to it being debatably the most-successful Rationalist recruitment device to date?
The overall idea of "learn through diverse examples" definitely fits well with other things in my memeplex/belief system, like "the scaling hypothesis is basically true" and "most of our knowledge has to come from our prior knowledge (and, implicitly, we should learn about lots of things)" and "multiple different kinds of explanations should be used when teaching/learning". I still think about 80/20, simple rules that work, hands-on experience being logically non-magical, and things that generally fall under "how compressible reality is". I'm between the "two extremes" of the "two camps", but for now I'm log-closest to the secret 2nd axis of the graph, labeled "COMPUTATIONAL TRACTABILITY/BOUNDS ARE IMPORTANT SOMEWHERE IN HERE".