The most important traits of the new humans are that... they prize rationality under all circumstances - to be accepted by them you have to retain clear thinking and problem-solving capability even when you're stressed, hungry, tired, cold, or in combat.
Interestingly, as a LessWronger, I don't think of myself in quite this way. I think there's a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments. Knowing your limits, and using that knowledge when making plans.
One limit I've dealt with, which I think is pertinent for a lot of people, is how social media can destroy my attention and leave me feeling quite socially self-conscious. Bringing it into my environment damages my ability to think.
On the one hand, becoming able to think clearly and make good decisions while using social media is valuable, and for many people necessary. Here are some of the ways I try to do that, in the style of the Homo Novis:
It would certainly be a mistake to interpret your martial art's principle of "A warrior should be able to fight well even in unfavourable combat situations" as "A warrior should always immediately charge into combat, even when that would lead to an unfavourable situation", or "There's no point in trying to manoeuvre into a favourable situation".
Great point. A few (related) examples come to mind:
This post of mine feels closely related: https://www.lesswrong.com/posts/xhE4TriBSPywGuhqi/integrity-and-accountability-are-core-parts-of-rationality
...
- I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a large variety of other domains in which their incentives are not well optimized.
- People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize to "will have accurate beliefs in other domains".
One is strongly predictive
Hah, I was thinking of replying to say I was largely just repeating things you said in that post.
Nonetheless, thanks both Kaj and Eric, I might turn it into a little post. It's not bad to have two posts saying the same thing (slightly differently).
I note that I haven't said out loud, and should say out loud, that I endorse this history. Not every single line of it (see my other comment on why I reject verificationism) but on the whole, this is well-informed and well-applied.
As a note on terminology, I don't think that (Yudkowskian) rationalists use the word "rationalism" to describe our worldview/practice. It's a natural modification of "rationalist", and I've seen a few people outside the rationalist community use it to refer to our worldview, but e.g. no one ever comes up to me at a party and says, "Have any thoughts about rationalism lately?" We tend to just say "rationality" or "the art of rationality".
I'd also strongly advocate that we not start using the word "rationalism" for it. Mostly this is because I share your grumble about how the word "rationalist" already has a well-defined meaning to the rest of the world, and I don't want to extend that overloading and inevitable confusion by using the word "rationalism" alongside it.
I'm tempted to try to come up with better names for our worldview, but there are actually some advantages to not having a clear proper-noun-type name. One is that everyone immediately gets the gist of what "rationalists" are about. Stereotypes aside, it's an advantage over being called "the Frobnitzists" or something else inscrutable. Another is that, as described in the virtue of the void, we don't know exactly what the name is for what we want; we're trying to move toward that which cannot be named. If we give our current best-guess a proper noun like the Debiasers or the Bayesian Conspiracy, then we might be stuck with that even after we shift to a better understanding, or worse yet, we might think we've found the ultimate answer and become stuck to it through the name.
One minor note is that, among the reasons I haven't looked especially hard into the origins of "verificationism"(?) as a theory of meaning, is that I do in fact - as I understand it - explicitly deny this theory. The meaning of a statement is not the future experimental predictions that it brings about, nor isomorphic up to those predictions; all meaning about the causal universe derives from causal interactions with us, but you can have meaningful statements with no experimental consequences, for example: "Galaxies continue to exist after the expanding universe carries them over the horizon of observation from us." For my actual theory of meaning see the "Physics and Causality" subsequence of Highly Advanced Epistemology 101 For Beginners.
That is: among the reasons why I am not more fascinated with the antecedents of my verificationist theory of meaning is that I explicitly reject a verificationist account of meaning.
"Galaxies continue to exist after the expanding universe carries them over the horizon of observation from us" trivially unpacks to "If we had methods to make observations outside our light cone, we would pick up the signatures of galaxies after the expanding universe has carried them over the horizon of observation from us defined by c."
You say "Any meaningful belief has a truth-condition". This is exactly Peirce's 1878 insight about the meaning of truth claims, expressed in slightly different language - after all, your "truth-condition" unpacks to a bundle of observables, does it not?
The standard term of art you are missing when you say "verificationist" is "predictivist".
I can grasp no way in which you are not a predictivist other than terminological quibbles, Eliezer. You can refute me by uttering a claim that you consider meaningful, e.g. having a "truth-condition", where the truth condition does not implicitly cash out as hypothetical-future observables - or, in your personal terminology, "anticipated experiences".
Amusingly, your "anticipated experiences" terminology is actually closer to the language of Peirce 1878 than the way I would normally express it, which is influenced by later philosophers in the predictivist line, notably Reichenbach.
I reiterate the galaxy example; saying that you could counterfactually make an observation by violating physical law is not the same as saying that something's meaning cashes out to anticipated experiences. Consider the (exact) analogy between believing that galaxies exist after they go over the horizon, and that other quantum worlds go on existing after we decohere them away from us by observing ourselves being inside only one of them. Predictivism is exactly the sort of ground on which some people have tried to claim that MWI isn't meaningful, and they're correct in that predictivism renders MWI meaningless just as it renders the claims "galaxies go on existing after we can no longer see them" meaningless. To reply "If we had methods to make observations outside our quantum world, we could see the other quantum worlds" would be correctly rejected by them as an argument from within predictivism; it is an argument from outside predictivism, and presumes that correspondence theories of truth can be defined meaningfully by imagining an account from outside the universe of how the things that we've observed have their own causal processes generating those observations, such that having thus identified the causal processes through observation, we may speak of unobservable but fully identified variables with no observable-to-us consequences such as the continued existence of distant galaxies and other quantum worlds.
Just jaunt superquantumly to another quantum world instead of superluminally to an unobservable galaxy. What about these two physically impossible counterfactuals is less than perfectly isomorphic? Except for some mere ease of false-to-fact visualization inside a human imagination that finds it easier to track nonexistent imaginary Newtonian billiard balls than existent quantum clouds of amplitude, with the latter case, in reality, covering both unobservable galaxies distant in space and unobservable galaxies distant in phase space.
As a fellow "back reader" of Yudkowsky, I have a handful of books to add to your recommendations:
Engines Of Creation by K. Eric Drexler
Great Mambo Chicken and The Transhuman Condition by Ed Regis
EY has cited both at one time or another as the books that 'made him a transhumanist'. His early concept of future shock levels is probably based in no small part on the structure of these two books. The Sequences themselves borrow a ton from Drexler, and you could argue that the entire 'AI risk' vs. nanotech split from the extropians represented an argument about whether AI causes nanotech or nanotech causes AI.
I'd also like to recommend a few more books that postdate The Sequences but as works of history help fill in a lot of context:
Korzybski: A Biography by Bruce Kodish
A History Of Transhumanism by Elise Bohan
Both of these are thoroughly well researched works of history that help make it clearer where LessWrong 'came from' in terms of precursors. Kodish's biography in particular is interesting because Korzybski gets astonishingly close to stating the X-Risk thesis in Manhood of Humanity:
...At present I am chiefly concerned to drive home the fact that it is the great disparity between the
Thank you for writing this. Having read both your writings and Eliezer's, and many of the books listed, the story is as I expected it to be, but it is good to see the history laid out.
Mod note: I moved this to frontpage despite it being a bit similar to things we've historically left on people's personal blog. Usually there are three checks I run for deciding whether to put something on the frontpage:
For this essay, I think the answer is "No" for basically all three (with the last one maybe being a bit true, but not really), so overall I decided to move this to the frontpage.
Heh. Come to think of it from that angle, "a bit true, but not really" would have been exactly my assessment if I were in your shoes. Thanks, I appreciate the nuanced judgment.
This was not just informationally useful but also just plain well-written and enjoyable. I think you succeeded in communicating some of the feel. Thank you.
Thanks Eric for writing this post, I found it fascinating.
I imagine that there are a lot of lessons from General Semantics or analytic philosophy that might not have made it into the rational-sphere, so if you ever find time to share some of that with us, I imagine it would be well-received.
This is great, strong upvoted!
Offtopic, but I've really enjoyed your work over the years (CATB & Hacker's Dictionary from before I was a Less Wronger; Dancing With the Gods since). Glad to see you on LW, and thanks for the pointer to Heinlein's Gulf, which I hadn't read; it was a solid read (though very clearly from the 1950s in its attitudes - it feels very outdated now).
As a teenager totally unattached to the larger software community (and open source, until years later), the New Hacker's Dictionary and the appended stories, along with Stoll's Cuckoo's Egg, were formative for me. I had absolutely no contact with this culture, but I knew I wanted in. Finding that it overlaps with LessWrong, which I found independently later on, honestly feels bizarre.
Now I'm wondering if it's less that hacker culture as presented in those stories was attractive to me in itself, and more that there was a common factor shining through. Interesting people, reasonable people...!
Probably, but there is something else more subtle.
Both the cultures you're pointing at are, essentially, engines to support achieving right mindset. It's not quite the same right mindset, but in either case you have to detach from "normal" thinking and its unquestioned assumptions in order to be efficient at the task around which the culture is focused.
Thus, in both cultures there's a kind of implicit mysticism. If you recoil from that word because you associate it with anti-rationality I can't really blame you, but I ask you to consider the idea of mysticism as "techniques for consciousness alteration" detached from any particular beliefs about the universe.
This is why both cultures have a use for Zen. It is a very well-developed school of mystical technique whose connection to religious belief has become tenuous. You can take the Buddhism out of it and the rest is still coherent and interesting.
Perhaps this implicit mysticism is part of the draw for you. It is for me.
You have an outside view of my writing, so I'm curious. On a scale of 0 = "But of course" to 5 = "Wow, that was out of left field", how surprising did you find it that I would write this essay?
If you can find anything more specific to say along these lines (why it's surprising/unsurprising) I would find that interesting.
I was slightly surprised, mostly because I had the expectation that if you've known about LW for a while, then I would have thought that you'd end up contributing either early or not at all. Curious what caused it to happen in 2021 in particular.
I don't really have an interesting answer, I'm afraid. Busy life, lots of other things to pay attention to, never got around to it before.
Now that I've got the idea, I may re-post some rationality-adjacent stuff from my personal blog here so the LW crowd can know it exists.
The way I have set this up for writers in the past has been to set up crossposting from an RSS feed under a tag (e.g. crossposting all posts tagged 'lesswrong').
I spent a minute trying, and failed, to figure out how to make an RSS feed from your blog for a single category. But if you have such an RSS feed, and you make a category like 'lesswrong', then I'll set up a simple crosspost, and hopefully save you a little time in expectation. This will work if you add the category to old posts as well as new ones.
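The filtering step a crossposter would do can be sketched like this - a minimal stdlib-only example of selecting RSS items carrying a given category tag. The feed contents and the 'lesswrong' category name here are hypothetical, not from any actual blog:

```python
import xml.etree.ElementTree as ET

def posts_with_category(rss_xml: str, category: str):
    """Return (title, link) pairs for RSS items tagged with `category`.

    A sketch of the category filter a crossposter might apply;
    case-insensitive match against each item's <category> elements.
    """
    root = ET.fromstring(rss_xml)
    results = []
    for item in root.iter("item"):
        cats = [c.text.strip().lower() for c in item.findall("category") if c.text]
        if category.lower() in cats:
            results.append((item.findtext("title", default=""),
                            item.findtext("link", default="")))
    return results

# Hypothetical feed: one tagged post, one untagged post.
FEED = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Post A</title><link>http://example.com/a</link>
        <category>lesswrong</category></item>
  <item><title>Post B</title><link>http://example.com/b</link></item>
</channel></rss>"""

print(posts_with_category(FEED, "lesswrong"))
# → [('Post A', 'http://example.com/a')]
```

A real crossposter would fetch the feed URL periodically and post anything new that passes this filter; only the tagged item ("Post A") would be mirrored.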
I think the main reason some people have strong opinions about ESR is that he has some strong opinions, some of which are highly controversial, and he states some of those controversial opinions openly. In particular, much in US politics is super-divisive, and in five minutes on Eric's blog you can readily find five things that some (otherwise?) reasonable people will get very angry about.
... I thought I should actually test that, so I went over to have a look. His blog has been a bit less political lately than at some other times. But in exactly five minutes I found the following assertions (all the following are my paraphrases; I have no intent to distort but error is always possible, especially when reading quickly, so if you are minded to be angry at Eric you should first go and check what he actually wrote): the US has a problem with Communist oppression, Kyle Rittenhouse is a hero, white people at BLM protests should be assumed to be communists and shot at will [EDITED to add: as habryka points out in a reply, this paraphrase is potentially misleading; more below], an armed storming of the Michigan State House was an appropriate response to stay-at-home orders. (That's April ...
Woah, at least one of those summaries seems really quite inaccurate. Bad enough that like, I feel like I should step in as a moderator and be like "wait, this doesn't seem OK".
I am not very familiar with ESR's opinions, but your summary of "white people at BLM protests should be assumed to be communists and shot at will" is really misrepresenting the thing he actually said. What he actually said was "White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends;", with the key difference being "white rioters" instead of "white people". While there is still plenty to criticize in that sentence, this seems like a really crucial distinction that makes that sentence drastically less bad.
Topics like this tend to get really politicized and emotional, which I think means it's reasonable to apply some extra scrutiny and care to not misrepresent what other people said, and generally err on the side of quoting verbatim (ideally while giving substantial additional context).
As you'll see from the edit to my original comment, I found something Eric said in the discussion on his blog that drew a fairly explicit boundary between rioters and mere protestors. My impression is that if Eric actually acts strictly according to the principles stated there, the law will not protect him and he will end up in jail (thinking that someone has intent to commit crimes is not generally sufficient justification in law for shooting them); several commenters on his blog expressed the same concern.
I worry that we may be getting into arguing about Eric's opinions themselves, rather than merely answering the question "why do some people have such strong opinions about him", and I think that's not a useful topic for discussion here. Of course that's mostly my fault for not getting my summaries perfectly accurate, for which once again I apologize.
I've curated this essay[1].
Getting a sense of one's own history can be really great for having perspective. The primary reason I've curated this is that the post really helped give me perspective on the history of this intellectual community, and I imagine it did the same for many other LWers.
I wouldn't have been able to split it into "General Semantics, analytic philosophy, science fiction, and Zen Buddhism" as directly as you did, nor would I know which details to pick out. (I would've been able to talk about sci-fi, but I wouldn't quite know how to relate the r...
Eliezer was more influenced by probability theory, I by analytic philosophy, yes. These variations are to be expected. I'm reading Jaynes now and finding him quite wonderful. I was a mathematician at one time, so that book is almost comfort food for me - part of the fun is running across old friends expressed in his slightly eccentric language.
I already had a pretty firm grasp on Feynman's "first-principles approach to reasoning" by the time I read his autobiographical stuff. So I enjoyed the books a lot, but more along the lines of "Great physicist and I think alike! Cool!" than being influenced by him. If I'd been able to read them 15 years earlier I probably would have been influenced.
One of the reasons I chose a personal, heavily narratized mode to write the essay in was exactly so I could use that to organize what would otherwise have been a dry and forbidding mass of detail. Glad to know that worked - and, from what you don't say, that I appear to have avoided the common "it's all about my feelings" failure mode of such writing.
If nothing else, I hope this essay will leave you feeling grateful that you no longer have to do a decades-long bootstrapping process the way Eliezer and Nancy and I and others like us had to in the before times. I doubt any of us are sorry we put in the effort, but being able to shortcut a lot of it is a good thing.
Thank you for introducing us to those who built this basilica. Just in looking up General Semantics, I've learned more about the culture wars that preceded the ones we now fight, and I learned who a few of the generals were on both sides.
If you pursue this sort of thing for more than three thousand years, as Buddhists have been doing, you're likely to find some techniques that actually do help you pay better attention to reality - even if it is difficult to dig them out of the surrounding religious encrustations afterwards.
Interestingly, this is how I often feel about western philosophy; my early experience of philosophy classes and books was very much about 'who said what', and a sort of intellectual territorialism that seemed disconnected from any ultrahumanist project to think better. [...
Ironically, I disagree a bit with lukeprog here - one of the few flaws I think I detect in the Sequences is due to Eliezer not having read enough philosophy. He does arrive at a predictivist theory of confirmation eventually, but it takes more effort and gear-grinding than it would have if he had understood Peirce's 1878 demonstration and expressed it in clearer language.
Ah well. It's a minor flaw.
Wow, this was quite a surprise seeing your post here, and finding out that you've been reading Less Wrong for all of these years !
(On the other hand, probably not: an English speaker with similar intellectual tendencies and a Silicon Valley tropism would probably have found out about it quickly, my case not being very typical?)
I hope that you are well?
(Here are some of my thoughts, reading through.)
Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora.
It's strange; I don't feel the fog much in my life. I wonder if this is a problem. It doesn't seem like I should feel like "I and everyone around me basically know what's going on".
I can imagine certain people for whom talking to them would feel like a flash of light in the fog. I probably ...
I'm only 23 - probably younger than most people here - but I imagine my father must have read many of the same books, as he raised me to think in a way which I now understand to be very much like Yudkowsky's version of rationality. As with what you quoted from Nancy, it all seemed really obvious to me when I read the Sequences, except for the mathematical components (Bayesianism still confuses me, but I'll get there eventually).
The main way I differ here though is that I have had lots of "mystical experiences" due to probably schizotypal or dissociative te...
In this you differ from the average rationalist but maybe not so much from Eric; see e.g. his essay "Dancing with the Gods".
Thanks for making that connection to Zen Buddhism. I never thought of it as a central theme of The Sequences before this.
I'm still not sure if I'm convinced that it actually is a central theme. In the preface to Rationality From AI to Zombies, Eliezer writes:
...It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and
I think a collection of examples and analysis would be a post in itself.
But I can give you one suggestive example from Twelve Virtues itself: "If you speak overmuch of the Way you will not attain it."
It is a Zen idea that the essence of enlightenment cannot be discovered by talking about enlightenment; rather one must put one's mind in the state where enlightenment is. Moreover, talk and chatter - even about Zen itself - drives that state away.
Eliezer is trying to say here that the center of rationalist practice is not in what you know about rationality or how much cleverness you can demonstrate to others but in achieving a mental stance that processes evidence correctly and efficiently.
He is borrowing the rhetoric of Zen to say that because, as with Zen, the center of our Way is found in silence and non-attachment. The Way of Zen wants you to lose your attachment to desires; the Way of rationality wants you to lose your attachment to beliefs.
This post was personally meaningful to me, and I'll try to cover that in my review while still analyzing it in the context of lesswrong articles.
I don't have much to add about the 'history of rationality' or the description of interactions of specific people.
Most of my value from this post wasn't directly from the content, but from how the content connected to things outside of rationality and lesswrong. So, basically, I loved the citations.
Lesswrong is very dense in self-links and self-citations, and to a lesser degree does still have a good number of li...
I like this post for reinforcing a point that I consider important about intellectual progress, and for pushing against a failure mode of the Sequences-style rationalists.
As far as I can tell, intellectual progress is made bit by bit, with later work building on earlier work. Francis Bacon gets credit for a landmark evolution of the scientific method, but it didn't spring from nowhere; he was building on ideas that had built on ideas, etc.
This says the same is true for our flavor of rationality. It's built on many things, and not just probability theory.
The f...
Fascinating and enjoyable read. I put a few of the recommended books onto my to-read list. Thank you.
In your journey, I wonder if you've come across Buckminster Fuller and, if yes, what's your opinion on his ideas?
I ask this because I found Fuller's works at the same time I found Korzybski's. And while vastly different in theme and scope, they seemed to be underpinned by the same spirit--positive, human-centered, problem-solving--one I would label as "humanism."
I also was a rationalist before Eliezer, but of Eric's four sources of information the only one I shared is science fiction. I had the advantage of growing up in a family where the relevance of reason to the world was taken for granted.
At one point, long after I had become an adult, my parents asked me whether it would have been better if they had brought me up in their parents' (Jewish) religion. I replied that I preferred having been brought up in the one they believed in — 18th century rationalism, the ideology of Adam Smith and David Hume.
Wonderful article.
I especially liked the part about Zen rhetoric surviving in the English language, and the part about feeling an epistemic disjunction.
The real question is, is there a historical precursor to /r/SneerClub? Perhaps an SF zine run by someone who didn't like Korzybski and Van Vogt...
I'm here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community. It is not the only story that could be told, but it is one that runs parallel to and has important connections to Eliezer Yudkowsky's and how his ideas developed.
My goal in writing this essay is to give the LW community a sense of the prehistory of their movement. It is not intended to be "where Eliezer got his ideas"; that would be stupidly reductive. I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer's formative experiences were not unique.
My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read. I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching for this essay. In 2005 he had even sent me a book manuscript to review that covered some of the Sequences topics.
My reaction on reading "The Twelve Virtues of Rationality" a few years later was dual. It was a different kind of writing than the book manuscript - stronger, more individual, taking some serious risks. On the one hand, I was deeply impressed by its clarity and courage. On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well.
Today it is probably more difficult to back-read Eliezer's sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way. I'm going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism.
Before I get to those specifics, I want to try to convey that sense of what it was like. I was a bright geeky kid in the 1960s and 1970s, immersed in a lot of obscure topics often with an implicit common theme: intelligence can save us! Learning how to think more clearly can make us better! But at the beginning I was groping as if in a dense fog, unclear about how to turn that belief into actionable advice.
Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora. More often than not, though, the clue would be fictional; somebody's imagination about what it would be like to increase intelligence, to burn away error and think more clearly.
When I found non-fiction sources on rationality and intelligence increase I devoured them. Alas, most were useless junk. But in a few places I found gold. Not by coincidence, the places I found real value were sources Eliezer would later draw on. I'm not guessing about this, I was able to confirm it first from Eliezer's explicit reports of what influenced him and then via an email conversation.
Eliezer and I were not unique. We know directly of a few others with experiences like ours. There were likely dozens of others we didn't know - possibly hundreds - on parallel paths, all hungrily seeking clarity of thought, all finding largely overlapping subsets of clues and techniques because there simply wasn't that much out there to be mined.
One piece of evidence for this parallelism besides Eliezer's reports is that I bounced a draft of this essay off Nancy Lebovitz, a former LW moderator who I've known personally since the 1970s. Her instant reaction? "Full of stuff I knew already."
Around the time Nancy and I first met, some years before Eliezer Yudkowsky was born, my maternal grandfather gave me a book called "People In Quandaries". It was an introduction to General Semantics. I don't know, because I didn't know enough to motivate the question when he was alive, but I strongly suspect that granddad was a member of one of the early GS study groups, probably the same one that included Robert Heinlein (they were near neighbors in Southern California in the early 1940s).
General Semantics is going to be a big part of my story. Twelve Virtues speaks of "carrying your map through to reflecting the territory"; this is a clear, obviously intentional callback to a central GS maxim that runs "The map is not the territory; the word is not the thing defined."
I'm not going to give a primer on GS here. I am going to affirm that it rocked my world, and if the clue in Twelve Virtues weren't enough Eliezer has reported in no uncertain terms that it rocked his too. It was the first time I encountered really actionable advice on the practice of rationality.
Core GS formulations like cultivating consciousness of abstracting, remembering the map/territory distinction, avoiding the verb "to be" and the is-of-identity, that the geometry of the real world is non-Euclidean, that the logic of the real world is non-Aristotelian; these were useful. They helped. They reduced the inefficiency of my thinking.
For the pre-Sequences rationalist, those of us stumbling around in that fog, GS was typically the most powerful single non-fictional piece of the available toolkit. After the millennium I would find many reflections of it in the Sequences.
This is not, however, meant to imply that GS is some kind of supernal lost wisdom that all rationalists should go back and study. Alfred Korzybski, the founder of General Semantics, was a man of his time, and some of the ideas he formulated in the 1930s have not aged well. Sadly, he was an absolutely terrible writer; reading "Science and Sanity", his magnum opus, is like an endless slog through mud with occasional flashes of world-upending brilliance.
If Eliezer had done nothing else but give GS concepts a better presentation, that would have been a great deal. Indeed, before I read the Sequences I thought giving GS a better finish for the modern reader was something I might have to do myself someday - but Eliezer did most of that, and a good deal more besides, folding in a lot of sound thinking that was unavailable in Korzybski's day.
When I said that Eliezer's sources are probably more difficult to back-read today than they were in 2006, I had GS specifically in mind. Yudkowskian-reform rationalism has since developed a very different language for the large areas where it overlaps GS's concerns. I sometimes find myself in the position of a native Greek speaker hunting for equivalents in that new-fangled Latin; the equivalents are usually present, but it can take some effort to bridge the gap.
Next I'm going to talk about some more nonfiction that might have had that kind of importance if a larger subset of aspiring rationalists had known enough about it. And that is the analytic tradition in philosophy.
I asked Eliezer about this and learned that he himself never read any of what I would consider core texts: C.S. Peirce's epoch-making 1878 paper "How To Make Our Ideas Clear", for example, or W.V. Quine's "Two Dogmas of Empiricism". Eliezer got their ideas through secondary sources. How deeply pre-Sequences rationalists drew directly from this well seems to be much more variable than the more consistent theme of early General Semantics exposure.
However: even if filtered through secondary sources, tropes originating in analytic philosophy have ended up being central in every formulated version of rationalism since 1900, including General Semantics and Yudkowskian-reform rationalism. A notable one is the program of reducing philosophical questions to problems in language analysis, seeking some kind of flaw in the map rather than mysterianizing the territory. Another is the definition of "truth" as predictive power over some range of future observables.
But here I want to focus on a subtler point about origins rather than ends: these ideas were in the air around every aspiring rationalist of the last century, certainly including both myself and the younger Eliezer. Glimpses of light through the fog...
This is where I must insert a grumble, one that I hope is instructive about what it was like before the Sequences. I'm using the term "rationalist" retrospectively, but those among us who were seeking a way forward and literate in formal philosophy didn't tend to use that term of ourselves at the time. In fact, I specifically avoided it, and I don't believe I was alone in this.
Here's why. In the history of philosophy, a "rationalist" is one who asserts the superiority of a-priori deductive reasoning over grubby induction from mere material facts. The opposing term is "empiricist", and in fact Yudkowskian-reform "rationalists" are, in strictly correct terminology, skeptical empiricists.
Alas, that ship has long since sailed. We're stuck with "rationalist" as a social label now; the success of the Yudkowskian reform has nailed that down. But it's worth remembering that in this case not only is our map not the territory, it's not even immediately consistent with other equally valid maps.
Now we get to the fun part, where I talk about science fiction.
SF author Greg Bear probably closed the book on attempts to define science fiction as a genre in 1994 when he said "the branch of fantastic literature which affirms the rational knowability of the universe". It shouldn't be surprising, then, that ever since the Campbellian Revolution in 1939 invented modern science fiction there has been an important strain in it of fascination with rationalist self-improvement.
I'm not talking about transhumanism here. The idea that we might, say, upload to machines with vastly greater computational capacity is not one that fed pre-Yudkowskian rationalism, because it wasn't actionable. No; I'm pointing at more attainable fictions about learning to think better, or discovering a key that unlocks a higher level of intelligence and rationality in ourselves. "Ultrahumanist" would be a better term for this, and I'll use it in the rest of this essay.
I'm going to describe one such work in some detail, because (a) wearing my SF-historian hat I consider it a central exemplar of the ultrahumanist subgenre, and (b) I know it had a large personal impact on me.
"Gulf", by Robert A. Heinlein, published in the November–December 1949 Astounding Science Fiction. A spy on a mission to thwart an evil conspiracy stumbles over a benign one - people who call themselves "Homo Novis" and have cultivated techniques of rationality and intelligence increase, including an invented language that promotes speed and precision of thought. He is recruited by them, and a key part of his training involves learning the language.
At the end of the story he dies while saving the world, but the ostensible plot is not really the point. It's an excuse for Heinlein to play with some ideas, clearly derived in part from General Semantics, about what a "better" human being might look and act like - including, crucially, the moral and ethical dimension. One of the tests the protagonist doesn't know he's passing is when he successfully cooperates in gentling a horse.
The most important traits of the new humans are that (a) they prize rationality under all circumstances - to be accepted by them you have to retain clear thinking and problem-solving capability even when you're stressed, hungry, tired, cold, or in combat; and (b) they're not some kind of mutation or artificial superrace. They are human beings who have chosen to pool their efforts to make themselves more reliably intelligent.
There was a lot of this sort of GS-inspired ultrahumanism going around in Golden Age SF between 1940 and 1960. Other proto-rationalists may have been more energized by other stories in that current. Eliezer remembers and acknowledges "Gulf" as an influence but reports having been more excited by "The World of Null-A" (1945). Isaac Asimov's "Foundation" novels (1942-1953) were important to him as well even though there was not much actionable in them about rationality at the individual level.
As for me, "Gulf" changed the direction of my life when I read it sometime around 1971. Perhaps I would have found that direction anyway, but...teenage me wanted to be homo novis. More, I wanted to deserve to be homo novis. When my grandfather gave me that General Semantics book later in the same decade, I was ready.
That kind of imaginative fuel was tremendously important, because we didn't have a community. We didn't have a shared system. We didn't have hubs like Less Wrong and Slate Star Codex. Each of us had to bootstrap our own rationality technique out of pieces like General Semantics, philosophical pragmatism, the earliest and most primitive research on cognitive biases, microeconomics, and the first stirrings of what became evolutionary psych.
Those things gave us the materials. Science fiction gave us the dream, the desire that it took to support the effort of putting it together and finding rational discipline in ourselves.
Lastly, I'm going to touch on Zen Buddhism. Eliezer likes to play with the devices of Zen rhetoric; this has been a feature of his writing since Twelve Virtues. I understood why immediately, because that attraction was obviously driven by something I myself had discovered decades before in trying to construct my own rationalist technique.
Buddhism is a huge, complex cluster of religions. One of its core aims is the rejection of illusions about how the universe is. This has led to a rediscovery, at several points in its development, of systematic theories aimed at stripping away attachments and illusions. And not just that; also meditative practices intended to shift the practitioner into a mental stance that supports less wrongness.
If you pursue this sort of thing for more than three thousand years, as Buddhists have been doing, you're likely to find some techniques that actually do help you pay better attention to reality - even if it is difficult to dig them out of the surrounding religious encrustations afterwards.
One of the most recent periods of such rediscovery followed the 18th-century revival of Japanese Buddhism by Hakuin Ekaku. There's a fascinating story to be told about how Euro-American culture imported Zen in the early 20th century and refined it even further in the direction Hakuin had taken it, a direction scholars of Buddhism call "ultimatism". I'm not going to reprise that story here, just indicate one important result of it that can inform a rationalist practice.
Here's the thing that Eliezer and I and other 20th-century rationalists noticed: Zen rhetoric and meditation program the brain for epistemic skepticism, for a rejection of language-driven attachments, for not just knowing that the map is not the territory but feeling that disjunction.
Somehow, Zen rhetoric's ability to program brains for epistemic skepticism survives not just disconnection from Japanese culture and Buddhist religious claims, but translation out of its original language into English. This is remarkable - and, if you're seeking tools to loosen the grip of preconceptions and biases on your thinking, very useful.
Alfred Korzybski himself noticed this almost as soon as good primary sources on Zen were available in the West, back in the 1930s; early General Semantics speaks of "silence on the objective level" in a very Zen-like way.
No, I'm not saying we all need to become students of Zen any more than I think we all need to go back and immerse ourselves in GS. But co-opting some of Zen's language and techniques is something that Eliezer definitely did, that I did, and that other rationalists before the Yudkowskian reformation tended to find their way to.
If you think about all these things in combination - GS, analytic philosophy, Golden Age SF, Zen Buddhism - I think the roots of the Yudkowskian reformation become much easier to understand. Eliezer's quest and the materials he assembled were not unique. His special gift was the same ambition as Alfred Korzybski's: to form from what he had learned a teachable system for becoming less wrong. And, of course, the intellectual firepower to carry that through - if not perfectly, at least well enough to make a huge difference.
If nothing else, I hope this essay will leave you feeling grateful that you no longer have to do a decades-long bootstrapping process the way Eliezer and Nancy and I and others like us had to in the before times. I doubt any of us are sorry we put in the effort, but being able to shortcut a lot of it is a good thing.
Some of you, recognizing my name, will know that I ended up changing the world in my own way a few years before Eliezer began to write the Sequences. That this ensued after long struggle to develop a rationalist practice is not coincidence; if you improve your thinking hard enough over enough time I suspect it's difficult to avoid eventually getting out in front of people who aren't doing that.
That's what Eliezer did, too. In the long run, I rather hope that his reform movement will turn out to have been more important than mine.
Selected sources follow. The fiction list could have been a lot longer, but I filtered pretty strongly for works that somehow addressed useful models of individual rationality training. Marked with * are those Eliezer explicitly reports he has read.
Huikai, Wumen: "The Gateless Barrier" (1228)
Peirce, Charles Sanders: "How To Make Our Ideas Clear" (1878)
Korzybski, Alfred: "Science and Sanity" (1933)
Chase, Stuart: "The Tyranny of Words" (1938)
Russell, Bertrand: "A History of Western Philosophy" (1945)
Van Vogt, A. E.: "The World of Null-A" (1945) *
Orwell, George: "Politics and the English Language" (1946) *
Johnson, Wendell: "People in Quandaries: The Semantics of Personal Adjustment" (1946)
Heinlein, Robert Anson: "Gulf" (1949) *
Hayakawa, S. I.: "Language in Thought and Action" (1949) *
Quine, Willard Van Orman: "Two Dogmas of Empiricism" (1951)
Heinlein, Robert Anson: "The Moon Is A Harsh Mistress" (1966) *
Williams, George: "Adaptation and Natural Selection" (1966) *
Pirsig, Robert M.: "Zen and the Art of Motorcycle Maintenance" (1974) *
Benares, Camden: "Zen Without Zen Masters" (1977)
Smullyan, Raymond: "The Tao is Silent" (1977) *
Hill, Gregory & Thornley, Kerry W.: "Principia Discordia (5th ed.)" (1979) *
Hofstadter, Douglas: "Gödel, Escher, Bach: An Eternal Golden Braid" (1979) *
Feynman, Richard: "Surely You're Joking, Mr. Feynman!" (1985) *
Pearl, Judea: "Probabilistic Reasoning in Intelligent Systems" (1988) *
Stiegler, Marc: "David's Sling" (1988) *
Zindell, David: "Neverness" (1988) *
Williams, Walter John: "Aristoi" (1992) *
Barkow, Cosmides & Tooby (eds.): "The Adapted Mind: Evolutionary Psychology and the Generation of Culture" (1992) *
Wright, Robert: "The Moral Animal" (1994) *
Jaynes, E.T.: "Probability Theory: The Logic of Science" (1995) *
The assistance of Nancy Lebovitz, Eliezer Yudkowsky, Jason Azze, and Ben Pace is gratefully acknowledged. Any errors or inadvertent misrepresentations remain entirely the author's responsibility.