(Wherein somebody, and I guess it has to be me, of all people, disagrees fundamentally with Yudkowsky. And not, by all appearances, for reasons he has already considered deeply and rigorously for thirty years at least and is just being heroically patient about. Yudkowsky is the very person I am most used to defending, for years now, against empty accusations of being wrong, because he is more crucially right, or less wrong, than maybe anybody ever, given the significance of what he talks about. But I expect that even he, master rationalist, being human, has failed to consider the criticisms below at all. So give me a break; at least I'm trying. I think Yudkowsky is effectively considering, if not outright endorsing, a form of child abuse, so I have to say something.)
“It’s called the planning fallacy, and the best way to fix it is…called using the outside view instead of the inside view. But when you’re doing something new and can’t do that, you just have to be really, really, really pessimistic. Like, so pessimistic that reality actually turns out better than you expected around as often and as much as it comes out worse. It’s actually really hard to be so pessimistic that you stand a decent chance of undershooting real life.” – Harry, in Yudkowsky’s Harry Potter and the Methods of Rationality
The point: the very assumption that “intelligence” (which we define axiomatically, and which we can’t “prove” as an idea within any sufficiently powerful and consistent formal system of logic, due to Gödel’s incompleteness theorems) is inherently “good”, or inherently “necessary”, not only to “solve our most fundamental problems” but even to stave off machine learning extinction, is itself one of our most fundamental and devastating problems, or cognitive biases (if not the most fundamental). I’m a pessimist, or “depressive realist”, admittedly, so I was never on board with the singularity-could-be-great thing, and I’m not on board with any salvaging effort. My focus, bluntly: I can make a non-trivial, logically consistent argument that a corpse is literally “more intelligent” than John von Neumann (at his living “peak intelligence”), and this argument will be neither more nor less “wrong” than any argument to the contrary (see below). This is all we need to recognize in order to see that any suggestion of sacrificing our children to intelligence augmentation experiments is completely insane.
For context, to start: we’re fucked, sure. Yudkowsky’s right (except for his underlying assumptions regarding “intelligence” itself). Unfortunately, people like me have to know this, because we’ve done the math, which is, of course, the only way to come to such a conclusion (“the end is nigh, repent!” is not the same as “the probability of mass extinction, humans included, sooner than we are collectively willing to accept is rapidly approaching 100%, thus we have to prioritize dying with dignity”), and at the same time we have to know that the people who don’t recognize this in all likelihood never will.

I assume everybody like me, rather than doing “business as usual”, and rather than “freaking the fuck out”, has for years now been preoccupied with, well, let’s see: letting go; thinking of how to actually help the children post-hope; and, of course, how to form a Bayesian theory of suicidal reasoning, for when it becomes both increasingly impossible not to consider suicide and at the same time more confusing as to exactly when and how, if so, because of the children, who need us, literally, not in the way we pretend they do while we’re actually abusing them and lying to ourselves about it. Anyway, one opening paragraph of venting seems fairly justified given, you know, the destruction of everything we’ve ever known.
Moving on (as if we can move on from extinction): I’m a diagnosed “autistic”, if that wasn’t apparent already. I’m sure many Less Wrong readers are as well. However, I’ve never considered this term, “autism”, to be anything other than a euphemistic distraction, which is why I was diagnosed only later in life: I never bothered bringing “it” up to a medical professional. Why bother? I got used to calculated self-censorship to avoid abuse very early in life.

I was born into a lower-middle-class situation sliding into lower and lower class situations, to a somewhat unusually dysfunctional (for my community) family that didn’t encourage my autodidacticism except accidentally. They didn’t know what the word meant, and they never learned, nor were they willing to look up anything I talked about, and would as a rule never listen to any explanation of this or any other idea from the likes of me, an uppity know-it-all kid (Yudkowsky writes beautifully of precisely this in HPMOR). I’m not about to bring personal childhood abuse up in detail so early into what I intend to be a straightforward argumentative essay, but that was a thing too, yeah; predictable to me, looking back as an adult, but not predictable clearly enough to any adult around child-me to do anything, because societally speaking we especially don’t give a shit about children (the fact that we think we do is only evidence of our collective insanity, of course). I was abused. Also, the ultimate point of this essay is to discourage even suggesting a specific form of child abuse: subjecting children to “intelligence augmentation” experiments. Most “autistic people” experience some form of childhood (and ongoing) abuse, to bring it back to that.

But I cobbled together my rickety form of autodidacticism and have done my best. As a result of my particular disadvantages, I’m both less educated than Yudkowsky and obviously have far less social status than Yudkowsky. You haven’t heard of me; ~nobody has or ever will, beyond this post, which could very well be entirely ignored. I could mention “The University of Michigan, gee whiz”, and the fact that while I was there, people with authority, status, money, and parents who gave an effectively-altruistic-shit about them would tell me they thought I was in some rarefied percentile of “intelligence”; but that could also be said of Ted Kaczynski, who ruined that status appeal, I guess. And anyway I’d have to also mention that I dropped out fairly soon after arriving, for “autistic” reasons: I truly couldn’t stand being in the environment, which turns out to matter a lot to “autistic people” in general.

But I’m not afraid of Yudkowsky, and that’s not a challenge; that’s just how it goes when you’re “autistic”. He gets this, of course. He and I both don’t care whatsoever about status, for essentially physiological reasons. Yudkowsky writes beautifully about exactly this in “Inadequate Equilibria”, especially the sections dealing directly with “modesty epistemology”. Us status-is-cognitive-bias folks know, if you will, what’s up.
The reason I love Yudkowsky dearly, in fact, is that he points this out, and says so many of the things that I and others like me have tried to tell people throughout our lives to little avail (i.e., we’ve mostly just watched humanity passively or even actively hasten extinction for our whole adult lives, at least). Except I was having a far more dysfunctional time of things, so my writing was impenetrable to anyone but, like, a handful of other weirdos like me who were born with too much social disadvantage to achieve significant social status, so you haven’t heard of them either (not a “woe is me”, but a “woe is us”). It’s only now (I’m 37, embarrassingly old to be admitting this) that I’m even trying to figure out how to write clearly, so that people could actually read and understand me; I gave up on that notion, or so I thought, a long time ago.

The diagnosis only came up, finally, because I was exasperating a well-meaning therapist by bringing up the things I’m discussing here, calmly, deliberately, but persistently, in the context of severe depression and suicidal reasoning. They concluded, because they were getting exasperated, and yet there I was, going on about these things week after week (the whole “suicide” thing, the whole “euthanasia” thing, the whole “existential pessimism” or “depressive realism” thing, the whole “alarmingly high probability of imminent global catastrophe up to and including total extinction due to simultaneous and unsolvable existential risks” thing; good grief, where does it end!), that I must be “autistic”, and that if they diagnosed me as such, we could move on to the part where I learn to stop talking, basically. I’m sure many Less Wrong readers have gone through exactly this experience innumerable times. I’ve been watching Eliezer Yudkowsky go through this experience continually ever since I started reading his writings many years ago.

Allow me to explain, “autistically”, as I inevitably will. Or you can just tell me to stop talking, and when you get tired of that, institutionalize me, and so on, until at some point people like me are killed off, if not by machines per se then by people like, say, actual Nazis, who as a rule do things like murder everybody in the mental hospitals because they’re “inferior”; which turns out to be exactly the logic undergirding “the mental health system” itself, as no less than Jean Améry (Jew, “autistic”, pessimist, suicide practitioner) pointed out in no uncertain terms after surviving Nazi concentration camps and before finally killing himself, and as even Freud (Jew, “autistic”, pessimist, predictor of Nazism) tried to point out as much as he could without being completely kicked out of you-get-to-be-a-psychologist land by his “colleagues”. “Autistic” enough for you? Run-on sentences? Not considering the “average attention span of readers”? Yep. Fully aware. Again, I’m trying.
It’s often noted that “autistic people” are “disabled”, or “deficient”, or (let’s face it, what people are really saying is) “inferior” with regard to “social skills”. As a society, we will say things like, “autistic people are differently abled”. Interesting. What are they “differently abled” at? This is where it gets awkward. It’s rationality. There’s no way the answer is anything other than rationality. “How did that kid beat Tetris?! Must be magic!?” Nope. Magic’s not a thing. It’s rationality. “How did Isaac Newton do the thing where he…?!?” It’s rationality. I noted above the tendency to distract from “effective altruism” (meaning heroism; I’m not convinced we shouldn’t just call it “heroism”, for obvious motivational reasons, and not worry about whether people get hung up on the idea that this sounds like it means “Superman” or whatever) with flattery. You’re a “good person”. Sounds well-meaning, but is ultimately meaningless, irrelevant, irrational. Hence, effective altruism: you know, the kind where it actually does something instead of just reinforcing delusions, however “well-meaning” those delusions may be (the road to hell is paved with good intentions; the whole cognitive-bias thing). And, hence, the societal authority figure or non-“autistic” person will inevitably tell the “autistic” person, “Gee, you’re” (wait for it, wait for it…) “really smart”. At which point the conversation is just over, they’re saying. They can’t follow, so, “please be contented with this gift of meaningless flattery and stop talking now. Please, for the love of God.”
You may be thinking: “Woah, woah, what about those ‘severely autistic’ people? They’re not like Eliezer Yudkowsky (teaching himself, like, everything, writing great books, doing critical research, appearing on podcasts, and being heroically patient with people of status who are obviously incapable of understanding what he’s saying, despite the fact that this modest-epistemology paralysis, being a majority view, means we’re doomed, etc.). These severely autistic people tend instead to be, say, non-verbal, even completely non-verbal, maybe the whole time they’re alive, which is maybe a long time. They tend to make strange noises and strange body movements and to be extremely sensitive to any and all sensory stimulation, such that they are in near-constant emotional flux with regard to it, and that seems to be, as far as us folks who get to be verbal and ‘control ourselves’ and be ‘emotionally regulated’ to some kind of ‘social standard of etiquette’ can tell, all that their life consists of. That is, this seems to be more or less all they can ‘do’. They don’t, for example, give us stuff like, say, ‘General Relativity’, with which we can build the really neat stuff like, say, a nuclear bomb, with which we can, say, destroy ourselves and all life on this planet without intending to. They don’t give us stuff like, say, ‘Quantum Mechanics’, with which we can build stuff like, say, the Internet, or the ‘Smart Phone’ [great name, folks], with which we can, say, steadily devalue all of human creativity by rendering it impossible to exchange any example of it for meaningful resources, which happened to be one of the mainstay ‘reasons to go on living’, oops. They don’t give us stuff like, say, a forced false dichotomy between two theories of economics: one where resources are magically distributed evenly/perfectly, without regard to the actual mathematics of, you know, distribution, such that we voluntarily manifest entirely avoidable mass famine and genocide to clear the way for the ‘future utopia’; or one where the concentration of wealth magically ‘trickles down’, like a voluptuous breast whose teat secretes sweet nourishing milk for all, freely and naturally, such that tomorrow’s poor will inevitably be yesterday’s royalty, without regard to the actual mathematics of, you know, infinity/limits, such that we ignore obvious instances of market failure up to and including destroying the goddam earth altogether without any exit strategy. They don’t give us stuff like, say, the ‘Theory of Computation’, with which we can build stuff like, say, uncontrollable machine learning, with which we exponentially increase the inevitability of the least dignified form of extinction possible. Surely that’s…a disability?!”
Let’s break down that claim. You’re telling me (you being the U.S. mental health system, the education system, the average non-“autistic” person, or even non-“severely autistic” people) that there’s a graph. A continuum. A “spectrum”, as we insist on calling it for some (euphemistic) reason; namely, thinking that calling it a “spectrum” will distract “autistic” people from realizing that it’s just a graph, as if an “autistic” person doesn’t know a graph when they see one. So, you’re telling me, I’m on the lower end of that graph, in that I’m able to do the be-verbal, eye-contact, shake-the-hand crap, and that this correlates (causally, you imply) with me being above “average” (context dependent, and prone to selection bias as to how it’s calculated) in terms of what you’re calling “different abilities”, but which, being “autistic”, I know actually just means rationality.
According to my overall philosophical system (or whatever), which I refer to as “radical depressive realism” (and a more-or-less full expression of which I hope to share on Less Wrong soon; we’ll see), existential pessimism is the default, obvious conclusion of essentially everyone, both to begin with and ultimately. In the time between, though, we come up with all kinds of nonsense. But our existences are at least bookended, inescapably, by the undeniable reality of our condition, and we are inescapably aware of this, whether we admit it openly (or even privately) or not. My main evidence for this idea: Exhibit A, babies; Exhibit B, “severely autistic” people. These are the two groups of people who are so incapable of lying to you (think “coping skills”, think “social skills”), and who care the least about offending anybody, that they will just straight up tell you exactly what is going on, as we all actually, deep down, see it (or saw it, or will see it), straight, no chaser. And both of these groups are existential pessimists; both shriek inconsolably, basically, wherever they deem that an appropriate reaction to existence, without concealing it. This is precisely why, as a “coping, social” collective, we have only two official reactions to these people: we either find them “adorable” (which is to say, we don’t take them seriously), or we find them exceptionally annoying (as in, “Oh my God, will somebody shut them up, or I’m going to start actively abusing them to make myself feel better and, statistically, historically speaking, will eventually just kill them in extreme cases.”)
Putting it bluntly: we don’t like that these people are “smarter” than us. It’s offensive.
At the risk of pursuing a seemingly off-topic but, to me, very relevant point, I’d like to make a quick observation here regarding Jordan Peterson. I can hear the hypothetical moans, the groans, the false-dichotomy assumptions that I’m either for or against this person in ways we’re all, I’m sure, tired of hearing about. I don’t care, though. This is an extremely important point, and this person is a very useful illustration of it.

Who is this person? Well, this was, statistically speaking, an extremely good clinical psychologist with a very impressive track record in the field, especially compared to the overall failure of the field of clinical psychology itself. Then what happened? Why is he not doing the thing that he’s so good at anymore? And why doesn’t he care that, while advising the world to identify the things they’re really good at and pursue them (along with advising people to blindly seek “marriage” and to irresponsibly reproduce as many children as they can, not even as many as they want, because the more “responsibility” you embrace, which in this case means responsibility for multiplying suffering and death, the better your chances, supposedly, of forcing yourself to be “emotionally mature”), he himself has abandoned the very thing he’s best at? What’s going on here?

Well, he did a heroic thing, which is, in short, that he thought rationally in public, at a time when both the public and authority figures were being particularly irrational, and he wasn’t deterred by the fact that this would mean suffering abusive punishment for speaking up. (I should say, I do identify as “trans”, according to my own interpretation of what this idea means, so don’t even get me started on that.) He referred to past evidence and to the significant probability of future disastrous consequences of a specific example of foolish Canadian legislation, at a time when this evidence and this probability calculation were being ignored by all those in positions of authority, of status. So, good.

However, he then proceeded to fall into a pointless self-esteem-generating whirlwind of, on one hand, obviously irrational criticism that shouldn’t even be engaged (because there’s no point), and on the other hand, wild praise and affirmation of his worth, financial reward, fame, and so forth (status, in short). At this point, he is so lost in this whirlwind of chaos (the very thing he encourages people to avoid) that it seems very unlikely he will stop to consider simply going back to being a clinical psychologist. Even more importantly, it seems very unlikely that he will stop to rationally scrutinize the theory he put forth in “Maps of Meaning”, admit that there is no evidence to support the supposedly original ideas within it, and admit that he has been wasting time in this whirlwind that could have been used trying to gather meaningful evidence for his theory, if there were even any evidence to gather; and there isn’t. Still more importantly, it seems very unlikely that he will recognize that his psychology colleagues in Terror Management Theory have directly answered the question of why humans commit genocide, that their theory has an enormous amount of replicated evidence behind it, and that it is thus “proven”, scientifically speaking, until something better comes along; meanwhile he claims that TMT is somehow wrong, with no actual argument or evidence, while at the same time publicly and regularly calling attention to the importance of avoiding genocide. Sad, right?
It’s sad to see an exceptionally good clinical psychologist be cognitive-biased out of being a clinical psychologist because now he thinks he has “more important things to do”, as if directly helping people isn’t important enough.
Why do I bring this up? Well, even though I think Eliezer Yudkowsky is a far, far more impressive (which is to say, intellectually rigorous and responsible, and thus effective) human being than Jordan Peterson, I still think Yudkowsky has more or less fallen into the same kind of whirlwind/cognitive bias, even though his situation may look very different from Peterson’s and from that whole mixed-bag “Intellectual Dark Web” sort of thing.
The point: Yudkowsky on inadequate equilibria, modest epistemology, Moloch’s toolbox? Brilliant, indispensable. Yudkowsky on the methods of rationality and cognitive bias in general? Brilliant, indispensable. Yudkowsky on automation misalignment, existential risks in general, and the probability calculations necessary to conclude we’re doomed? Brilliant, indispensable. Yudkowsky on transhumanism/cryonics/augmentation of human intelligence? Completely insane, as far as I can tell. This appears shocking, except it isn’t, because of what we know about cognitive bias. He’s too afraid of death, in short, and it has distorted his otherwise very impressive thinking.
I don’t believe any complicated reasoning is necessary here. Any definition we give of “intelligence” turns out to be unavoidably and irresolvably paradoxical, or contradictory; in a nutshell, because of Gödel’s incompleteness theorems. Any definition of “intelligence” we can give amounts to specifying a formal system of logic at least as ambitious as the one within which ZFC set theory itself is expressed. So, setting aside the fact that we already know we have no mathematically verifiable definition of such a system (it lies beyond ZFC), even if its consistency and scope were as mathematically well-behaved as ZFC’s, we know we can demonstrate that, if the definition is formally consistent and powerful enough to be taken seriously as a definition by humans in the first place, it is also therefore incomplete. There is an inescapable arbitrariness to any assumption as to what “intelligence” is or is supposed to be.

The clearest illustration of this is to simply assert a definition of intelligence which demands axiomatically that intelligence be “complete” (not so different an idea from “generalized”): that is, the “highest form of intelligence” is any form which has zero logically definable problems, where “zero problems” is conceptually indistinguishable from “a complete set of solutions”. As I alluded to above, this means we have every reason, according to this definition of “intelligence”, to conclude that a corpse is literally “smarter” than John von Neumann was while still living and being the “smartest person ever”. According to this definition, dead von Neumann is literally smarter than living von Neumann. Driving the point home: this view is really not so different (though it is, still, importantly different) from the Catholic view that dead von Neumann is literally smarter than living von Neumann, because the former is now supposedly in eternal communion with Jesus Christ, the “Logos” (the guy with all the big-picture, fundamental solutions, supposedly), on the grounds that, by all appearances, living von Neumann was actually kind of a believing Catholic and thus deserves, supposedly, to go to heaven; he appears at least as believing as, say, Frank Tipler, even though von Neumann never tried to present a theory of how Jesus actually turned water into wine and such (a very “intelligent” mathematical physicist ends up theorizing about how the Shroud of Turin is somehow a facial laser-print or whatever; sad). In other words, put that in your rationalist pipe and smoke it! Right? Counter-intuitive, no doubt, yet true. This corpse-beats-genius definition of “intelligence” is surely inconsistent, but it is complete. The usual intuitive notion that John von Neumann was the “smartest person ever”, due to, say, his exceptional in-the-head calculating speed, or the astonishing “generalized” scope of his insights across many disciplines, or even the sheer length of his list of scientific contributions, is, surely, of far greater consistency and breadth as an explanatory model, but it is also unavoidably incomplete. Thus, no definition of “intelligence” can be proven to be the definition.
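To make the shape of that trade-off explicit, here is a minimal sketch in standard Gödelian notation. To be clear about what is and isn’t rigorous here: the theorem in the middle line is established mathematics; the mapping from “a definition of intelligence” to “a formal system F strong enough for arithmetic” is this essay’s own framing, not a standard result.

```latex
% A minimal sketch of the consistency/completeness trade-off.
% Caveat: treating a "definition of intelligence" as a formal system F
% is this essay's framing; only the middle line (Goedel I) is rigorous.
% (\nvdash requires the amssymb package.)
\[
  \text{Let } F \text{ be any recursively axiomatizable system expressing elementary arithmetic.}
\]
\[
  \text{G\"odel I:}\quad \operatorname{Con}(F) \;\Longrightarrow\; \exists\,\varphi\;\big(F \nvdash \varphi \;\wedge\; F \nvdash \neg\varphi\big)
\]
\[
  \text{Hence, for any such candidate definition:}\quad
  \text{complete} \Rightarrow \text{inconsistent}, \qquad
  \text{consistent} \Rightarrow \text{incomplete}.
\]
```

The corpse-beats-genius definition grabs the first horn (complete, therefore inconsistent); the intuitive von Neumann definition grabs the second (consistent, therefore incomplete). The sketch is only meant to show that you must grab one.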
I could just as well say that the most essential indicator of “intelligence” is some (any) recognition of the priority of minimizing suffering over everything else, and that, since no dog has ever committed genocide, any dog is smarter than, say, any genocidal human; that the only humans smarter than dogs would have to be something like Jains; and that the only people smarter than Jains are dead Jains and hypothetical “people” who have never existed in the first place. “Prove me wrong”, as we like to say, smugly. But seriously, go ahead.
Is this reasoning somehow beyond Yudkowsky’s comprehension? Of course not. He (and others) taught me this stuff. This is no more beyond Yudkowsky than probability theory is beyond Frank Tipler, and yet…here we are. Cognitive bias to the left of me, cognitive bias to the right, here I am, stuck in the middle with…Yudkowsky endorsing a specific form of child abuse, somehow?
More to the point: assume that “intelligence” is equivalent to, or inextricably causally linked with, “rationality”. Grant also the one-dimensional graph we call the “autism spectrum”, which on this reading indicates a clear correlation between rationality and “autism” (where, again, the whole “social skills deficiency” thing is itself an indication of exceptional rationality, so this is really just a graph of capacity for sustained rationality). Then we have every apparent reason to conclude that the “smartest people on earth” are “severely autistic people”. Realizing that these people have not even been included in the official measurement of “extreme human intelligence” leads us to conclude, firstly, that we already have (and have had, and maybe, roughly speaking, have always had) “superhuman intelligence” in the form of at least these “severely autistic people”; secondly, that we can already observe what an “aligned superhuman intelligence” would look like, because it/they are literally, like, right over there; and finally, that these people not only aren’t working on machine learning alignment, nor on quantum computing, nor on reconciling, geometrically or otherwise, the SU(3) × SU(2) × U(1) gauge symmetry of the Standard Model with General Relativity’s pseudo-Riemannian manifold, its inherent metric tensor, and its (to particle physics) seemingly contradictory geometric laws; they aren’t even engaging with “human language” or the “human project” in any “humanly meaningful” or “humanly measurable” way, at all. Thus, well…pretty much everything we humans do that we consider “smart” is, according to the “smartest people alive”, basically stupid, a waste of time. According to the “smartest people alive”, if we were “smart enough”, we would see this as obviously as they do; but we aren’t, so we don’t, and on we go, reproducing and perpetuating the madness which is cognitive bias (duh).
In summary: not only should we never (ever) submit our children to “intelligence augmentation” experiments (which, by the way, is of course what all of evolutionary history has already been, and what a project that’s turned out to be), and not only should we never (ever) even suggest the idea of doing this (say, on social media), but we should (here’s me trying) call out the obvious insanity of the idea, and encourage ourselves and others to let go of the illusion of the very notion, the faintest hint, even, of indefinitely utilitarian/optimizable “intelligence”; just as Voltaire called out the insanity of Leibniz thinking that “optimization” in the calculus, which depends on arbitrary assumptions as to what the practitioner considers worthy of “optimization” in the first place, means we are “living in the best of all possible worlds” (a one-line illustration of that dependence follows below). We are not, nor have we ever been, nor will we ever be, living in “the best of all possible worlds”. We are living in some circle of hell, as it were, and have to climb to whichever circle is least hellish, and thus must, as a prerequisite, let go of the idea of allowing worse hells to manifest only to reach “purgatory” or “paradise”, when there are no such things and no rational reasons for believing in such things. Dante was a great poet, but he was also an idiot to follow Virgil blindly into deeper circles of hell, trusting in Virgil’s authority, his status. We would be similarly foolish to follow Yudkowsky, even if that just means re-tweeting or whatever, with regard to transhumanism/cryonics/augmentation of intelligence (human or otherwise). I hate having to Pangloss you, Eliezer, but…QED.
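Appendix, for the calculus point above. This is a toy illustration of my own, not anything from Voltaire or Leibniz: the “optimum” the calculus hands you is always relative to an objective chosen before any optimizing starts, and the machinery itself is identical either way.

```latex
% Toy illustration: "optimal" is relative to the chosen objective.
% The objective functions f and g are arbitrary examples (mine),
% picked only so the same first/second-derivative test runs on both.
\[
  f(x) = -(x-1)^2 \quad\Longrightarrow\quad \arg\max_{x} f(x) = 1
\]
\[
  g(x) = -(x+1)^2 \quad\Longrightarrow\quad \arg\max_{x} g(x) = -1
\]
\[
  \text{Same test } \big(f'(x) = 0,\ f''(x) < 0\big), \text{ opposite “best” worlds.}
\]
```

The choice between f and g, between what counts as “best”, is exogenous to the optimization; which is the whole Pangloss problem in one line.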