In 2004, Michael Vassar gave the following talk about how humans can reduce existential risk, titled Memes and Rational Decisions, to some transhumanists. It is well-written and gives actionable advice, much of which is unfamiliar to the contemporary Less Wrong zeitgeist.

Although transhumanism is not a religion, advocating as it does the critical analysis of any position, it does have certain characteristics which may lead to its identification as such by concerned skeptics. I am sure that everyone here has had to deal with this difficulty, and as it is a cause of perplexity for me I would appreciate it if anyone who has suggested guidelines for interacting honestly with non-transhumanists would share them at the end of my presentation. It seems likely to me that each of our minds contains either meme complexes or complex functional adaptations which have evolved to identify “religious” thoughts and to neutralize their impact on our behavior. Most brains respond to these memes by simply rejecting them. Others, however, neutralize such memes by simply not acting according to the conclusions that should be drawn from them. In almost any human environment prior to the 20th century this religious hypocrisy would have been a vital cognitive trait for every selectively fit human. People who took in religious ideas and took them too seriously would end up sacrificing their lives overly casually at best, and at worst would become celibate priests. Unfortunately, these memes are no more discriminating than the family members and friends who tend to become concerned for our sanity in response to their activity. Since we are generally infested with the same set of memes, we genuinely are liable to insanity, though not of the suspected sort.

A man who is shot by surprise is not particularly culpable for his failure to dodge or otherwise protect himself, though perhaps he should have signed up with Alcor. A hunter-gatherer who confronts an aggressive European with a rifle for the first time can also receive sympathy when he is slain by the magic wand that he never expected to actually work. By contrast, a modern Archimedes who ignores a Roman soldier’s request that he cease his geometric scribbling is truly a madman. Most of the people of the world, unaware of molecular nanotechnology and of the potential power of recursively self-improving AI, are in a position roughly analogous to that of the first man. The business and political figures who dismiss eternal life and global destruction alike as plausible scenarios are in the position of the second man. It is we transhumanists who are, for the most part, playing the part of Archimedes. With death, mediated by technologies we understand full well, staring us in the face, we continue our pleasant intellectual games. At best a few percent of us have adopted the demeanor of an earlier Archimedes and transferred our attention from our choice activities to other, still interesting endeavors which happen to be vital to our own survival. The rest are presumably acting as puppets of the memes which react to the prospect of immortality by isolating the associated meme-complex and suppressing its effects on actual activity.

OK, so most of us don't seem to be behaving in an optimal manner. What manner would be optimal? This ISN'T a religion, remember? I can't tell you that. At best I can suggest an outline of the sort of behavior that seems to me least likely to lead to this region of space becoming the center of a sphere of tiny smiley faces expanding at the speed of light.

The first thing that I can suggest is that you take rationality seriously. Recognize how far you have to go. Trust me; the fact that you can't rationally trust me without evidence is itself a demonstration that at least one of us isn't even a reasonable approximation of rational, as demonstrated by Robin Hanson and Tyler Emerson of George Mason University in their paper on rational truth-seekers. The fact is that humans don't appear capable of approaching perfect rationality to anything like the degree to which most of you probably believe you have approached it. Nobel Laureate Daniel Kahneman and Amos Tversky provided a particularly valuable set of insights into this fact with their classic book Judgment Under Uncertainty: Heuristics and Biases and in subsequent works. As a trivial example of the biases that humans typically exhibit, try these tests. (Offer some tests from Judgment Under Uncertainty.)

I hope that I have made my point. Now let me point out some of the typical errors of transhumanists who have decided to act decisively to protect the world they care about from existential risks. After deciding to rationally defer most of the fun things that they would like to do for a few decades until the world is relatively safe, it is completely typical to either begin some quixotic quest to transform human behavior on a grand scale over the course of the next couple decades or to go raving blithering Cthulhu-worshiping mad and try to build an artificial intelligence. I will now try to discourage such activities.

One of the first rules of rationality is not to irrationally demand that others be rational. Demanding that someone make a difficult mental transformation has never once led them to make said transformation. People have a strong evolved desire to make other people accept their assertions and opinions. Before you let the thought cross your mind that a person is not trying to be rational, I would suggest that you consider the following. If you and your audience were both trying to be rational, you would be mutually convinced of EVERY position that the members of your audience hold on EVERY subject, and vice versa. If this does not seem like a plausible outcome, then one of you is not trying to be rational, and it is silly to expect a rational outcome from your discussion. By all means, if a particular person is in a position to be helpful, try to blunder past the fact of your probably mutual unwillingness to be rational; in a particular instance it is entirely possible that ordinary discussion will lead to the correct conclusion, though it will take hundreds of times longer than it would if the participants were able to abandon the desire to win an argument as a motivation separate from the desire to reach the correct conclusion. On the other hand, when dealing with a group of people, or with an abstract class of people, Don't Even Try to influence them with what you believe to be a well-reasoned argument. This has been scientifically shown not to work, and if you are going to try to simply will your wishes into being you may as well debate the nearest million carbon atoms into forming an assembler and be done with it, or perhaps convince your own brain to become transhumanly intelligent. Hey, it's your brain; if you can't convince it to do something contrary to its nature even when it wants to do so, is it likely that you can convince the brains of many other people to do something contrary to their natures that they don't want to do, just by generating a particular set of vocalizations?

My recommendation that you not make an AI is slightly more urgent. Attempting to transform the behavior of a substantial group of people via a reasoned argument is a silly and superstitious act, but it is still basically a harmless one. On the other hand, attempts by geniuses of ordinary physics-Nobel-Laureate quality to program AI systems are not only astronomically unlikely to succeed, but in the shockingly unlikely event that they do succeed, they are almost equally likely to leave nothing of value in this part of the universe. If you think you can do this safely despite my warning, here are a few things to consider:

  1. A large fraction of the greatest computer scientists and other information scientists in history have done work on AI, but so far none of them have begun to converge on even the outlines of a theory or succeeded in matching the behavioral complexity of an insect, despite the fantastic military applications of even dragonfly-equivalent autonomous weapons.
  2. Top philosophers, pivotal minds in the history of human thought, have consistently failed to converge on ethical policy.
  3. Isaac Asimov, history's most prolific writer and Mensa's honorary president, attempted to formulate a more modest set of ethical precepts for robots and instead produced the blatantly suicidal three laws (if you don't see why the three laws wouldn't work, I refer you to the Singularity Institute for Artificial Intelligence's campaign against the three laws).
  4. Science fiction authors as a class, a relatively bright crowd by human standards, have subsequently devoted more time to the question of machine ethics than to any other philosophical issue except time travel, yet have failed to develop anything more convincing than the three laws.
  5. AI ethics cannot be arrived at either through dialectic (critical speculation) or through the scientific method. The first method fails to distinguish between an idea that will actually work and the first idea you and your friends couldn't rapidly see big holes in, influenced as you were by your specific desire for a cool-sounding idea to be correct and your more general desire to actually realize your AI concept, saving the world and freeing you to devote your life to whatever you wish. The second method is crippled by the impossibility of testing a transhumanly intelligent AI (because it could by definition trick you into thinking it had passed the test) and by the irrelevance of testing an ethical system on an AI without transhuman intelligence. Ask yourself: how constrained would your actions be if you were forced to obey the Code of Hammurabi but had no other ethical impulses at all? Now keep in mind that Hammurabi was actually FAR more like you than an AI will be. He shared almost all of your genes, your intellect (very high by human standards), and the empathy that comes from an almost identical brain architecture, yet his attempt at a set of rules for humans was a first try, just as your attempt at a set of rules for AIs would be.
  6. Actually, if you are thinking in terms of a set of rules AT ALL, this implies that you are failing to appreciate both a programmer's control over an AI's cognition and an AI's alien nature. If you are thinking in terms of something more sophisticated, bear in mind that apparently only one person has ever thought in terms of something more sophisticated so far, and that the first such "more sophisticated" theory was discovered on careful analysis to be inadequate itself, as was the second.


If you can't make people change, and you can't make an AI, what can you do to avoid being killed? As I said, I don't know. It's a good bet that money would help, as would an unequivocal decision to make singularity strategy the focus of your life rather than a hobby. A good knowledge of cognitive psychology and of how people fail to be rational may enable you to better figure out what to do with your money, and may enable you to coordinate your efforts with other serious and rational transhumanists without making serious mistakes. If you are willing to try, please let's keep in touch. Seriously, even if you discount your future at a very high rate, I think that you will find that living rationally and trying to save the world is much more fun and satisfying than the majority of things that even very smart people spend their time doing. It really, really beats pretending to do the same, yet even such pretending is, or once was, a very popular activity among top-notch transhumanists.

Aiming at true rationality will be very difficult in the short run, a period of time which humans who expect to live for less than a century are prone to consider the long run. It entails absolutely no social support from non-transhumanists, and precious little from transhumanists, most of whom will probably resent the implicit claim that they should be more rational. If you haven't already, it will also require you to put your everyday life in order and acquire the ability to interact positively with people of a less speculative character. You will get no VC or angel funding, terribly limited grant money, and in general no acknowledgement of any expertise you acquire. On the other hand, if you already have some worthwhile social relationships, you will be shocked by just how much these relationships improve when you dedicate yourself to shaping them rationally. The potential of mutual kindness, when even one partner really decides not to do anything to undermine it, shines absolutely beyond the dreams of self-help authors.

If you have not personally acquired a well-paying job, in the short term I recommend taking the actuarial tests. Actuarial positions, while somewhat boring, do provide practice in rationally analyzing data of a complexity that defies intuitive analysis or analytical automatism. They also pay well, require no credentials other than tests in what should be mandatory material for anyone aiming at rationality, and offer top job security in jobs that are easy to find and only require 40 hours per week of work. If you are competent with money, a few years in such a job should give you enough wealth to retire to some area with a low cost of living and analyze important questions. A few years more should provide the capital to fund your own research. If you are smart enough to build an AI's morality, it should be a breeze to burn through the 8 exams in a year, earn a six-figure income, and get returns on investment far better than Buffett does. On the other hand, doing that doesn't begin to suggest that you are smart enough to build an AI's morality. I'm not convinced that anything does.

Fortunately, ordinary geniuses with practiced rationality can contribute a great deal to the task of saving the world. Even more fortunately, so long as they are rational they can cooperate very effectively even if they don't share an ethical system. Eternity is an intrinsically shared prize. On this task more than any other, the difference between an egoist, an altruist, or even a Kantian should fade to nothing in terms of its impact on actual behavior. The hard part is actually being rational, which requires that you postpone the fun but currently irrelevant arguments until the pressing problem is solved, perhaps even with the full knowledge that you are probably giving them up entirely, as they may be about as interesting as watching moss grow post-singularity. Delaying gratification in this manner is not a unique difficulty faced by transhumanists. Anyone pursuing a long-term goal, such as a medical student or PhD candidate, does the same. The special difficulty that you will have to overcome is that of staying on track in the absence of social support or appreciation of the problem, and of overcoming your mind's anti-religion defenses, which will be screaming at you to cut out the fantasy and go live a normal life, with the normal empty set of beliefs about the future and its potential.

Another important difficulty to overcome is the desire for glory. It isn't important that the ideas that save the world be your ideas. What matters is that they be the right ideas. In ordinary life, the satisfaction that a person gains from winning an argument may usually be adequate compensation for walking away without having learned what they should have learned from the other side, but this is not the case when you elegantly prove to your opponent and to yourself that the pie you are eating is not poisoned. Another glory-related concern is that of allowing science fiction to shape your expectations of the actual future. Yes, it may be fun and exciting to speculate on government conspiracies to suppress nanotech, but even if you are right, conspiracy theories don't have enough predictive power to test or to guide your actions. If you are wrong, you may well end up clinically paranoid. Conspiracy thrillers are pleasant silly fun. Go ahead and read them if you lack the ability to take the future seriously, but don't end up living in an imaginary one; that is NOT fun.

Likewise, don't trust science fiction when it implies that you have decades or centuries left before the singularity. You might, but you don't know that; it all depends on who actually goes out and makes it happen. Above all, don't trust its depictions of the sequence in which technologies will develop or of the actual consequences of technologies that enhance intelligence. These are just some author's guesses. Worse still, they aren't even the author's best guesses; they are the result of a lopsided compromise between the author's best guess and the set of technologies that best fits the story the author wants to tell. So you want to see Mars colonized before the singularity. That's common in science fiction, right? So it must be reasonably likely. Sorry, but that is not how a rational person estimates what is likely. Heuristics and Biases will introduce you to the representativeness heuristic, roughly speaking the degree to which a scenario fits a preconceived mental archetype. People who haven't actively optimized their rationality typically use representativeness as their estimate of probability, because we are built to do so automatically and so find it very easy. In the real world this doesn't work well. Pay attention to logical relationships instead.
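Tversky and Kahneman's well-known "Linda" experiment (paraphrased roughly here) makes the failure concrete. Told that Linda is a bright, outspoken philosophy graduate who was deeply concerned with social justice, most subjects rate "Linda is a bank teller and is active in the feminist movement" as more probable than "Linda is a bank teller," even though for any two events the conjunction can be no more probable than either event alone:

$$P(\text{teller} \wedge \text{feminist}) \le P(\text{teller})$$

The richer description fits the archetype better and therefore feels more likely, but the logical relationship says it cannot be.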

Since I am attempting to approximate a rational person, I don't expect e-mails from any of you to show up in my in-box in a month or two requesting my cooperation on some sensible and realistic project for minimizing existential risk. I don't expect that, but I place a low certainty value on most of my expectations, especially regarding the actions of outlier humans. I may be wrong. Please prove me wrong. The opportunity to find that I am mistaken in my estimates of the probability of finding serious transhumanists is what motivated me to come all the way across the continent. I'm betting we all die in a flash due to the abuse of these technologies. Please help me to be wrong.

18 comments:

My attempt at a summary:

Our brains pattern-match transhumanism to religion. The human evolved reaction to religion is either to dismiss it as dangerous, or to profess the beliefs but fail to act on their logical consequences in real life. Thus we can expect that most people will react to transhumanism in one of these two ways. Even wannabe rationalists are deeply irrational.

You should not: 1) expect that other people will become more rational if you yell at them enough; 2) try to build an AI. Our brains are optimized for winning debates, not reaching correct conclusions. Your AI project will most likely fail; but even if it doesn't, the resulting AI will most likely not be Friendly.

So, what should you do? No certain answer, only suggestions: 1) Be serious about your rationality, and cooperate with other people who are serious about rationality. Cooperation among rational people is powerful. Keep your ego in check; instead of winning, try to learn. Don't rely on fictional evidence. 2) Make a lot of money, because it's instrumentally useful. If you don't have a well-paying job, study for the actuarial tests: they overlap with rationality and will allow you to make nice money later.

Be prepared that this will be considered weird even by most transhumanists, so don't expect much social support even within your niche. But it's fun and better than what most smart people do. And there is a small chance this could somehow help you save the world.

You should not: 1) expect that other people will become more rational if you yell at them enough;

I think the rest of your summary is accurate, but Vassar actually said that you can't use rational argument to convince people of much of anything. Now that I think about it, carefully judged yelling probably works better than rational argument.

"You can't convince anyone of anything using rational argument" is one of those cached thoughts that makes you sound cool and mature but isn't actually true. Rational argument works a hell of a lot worse than smart people think it does, but it works in certain contexts and with certain people enough of the time that it's worth trying sometimes. Even normal people are swayed by facts from time to time.

Jiro:

Isaac Asimov, history's most prolific writer and Mensa's honorary president, attempted to formulate a more modest set of ethical precepts for robots and instead produced the blatantly suicidal three laws

The three laws were not intended to be bulletproof, just a starting point. And Asimov knew very well that they had holes in them; most of his robot stories were about holes in the three laws.

The Three Laws of Robotics are normally rendered as regular English words, but in-universe they are defined not by words but by mathematics. Asimov's robots don't have "thou shalt not hurt a human" chiseled into their positronic brains; instead they are built from the ground up to have certain moral precepts, summarized for laypeople as the three laws, so built into their cognition that robots with the three laws removed or modified don't work right, or at all.

Asimov actually gets the idea that making AI ethics is hard better than any other sci-fi author I can think of, although this stuff is mostly in the background since the plain-English descriptions of the three laws are good enough for a story. But IIRC The Caves of Steel talks about this, and makes it abundantly clear that the Three Laws are built into robots at the level of very complicated, very thorough coding, something that loads of futurists and philosophers alike often ignore when they think they've come up with some brilliant schema to create an ethical system, for AI or for humans.

The line where Vassar says that sci-fi authors haven't improved on them is very probably incorrect. I don't read sci-fi, but I bet that in all the time since then, authors have developed and worked on these ideas a fair bit.

Tyler Emerson is a real person, but Robin Hanson has never co-authored a paper with him. Vassar was probably thinking of Are Disagreements Honest? by Robin Hanson and Tyler Cowen.

How should I contact Vassar regarding my willingness to follow his lead regarding whatever projects he deems sensible?

He appears to be open to Facebook messaging.

Private message on his LW account could be one option. He currently works at MetaMed, so offering them your help (if you can contribute) could be another.

Nitpick: Asimov was a member of Mensa on and off, but was highly critical of it and didn't like Mensans. He was an honorary vice president, not president (according to Asimov, anyway). And he wasn't very happy about it.

Relevant to this: "Furthermore, I became aware that Mensans, however high their paper IQ might be, were likely to be as irrational as anyone else." (See the book "I. Asimov," pp. 379-382.) The vigor of Asimov's distaste for Mensa as a club permeates this essay/chapter.

Nitpick it is, but Asimov deserves a better fate than having a two-sentence bio associate him with Mensa.

Confirmation of the statement that actuaries earn six figures with no formal education necessary and with a good job market:

  • This seems to confirm that you technically don't need more than the exams, although a degree of some sort is a definite help.
  • Another page from the same site confirms the six figures (at least for experienced fellows).
  • This pdf interview seems to more or less confirm the job market claim.

I'm curious if anybody here has actually done this sort of thing?

Omid:

How smart do you have to be in order to follow this advice? Are we talking two standard deviations or five?

My guess is that you have to be pretty smart but not astonishingly smart to understand the argument, and likewise for becoming an actuary.

The really rare trait is the ability to take ideas seriously enough to act on them.

bear in mind that apparently only one person has ever thought in terms of something more sophisticated so far,

Who is he referring to here?

I would trust Scott Alexander on almost any topic without evidence.

I feel evil today. Please read this.

Larks:

Thanks for posting this, it was interesting. It definitely has a retro feel. I wonder how much would differ if he gave the speech today. Some semi-informed speculation:

  • I've heard the market for actuaries has become more efficient recently, but with things like AppAcademy there doesn't seem to be much excuse for being both poor and intelligent.
  • Maybe a lower estimate of how much personal relationships are improved by a little rationality.