Consider the following commonly made argument: cryonics is unlikely to work. Trained rationalists are signed up for cryonics at rates much greater than the general population. Therefore, rationalists must be pretty gullible people, and their claims to be good at evaluating evidence must be exaggerations at best.
This argument is wrong, and we can prove it using data from the last two Less Wrong surveys.
The question at hand is whether rationalist training - represented here by extensive familiarity with Less Wrong material - makes people more likely to believe in cryonics.
We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).
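For concreteness, here is a minimal sketch of how those two cohorts could be pulled out of the survey data. The file name and column names (`time_in_community_months`, `karma`) are assumptions for illustration; the real survey export may differ.

```python
import pandas as pd

# Load the survey responses (file and column names are illustrative assumptions).
survey = pd.read_csv("lw_survey.csv")

# Proto-rationalists: in the community for less than six months, zero karma.
proto = survey[(survey["time_in_community_months"] < 6) &
               (survey["karma"] == 0)]

# Experienced rationalists: in the community for over two years, karma > 1000.
experienced = survey[(survey["time_in_community_months"] > 24) &
                     (survey["karma"] > 1000)]

print(len(proto), proto["time_in_community_months"].mean())
print(len(experienced), experienced["time_in_community_months"].mean())
```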
By these definitions, there are 93 proto-rationalists, who have been in the community an average of 1.3 months, and 134 experienced rationalists, who have been in the community an average of 4.5 years. Proto-rationalists generally have not read any rationality training material - only 20/93 had read even one-quarter of the Less Wrong Sequences. Experienced rationalists are, well, more experienced: two-thirds of them have read pretty much all the Sequence material.
Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).
Marginal significance is a copout, but this isn't our only data source. Last year, using the same definitions, proto-rationalists assigned a 15% probability to cryonics working, and experienced rationalists assigned a 12% chance. We see the same pattern.
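As a rough sketch of the comparison behind that p-value, continuing the cohort sketch above and assuming a hypothetical `p_cryonics` column holding each respondent's probability estimate (the actual test used on the survey data may differ):

```python
from scipy import stats

# Compare the two groups' cryonics probability estimates.
# Welch's two-sample t-test is one reasonable choice; a Mann-Whitney U test
# would be another, since probability estimates are rarely normally distributed.
t_stat, p_value = stats.ttest_ind(
    proto["p_cryonics"].dropna(),
    experienced["p_cryonics"].dropna(),
    equal_var=False,  # do not assume equal variances
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```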
So experienced rationalists are consistently less likely to believe in cryonics than proto-rationalists, and rationalist training probably makes you less likely to believe cryonics will work.
On the other hand, 0% of proto-rationalists had signed up for cryonics compared to 13% of experienced rationalists. 48% of proto-rationalists rejected the idea of signing up for cryonics entirely, compared to only 25% of experienced rationalists. So although rationalists are less likely to believe cryonics will work, they are much more likely to sign up for it. Last year's survey shows the same pattern.
This is not necessarily surprising. It only indicates that experienced rationalists and proto-rationalists treat their beliefs in different ways. Proto-rationalists form a belief, play with it in their heads, and then do whatever they were going to do anyway - usually some variant on what everyone else does. Experienced rationalists form a belief, examine the consequences, and then act strategically to get what they want.
Imagine a lottery run by an incompetent official who accidentally sets it up so that the average payoff is far more than the average ticket price. For example, maybe the lottery sells only ten $1 tickets, but the jackpot is $1 million, so that each $1 ticket gives you a 10% chance of winning $1 million.
Goofus hears about the lottery and realizes that his expected gain from playing the lottery is $99,999. "Huh," he says, "the numbers say I could actually win money by playing this lottery. What an interesting mathematical curiosity!" Then he goes off and does something else, since everyone knows playing the lottery is what stupid people do.
Gallant hears about the lottery, performs the same calculation, and buys up all ten tickets.
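The arithmetic behind both reactions, spelled out as a toy calculation:

```python
ticket_price = 1          # dollars
jackpot = 1_000_000       # dollars
num_tickets = 10
p_win = 1 / num_tickets   # each ticket has a 10% chance of winning

# Goofus's calculation: expected gain from buying a single ticket.
ev_one_ticket = p_win * jackpot - ticket_price
print(ev_one_ticket)      # 99999.0

# Gallant's strategy: buy every ticket, guaranteeing the jackpot.
gain_all_tickets = jackpot - num_tickets * ticket_price
print(gain_all_tickets)   # 999990
```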
The relevant difference between Goofus and Gallant is not skill at estimating the chances of winning the lottery. We can even change the problem so that Gallant is more aware of the unlikelihood of winning than Goofus - perhaps Goofus mistakenly believes there are only five tickets, and so Gallant's superior knowledge tells him that winning the lottery is even more unlikely than Goofus thinks. Gallant will still play, and Goofus will still pass.
The relevant difference is that Gallant knows how to take ideas seriously.
Taking ideas seriously isn't always smart. If you're the sort of person who falls for proofs that 1 = 2, then refusing to take ideas seriously is a good way to avoid ending up actually believing that 1 = 2, and a generally excellent life choice.
On the other hand, progress depends on someone somewhere taking a new idea seriously, so it's nice to have people who can do that too. Helping people learn this skill and when to apply it is one goal of the rationalist movement.
In this case it seems to have been successful. Proto-rationalists think there is a 21% chance of a new technology making them immortal - surely an outcome as desirable as any lottery jackpot - consider it an interesting curiosity, and go do something else because only weirdos sign up for cryonics.
Experienced rationalists think there is a lower chance of cryonics working, but some of them decide that even a pretty low chance of immortality sounds pretty good, and act strategically on this belief.
This is not to either attack or defend the policy of assigning a non-negligible probability to cryonics working. This is meant to show only that the difference in cryonics status between proto-rationalists and experienced rationalists is based on meta-level cognitive skills in the latter whose desirability is orthogonal to the object-level question about cryonics.
(an earlier version of this article was posted on my blog last year; I have moved it here now that I have replicated the results with a second survey)