This has been covered by Eliezer on OB, but I think the debate will work better with LW's voting-based commenting system, and I hope I can add something to the OB debate, which I feel left the spectre of Pascalian religious apologetics clinically dead but not quite information-theoretically dead. Anna Salamon writes:
> The “isn’t that like Pascal’s wager?” response is plausibly an instance of dark side epistemology, and one that affects many aspiring rationalists.
>
> Many of us came up against the Pascal’s wager argument at some point before we gained much rationality skill, disliked the conclusion, and hunted around for some means of disagreeing with its reasoning. The Overcoming Bias thread discussing Pascal’s wager strikes me as including a fair number of fallacious comments aimed at finding some rationale, any rationale, for dismissing Pascal’s wager.
This really got me worried: do I really rationally believe in the efficacy of cryonics and not of religion? Or did I write the bottom line first and then start thinking of justifications?
Of course, it is easy to write a post justifying cryonics in a way that shuns religion. That's what everyone wants to hear on this forum! What is hard is doing it in a way that ensures you're not just writing even more justification with no chance of retracting the bottom line. I hope that with this post I have succeeded in burying the Pascalian attack on cryonics for good, and in removing a little more ignorance about my true values.
To me, the justification for wanting to be cryopreserved is that there is, in fact, a good chance (more than the chance of rolling a 5 or a 6 on a six-sided die)[1] that I will be revived into a very nice world indeed, and that the chance of being revived into a hell I cannot escape from is less than or equal to this (I am a risk taker). How sensitive is this to the expected goodness and length-in-time of the utopia I wake up in? If the utopia were as good as Iain M. Banks' Culture, I'd still be interested in spending 5% of my income a year and 5% of my time on getting frozen if the probability were at about the level of rolling two consecutive sixes (1/36).
Does making the outcome better change things? Suppose we take the Culture and "upgrade" it by fulfilling all of my fantasies: the Banksian utopia I have described is analogous to the utopia of the tired peasant compared to what is possible. An even better utopia, which appeals to me on an intellectual and subtly sentimental level, would involve continued personal growth towards experiences beyond raw peak experience as I know it today. This perhaps pushes me to tolerating probabilities around the four-sixes level (1/(6×6×6×6) = 1/1296, roughly 1/1000), but no further. For me this probability feels like "just a little bit less unlikely than impossible".
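For readers who prefer raw numbers to dice, here is a minimal sketch (in Python; the dictionary labels are my own shorthand) of the three anchor probabilities used above:

```python
from fractions import Fraction

# The three dice events used as probability anchors in this post.
thresholds = {
    "roll a 5 or a 6 on one die": Fraction(2, 6),    # 1/3
    "two consecutive sixes":      Fraction(1, 6**2), # 1/36
    "four consecutive sixes":     Fraction(1, 6**4), # 1/1296, roughly 1/1000
}

for event, p in thresholds.items():
    print(f"{event}: {p} = {float(p):.4f}")
```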
Now, how does this bear on Pascal's wager? Well, I just don't register long-term life outcomes that happen with a probability of less than one in a thousand. End of story! Heaven *could not be good enough* and hell *could not be bad enough* to make it matter to me, and I can be fairly sure about this because I have just visualized a plausible heaven that I actually "believe in".
Now, what is my actual probability estimate of cryonics working? Robin talks about breaking it down into a series of events and estimating their conditional probabilities. My breakdown of the probability of a successful outcome, if you die right now, is:
1. The probability that human civilization will survive into the sufficiently far future (my estimate: 50%)
2. The probability that you get cryopreserved rather than autopsied or shot in the head, and that you get cooled down sufficiently quickly (my estimate: 80%, though this will improve)
3. The probability that cryonics preserves appropriate brain structure (my estimate: 75%)
4. The probability that you don't get destroyed whilst frozen, for example by incompetent financial management of cryonics companies (my estimate: 80%)
5. The probability that someone will revive you into a pleasant society, conditional upon the above (my estimate: 95%)
Multiplying these together (0.5 × 0.8 × 0.75 × 0.8 × 0.95) yields a disappointingly low probability of 0.228. [I expect this to improve to ~0.4 by the time I get old enough for it to be a personal consideration.] I don't think that one could be any more optimistic than the above. But this probability is tantalizing: enough to get me very excited about all those peak experiences and all that growth I described above, though it probably won't happen. It is roughly the probability of tossing a coin twice and getting heads both times (1/4).
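For anyone who wants to plug in their own estimates, here is a minimal sketch of the arithmetic (the stage labels are my shorthand for the numbered list above):

```python
# Each stage is conditional on the earlier stages holding, so the
# overall probability of success is the product of the five terms.
stages = {
    "civilization survives":            0.50,
    "cryopreserved and cooled quickly": 0.80,
    "brain structure preserved":        0.75,
    "not destroyed whilst frozen":      0.80,
    "revived into a pleasant society":  0.95,
}

p_success = 1.0
for label, p in stages.items():
    p_success *= p

print(f"P(success) = {p_success:.3f}")  # 0.228
```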
It is also worth mentioning that the analyses I have seen relating to the future of humanity indicate that a Banksian almost-utopia is unlikely, that the positive scenarios are *very positive*, and that negative scenarios usually involve the destruction of human technological society. My criterion of personal identity will probably be the limiting factor in how good a future I can experience. If I am prepared to spend 5% of my time and effort pursuing a 1 in 100 chance of this maxed-out utopia, I should be prepared to put quite a lot of effort into making sure I "make it", given the probability I've just estimated.
If someone were to convince me that the probability of cryonics working was, in fact, less than 1 in 1000, I would (quite rationally) give up on it.
This relatively high probability I've estimated (two heads on a coin) has other consequences for cryonauts alive today, if they believe it. We should be prepared to expend a non-negligible amount of effort moving somewhere where the probability of quick suspension is as high as possible. Making cryonics more popular will increase probabilities 1, 2, and 4 (2 will increase because people will have a stake in the future after their deanimation). The cryonics community should therefore spend some time and effort convincing more people to be cryopreserved, though this is a hard problem, intimately related to the purpose of Less Wrong, to rationality, and to secular ethics and secular "religions" such as secular humanism, H+, and the Brights. Those who are pro-cryonics and have old relatives should be prepared to bear the social cost of attempting to persuade those relatives that cryonics is worth thinking about, at least to the extent that they care about those relatives. This is an actionable item that I intend to act on with all four of my remaining grandparents in the next few months.
I have seen (but cannot find the citation for, though see this) research predicting that, by 2020, 50% of people will suffer from dementia for the six months before they die (and that this will get worse over time as life expectancy increases). If we add to my list above a term for "the probability that you won't be information-theoretically dead before you're legally dead", and set it to 50%, the overall probability takes a huge hit; in addition, a controlled deanimation improves the probability of being deanimated without damage. Any cryonaut who really shares my beliefs about the rewards and probabilities of cryonics should be prepared to deanimate themselves before they would naturally die, perhaps by a significant amount, say 10 years. (Yes, I know this is illegal, but it is a useful thought experiment, and it indicates that we should be campaigning hard for this.) If you really believe the probabilities I've given for cryonics, you should deanimate instead of retiring. At a sufficiently high probability of cryonics working, you should rationally attempt to deanimate immediately or within a few years, no matter how old you are, in order to maximize the amount of your personal development which occurs in a really good environment. It seems unlikely that this situation will come to pass, but it is an interesting thought experiment; if you would not be prepared, under sufficiently compelling circumstances, to prematurely deanimate, you may be in cryonics for nonrational reasons.
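Concretely, the hit looks like this (a sketch; the 50% dementia figure is the one quoted above, and the base probability is the one estimated earlier):

```python
p_base = 0.50 * 0.80 * 0.75 * 0.80 * 0.95  # the 0.228 estimated above

# Extra term: probability of not being information-theoretically dead
# before legal death (the 50% dementia figure quoted above).
p_brain_intact_at_death = 0.50

print(f"without the term: {p_base:.3f}")                            # 0.228
print(f"with the term:    {p_base * p_brain_intact_at_death:.3f}")  # 0.114
```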
[1] The use of dice rather than numbers to represent probabilities in this article comes from my war-gaming days. I have a good emotional intuition as to how unlikely rolling a 6 is; it is more informative to me than 0.1667. I've won and lost battles based on 6+ saving throws. I recommend that readers play some game that involves dice to get a similarly good intuition.
Here’s the thing: let’s say that there’re some “objective probabilities” out there, and that your estimate is indeed “most likely too optimistic” compared to those objective probabilities, but that there’s some significant (e.g., 10%) chance that it’s too pessimistic compared to those same probabilities. If your estimate is “over-optimistic”, it’s over-optimistic by at most 3/2048. If your estimate is “over-pessimistic”, it could easily be over-pessimistic by more than ten times that much (i.e., by more than 30/2048; Robin Hanson estimates the odds as “>5%”, i.e. more than 100/2048). And if you’re trying to do decision theory on whether or not to sign up for cryonics, you’re basically trying to take an average over the different values these “objective probabilities” could have, weighted by how likely they are to have those values -- which means that the scenarios in which your estimate is “too pessimistic” actually have a lot of impact, even if they’re only 10% likely.
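A sketch of that averaging, using the numbers in the comment above (the 90/10 split is the illustrative one given there):

```python
# Decision theory averages over what the "objective probability" might be.
p_if_too_optimistic  = 3 / 2048    # the low estimate under discussion
p_if_too_pessimistic = 100 / 2048  # Robin Hanson's ">5%", as quoted

expected_p = 0.9 * p_if_too_optimistic + 0.1 * p_if_too_pessimistic
print(f"{expected_p:.4f}")  # ~0.0062, over four times the low estimate
```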
Or in other words: one’s analysis has to be unusually careful if it is to justify a resulting probability as low as 3/2048. Absent a terribly careful analysis, if one is trying to estimate some quantity that kinda sounds plausible or about which experts disagree (e.g., not “chances we’ll have a major earthquake during such-and-such a particular millisecond”), one should probably just remember the overconfidence results and be wary of assigning a probability that’s very near one or zero.
Hmm... I have an idea regarding this, and also regarding Roko's suggestion to disregard low probabilities.
There are very many things that you'll only be able to estimate as "probability below 1/1000", some of them mutually exclusive. Normalization requires keeping the sum of their probabilities below unity, so the estimate must actually be tuned down. As a result, you ...
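A minimal sketch of that normalization constraint (the scenario count here is an arbitrary illustration of mine):

```python
# If N mutually exclusive scenarios each get the cap estimate of 1/1000,
# the "probabilities" sum to more than 1 once N exceeds 1000, so the
# individual estimates must be scaled down to form a valid distribution.
n_scenarios = 5000
naive_estimate = 1 / 1000

total = n_scenarios * naive_estimate  # 5.0 -- not a valid distribution
tuned_down = naive_estimate / total   # 1/5000 after renormalization

print(total, tuned_down)
```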