Written with much help from others here and throughout Less Wrong, in response to various themes; but a casual mention1 inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)
It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position2 to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics, not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)
Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):
- Structural uncertainty and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are.3 This does not make cryonics a bad idea (it may be the correct decision under uncertainty), but it should lessen anyone's confidence that the balance of reasons ultimately weighs overwhelmingly in favor of cryonics.
- If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying: either everyone (including cryonauts) dies anyway when an unFriendly artificial intelligence goes FOOM, or a Friendly artificial intelligence is created and death is solved (or reflectively embraced as good, or some other unexpected outcome). This consideration becomes more salient if one also expects large advances in biomedical and life-extension technologies in the near future.
- A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI4 than by spending that money on pursuing a small chance of eternal life (a rough illustrative comparison appears after this list). Cryonics working depends on, for example, an unFriendly artificial intelligence not going FOOM and molecular nanotechnology not killing everyone. Many people may believe that a slightly higher chance of a positive singularity is more important than a significantly higher chance of personal immortality. Likewise, having their friends and family not be killed by an existential disaster such as rogue MNT, bioweaponry, et cetera, could very well be more important to them than a chance at eternal life. Acknowledging these varied preferences, and varied beliefs about whether cryonics could be funded out of luxury spending alone, leads to equally varied subjectively rational courses of action for a person to take.
- Some people may have loose boundaries around what they consider personal identity, or expect personal identity to be less important in the future. Such a person might not place very high value on ensuring that they, in a strong sense, exist in the far future, if they expect that people sufficiently like them to satisfy their relevant values will exist in any case. (Kaj Sotala reports being indifferent to cryonics due to personal identity considerations.) Furthermore, there exist people who have preferences against (or no preferences either for or against) living extremely far into the future for reasons other than considerations about personal identity. Such cases are rare, but I suspect less rare among the Less Wrong population than among most others, and their existence should be recognized. (Maybe people who think they don't care are usually wrong, and, if so, irrational in an extended sense, but not in the sense of simple epistemic or instrumental-given-fixed-values rationality that discussions of cryonics usually center on.)
- That said, the reverse also holds: not getting signed up for cryonics is likewise not obviously correct. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty.
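To make the donation-versus-cryonics comparison in the third bullet concrete, here is a minimal back-of-the-envelope expected-value sketch. Every number in it is an illustrative placeholder, not an estimate anyone has defended (normalize the utility of one long life, $U_{\text{self}}$, to 1):

$$\underbrace{P(\text{revival})}_{\text{say }0.05}\cdot U_{\text{self}} \quad\text{vs.}\quad \underbrace{\Delta P(\text{existential win})}_{\text{say }10^{-9}}\cdot\underbrace{N_{\text{lives}}}_{\text{say }10^{10}}\cdot U_{\text{self}}$$

With these particular placeholders the donation side comes out at 10 versus 0.05 for cryonics, but shifting any of the guesses by a couple of orders of magnitude flips the conclusion, which is exactly why reasonable people can land on different subjectively rational courses of action.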
Calling non-cryonauts irrational is neither productive nor conducive to fostering a good epistemic atmosphere:
- Whether it's correct or not, it seems unreasonable to claim that the decision to forgo cryonics in favor of donating (a greater expected amount) to organizations like those mentioned above, et cetera, represents as obvious an error as, for instance, religion. The possibility of a third option here shouldn't be ignored.
- People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious (as opposed to belief in anthropogenic global warming, where a sheer bandwagon effect is enough of a memetic pull). Being forced on the defensive makes one less likely to honestly examine their own irrationalities, if irrationalities they are. (See also: A Suite of Pragmatic Considerations in Favor of Niceness.)
- As mentioned in the fourth bullet above, some people really wouldn't care if they died, even if it turned out that MWI, spatially infinite universes, et cetera were wrong hypotheses and they only had this one shot at existence. It's not helping things to call them irrational when they may already have low self-esteem and problems with being accepted among those who have very different values pertaining to the importance of continued subjective experience. Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone.
Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established, I fear that the quality of discourse at Less Wrong will suffer for the lack of it.
One easily falls into the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and to admit that the others may have a non-insane reason for their disagreement.
1 I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.
2 To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: do you believe that signing up for cryonics is the best course of action given your level of uncertainty? Or do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) The first is a statement about your decision algorithm; the second is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.
3 By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.
4 Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.
This post seems to focus too much on Singularity-related issues as alternative arguments. One might thus think that if one assigns the Singularity a low probability, one should definitely sign up for cryonics. I'm therefore going to suggest a few arguments against cryonics that may be relevant:
First, there are other serious existential threats to humans, many of which don't arise from our technology. Large asteroids are an obvious example; gamma-ray bursts and nearby supernovae are others. (Betelgeuse is a likely candidate for a nearby supernova making our lives unpleasant. If current estimates are correct, there will be substantial radiation from Betelgeuse in that situation, but not so much as to wipe out humanity. But we could be wrong.)
Second, one may see a high negative utility if one signs up for cryonics and one's friends and relatives do not. The abnormal after-death outcome could substantially interfere with their grieving process. Similarly, there's a direct opportunity cost to paying and preparing for cryonics.
The usual response to this argument about lost utility is to claim that the expected utility of cryonics is infinite. If that were actually the case, it would be a valid response.
This leads neatly to my third argument: the claim that my expected utility from cryonics is infinite fails. Even in the future, there will be some probability that I die at any given point. If that probability is never reduced below a certain fixed amount, then my expected lifespan is still finite even if I assume cryonics succeeds. (Fun little exercise: suppose that my probability of dying is x on any given day. What is my expected number of days of life? Note that no matter how small x is, as long as x > 0, you still get a finite number.) Thus, even if one agrees that an infinite lifespan can give infinite utility, it doesn't follow that cryonics gives an expected value that is infinite. (Edit: What happens in an MWI situation is more complicated, but similar arguments can be made: the fraction of universes where you exist declines at a geometric rate, so the total sum of utility over all universes is still finite.)
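Working the exercise out explicitly (a sketch, with x the constant daily death probability from above): the number of days lived follows a geometric distribution, so

$$E[\text{days}] = \sum_{n=1}^{\infty} n\,x(1-x)^{n-1} = \frac{1}{x},$$

which is finite for any x > 0; even at x = 10^{-9} per day the expectation is only about 2.7 million years. The MWI version is the same series in disguise: if a fraction (1-x)^n of branches still contain you on day n, a per-day utility u contributes a total of

$$\sum_{n=1}^{\infty} u\,(1-x)^{n} = u\,\frac{1-x}{x},$$

again finite.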
Fourth, it isn't even clear that one can meaningfully talk about infinite utility. For example, consider the situation where you are given two choices (probably given to you by Omega, because that's the standard genie equivalent on LW). In one of them, you are guaranteed immortality with no costs. In the other, you are guaranteed immortality but are first tortured for a thousand years. The expected utility of both is infinite, but I'm pretty sure that no one is indifferent between the two choices. This is closely connected to the fact that economists, when using utility, make an effort to show that their claims remain true under monotonic transformations of total utility. This cannot hold when infinite utility is being bandied about (it isn't even clear that such transformations are meaningful in such contexts). So much of what we take for granted about utility breaks down.
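A minimal formalization of the two Omega offers (assuming, purely for illustration, a constant utility u > 0 per year of life and a finite torture cost C > 0):

$$U_{\text{no torture}} = \sum_{t=1}^{\infty} u = \infty, \qquad U_{\text{torture first}} = -C + \sum_{t=1}^{\infty} u = \infty.$$

Expected utility assigns both options the same value, yet presumably everyone strictly prefers the first; so 'infinite utility' fails to encode an obvious preference, and the comparisons (greater than, less than) that expected-utility arguments for cryonics rely on stop being meaningful.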
And if the expected utility of cryonics is simply a very large yet finite positive quantity?