Written with much help from Nick Tarleton and Kaj Sotala, in response to various themes here and throughout Less Wrong; but a casual mention here1 inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)

It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position2 to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)
Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):
- Structural uncertainty and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are.3 This does not make cryonics a bad idea — it may be the correct decision under uncertainty — but it should lessen anyone's confidence that the balance of reasons ultimately weighs overwhelmingly in favor of cryonics.
- If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying: either everyone (including cryonauts) dies anyway when an unFriendly artificial intelligence goes FOOM, or a Friendly artificial intelligence is created and death is solved (or reflectively embraced as good, or some other unexpected outcome). This is more salient when considering the likelihood of large advances in biomedical and life extension technologies in the near future.
- A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI4 than by spending that money on pursuing a small chance of eternal life. Cryonics working depends heavily on, e.g., an unFriendly artificial intelligence not going FOOM, or molecular nanotechnology not killing everyone. Many people may believe that a slightly higher chance of a positive singularity is more important than a significantly higher chance of personal immortality. Likewise, having their friends and family not be killed by an existential disaster such as rogue MNT, bioweaponry, et cetera, could very well be more important to them than a chance at eternal life. Acknowledging these varied preferences, and varied beliefs about whether cryonics can be funded out of luxury spending alone, leads to equally varied subjectively rational courses of action for a person to take. (A toy expected-value comparison follows this list.)
- Some people may have loose boundaries around what they consider personal identity, or expect personal identity to be less important in the future. Such a person might not place very high value on ensuring that they, in a strong sense, exist in the far future, if they expect that people sufficiently like them to satisfy their relevant values will exist in any case. (Kaj Sotala reports being indifferent to cryonics due to personal identity considerations.) Furthermore, there exist people who have preferences against (or no preferences either for or against) living extremely far into the future for reasons other than considerations about personal identity. Such cases are rare, but I suspect less rare among the Less Wrong population than most, and their existence should be recognized. (Maybe people who think they don't care are usually wrong, and, if so, irrational in an extrapolated volition sense, but not in the sense of simple epistemic or instrumental-given-fixed-values rationality that discussions of cryonics usually center on.)
- That said, the reverse is also true: not signing up for cryonics is likewise not obviously correct. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty.
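To make the donation bullet above concrete, here is a minimal expected-value sketch. Every constant in it (the probabilities, the QALY figures, the size of the donation's effect) is an illustrative assumption invented for this example, not an estimate from this post or from any of the organizations named:

```python
# Toy comparison of two uses of the same money: a cryonics membership
# versus a donation to existential-risk reduction. All numbers below are
# made-up placeholders; the point is the structure of the comparison,
# not the outputs.

# Expected personal benefit of cryonics, conditional on no existential
# catastrophe (cryonics working is roughly conditional on that).
qaly_if_revived = 10_000   # assumed value of revival, in quality-adjusted life years
p_cryonics_works = 0.05    # assumed chance preservation and revival both succeed
p_no_xrisk = 0.5           # assumed chance civilization survives long enough
ev_cryonics = qaly_if_revived * p_cryonics_works * p_no_xrisk

# Expected benefit of donating the same money, modeled as a tiny
# reduction in extinction probability spread over everyone alive.
delta_p_extinction = 1e-9  # assumed per-donation reduction in extinction risk
lives_affected = 7e9       # assumed world population
qaly_per_life = 40         # assumed remaining QALYs per person

ev_donation = delta_p_extinction * lives_affected * qaly_per_life

print(f"cryonics:  {ev_cryonics:,.0f} expected QALYs")
print(f"donation:  {ev_donation:,.0f} expected QALYs")
```

With these made-up inputs the two options come out within a factor of two of each other, and nudging any single parameter flips the ordering. That sensitivity is the bullet's point: under this much structural uncertainty, neither signing up nor donating instead is obviously correct.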
Calling non-cryonauts irrational is neither productive nor conducive to fostering a good epistemic atmosphere:
- Whether it's correct or not, it seems unreasonable to claim that the decision to forgo cryonics in favor of donating (a greater expected amount) to organizations like those mentioned above represents as obvious an error as, for instance, religion. The possibility of a third option here shouldn't be ignored.
- People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious (as opposed to belief in anthropogenic global warming, where a sheer bandwagon effect is enough of a memetic pull). Being forced on the defensive makes one less likely to examine their own irrationalities, if irrationalities they are. (See also: A Suite of Pragmatic Considerations in Favor of Niceness.)
- As mentioned in bullet four above, some people really wouldn't care if they died, even if it turned out MWI, spatially infinite universes, et cetera were wrong hypotheses and that they only had this one shot at existence. It's not helping things to call them irrational when they may already have low self-esteem and problems with being accepted among those who have very different values pertaining to the importance of continued subjective experience. Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone.
Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established I fear that the quality of discourse at Less Wrong will suffer for the lack of it.
One easily falls into the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.
1 I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.
2 To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: Do you believe that signing up for cryonics is the best course of action given your level of uncertainty? or, Do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) One is a statement about your decision algorithm, the other is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.
3 By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.
4 Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.
I haven't yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.
First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don't revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)
Second, it is my impression that many cryonics advocates -- and in particular, many of those whose comments I've read on Overcoming Bias and here -- make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, in which you'll otherwise probably be dead and gone, some entity will exist with which it is rational to identify to the point where you consider it, for the purposes of your present decisions, to be the same as your "normal" self that you expect to be alive tomorrow.
This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much. What I find to be the logical conclusion of these arguments is that the notion of personal identity is fundamentally a mere subjective feeling, where no objective or rational procedure can be used to determine the right answer. Therefore, if we accept these arguments, there is no reason at all to berate as irrational people who don't feel any identification with these entities that cryonics would (hopefully) make it possible to summon into existence in the future.
In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time. And believe me, I have studied all the arguments for the contrary position I could find here and elsewhere very carefully, and giving my utmost to eliminate any prejudice. (I am more ambivalent about my hypothetical thawed and nanotechnologically revived corpse.) Therefore, in at least some cases, I'm sure that people reject cryonics not because they're too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don't see how this different subjective preference can be considered "irrational" in any way.
That said, I am fully aware that these and other anti-cryonics arguments are often used as mere rationalizations for people's strong instinctive reactions triggered by the weirdness/yuckiness heuristics. Still, they seem valid to me.
Well, they say that cryonics works whether you believe in it or not. Why not give it a try?