Written with much help from discussions here and throughout Less Wrong; but a casual mention [1] inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)
It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position [2] to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)
Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):
- Structural uncertainty and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are. [3] This does not make cryonics a bad idea — it may be the correct decision under uncertainty — but it should lessen anyone's confidence that the balance of reasons ultimately weighs overwhelmingly in favor of cryonics.
- If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying: either everyone (including cryonauts) dies anyway when an unFriendly artificial intelligence goes FOOM, or a Friendly artificial intelligence is created and death is solved (or reflectively embraced as good, or some other unexpected outcome). This is more salient when considering the likelihood of large advances in biomedical and life extension technologies in the near future.
- A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI [4] than by spending that money on pursuing a small chance of eternal life. Cryonics working is pretty dependent on e.g. an unFriendly artificial intelligence not going FOOM, or molecular nanotechnology not killing everyone. Many people may believe that a slightly higher chance of a positive singularity is more important than a significantly higher chance of personal immortality. Likewise, having their friends and family not be killed by an existential disaster such as rogue MNT, bioweaponry, et cetera, could very well be more important to them than a chance at eternal life. Acknowledging these varied preferences, and varied beliefs about whether one could fund cryonics out of luxury spending alone, leads to equally varied subjectively rational courses of action for a person to take.
- Some people may have loose boundaries around what they consider personal identity, or expect personal identity to be less important in the future. Such a person might not place very high value on ensuring that they, in a strong sense, exist in the far future, if they expect that people sufficiently like them to satisfy their relevant values will exist in any case. (Kaj Sotala reports being indifferent to cryonics due to personal identity considerations.) Furthermore, there exist people who have preferences against (or no preferences either for or against) living extremely far into the future for reasons other than considerations about personal identity. Such cases are rare, but I suspect less rare among the Less Wrong population than most, and their existence should be recognized. (Maybe people who think they don't care are usually wrong, and, if so, irrational in some sense, but not in the sense of simple epistemic or instrumental-given-fixed-values rationality that discussions of cryonics usually center on.)
- That said, the reverse is true: not getting signed up for cryonics is also not obviously correct. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty.
Calling non-cryonauts irrational is neither productive nor conducive to fostering a good epistemic atmosphere:
- Whether it's correct or not, it seems unreasonable to claim that the decision to forgo cryonics in favor of donating (a greater expected amount) to SENS, FHI, SIAI, etc. represents as obvious an error as, for instance, religion. The possibility of a third option here shouldn't be ignored.
- People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious (as opposed to belief in anthropogenic global warming, where a sheer bandwagon effect is enough of a memetic pull). Being forced on the defensive makes one less likely to examine their own irrationalities, if irrationalities they are. (See also: A Suite of Pragmatic Considerations in Favor of Niceness.)
- As mentioned in bullet four above, some people really wouldn't care if they died, even if it turned out MWI, spatially infinite universes, et cetera were wrong hypotheses and that they only had this one shot at existence. It's not helping things to call them irrational when they may already have low self-esteem and problems with being accepted among those who have very different values pertaining to the importance of continued subjective experience. Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone.
Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established I fear that the quality of discourse at Less Wrong will suffer for the lack of it.
One easily falls into the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.
[1] I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.
[2] To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: Do you believe that signing up for cryonics is the best course of action given your level of uncertainty? or, Do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) One is a statement about your decision algorithm, the other is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.
[3] By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.
[4] Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.
Roko:
This is a fallacious step. The fact that risk-free return on investment over a certain period is X% above inflation does not mean that you can pick any arbitrary thing and expect that if you can afford a quantity Y of it today, you'll be able to afford (1+X/100)Y of it after that period. It merely means that if you're wealthy enough today to afford a particular well-defined basket of goods -- whose contents are selected by convention as a necessary part of defining inflation, and may correspond to your personal needs and wants completely, partly, or not at all -- then investing your present wealth will get you the power to purchase a similar basket (1+X/100) times larger after that period. [*] When it comes to any particular good, the ratio can be in any direction -- even assuming a perfect laissez-faire market, let alone all sorts of market-distorting things that may happen.
Therefore, if you have peculiar needs and wants that don't correspond very well to the standard basket used to define the price index, then the inflation and growth numbers calculated using this basket are meaningless for all your practical purposes. Trouble is, in an economy populated primarily by ems, biological humans will be such outliers. It's enough that one factor critical for human survival gets bid up exorbitantly and it's adios amigos. I can easily think of more than one candidate.
From the perspective of an em barely scraping out a virtual or robotic existence, a surviving human wealthy enough to keep their biological body alive would look the way a being would look to us if it owned a whole rich continent's worth of land, capital, and resources, yet had a mind so limited and slow that it takes a year to do one second's worth of human thinking, while we toil 24/7, barely able to make ends meet. I don't know with how much confidence we should expect property rights to be stable in such a situation.
[*] - To be precise, the contents of the basket will also change during that period if it's of any significant length. This however gets us into the nebulous realm of Fisher's chain indexes and similar numerological tricks on which the dubious edifice of macroeconomic statistics rests to a large degree.
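The comment's core claim can be put in miniature arithmetic. The sketch below is a toy Laspeyres-style fixed-weight index; the goods, weights, and price changes are entirely invented for illustration (nothing here comes from the thread). The aggregate index rises only 13% while the one good a biological human cannot substitute away from rises 20x.

```python
# Toy numbers, invented purely to illustrate the point above: an aggregate
# price index can grow modestly while one critical good is bid up
# exorbitantly. Goods, weights, and price changes are assumptions.

# Fixed-weight (Laspeyres-style) basket shares in the base period
basket = {
    "computation":       0.70,  # dominant, rapidly cheapening good
    "land":              0.29,
    "biological_upkeep": 0.01,  # the good a human cannot substitute away from
}

# Price relatives over the period (new price / old price)
price_relatives = {
    "computation":       0.5,   # halves
    "land":              2.0,   # doubles
    "biological_upkeep": 20.0,  # bid up 20x
}

# Aggregate index: weighted sum of price relatives
index = sum(basket[g] * price_relatives[g] for g in basket)
print(f"aggregate index: {index:.2f}")  # prints "aggregate index: 1.13"

# An investment that exactly keeps pace with this 13% "inflation" now buys
# only index / 20 = ~5.6% of the biological upkeep it used to buy.
print(f"upkeep affordable: {index / price_relatives['biological_upkeep']:.1%}")
```

So "X% return above inflation" guarantees nothing about any particular good, exactly as the comment argues: the ratio for an individual good can run in any direction relative to the index.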
If the growth above inflation isn't defined in terms of today's standard basket of goods, then is it really growth? I mean, if I defined a changing basket of goods that was the standard one up until 1991, and thereafter was based exclusively on the cost of sending an email, we would see massive negative inflation and spuriously high growth rates as emails became cheaper to send due to falling computer and network costs.
I.e. Robin's prediction of fast growth rates is presumably in terms of today's basket of goods, right?
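The worry in this reply can be made concrete with toy numbers (all invented; the "email basket" is just the hypothetical deflator described above): the same nominal growth looks like modest or astronomical real growth depending on which basket is chosen as the deflator.

```python
# Toy illustration: measured real growth depends entirely on which basket
# defines "inflation". All numbers are invented for illustration.

nominal_income_growth = 2.0   # nominal incomes double over the period

# Deflator 1: today's fixed basket of goods, whose price rises 50%
fixed_basket_deflator = 1.5

# Deflator 2: a basket redefined around a collapsing price (cost per email)
email_basket_deflator = 0.01  # emails become 100x cheaper

real_growth_fixed = nominal_income_growth / fixed_basket_deflator
real_growth_email = nominal_income_growth / email_basket_deflator

print(f"real growth, fixed basket: {real_growth_fixed:.2f}x")  # prints "1.33x"
print(f"real growth, email basket: {real_growth_email:.0f}x")  # prints "200x"
```

Hence the question: a "fast growth" forecast is only meaningful once the deflating basket is pinned down, and a basket chosen around goods that ems make cheap can manufacture almost any growth rate.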