
Abnormal Cryonics

56 Post author: Will_Newsome 26 May 2010 07:43AM

Written with much help from Nick Tarleton and Kaj Sotala, in response to various themes here, here, and throughout Less Wrong; but a casual mention here [1] inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)

It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position [2] to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)

Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):

  • Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are. [3] This does not make cryonics a bad idea — it may be the correct decision under uncertainty — but it should lessen anyone's confidence that the balance of reasons ultimately weighs overwhelmingly in favor of cryonics.
  • If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying: either everyone (including cryonauts) dies anyway when an unFriendly artificial intelligence goes FOOM, or a Friendly artificial intelligence is created and death is solved (or reflectively embraced as good, or some other unexpected outcome). This is more salient when considering the likelihood of large advances in biomedical and life extension technologies in the near future.
  • A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI [4] than by spending that money on pursuing a small chance of eternal life. Cryonics working is pretty dependent on e.g. an unFriendly artificial intelligence not going FOOM, or molecular nanotechnology not killing everyone. Many people may believe that a slightly higher chance of a positive singularity is more important than a significantly higher chance of personal immortality. Likewise, having their friends and family not be killed by an existential disaster such as rogue MNT, bioweaponry, et cetera, could very well be more important to them than a chance at eternal life. Acknowledging these varied preferences, and varied beliefs about one's ability to sacrifice only luxury spending to cryonics, leads to equally varied subjectively rational courses of action for a person to take.
  • Some people may have loose boundaries around what they consider personal identity, or expect personal identity to be less important in the future. Such a person might not place very high value on ensuring that they, in a strong sense, exist in the far future, if they expect that people sufficiently like them to satisfy their relevant values will exist in any case. (Kaj Sotala reports being  indifferent to cryonics due to personal identity considerations here.) Furthermore, there exist people who have preferences against (or no preferences either for or against) living extremely far into the future for reasons other than considerations about personal identity. Such cases are rare, but I suspect less rare among the Less Wrong population than most, and their existence should be recognized. (Maybe people who think they don't care are usually wrong, and, if so, irrational in an important sense, but not in the sense of simple epistemic or instrumental-given-fixed-values rationality that discussions of cryonics usually center on.)
  • That said, the reverse also holds: not signing up for cryonics is not obviously correct either. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty.

Calling non-cryonauts irrational is neither productive nor conducive to fostering a good epistemic atmosphere:

  • Whether it's correct or not, it seems unreasonable to claim that the decision to forgo cryonics in favor of donating (a greater expected amount) to FHI, SIAI [4], SENS, etc. represents as obvious an error as, for instance, religion. The possibility of a third option here shouldn't be ignored.
  • People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious (as opposed to belief in anthropogenic global warming, where a sheer bandwagon effect is enough of a memetic pull). Being forced onto the defensive makes people less likely to accept, and therefore overcome, their own irrationalities, if irrationalities they are. (See also: A Suite of Pragmatic Considerations in Favor of Niceness)
  • As mentioned in bullet four above, some people really wouldn't care if they died, even if it turned out MWI, spatially infinite universes, et cetera were wrong hypotheses and that they only had this one shot at existence. It's not helping things to call them irrational when they may already have low self-esteem and problems with being accepted among those who have very different values pertaining to the importance of continued subjective experience. Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone.

Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established I fear that the quality of discourse at Less Wrong will suffer for the lack of it.

One easily falls into the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and to admit that the others may have a non-insane reason for their disagreement.

 

[1] I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.

[2] To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: Do you believe that signing up for cryonics is the best course of action given your level of uncertainty? or, Do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) One is a statement about your decision algorithm, the other is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.

[3] By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.

[4] Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.

Comments (365)

Comment author: Liron 26 May 2010 10:14:34AM 12 points [-]

I think cryonics is used as a rationality test because most people reason about it from within the mental category "weird far-future stuff". The arguments in the post seem like appropriate justifications for choices within that category. The rationality test is whether you can compensate for your anti-weirdness bias and realize that cryonics is actually a more logical fit for the mental category "health care".

Comment author: byrnema 27 May 2010 09:25:35PM *  10 points [-]

This comment is a more fleshed-out response to Vladimir_M’s comment.

This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much.

Whether cryonics is the right choice depends on your values. There are suggestions that people who don’t think they value revival in the distant future are misled about their real values. I think it might be the complete opposite: cryonics advocacy may be completely missing what it is that people value about their lives.

The reason for this mistake could be that cryonics is such a new idea that we are culturally a step or two behind in identifying what it is that we value about existence. So people think about cryonics a while and just conclude they don’t want to do it. (For example, the stories herein.) Why? We call this a ‘weirdness’ or ‘creep’ factor, but we haven’t identified the reason.

When someone values their life, what is it that they value? When we worry about dying, we worry about a variety of obligations unmet (values not optimized), and people we love abandoned. It seems to me that people are attached to a network of interactions (and value-responsibilities) in the immediate present. There is also an element of wanting more experience and more pleasure, and this may be what cryonics advocates are over-emphasizing. But after some reflection, how do you think most people would answer this question: when it comes to experiencing 5 minutes of pleasure, does it matter if it is you or someone else if neither of you remember it?

A lot of the desperation we feel when faced with death is probably a sense of responsibility for our immediate values. We are a bundle of volition that is directed towards shaping an immediate network of experience. I don't really care about anything 200 years from now, and enjoy the lack of responsibility I feel for the concerns I would have if I were revived then. As soon as I was revived, however, I know I would become a bundle of volition directed towards shaping that immediate network of experience.

Considering what we do value about life -- immediate connections, attachments, and interactions -- it makes much more sense to invest in figuring out technology to increase lifespan and prevent accidental death. Once the technology of cryonics is established, I think that there could be a healthy market for people undergoing cryonics in groups. (Not just signing up in groups, but choosing to be vitrified simultaneously in order to preserve a network of special importance to them.)

Comment deleted 29 May 2010 04:43:30PM [-]
Comment author: byrnema 30 May 2010 12:45:35AM *  2 points [-]

The 'we' population I was referring to was deliberately vague. I don't know how many people have values as described, or what fraction of people who have thought about cryonics and don't choose cryonics this would account for. My main point, all along, is that whether cryonics is the "correct" choice depends on your values.

Anti-cryonics "values" can sometimes be easily criticized as rationalizations or baseless religious objections. ('Death is natural', for example.) However, this doesn't mean that a person couldn't have true anti-cryonics values (even very similar-sounding ones).

Value-wise, I don't even know whether cryonics is the correct choice for much more than half or much less than half of all persons, but given all the variation in people, I'm pretty sure it's going to be the right choice for at least a handful and the wrong choice for at least a handful.

Comment deleted 30 May 2010 01:48:34PM *  [-]
Comment author: Vladimir_M 31 May 2010 01:41:42AM *  2 points [-]

Roko:

Perhaps I should make an analogy: would it be rational for a medieval peasant to refuse cryo where revival was as a billionaire in contemporary society, with an appropriate level of professional support and rehab from the cryo company?

This is another issue where, in my view, pro-cryonics people often make unwarranted assumptions. They imagine a future with a level of technology sufficient to revive frozen people, and assume that this will probably mean a great increase in per-capita wealth and comfort, like today's developed world compared to primitive societies, only even more splendid. Yet I see no grounds at all for such a conclusion.

What I find much more plausible are the Malthusian scenarios of the sort predicted by Robin Hanson. If technology becomes advanced enough to revive frozen brains in some way, it probably means that it will be also advanced enough to create and copy artificial intelligent minds and dexterous robots for a very cheap price. [Edit to avoid misunderstanding: the remainder of the comment is inspired by Hanson's vision, but based on my speculation, not a reflection of his views.]

This seems to imply a Malthusian world where selling labor commands only the most meager subsistence necessary to keep the cheapest artificial mind running, and biological humans are out-competed out of existence altogether. I'm not at all sure I'd like to wake up in such a world, even if rich -- and I also see some highly questionable assumptions in the plans of people who expect that they can simply leave a posthumous investment, let the interest accumulate while they're frozen, and be revived rich. Even if your investments remain safe and grow at an immense rate, which is itself questionable, the price of lifestyle that would be considered tolerable by today's human standards may well grow even more rapidly as the Malthusian scenario unfolds.
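
A minimal numeric sketch of this concern, with made-up growth rates rather than predictions: even if invested wealth compounds steadily while a person is frozen, its purchasing power measured in "tolerable lifestyles" shrinks whenever the price of such a lifestyle grows faster.

```python
# Toy illustration with hypothetical growth rates (not predictions): wealth that
# compounds while you are frozen can still lose purchasing power if the price of
# a lifestyle "tolerable by today's human standards" grows faster.

def lifestyles_affordable(initial_lifestyles: float, asset_growth: float,
                          lifestyle_price_growth: float, years: int) -> float:
    """Wealth after `years`, measured in units of one tolerable lifestyle."""
    wealth = initial_lifestyles * (1 + asset_growth) ** years
    price = (1 + lifestyle_price_growth) ** years
    return wealth / price

# Start able to afford 100 tolerable lifestyles; assets grow 5%/year.
print(lifestyles_affordable(100, 0.05, 0.02, years=100))  # prices grow slower: rises to ~1,800
print(lifestyles_affordable(100, 0.05, 0.08, years=100))  # prices grow faster: collapses to ~6
```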

Comment deleted 31 May 2010 01:35:35PM [-]
Comment author: jimrandomh 31 May 2010 01:42:42PM 3 points [-]

And then there's a chance that you get revived into a world where you are in some terrible situation but not allowed to kill yourself. In this case, you have done worse than just dying.

That's a risk for regular death, too, albeit a very unlikely one. This possibility seems like Pascal's wager with a minus sign.

Comment deleted 31 May 2010 01:24:25PM [-]
Comment author: avalot 26 May 2010 03:37:30PM 29 points [-]

Getting back down to earth, there has been renewed interest in medical circles in the potential of induced hibernation, for short-term suspended animation. The nice trustworthy doctors in lab coats, the ones who get interviews on TV, are all reassuringly behind this, so this will be smoothly brought into the mainstream, and Joe the Plumber can't wait to get "frozed-up" at the hospital so he can tell all his buddies about it.

Once induced hibernation becomes mainstream, cryonics can simply (and misleadingly, but successfully) be explained as "hibernation for a long time."

Hibernation will likely become a commonly used "last resort" for many, many critical cases (instead of letting them die, you freeze 'em until you've gone over their chart another time, talked to some colleagues, called around to see if anyone has an extra kidney, or even just slept on it, at least). When your loved one is in the fridge, and you're being told that there's nothing left to do, that we're going to have to thaw them and watch them die, your next question is going to be "Can we leave them in the fridge a bit longer?"

Hibernation will sell people on the idea that fridges save lives. It doesn't have to be much more rational than that.

If you're young, you might be better off pushing hard to help that tech go mainstream faster. That will lead to mainstream cryo faster than promoting cryo, and once cryo is mainstream, you'll be able to sign up for cheaper, probably better cryo, and more importantly, one that is integrated into the medical system, where they might transition you from hibernation to cryo without needing to make sure you're clinically dead first.

I will gladly concede that, for myself, there is still an irrational set of beliefs keeping me from buying into cryo. The argument above may just be a justification I found to avoid biting the bullet. But maybe I've stumbled onto a good point?

Comment author: cousin_it 26 May 2010 03:44:14PM *  6 points [-]

I don't think you stumbled on any good point against cryonics, but the scenario you described sounds very reassuring. Do you have any links on current hibernation research?

Comment author: avalot 26 May 2010 04:05:09PM *  17 points [-]

Maybe it's a point against investing directly into cryonics as it exists today, and working more through the indirect approach that is most likely to lead to good cryonics sooner. I'm much much more interested in being preserved before I'm brain-dead.

I'm looking for specifics on human hibernation. Lots of sci-fi out there, but more and more hard science as well, especially in recent years. There's the genetic approach, and the hydrogen sulfide approach.

March 2010: Mark Roth at TED

...by the way, the comments threads on the TED website could use a few more rationalists... Lots of smart people there thinking with the wrong body parts.

May 2009: NIH awards a $2,227,500 grant

2006: Doctors chill, operate on, and revive a pig

Comment author: magfrump 26 May 2010 11:01:08PM 2 points [-]

Voted up for extensive linkage

Comment author: PhilGoetz 27 May 2010 04:50:06PM *  6 points [-]

I told Kenneth Storey, who studies various animals that can be frozen and thawed, about a new $60M government initiative (mentioned in Wired) to find ways of storing cells that don't destroy their RNA. He mentioned that he's now studying the Gray Mouse Lemur, which can go into a low-metabolism state at room temperature.

If the goal is to keep you alive for about 10 years while someone develops a cure for what you have, then this room-temperature low-metabolism hibernation may be easier than cryonics.

(Natural cryonics, BTW, is very different from liquid-nitrogen cryonics. There are animals that can be frozen and thawed; but most die if frozen to below -4C. IMHO natural cryonics will be much easier than liquid-nitrogen cryonics.)

Comment author: Vladimir_M 26 May 2010 11:44:41PM *  15 points [-]

I haven't yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.

First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don't revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)

Second, it is my impression that many cryonics advocates -- and in particular, many of those whose comments I've read on Overcoming Bias and here -- make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, in which you'll otherwise probably be dead and gone, some entity will exist with which it is rational to identify to the point where you consider it, for the purposes of your present decisions, to be the same as your "normal" self that you expect to be alive tomorrow.

This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much. What I find to be the logical conclusion of these arguments is that the notion of personal identity is fundamentally a mere subjective feeling, where no objective or rational procedure can be used to determine the right answer. Therefore, if we accept these arguments, there is no reason at all to berate as irrational people who don't feel any identification with these entities that cryonics would (hopefully) make it possible to summon into existence in the future.

In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time. And believe me, I have studied all the arguments for the contrary position I could find here and elsewhere very carefully, and giving my utmost to eliminate any prejudice. (I am more ambivalent about my hypothetical thawed and nanotechnologically revived corpse.) Therefore, in at least some cases, I'm sure that people reject cryonics not because they're too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don't see how this different subjective preference can be considered "irrational" in any way.

That said, I am fully aware that these and other anti-cryonics arguments are often used as mere rationalizations for people's strong instinctive reactions triggered by the weirdness/yuckiness heuristics. Still, they seem valid to me.

Comment deleted 27 May 2010 10:40:56AM [-]
Comment author: JoshuaZ 26 May 2010 11:59:21PM *  4 points [-]

While I understand why someone would see the upload as possibly not themselves (and I have strong sympathy with that position), I do find it genuinely puzzling that someone wouldn't identify their revived body as themselves. While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don't buy into that argument. If they don't, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit as they have a heart transplant, or from someone who stops breathing in a very cold environment for a few minutes, or someone who goes to sleep under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.

Your point about weirdness signaling is a good one, and I'd expand on it slightly: For much of society, even thinking about weird things at a minimal level is a severe weirdness signal. So for many people, the expected utility of any random weird idea is likely to be so low that the cost of even putting in the effort to think about it will almost certainly outweigh any benefit. And when one considers how many weird ideas are out there, the chance that any given one of them will turn out to be useful is very low. To use just a few examples, just how many religions are there? How many conspiracy theories? How many miracle cures? Indeed, the vast majority of these, almost all LW readers will never investigate, for essentially this sort of utility heuristic.

Comment author: Vladimir_M 27 May 2010 12:37:02AM *  4 points [-]

JoshuaZ:

While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don't buy into that argument. If they don't, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit as they have a heart transplant, or from someone who stops breathing in a very cold environment for a few minutes, or someone who goes to sleep under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.

The problem here is one of continuum. We can easily imagine a continuum of procedures where on one end we have relatively small ones that intuitively appear to preserve the subject's identity (like sleep or anesthesia), and on the other end more radical ones that intuitively appear to end up destroying the original and creating a different person. By Buridan's principle, this situation implies that for anyone whose intuitions give different answers for the procedures at the opposite ends of the continuum, at least some procedures that lie in between will result in confused and indecisive intuitions. For me, cryonic revival seems to be such a point.

In any case, I honestly don't see any way to establish, as a matter of more than just subjective opinion, at which exact point in that continuum personal identity is no longer preserved.

Comment author: Will_Newsome 27 May 2010 12:46:42AM *  5 points [-]

This seems similar to something that I'll arbitrarily decide to call the 'argument from arbitrariness': every valid argument should be pretty and neat and follow the zero, one, infinity rule. One example of this was during the torture versus dust specks debate, when the torturers chided the dust speckers for having an arbitrary point at which stimuli that were not painful enough to be considered true pain became just painful enough to be considered as being in the same reference class as torture. I'd be really interested to find out how often something like the argument from arbitrariness turns out to have been made by those on the ultimately correct side of the argument, and use this information as a sort of outside view.

Comment author: RichardW 27 May 2010 01:57:53PM *  7 points [-]

I share the position that Kaj_Sotala outlined here: http://lesswrong.com/lw/1mc/normal_cryonics/1hah

In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can't. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.

Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.

Comment author: JenniferRM 27 May 2010 09:43:08PM 2 points [-]

I don't mean to insult you (I'm trying to respect your intelligence enough to speak directly rather than delicately) but this kind of talk is why cryonics seems like a pretty useful indicator of whether or not a person is rational. You're admitting to false beliefs that you hold "because you evolved that way" rather than using reason to reconcile two intuitions that you "sort of follow" but which contradict each other.

Then you completely discounted the suffering or happiness of a human being who is not able to be helped by anyone other than your present self in this matter. You certainly can't be forced to seek medical treatment against your will for this, so other people are pretty much barred by law from forcing you to not be dumb with respect to the fate of future-Richard. He is in no one's hands but your own.

Hume was right about a huge amount of stuff in the context of initial epistemic conditions of the sort that Descartes proposed when he extracted "I think therefore I am" as one basis for a stable starting point.

But starting from that idea and a handful of others like "trust of our own memories as a sound basis for induction" we have countless terabytes of sense data from which we can develop a model of the universe that includes physical objects with continuity over time - one class of which are human brains that appear to be capable of physically computing the same thoughts with which we started out in our "initial epistemic conditions". The circle closes here. There might be some new evidence somewhere if some kind of Cartesian pineal gland is discovered someday which functions as the joystick by which souls manipulate bodies, but barring some pretty spectacular evidence, materialist views of the soul are the best theory standing.

Your brain has physical continuity in exactly the same way that chairs have physical continuity, and your brain tomorrow (after sleeping tonight while engaging in physical self repair and re-indexing of data structures) will be very similar to your brain today in most but not all respects. To the degree that you make good use of your time now, your brain then is actually likely to implement someone more like your ideal self than even you yourself are right now... unless you have no actualized desire for self improvement. The only deep change between now and then is that you will have momentarily lost "continuity of awareness" in the middle because your brain will go into a repair and update mode that's not capable of sensing your environment or continuing to compute "continuity of awareness".

If your formal theory of reality started with Hume and broke down before reaching these conclusions then you are, from the perspective of pragmatic philosophy, still learning to crawl. This is basically the same thing as babies learning about object permanence except in a more abstract context.

Barring legitimate pragmatic issues like discount rates, your future self should be more important to you than your present self, unless you're mostly focused on your "contextual value" (the quality of your relationships and interactions with the broader world) and feel that your contextual value is high now and inevitably declining (or perhaps will be necessarily harmed by making plans for cryonics).

The real thing to which you should be paying attention (other than to make sure they don't stop working) is not the mechanisms by which mental content is stored, modified, and transmitted into the future. The thing you should be paying attention to is the quality of that content and how it functionally relates to the rest of the physical universe.

For the record, I don't have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.

Part of my work is analyzing the issue enough to have a strongly defensible, coherent, and pragmatic argument for cryonics, which I'll consider to have been fully resolved either (1) once I have an argument for not signing up that would be good enough for a person able to reason in a relatively universal manner, or (2) once I have a solid argument the other way which has led me and everyone I care about, including my family and close friends, to take the necessary steps and sign ourselves up.

When I set up a "Drake equation for cryonics" and filled in the probabilities under optimistic (inside view) calculations, I determined the value to be trillions of dollars. Under pessimistic assumptions (roughly, the outside view), I found that the expected value was epsilon and realized that my model was flawed because it didn't even have terms for negative-value outcomes like "loss of value in 'some other context' because of cryonics/simulationist interactions".
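
A sketch of what such a "Drake equation" style calculation might look like, using purely illustrative probabilities and payoffs rather than the actual figures from the model referenced above:

```python
# A sketch of a "Drake equation for cryonics" expected-value estimate.
# Every probability and dollar figure below is an illustrative placeholder,
# not a figure from the model described in the comment above.

def cryonics_expected_value(p_good_preservation, p_org_survives, p_civilization_survives,
                            p_revival_works, value_if_revived, p_bad_outcome,
                            value_of_bad_outcome, total_cost):
    """Expected value = chance of successful revival times its value,
    plus the (negative) contribution of bad outcomes, minus the cost."""
    p_revived = (p_good_preservation * p_org_survives
                 * p_civilization_survives * p_revival_works)
    return (p_revived * value_if_revived
            + p_bad_outcome * value_of_bad_outcome
            - total_cost)

# "Inside view" style optimistic placeholders: comes out strongly positive.
print(cryonics_expected_value(0.8, 0.7, 0.5, 0.5, 50_000_000, 0.01, -1_000_000, 80_000))
# "Outside view" style pessimistic placeholders: comes out near zero or negative.
print(cryonics_expected_value(0.3, 0.2, 0.2, 0.1, 50_000_000, 0.05, -1_000_000, 80_000))
```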

So, pretty much, I regard the value of information here as being enormously large, and once I refine my models some more I expect to have a good idea as to what I really should do as a selfish matter of securing adequate health care for me and my family and friends. Then I will do it.

Comment author: RichardW 28 May 2010 12:34:04PM 8 points [-]

Hi Jennifer. Perhaps I seem irrational because you haven't understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.

No doubt I explained myself poorly on the subject of the continuity of the self. I won't dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.

Comment author: Blueberry 28 May 2010 02:33:10PM 2 points [-]

I don't understand why psychological continuity isn't enough of a rational reason. Your future self will have all your memories, thoughts, viewpoints, and values, and you will experience a continuous flow of perception from yourself now to your future self. (If you sleep or undergo general anesthesia in the interim, the flow may be interrupted slightly, but I don't see why that matters.)

Comment author: RichardW 28 May 2010 05:30:49PM 3 points [-]

Hi Blueberry. How is that a rational reason for me to care what I will experience tomorrow? If I don't care what I will experience tomorrow, then I have no reason to care that my future self will have my memories or that he will have experienced a continuous flow of perception up to that time.

We have to have some motivation (a goal, desire, care, etc) before we can have a rational reason to do anything. Our most basic motivations cannot themselves be rationally justified. They just are what they are.

Of course, they can be rationally explained. My care for my future welfare can be explained as an evolved adaptive trait. But that only tells me why I do care for my future welfare, not why I rationally should care for my future welfare.

Comment author: JenniferRM 01 June 2010 05:17:10AM *  0 points [-]

Richard, you seem to have come to a quite logical conclusion about the difference between intrinsic values and instrumental values and what happens when an attempt is made to give a justification for intrinsic values at the level of values.

If a proposed intrinsic value is questioned and justified with another value statement, then the supposed "intrinsic value" is revealed to have really been instrumental. Alternatively, if no value is offered then the discussion will have necessarily moved out of the value domain into questions about the psychology or neurons or souls or evolutionary mechanisms or some other messy issue of "simple" fact. And you are quite right that these facts (by definition as "non value statements") will not be motivating.

We fundamentally like vanilla (if we do) "because we like vanilla" as a brute fact. De gustibus non est disputandum. Yay for the philosophy of values :-P

On the other hand... basically all humans, as a matter of fact, do share many preferences, not just for obvious things like foods that are sweet or salty or savory but also for really complicated high level things, like the respect of those with whom we regularly spend time, the ability to contribute to things larger than ourselves, listening to beautiful music, and enjoyment of situations that create "flow" where moderately challenging tasks with instantaneous feedback can be worked on without distraction, and so on.

As a matter of simple observation, you must have noticed that there exist some things which it gives you pleasure to experience. To say that "I don't care what I will experience tomorrow" can be interpreted as a prediction that "Tomorrow, despite being conscious, I will not experience anything which affects my emotions, preferences, feelings, or inclinations in either positive or negative directions". This statement is either bluntly false (my favored hypothesis), or else you are experiencing a shocking level of anhedonia for which you should seek professional help if you want to live very much longer (which of course you might not if you're really experiencing anhedonia), or else you are a non-human intelligence and I have to start from scratch trying to figure you out.

Taking it as given that you and I can both safely predict that you will continue to enjoy life tomorrow... then an inductive proof can be developed that "unless something important changes from one day to the next" you will continue to have a stake in the day after that, and the day after that, and so on. When people normally discuss cryonics and long term values it is the "something important changing" issue that they bring up.

For example, many people think that they only care about their children... until they start seeing their grandchildren as real human beings whose happiness they have a stake in, and in whose lives they might be productively involved.

Other people can't (yet) imagine not falling prey to senescence, and legitimately think that death might be preferable to a life filled with pain which imposes costs (and no real benefits) on their loved ones who would care for them. In this case the critical insight is that not just death but also physical decline can be thought of as a potentially treatable condition and so we can stipulate not just vastly extended life but vastly extended youth.

But you are not making any of these points so that they can even be objected to by myself or others... You're deploying the kind of arguments I would expect from an undergrad philosophy major engaged in motivated cognition because you have not yet "learned how to lose an argument gracefully and become smarter by doing so".

And it is for this reason that I stand by the conclusion that in some cases beliefs about cryonics say very much about the level of pragmatic philosophical sophistication (or "rationality") that a person has cultivated up to the point when they stake out one of the more "normal" anti-cryonics positions. In your case, you are failing in a way I find particularly tragic, because normal people raise much better objections than you are raising - issues that really address the meat of the matter. You, on the other hand, are raising little more than philosophical confusion in defense of your position :-(

Again, I intend these statements only in the hope that they help you and/or audiences who may be silently identifying with your position. Most people make bad arguments sometimes and that doesn't make them bad people - in fact, it helps them get stronger and learn more. You are a good and valuable person even if you have made comments here that reveal less depth of thinking than might be hypothetically possible.

That you are persisting in your position is a good sign, because you're clearly already pretty deep into the cultivation of rationality (your arguments clearly borrow a lot from previous study) to the point that you may harm yourself if you don't push through to the point where your rationality starts paying dividends. Continued discussion is good practice for this.

On the other hand, I have limited time and limited resources and I can't afford to spend any more on this line of conversation. I wish you good luck on your journey, perhaps one day in the very far future we will meet again for conversation, and memory of this interaction will provide a bit of amusement at how hopelessly naive we both were in our misspent "childhood" :-)

Comment author: byrnema 28 May 2010 04:41:22PM *  1 point [-]

Why is psychological continuity important? (I can see that it's very important for an identity to have psychological continuity, but I don't see the intrinsic value of an identity existing if it is promised to have psychological continuity.)

In our lives, we are trained to worry about our future self because eventually our plans for our future self will affect our immediate self. We also might care about our future self altruistically: we want that person to be happy just as we would want any person to be happy whose happiness we are responsible for. However, I don't sense any responsibility to care about a future self that needn't exist. On the contrary, if this person has no effect on anything that matters to me, I'd rather be free of being responsible for this future self.

In the case of cryonics, you may or may not decide that your future self has an effect on things that matter to you. If your descendants matter to you, or propagating a certain set of goals matters to you, then cryonics makes sense. I don't have any goals that project further than the lifespan of my children. This might be somewhat unique, and it is the result of recent changes in philosophy. As a theist, I had broad-stroke hopes for the universe that are now gone.

Less unique, I think, though perhaps not generally realized, is the fact that I don't feel any special attachment to my memories, thoughts, viewpoints and values. What if a person woke up to discover that the last days were a dream and they actually had a different identity? I think they wouldn't be depressed about the loss of their previous identity. They might be depressed about the loss of certain attachments if the attachments remained (hopefully not too strongly, as that would be sad). The salient thing here is that all identities feel the same.

Comment author: RichardW 30 May 2010 12:05:51PM 3 points [-]

I've just read this article by Ben Best (President of CI): http://www.benbest.com/philo/doubles.html

He admits that the possibility of duplicating a person raises a serious question about the nature of personal identity, that continuity is no solution to this problem, and that he can find no other solution. But he doesn't seem to consider that the absence of any solution points to his concept of personal identity being fundamentally flawed.

Comment author: Eneasz 28 May 2010 07:35:19PM 5 points [-]

For the record, I don't have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.

I'm in the signing process right now, and I wanted to comment on the "work in progress" aspect of your statement. People think that signing up for cryonics is hard. That it takes work. I thought this myself up until a few weeks ago. This is stunningly NOT true.

The entire process is amazingly simple. You contact CI (or your preserver of choice) via their email address and express interest. They ask you for a few bits of info (name, address) and send you everything you need already printed and filled out. All you have to do is sign your name a few times and send it back. The process of getting life insurance was harder (and getting life insurance is trivially easy).

So yeah, the term "working on it" is not correctly applicable to this situation. Someone who's never climbed a flight of stairs may work out for months in preparation, but they really don't need to, and afterwards might be somewhat annoyed that no one who'd climbed stairs before had bothered to tell them so.

Literally the only hard part is the psychological effort of doing something considered so weird. The hardest part for me (and what had stopped me for two+ years previously) was telling my insurance agent when she asked "What's CI?" that it's a place that'll freeze me when I die. I failed to take into account that we have an incredibly tolerant society. People interact - on a daily basis - with other humans who believe in gods and energy crystals and alien visits and secret-muslim presidents without batting an eye. This was no different. It was like the first time you leap from the high diving board and don't die, and realize that you never would have.

Comment author: JenniferRM 30 May 2010 08:39:23PM 6 points [-]

The hard part (and why this is also a work in progress) involves secondary optimizations, the right amount of effort to put into them, and understanding whether these issues generalize to other parts of my life.

SilasBarta identified some of the practical financial details involved in setting up whole life versus term plus savings versus some other option. This is even more complex for me because I don't currently have health insurance and ideally would like to have a personal physician, health insurance, and retirement savings plan that are consistent with whatever cryonics situation I set up.

Secondarily, there are similarly complex social issues that come up because I'm married, love my family, am able to have philosophical conversations with them, and don't want to "succeed" at cryonics but then wake up for 1000 years of guilt that I didn't help my family "win" too. If they don't also win, when I could have helped them, then what kind of a daughter or sister would I be?

Finally, I've worked on a personal version of a "drake equation for cryonics" and it honestly wasn't a slam dunk economic decision when I took a pessimistic outside view of my model. So it would seem that more analysis here would be prudent, which would logically require some time to perform. If I had something solid I imagine that would help convince my family - given that they are generally rational in their own personal ways :-)

And as a meta issue, there are issues around cognitive inertia in both the financial and the social arenas, so that whatever decisions I make now may "stick" for the next forty years. Against this I weigh the issue of "best being the enemy of good" because (in point of fact) I'm not safe in any way at all right now... which is an obvious negative. In what places should I be willing to tolerate erroneous thinking and sloppy execution that fails to obtain the maximum lifetime benefit, and to what degree should I carry that "sloppiness calibration" over to the rest of my life?

So, yeah, it's a work in progress.

I'm pretty much not afraid of the social issues that you brought up. If people who disagree with me about the state of the world want to judge me, that's their problem up until they start trying to sanction me or spread malicious gossip that blocks other avenues of self improvement or success. The judgment of strangers who I'll never see again is mostly a practical issue and not that relevant compared to relationships that really matter, like those with my husband, nuclear family, friends, personal physician, and so on.

Back in 1999 I examined these issues. In 2004 I got to the point of having all the paperwork to sign and turn in with Alcor and Insurance, with all costs pre-specified. In each case I backed off because I calculated the costs and looked at my income and looked at the things I'd need to cut out of my life (and none of it was coffee from starbucks or philanthropy or other fluffy BS like that - it was more like the simple quality of my food and whether I'd be able to afford one bedroom vs half a bedroom) and they honestly didn't seem to be worth it. As I've gotten older and richer and more influential (and partly due to influence from this community) I've decided I should review the decision again.

The hard part for me is dotting the i's and crossing the t's (and trying to figure out where it's safe to skip some of these steps) while seeking to minimize future regrets and maximize positive outcomes.

Comment author: Eneasz 01 June 2010 05:47:56PM 2 points [-]

don't want to "succeed" at cryonics but then wake up for 1000 years of guilt that I didn't help my family "win" too. If they don't also win, when I could have helped them, then what kind of a daughter or sister would I be?

You can't hold yourself responsible for their decisions. That way lies madness, or tyranny. If you respect them as free agents then you can't view yourself as the primary source for their actions.

Comment author: DSimon 14 September 2010 02:51:33PM *  2 points [-]

It might be rational to do so under extreme enough circumstances. For example, if a loved one had to take pills every day to stay alive and had a tendency to accidentally forget them (or to believe new-agers who told them that the pills were just a Big Pharma conspiracy), it would be neither madness nor tyranny to do nearly anything to prevent that from happening.

The question is: to what degree is failing to sign up for cryonics like suicide by negligence?

Comment author: Alicorn 28 May 2010 07:39:00PM 2 points [-]

getting life insurance is trivially easy

I'm not finding this. Can you refer me to your trivially easy agency?

Comment author: Eneasz 28 May 2010 09:02:08PM 2 points [-]

I used State Farm, because I've had car insurance with them since I could drive, and renters/owner's insurance since I moved out on my own. I had discounts both for multi-line and loyalty.

Yes, there is some interaction with a person involved. And you have to sit through some amount of sales-pitching. But ultimately it boils down to answering a few questions (2-3 minutes), signing a few papers (1-2 minutes), sitting through some process & pitching (30-40 minutes), and then having someone come to your house a few days later to take some blood and measurements (10-15 minutes). Everything else was done via mail/email/fax.

Heck, my agent had to do much more work than I did; previous to this she didn't know that you can designate someone other than yourself as the owner of the policy, which required some training.

Comment author: Alicorn 28 May 2010 11:02:50PM 4 points [-]

I tried a State Farm guy, and he was nice enough, but he wanted a saliva sample (not blood) and could not tell me what it was for. He gave me an explicitly partial list but couldn't complete it for me. That was spooky. I don't want to do that.

Comment author: SilasBarta 28 May 2010 07:46:04PM 1 point [-]

and getting life insurance is trivially easy

Disagree. What's this trivially easy part? You can't buy it like you can buy mutual fund shares, where you just go online, transfer the money, and have at it. They make it so you have to talk to an actual human insurance agent, just to get quotes. (I understand you'll have to get a medical exam, but still...)

Of course, in fairness, I'm trying to combine it with "infinite banking" by getting a whole life policy, which has tax advantages. (I would think whole life would make more sense than term anyway, since you don't want to limit the policy to a specific term, risking that you'll die afterward and not be able to afford the preservation, when the take-off hasn't happened.)

Comment author: Blueberry 28 May 2010 07:53:36PM 1 point [-]

I would think whole life would make more sense than term anyway

Nope. Whole life is a colossal waste of money. If you buy term and invest the difference in the premiums (what you would be paying the insurance company if you bought whole life) you'll end up way ahead.
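
A toy illustration of the "buy term and invest the difference" arithmetic, with made-up premiums, returns, and tax rates; real policies, dividends, policy loans, and the tax issues raised in the replies below are not modeled.

```python
# Toy comparison of "buy term and invest the difference" vs. whole life, using
# hypothetical numbers only; actual policies and tax treatment are more complex.

def buy_term_and_invest(years, whole_premium, term_premium, annual_return, tax_on_gains):
    """Invest the premium difference each year; pay tax on the gains at the end."""
    balance = 0.0
    contributed = 0.0
    for _ in range(years):
        diff = whole_premium - term_premium
        balance = (balance + diff) * (1 + annual_return)
        contributed += diff
    return balance - (balance - contributed) * tax_on_gains

def whole_life_cash_value(years, whole_premium, guaranteed_return, expense_share):
    """Crude stand-in for a whole-life cash value: the non-expense part of each
    premium accumulates tax-free at a guaranteed rate."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + whole_premium * (1 - expense_share)) * (1 + guaranteed_return)
    return balance

print(round(buy_term_and_invest(30, 3000, 500, 0.07, 0.15)))   # roughly 226,000
print(round(whole_life_cash_value(30, 3000, 0.04, 0.30)))      # roughly 122,000
```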

Comment author: SilasBarta 28 May 2010 08:04:44PM *  2 points [-]

Yes, I'm intimately familiar with the argument. And while I'm not committed to whole life, this particular point is extremely unpersuasive to me.

For one thing, the extra cost for whole life is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost.

That money goes into an account which you can withdraw from, or borrow from on much more favorable terms than any commercial loan. It also earns dividends and guaranteed interest tax-free.

If you "buy term and invest the difference", you either have to pay significant taxes on any gains (or even, in some cases, the principal) or lock the money up until you're ~60. The optimistic "long term" returns of the stock market have been shown to be a bit too optimistic, and given the volatility, you are being undercompensated. (Mutual whole life plans typically earned over 6% in '08, when stocks tanked.) You are also unlikely to earn the 12%/year they always pitch for mutual funds -- and especially not after taxes.

Furthermore, if the tax advantages of IRAs are reneged on (which, given developed countries' fiscal situations, is looking more likely every day), they'll most likely be hit before life insurance policies.

So yes, I'm aware of the argument, but there's a lot about the calculation that people miss.

Comment author: RobinZ 28 May 2010 08:10:07PM 1 point [-]

It occurs to me: are there legal issues with people contesting wills? I think that a life insurance policy with the cryonics provider listed as the beneficiary would be more difficult to fight.

Comment author: byrnema 27 May 2010 05:05:31PM 3 points [-]

I actually find these arguments plausible, but the trouble is that they, in my view, prove too much.

Well said.

Therefore, in at least some cases, I'm sure that people reject cryonics not because they're too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don't see how this different subjective preference can be considered "irrational" in any way.

I think this is true. Cryonics being the "correct choice" doesn't just depend on correct calculations and estimates (probability of a singularity, probability of revival, etc) and a high enough sanity waterline (not dismissing opportunities out of hand because they seem strange). Whether cryonics is the correct choice also depends upon your preferences. This fact seems to be largely missing from the discussion about cryonics. Perhaps because advocates can't imagine people not valuing life extension in this way.

In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time.

I wouldn't pay 5 cents for a duplicate of me to exist. (Not for the sole sake of her existence, that is. If this duplicate could interact with me, or interact with my family immediately after my death, that would be a different story as I could delegate personal responsibilities to her.)

Comment author: kodos96 26 May 2010 11:59:45PM 5 points [-]

In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain

Would it change your mind if that computer program [claimed to] strongly identify with you?

Comment author: Vladimir_M 27 May 2010 12:11:30AM 2 points [-]

I'm not sure I understand your question correctly. The mere fact that a program outputs sentences that express strong claims about identifying with me would not be relevant in any way I can think of. Or am I missing something in your question?

Comment author: kodos96 27 May 2010 12:25:00AM *  3 points [-]

Well right, obviously a program consisting of "printf("I am Vladimir_M")" wouldn't qualify... but a program which convincingly claimed to be you, i.e. had access to all your memories, intellect, inner thoughts etc, and claimed to be the same person as you.

Comment author: Vladimir_M 27 May 2010 12:53:50AM 2 points [-]

No, as I wrote above, I am honestly unable to feel any identification at all with such a program. It might as well be just a while(1) loop printing a sentence claiming it's me.

I know of some good arguments that seem to provide a convincing reductio ad absurdum of such a strong position, most notably the "fading qualia" argument by David Chalmers, but on the other hand, I also see ways in which the opposite view entails absurdity (e.g. the duplication arguments). Thus, I don't see any basis for forming an opinion here except sheer intuition, which in my case strongly rebels against identification with an upload or anything similar.

Comment author: kodos96 27 May 2010 07:06:19AM 6 points [-]

If you woke up tomorrow to find yourself situated in a robot body, and were informed that you had been killed in an accident and your mind had been uploaded and was now running on a computer, but you still felt, subjectively, entirely like "yourself", how would you react? Or do you not think that that could ever happen? (that would be a perfectly valid answer, I'm just curious what you think, since I've never had the opportunity to discuss these issues with someone who was familiar with the standard arguments, yet denied the possibility)

Comment author: Vladimir_M 27 May 2010 09:04:28PM *  6 points [-]

For the robotic "me" -- though not for anyone else -- this would provide a conclusive answer to the question of whether uploads and other computer programs can have subjective experiences. However, although fascinating, this finding would provide only a necessary, not a sufficient condition for a positive answer to the question we're pursuing, namely whether there is any rational reason (as opposed to freely variable subjective intuitions and preferences) to identify this entity with my present self.

Therefore, my answer would be that I don't know how exactly the subjective intuitions and convictions of the robotic "me" would develop from this point on. It may well be that he would end up feeling strongly that he is the true continuation of my person and rejecting what he would remember as my present intuitions on the matter (though this would be complicated by the presumable easiness of making other copies). However, I don't think he would have any rational reason to conclude that it is somehow factually true that he is the continuation of my person, rather than some entirely different entity that has been implanted with false memories identical to my present ones.

Of course, I am aware that a similar argument can be applied to the "normal me" who will presumably wake up in my bed tomorrow morning. Trouble is, I would honestly find it much easier to stop caring about what happens to me tomorrow than to start caring about computer simulations of myself. Ultimately, it seems to me that the standard arguments that are supposed to convince people to broaden their parochial concepts of personal identity should in fact lead one to dissolve the entire concept as an irrational reification that is of no concern except insofar as it is a matter of strong subjective preferences.

Comment author: jimrandomh 27 May 2010 09:27:11PM 8 points [-]

Getting copied from a frozen brain into a computer is a pretty drastic change, but suppose instead it were done gradually, one neuron at a time. If one of your neurons were replaced with an implant that behaved the same way, would it still be you? A cluster of N neurons? What if you replaced your entire brain with electronics, a little at a time?

Obviously there is a difference, and that difference is significant to identity; but I think that difference is more like the difference between me and my younger self than the difference between me and someone else.

Comment author: blogospheroid 27 May 2010 10:53:33AM 5 points [-]

I'm not sure if this is the right place to ask this, or even if it is possible to procure such data, but who is the highest-status person who has opted for cryonics? The wealthiest or the most famous...

Having high-status persons adopt cryonics can be a huge boost to the cause, right?

Comment author: apophenia 28 May 2010 06:16:19AM 6 points [-]

It certainly boosts publicity, but most of the people I know of who have signed up for cryonics are either various sorts of transhumanists or celebrities. The celebrities generally seem to do it for publicity or as a status symbol. From the reactions I've gotten telling people about cryonics, I feel it has had a mostly negative social impact. I say this not because people I meet are creeped out by cryonics, but because they specifically mention various celebrities. I think if more scientists or doctors (basically, experts) opted for cryonics, it might add credibility. I can only assume that a lack of customers for companies like Alcor decreases the chance of surviving cryonics.

Comment author: RomanDavis 28 May 2010 05:25:55PM 3 points [-]

Uhhh... no. People developed the urban legend about Walt Disney for a reason. It's easy to take rich, creative, ingenious, successful people and portray them as eccentric, isolated, and out of touch.

Think about the dissonance between "How crazy those Scientologists are" and "How successful those celebrities are." We don't want to create a similar dissonance with cryonics.

Comment author: Jack 05 June 2010 10:50:48AM 1 point [-]

It depends on the celebrity. Michael Jackson, not so helpful. But Oprah would be.

Comment author: Dagon 26 May 2010 05:35:39PM 5 points [-]

I don't know if this is a self-defense mechanism or actually related to the motives of those promoting cryonics in this group, but I've always taken the "you're crazy not to be signed up for cryonics" meme to be intentional overstatement. If the intent is to remind me that things I do may later turn out to be not just wrong, but extremely wrong, it works pretty well.

It's a good topic to explore agreement theory, as different declared-intended-rationalists have different conclusions, and can talk somewhat dispassionately about such disagreement.

I have trouble believing that anyone means it literally, that for most humans a failure to sign up for cryonics at the earliest opportunity is as wrong as believing there's a giant man in the sky who'll punish or reward you after you die.

Comment author: Will_Newsome 26 May 2010 08:05:16PM 8 points [-]

I've always taken the "you're crazy not to be signed up for cryonics" meme to be intentional overstatement.

I hadn't thought of this, but if so, it's dangerous rhetoric and just begging to be misunderstood.

Comment author: ShardPhoenix 26 May 2010 12:52:32PM 5 points [-]

On a side note, speaking of "abnormal" and cryonics, apparently Britney Spears wants to sign up with Alcor: http://www.thaindian.com/newsportal/entertainment/britney-spears-wants-to-be-frozen-after-death_100369339.html

I think this can be filed under "any publicity is good publicity".

Comment author: Unnamed 26 May 2010 05:26:57PM 5 points [-]

Is there any way that we could get Britney Spears interested in existential risk mitigation?

Comment author: Will_Newsome 26 May 2010 05:34:01PM 10 points [-]

It's not obvious that this would be good: it could very well make existential risks research appear less credible to the relevant people (current or future scientists).

Comment author: JoshuaZ 26 May 2010 03:08:00PM *  4 points [-]

I was thinking of filing this as an example of Reversed stupidity is not intelligence.

Comment author: steven0461 26 May 2010 07:31:30PM 1 point [-]

I'm surprised. Last time it was Paris Hilton and it turned out not to be true, but it looks like there's more detail this time.

Comment author: steven0461 26 May 2010 08:29:29PM 2 points [-]

This claims it's a false rumor.

Comment author: ShardPhoenix 27 May 2010 07:15:54AM 2 points [-]

That only cites a "source close to the singer" compared to the detail given by the original rumour. However, given the small prior probability of this being true, I guess it's probably still more likely to be false.

Comment author: Morendil 28 May 2010 07:11:35AM 13 points [-]

This post, like many others around this theme, revolves around the rationality of cryonics from the subjective standpoint of a potential cryopatient, and it seems to assume a certain set of circumstances for that patient: relatively young, healthy, functional in society.

I've been wondering for a while about the rationality of cryonics from a societal standpoint, as applied to potential cryopatients in significantly different circumstances; two categories specifically stand out, death row inmates and terminal patients.

This article puts the extra cost of a death row inmate (over serving a life sentence) at $90K. This is a case where we already allow that society may drastically curtail an individual's right to control their own destiny. It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.

As for terminal patients, this article says:

Aggressive treatments attempting to prolong life in terminally ill people typically continue far too long. Reflecting this overaggressive end-of-life treatment, the Health Care Finance Administration reported that about 25% of Medicare funds are spent in the last 6 months of life (about $68 billion in 2003 or $42,000 per dying patient). Actually, the last 6 months of a Medicare recipient's life consumes about $80,000 for medical services, since Medicare pays only 53% of the bill. Dying cancer patients cost twice the average amount or about $160,000.

These costs are comparable to those charged for cryopreservation. It seems to me that it would be rational (as a cost-reduction measure) to offer patients diagnosed with a likely terminal illness the voluntary option of being cryopreserved. At worst, if cryonics doesn't work, this amounts to an "assisted suicide", something that many progressive groups are already lobbying for.
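As a quick arithmetic check of the quoted figures, here is a back-of-envelope sketch. The only outside numbers are the cryonics prices cited elsewhere in this thread (roughly $150,000 for Alcor full-body, and "a factor of 5 cheaper" at the Cryonics Institute), used purely as rough comparison points, not as authoritative prices.

```python
# Back-of-envelope check of the quoted end-of-life figures against cryonics
# prices mentioned in other comments (assumed here only for comparison).

medicare_per_dying_patient = 42_000   # from the quote above
medicare_share_of_bill = 0.53         # Medicare pays 53%, per the quote

total_last_six_months = medicare_per_dying_patient / medicare_share_of_bill
print(round(total_last_six_months))   # ~79,000, i.e. the "about $80,000" above

alcor_full_body = 150_000             # figure cited elsewhere in this thread
cryonics_institute = 150_000 / 5      # "a factor of 5 cheaper", per another comment

print(total_last_six_months / alcor_full_body)     # ~0.5x an Alcor suspension
print(total_last_six_months / cryonics_institute)  # ~2.6x a CI suspension
```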

Comment author: byrnema 28 May 2010 11:59:55AM *  7 points [-]

It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.

Also, depending upon advances in psychology, there could be the opportunity for real rehabilitation in the future. A remorseful criminal afraid they cannot change may prefer cryopreservation.

Comment author: cjb 28 May 2010 07:16:08PM 6 points [-]

It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.

Hm, I don't think that works -- the extra cost is from the stronger degree of evidence and exhaustive appeals process required before the inmate is killed, right? If you want to suspend the inmate before those appeals then you've curtailed their right to put together a strong defence against being killed, and if you want to suspend the inmate after those appeals then you haven't actually saved any of that money.

.. or did I miss something?

Comment author: Morendil 28 May 2010 07:48:28PM 4 points [-]

the extra cost is from the stronger degree of evidence and exhaustive appeals process required before the inmate is killed, right?

Some of it is from more expensive incarceration, but you're right. This has one detailed breakdown:

  • Extra defense costs for capital cases in trial phase $13,180,385
  • Extra payments to jurors $224,640
  • Capital post-conviction costs $7,473,556
  • Resentencing hearings $594,216
  • Prison system $169,617

However, we're assuming that with cryonics as an option the entire process would stay the same. That needn't be the case.

Comment author: MartinB 04 November 2010 08:12:08PM 4 points [-]

As yet another media reference: I just rewatched the Star Trek TNG episode 'The Neutral Zone', which deals with the recovery of three frozen humans from our time. It was really surprising to me how much disregard for human life is shown in this episode. "Why did you recover them, they were already dead." "Oh bugger, now that you revived/healed them we have to treat them as humans." Also surprising is how much insensitivity in dealing with them is shown. When you wake someone from an earlier time, you might send the aliens and the robots out of the room.

Comment deleted 27 May 2010 03:08:42PM *  [-]
Comment author: JoshuaZ 27 May 2010 03:37:50PM *  7 points [-]

This is a valid point, but it is slightly OT to discuss the precise probability for cryonics. I think that one reason people might not be trying to reach a consensus about the actual probability of success is because it may simply require so much background knowledge that one might need to be an expert to reasonably evaluate the subject. (Incidentally, I'm not aware of any sequence discussing what the proper thing to do is when one has to depend heavily on experts. We need more discussion of that.) The fact that there are genuine subject matter experts like de Magalhaes who have thought about this issue a lot and come to the conclusion that it is extremely unlikely, while others who have thought about it consider it likely, makes it very hard to estimate. (Consider for example if someone asks me if string theory is correct. The most I'm going to be able to do is to shrug my shoulders. And I'm a mathematician. Some issues are just really much too complicated for non-experts to work out a reliable likelihood estimate based on their own data.)

It might however be useful to start a subthread discussing pro and anti arguments. To keep the question narrow, I suggest that we simply focus on the technical feasibility question, not on the probability that a society would decide to revive people.

I'll start by listing a few:

For:

1) Non-brain animal organs have been successfully vitrified and revived. See e.g. here

2) Humans have been revived from low-oxygen, very cold circumstances with no apparent loss of memory. This has been duplicated in dogs and other small mammals in controlled conditions for upwards of two hours. (However, the reduced temperatures are still above freezing.)

Against:

1) Vitrification denatures and damages proteins. This may permanently damage neurons in a way that makes their information content not recoverable. If glial cells have a non-trivial role in thought then this issue becomes even more severe. There's a fair bit of circumstantial evidence for glial cells having some role in cognition, including the fact that they often behave abnormally in severe mental illness. See for example this paper discussing glial cells and schizophrenia. We also know that in some limited circumstances glial cells can release neurotransmitters.

2) Even today's vitrification procedures do not necessarily penetrate every brain cell, so there may be severe ice-crystal formation in a lot of neurons.

3) Acoustic fracturing is still a major issue. Since acoustic fracturing occurs even when one is just preserving the head, there's likely severe macroscopic brain damage occurring. This also likely can cause permanent damage to memory and other basic functions in a non-recoverable way. Moreover, acoustic fracturing is only the fracturing from cooling that is so bad that we can hear it. There's likely a lot of much smaller fracturing going on. (No one seems to have put a sensitive microphone right near a body or a neuro during cooling. The results could be disconcerting.)

Comment author: PhilGoetz 27 May 2010 04:57:03PM *  5 points [-]

No interest on reaching agreement on cryo success probabilities, when this seems like an absolutely crucial consideration. Is this indicative of people who genuinely want to get to the truth of the matter?

You're trying to get to the truth of a different matter. You need to go one level meta. This post is arguing that either position is plausible. There's no need to refine the probabilities beyond saying something like "The expected reward/cost ratio of signing up for cryonics is somewhere between .1 and 10, including opportunity costs."

Comment author: cousin_it 26 May 2010 12:27:57PM *  4 points [-]

Not signing up for cryonics is a rationality error on my part. What stops me is an irrational impulse I can't defeat: I seem to subconsciously value "being normal" more than winning in this particular game. It is similar to byrnema's situation with religion a while ago. That said, I don't think any of the enumerated arguments against cryonics actually work. All such posts feel like they're writing the bottom line in advance.

Comment author: Will_Newsome 26 May 2010 12:45:33PM 10 points [-]

Quite embarrassingly, my immediate reaction was 'What? Trying to be normal? That doesn't make sense. Europeans can't be normal anyway.' I am entirely unsure as to what cognitive process managed to create that gem of an observation.

Comment author: cousin_it 26 May 2010 12:57:11PM *  8 points [-]

I'm a Russian living in Moscow, so I hardly count as a European. But as perceptions of normality go, the most "normal" people in the world to me are those from the poor parts of Europe and the rich parts of the 3rd world, followed by richer Europeans (internal nickname "aliens"), followed by Americans (internal nickname "robots"). So if the scale works both ways, I'd probably look even weirder to you than the average European.

Comment author: Blueberry 26 May 2010 02:05:01PM 3 points [-]

followed by Americans (internal nickname "robots")

I would love to hear more about how you see the behavior of Americans, and why you see us as "robots"!

Comment author: cousin_it 26 May 2010 02:15:36PM *  8 points [-]

I feel that Americans are more "professional": they can perform a more complete context-switch into the job they have to do and the rules they have to follow. In contrast, a Russian at work is usually the same slacker self as the Russian at home, or sometimes the same unbalanced work-obsessed self.

Comment author: Will_Newsome 26 May 2010 01:14:43PM 3 points [-]

What is your impression of the 'weirdness' of the Japanese culture? 'Cuz it's pretty high up there for me.

Comment author: cousin_it 26 May 2010 01:30:33PM *  1 point [-]

I'm not judging culture, I'm judging people. Don't personally know anyone from Japan. Know some Filipinos and they seemed very "normal" and understandable to me, more so than Americans.

Comment author: Will_Newsome 26 May 2010 01:54:12PM 2 points [-]

I wanted to visit Russia and Ukraine anyway, but this conversation has made me update in favor of the importance of doing so. I've never come into contact with an alien before. I've heard, however, that ex-Soviets tend to have a more live-and-let-live style of interacting with people who look touristy than, for example, Brazilians or Greeks, so perhaps it will take an extra effort on my part to discover if there really is a tangible aspect of alienness.

Comment author: NaN 26 May 2010 10:07:45AM *  4 points [-]

I'm new here, but I think I've been lurking since the start of the (latest, anyway) cryonics debate.

I may have missed something, but I saw nobody claiming that signing up for cryonics was the obvious correct choice -- it was more people claiming that believing that cryonics is obviously the incorrect choice is irrational. And even that is perhaps too strong a claim -- I think the debate was more centred on the probability of cryonics working, rather than the utility of it.

Comment author: Blueberry 26 May 2010 02:03:37PM 3 points [-]

I may have missed something, but I saw nobody claiming that signing up for cryonics was the obvious correct choice

If I didn't explicitly say so before: signing up for cryonics is the obvious correct choice.

Comment author: ShardPhoenix 26 May 2010 01:05:48PM 2 points [-]

At one point Eliezer was literally accusing people who don't sign their kids up for cryonics of "child abuse".

Comment author: timtyler 26 May 2010 01:37:03PM *  9 points [-]

"If you don't sign up your kids for cryonics then you are a lousy parent." - E.Y.

Comment author: ShardPhoenix 27 May 2010 07:17:57AM 1 point [-]

Yeah, looks like I misremembered, but it's essentially the same thing for purposes of illustrating to the OP that some people apparently do think that cryonics is the obvious correct choice.

Comment author: cupholder 26 May 2010 07:44:26PM *  4 points [-]
Comment author: Will_Newsome 26 May 2010 08:02:51PM *  5 points [-]

Um, why would anyone vote this down? It's bad juju to put quote marks around things someone didn't actually say, especially when you disagree with the person you're mischaracterizing. Anyway, thanks for the correction, cupholder.

Comment author: ShardPhoenix 27 May 2010 07:19:21AM *  2 points [-]

Oops, I knew I should have actually looked that up. The difference between "lousy parent" and "child abuse" is only a matter of degree though - Eliezer is still claiming that cryonics is obviously right, which was the point of contention.

Comment author: NancyLebovitz 27 May 2010 09:04:44AM 2 points [-]

It's a difference of degree which matters, especially since people are apt to remember insults and extreme statements.

Comment author: Rain 26 May 2010 04:42:33PM *  15 points [-]

An interesting comparison I mentioned previously: the cost to Alcor of preserving one human (full-body) is $150,000. The recent full annual budget of SIAI is on the order of (edit:) $500,000.

Comment author: Robin 27 May 2010 04:52:52PM 6 points [-]

That's a very good point. It seems there is some dispute about the numbers, but the general point is that it would be a lot cheaper to fund SIAI, which may save the world, than to cryogenically freeze even a small fraction of the world's population.

The point about life insurance is moot. Life insurance companies make a profit, so having SIAI as your beneficiary upon death wouldn't even make that much sense. If you just give whatever you'd be paying in life insurance premiums directly to SIAI, you're probably doing much more overall good than paying for a cryonics policy.

Comment author: alyssavance 26 May 2010 05:39:04PM *  6 points [-]

Cryonics Institute is a factor of 5 cheaper than that, the SIAI budget is larger than that, and SIAI cannot be funded through life insurance while cryonics can. And most people who read this aren't actually substantial SIAI donors.

Comment author: Rain 26 May 2010 05:49:05PM *  3 points [-]

You can't assign a life insurance policy to a non-profit organization?

Is the long-term viability of low-cost cryonics a known quantity? Is it noticeably similar to the viability of high-cost cryonics?

Did Michael Anissimov, Media Director for SIAI, when citing specific financial data available on Guidestar, lie about SIAI's budget in the linked blog post?

Do people who aren't donors not want to know potential cost ratios regarding the arguments specifically made by the top level post?

Comment author: alyssavance 26 May 2010 05:58:55PM 2 points [-]

"You can't assign a life insurance policy to a non-profit organization?"

You can, but it probably won't pay out until relatively far into the future, and because of SIAI's high discount rate, money in the far future isn't worth much.

"Is the long-term viability of low-cost cryonics a known quantity? Is it noticeably similar to the viability of high-cost cryonics?"

Yes. The Cryonics Institute has been in operation since 1976 (35 years) and is very financially stable.

"Did Michael Anissimov, Media Director for SIAI, when citing specific financial data available on Guidestar, lie about SIAI's budget in the linked blog post?"

Probably not, he just wasn't being precise. SIAI's financial data for 2008 is available here (guidestar.org) for anyone who doesn't believe me.

Comment author: Rain 26 May 2010 07:27:01PM 6 points [-]

The Cryonics Institute has been in operation since 1976 (35 years) and is very financially stable.

Please provide evidence for this claim. I've heard contradictory statements to the effect that even $150,000 likely isn't enough for long term viability.

Probably not, he just wasn't being precise.

I'm curious how the statement, "our annual budget is in the $200,000/year range", may be considered "imprecise" rather than outright false when compared with data from the source he cited.

SIAI Total Expenses (IRS form 990, line 17):

  • 2006: $395,567
  • 2007: $306,499
  • 2008: $614,822

Comment author: CarlShulman 26 May 2010 09:27:01PM 5 points [-]

I sent Anissimov an email asking him to clarify. He may have been netting out Summit expenses (matching the cost of venue, speaker arrangements, etc. against ticket sales). Also note that 2008 was followed by a turnover of all the SIAI staff except Eliezer Yudkowsky, and Michael Vassar then cut costs.

Comment author: mranissimov 26 May 2010 10:08:31PM 15 points [-]

Hi all,

I was completely wrong on my budget estimate, I apologize. I wasn't including the Summit, and I was just estimating the cost based on my understanding of salaries + misc. expenses. I should have checked Guidestar. My view of the budget also seems to have been slightly skewed because I frequently check the SIAI Paypal account, which many people use to donate, but I never see the incoming checks, which are rarer but sometimes make up a large portion of total donations. My underestimate of money coming in contributed to my underestimate of money going out.

Again, I'm sorry, I was not lying, just a little confused and a few years out of date on my estimate. I will search over my blog to modify any incorrect numbers I can find.

Comment author: Rain 27 May 2010 05:37:01PM 1 point [-]

Thank you for the correction.

Comment author: FraserOrr 27 May 2010 02:27:42PM 3 points [-]

Question for the advocates of cryonics: I have heard talk in the news and various places that organ donor organizations are talking about giving priority to people who have signed up to donate their organs. That is to say, if you sign up to be an organ donor, you are more likely to receive a donated organ from someone else should you need one. There is some logic in that in the absence of a market in organs; free riders have their priority reduced.

I have no idea if such an idea is politically feasible (and, let me be clear, I don't advocate it); however, were it to become law in your country, would that tilt the cost-benefit analysis away from cryonics sufficiently that you would cancel your contract? (There is a new cost imposed by cryonics: namely that the procedure prevents you from being an organ donor, and consequently reduces your chance of a life-saving organ transplant.)

Comment author: gregconen 28 May 2010 02:38:57AM 1 point [-]

In most cases, signing up for cryonics and signing up as an organ donor are not mutually exclusive. The manner of death most suited to organ donation (rapid brain death with (parts of) the body still in good condition, generally caused by head trauma) is not well suited to cryonic preservation. You'd probably need a directive in case the two do conflict, but such a conflict is unlikely.

Alternatively, neuropreservation can, at least in theory, occur with organ donation.

Comment deleted 27 May 2010 09:40:15PM [-]
Comment author: FraserOrr 28 May 2010 01:32:07AM 1 point [-]

The 15 year gain may be enough to get you over the tipping point where medicine can cure all your ails, which is to say, 15 years might buy you 1000 years.

I think you are being pretty optimistic if you think the probability of success of cryonics is 10%. Obviously, no one has any data to go on for this, so we can only guess. However, there are a lot of strikes against cryonics, especially so if only your head gets frozen. In the future, will they be able to recreate a whole body from the head only? In the future, will your cryogenic company still be in business? If they go out of business, does your frozen head have any rights? If technology is designed to restore you, will it be used? Will the government allow it to be used? Will you be one of the first guinea pigs to be tested, and be one of the inevitable failures? Will anyone want an old fuddy-duddy from the far past to come back to life? In the interim, has there been an accident, war, or malicious action by eco-terrorists that unfroze your head? And so forth.

It seems to me that preserving actual life as long as possible is the best bet.

Comment author: utilitymonster 27 May 2010 11:19:04AM *  3 points [-]

Thanks for this post. I tend to lurk, and I had some similar questions about the LW enthusiasm for cryo.

Here's something that puzzles me. Many people here, it seems to me, have the following preference order:

pay for my cryo > donation: x-risk reduction (through SIAI, FHI, or SENS) > paying for cryo for others

Of course, for the utilitarians among us, the question arises: why pay for my cryo over risk reduction? (If you just care about others way less than you care about yourself, fine.) Some answer by arguing that paying for your own cryo does more for x-risk reduction than the other alternatives because of its indirect effects. This reeks of wishful thinking and doesn't fit well with the preference order above. There are plenty of LWers, I assume, who haven't signed up for cryo, but would if someone else would pay for the life insurance policy. If you really think that paying for your own cryo maximizes x-risk reduction, shouldn't you also think that getting others signed up for cryo does as well? (There are some differences, sure. Maybe the indirect effects aren't as substantial if others don't pay their own way in full. But I doubt this justifies the preference.) If so, it would seem that rather than funding x-risk reduction through donating to these organizations, you should fund the cryopreservation of LWers and other willing people.

So which is it utilitarians: you shouldn't pay for your own cryo or you should be working on paying for the cryo of others as well?

If you think paying for cryo is better, want to pay for mine first?

Comment author: Baughn 28 May 2010 12:13:01PM 3 points [-]

I care more about myself than about others. This is what would be expected from evolution and - frankly - I see no need to alter it. Well, I wouldn't.

I suspect that many people who claim they don't are mistaken, as the above preference ordering seems to illustrate. Maximize utility, yes; but utility is a subjective function, as my utility function makes explicit reference to myself.

Comment author: Will_Newsome 26 May 2010 02:49:55PM *  3 points [-]

EDIT: Nick Tarleton makes a good point in reply to this comment, which I have moved to be footnote 2 in the text.

Comment author: Nick_Tarleton 26 May 2010 08:13:38PM 1 point [-]

This distinction might warrant noting in the post, since it might not be clear that you're only criticizing one position, or that the distinction is really important to keep in mind.

Comment author: JoshuaZ 26 May 2010 02:34:19PM *  3 points [-]

This post seems to focus too much on Singularity-related issues as alternative arguments. Thus, one might think that if one assigns the Singularity a low probability, one should definitely sign up for cryonics. I'm going to therefore suggest a few arguments against cryonics that may be relevant:

First, there are other serious existential threats to humans. Many don't even arise from our technology. Large asteroids would be an obvious example. Gamma-ray bursts and nearby stars going supernova are other risks. (Betelgeuse is a likely candidate for a nearby supernova making our lives unpleasant. If current estimates are correct, there will be substantial radiation from Betelgeuse in that situation, but not so much as to wipe out humanity. But we could be wrong.)

Second, one may see a high negative utility if one gets cryonics and one's friends and relatives do not. The abnormal after-death result could substantially interfere with their grieving processes. Similarly, there's a direct opportunity cost to paying and preparing for cryonics.

The above argument about lost utility is normally responded to by claiming that the expected utility for cryonics is infinite. If this were actually the case, this would be a valid response.

This leads neatly to my third argument: The claim that my expected utility from cryonics is infinite fails. Even in the future, there will be some probability that I die at any given point. If that probability is never reduced below a certain fixed amount, then my expected lifespan is still finite even if I assume cryonics succeeds. (Fun little exercise: suppose that my probability of dying is x on any given day. What is my expected number of days of life? Note that no matter how small x is, as long as x > 0, you still get a finite number.) Thus, even if one agrees that an infinite lifespan can give infinite utility, it doesn't follow that cryonics gives an expected value that is infinite. (Edit: What happens in a MWI situation is more complicated, but similar arguments can be made: the fraction of universes where you exist declines at a geometric rate, so the total sum of utility over all universes is still finite.)
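A minimal worked version of that exercise, assuming only a constant per-day death probability x > 0 (the standard geometric-distribution expectation):

\[
\mathbb{E}[\text{days of life}] \;=\; \sum_{k=1}^{\infty} k \, x (1-x)^{k-1} \;=\; \frac{1}{x},
\]

which is finite for any x > 0. For instance, x = 10^{-6} per day gives an expected lifespan of about a million days, roughly 2,700 years: very long, but finite. The MWI edit is the same kind of sum: if your measure declines geometrically, the total utility summed across branches is a convergent geometric series.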

Fourth, it isn't even clear that one can meaningfully talk about infinite utility. For example, consider the situation where you are given two choices (probably given to you by Omega because that's the standard genie equivalent on LW). In one of them, you are guaranteed immortality with no costs. In the other you are guaranteed immortality but are first tortured for a thousand years. The expected utility for both is infinite, but I'm pretty sure that no one is indifferent to the two choices. This is closely connected to the fact that economists when using utility make an effort to show that their claims remain true under monotonic transformations of total utility. This cannot hold when one has infinite utility being bandied about (it isn't even clear that such transformations are meaningful in such contexts). So much of what we take for granted about utility breaks down.

Comment author: orthonormal 27 May 2010 01:42:55AM 4 points [-]

And if the expected utility of cryonics is simply a very large yet finite positive quantity?

Comment author: JoshuaZ 27 May 2010 02:00:13AM 2 points [-]

In that case, arguments that cryonics is intrinsically the better choice become much more dependent on specific estimates of utility and probability.

Comment author: Vladimir_Nesov 27 May 2010 09:42:44AM 5 points [-]

In that case, arguments that cryonics is intrinsically the better choice become much more dependent on specific estimates of utility and probability.

And so they should.

Comment author: PhilGoetz 26 May 2010 08:08:11PM *  6 points [-]

The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong.

Thus triggering the common irrational inference, "If something is attacked with many spurious arguments, especially by religious people, it is probably true."

(It is probably more subtle than this: when you make argument A against X, people listen just until they think they've matched your argument to some other argument B they've heard against X. The more often they've heard B, the faster they are to infer A = B.)

Comment author: Mardonius 26 May 2010 09:19:02PM 7 points [-]

Um, isn't the knowledge of many spurious arguments and no strong ones over a period of time weak evidence that no better argument exists (or at least, has currently been discovered?)

I do agree with the second part of your post about argument matching, though. The problem becomes even more serious when B is often not an argument against X from someone who actually takes that position, but a strawman argument they have been taught by others for the specific purpose of matching more sophisticated arguments to it.

Comment author: Nick_Tarleton 26 May 2010 09:24:52PM 3 points [-]

Um, isn't the knowledge of many spurious arguments and no strong ones over a period of time weak evidence that no better argument exists (or at least, has currently been discovered?)

Yes. This is discussed well in the comments on What Evidence Filtered Evidence?.

Comment author: PhilGoetz 27 May 2010 05:11:22PM *  1 point [-]

No, because that assumes that the desire to argue about a proposition is the same among rational and insane people. The situation I observe is just the opposite: There are a large number of propositions and topics that most people are agnostic about or aren't even interested in, but that religious people spend tremendous effort arguing for (circumcision, defense of Israel) or against (evolution, life extension, abortion, condoms, cryonics, artificial intelligence).

This isn't confined to religion; it's a general principle that when some group of people has an extreme viewpoint, they will A) attract lots of people with poor reasoning skills, B) take positions on otherwise non-controversial questions based on incorrect beliefs, and C) spend lots of time arguing against things that nobody else spends time arguing against, using arguments based on the very flaws in their beliefs that make them outliers to begin with.

Therefore, there is a large class of controversial issues on which one side has been argued almost exclusively by people whose reasoning is especially corrupt on that particular issue.

Comment author: JoshuaZ 27 May 2010 05:32:51PM 2 points [-]

I don't think many religious people spend "tremendous effort" arguing against life extension, cryonics or artificial intelligence. For the vast majority of the population, whether religious or not, these issues simply aren't prominent enough to think about. To be sure, when religious individuals do think about these, they more often than not seem to come down on the against side (look, for example, at computer scientist David Gelernter arguing against the possibility of AI). And that may be explainable by general tendencies in religion (especially the level at which religion promotes cached thoughts about the soul and the value of death).

But even that is only true to a limited extent. For example, consider the case of life extension: if we look at Judaism, some Orthodox ethicists have taken very positive views about life extension. Indeed, my impression is that the Orthodox are more likely to favor life extension than non-Orthodox Jews. My tentative hypothesis for this is that Orthodox Judaism places a very high value on human life and downplays the afterlife, at least compared to Christianity and Islam. (Some specific strains of Orthodoxy, some chassidic sects for example, do emphasize the afterlife a bit more.) However, Conservative and Reform Judaism have been more directly influenced by Christian values and therefore have picked up a stronger connection to the Christian values and cached thoughts about death.

I don't think however that this issue can be exclusively explained by Christianity, since I've encountered Muslims, neopagans, Buddhists and Hindus who have similar attitudes. (The neopagans all grew up in Christian cultures, so one could say that they were being influenced by that, but that doesn't hold much ground given how much neopaganism seems to be a reaction against Christianity.)

Comment author: ShardPhoenix 26 May 2010 12:51:43PM 6 points [-]

Probably my biggest concern with cryonics is that if I were to die at my age (25), it would probably be in a way where I would be highly unlikely to be preserved before a large amount of decay had already occurred. If there were a law in this country (Australia) mandating immediate cryopreservation of the head for those contracted, I'd be much more interested.

Comment author: Jordan 26 May 2010 10:18:57PM 9 points [-]

Agreed. On the other hand, in order to get laws into effect it may be necessary to first have sufficient numbers of people signed up for cryonics. In that sense, signing up for cryonics might not only save your life, it might spur changes that will allow others to be preserved better (faster), potentially saving more lives.

Comment author: CronoDAS 26 May 2010 09:29:56PM 8 points [-]

Reason #6 not to sign up: Cryonics is not compatible with organ donation. If you get frozen, you can't be an organ donor.

Comment author: magfrump 26 May 2010 10:54:30PM *  2 points [-]

There was a short discussion previously about how cryonics is most useful in cases of degenerative diseases, whereas organ donation is most successful in cases of quick deaths such as those due to car accidents; which is to say that cryonics and organ donation are not necessarily mutually exclusive preparations, because they may apply to mutually exclusive kinds of death.

Though maybe not, which is why I had asked about organ donation in the first place.

Comment author: Sniffnoy 26 May 2010 10:31:38PM 2 points [-]

Is that true in general, or only for organizations that insist on full-body cryo?

Comment author: CronoDAS 27 May 2010 02:59:02AM 1 point [-]

AFAICT (from reading a few cryonics websites), it seems to be true in general, but the circumstances under which your brain can be successfully cryopreserved tend to be ones that make you not suitable for being an organ donor anyway.

Comment author: Gabriel 27 May 2010 03:12:45AM 1 point [-]

Could you elaborate on that? Is cryonic suspension inherently incompatible with organ donation, even when you are going with the neuro option, or does the incompatibility stem from the current obscurity of cryonics? I imagine that organ harvesting could be combined with the early stages of cryonic suspension if the latter were more widely practiced.

Comment author: Matt_Duing 27 May 2010 06:00:58AM 6 points [-]

The cause of death of people suitable to be organ donors is usually head trauma.

Comment author: Blueberry 27 May 2010 03:30:28AM 2 points [-]

Alternatively, that's a good reason not to sign up for organ donation. Organ donation won't increase my well-being or happiness any, while cryonics might.

In addition, there's the problem that being an organ donor creates perverse incentives for your death.

Comment author: Jack 05 June 2010 10:37:02AM 3 points [-]

You get no happiness knowing there is a decent chance your death could save the lives of others?

Would you turn down a donated organ if you needed one?

Comment author: taw 26 May 2010 10:32:21PM -3 points [-]

This is the reason I wouldn't sign up even for free (and I am a registered organ donor). If it weren't for that, it would still be too expensive, all the bullshit creative accounting I've seen on this site notwithstanding.

Comment author: Will_Newsome 27 May 2010 10:15:37PM 1 point [-]

Would you consider Alicorn trustworthy enough to determine whether or not the accounting is actually bullshit? She's going through the financial stuff right now, and I could ask her about any hidden fees the cryonauts on Less Wrong have been quiet about.

Comment author: Alicorn 27 May 2010 10:21:22PM 1 point [-]

Um, I'm not a good person to go to for financial advice of any kind. Mostly I'm going to shop around until I find an insurance agent who isn't creepy and wants a non-crippling sum of money.

Comment author: alyssavance 26 May 2010 05:37:26PM 6 points [-]

I object to many of your points, though I express slight agreement with your main thesis (that cryonics is not rational all of the time).

"Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are."

This argument basically reduces to, once you remove the aura of philosophical sophistication, "we don't really know whether death is bad, so we should worry less about death". This seems to me absurd. For more, read eg. http://yudkowsky.net/other/yehuda .

"If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying:"

If you assume the median date for Singularity is 2050, Wolfram Alpha says I have a 13% chance of dying before then (cite: http://www.wolframalpha.com/input/?i=life+expectancy+18yo+male), and I'm only eighteen.
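For anyone wondering where a figure like that comes from, here is a minimal sketch of the underlying life-table calculation: multiply up annual survival probabilities from the starting age to the target year. The piecewise annual death probabilities below are rough illustrative guesses of mine, not Wolfram Alpha's data or any official actuarial table; with these made-up rates the result happens to land near the quoted 13%.

```python
# Sketch: probability an 18-year-old (in 2010) dies before 2050, computed by
# multiplying assumed annual survival probabilities.  The hazard rates here
# are illustrative guesses, not real actuarial data.

def prob_die_before(start_age, end_age, annual_death_prob):
    """P(death before reaching end_age), given a function age -> P(die that year)."""
    survival = 1.0
    for age in range(start_age, end_age):
        survival *= 1.0 - annual_death_prob(age)
    return 1.0 - survival

def illustrative_q(age):
    # Very rough male mortality: low and flat when young, rising in middle age.
    if age < 40:
        return 0.0012
    elif age < 55:
        return 0.004
    else:
        return 0.009

print(prob_die_before(18, 58, illustrative_q))  # roughly 0.11 with these guesses
```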

"A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI3 than by spending that money on pursuing a small chance of eternal life."

If you already donate more than 5% of your income or time to one of these organizations, I'll buy that. Otherwise (and this "otherwise" will apply to the vast majority of LW commenters), it's invalid. You can't say "alternative X would be better than Y, therefore we shouldn't do Y" if you're not actually doing X.

"Calling non-cryonauts irrational is not productive nor conducive to fostering a good epistemic atmosphere"

Why? Having a good epistemic atmosphere demands that there be some mechanism for letting people know if they are being irrational. You should be nice about it and not nasty, but if someone isn't signing up for cryonics for a stupid reason, maintaining a high intellectual standard requires that someone or something identify the reason as stupid.

"People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious "

This is true, but maintaining a good epistemic atmosphere and getting people to take what they see as a "fringe subject" seriously are two entirely separate and to some extent mutually exclusive goals. Maintaining high epistemic standards internally requires that you call people on it if you think they are being stupid. Becoming friends with a person who sees you as a kook requires not telling them about every time they're being stupid.

"Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone."

If people are having kids who they can't afford (cryonics is extremely cheap; someone who can't afford cryonics is unlikely to be able to afford even a moderately comfortable life), it probably is, in fact, a stupid decision. Whether we should tell them that it's a stupid decision is a separate question, but it probably is.

"One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways."

99% of the world's population is disagreeing with us because they are irrational in simple, obviously flawed ways! This is certainly not always the case, but I can't see a credible argument for why it wouldn't be the case a large percentage of the time.

Comment author: Will_Newsome 26 May 2010 07:34:52PM *  5 points [-]

This argument basically reduces to, once you remove the aura of philosophical sophistication, "we don't really know whether death is bad, so we should worry less about death".

No. It more accurately reduces to "we don't really know what the heck existence is, so we should worry even more about these fundamental questions and not presume their answers are inconsequential; taking precautions like signing up for cryonics may be a good idea, but we should not presume our philosophical conclusions will be correct upon reflection."

If you assume the median date for Singularity is 2050, Wolfram Alpha says I have a 13% chance of dying before then (cite: http://www.wolframalpha.com/input/?i=life+expectancy+18yo+male), and I'm only eighteen.

Alright, but I would argue that a date of 2050 is pretty damn late. I'm very much in the 'singularity is near' crowd among SIAI folk, with 2050 as an upper bound. I suspect there are many who would also assign a date much sooner than 2050, but perhaps this was simply typical mind fallacy on my part. At any rate, your 13% is my 5%, probably not the biggest consideration in the scheme of things; but your implicit point is correct that people who are much older than us should give more pause before dismissing this very important conditional probability as irrelevant.

If you already donate more than 5% of your income or time to one of these organizations, I'll buy that. Otherwise (and this "otherwise" will apply to the vast majority of LW commenters), it's invalid. You can't say "alternative X would be better than Y, therefore we shouldn't do Y" if you're not actually doing X.

Maybe, but a major point of this post is that it is bad epistemic hygiene to use generalizations like 'the vast majority of LW commenters' in a rhetorical argument. You and I both know many people who donate much more than 5% of their income to these kinds of organizations.

Having a good epistemic atmosphere demands that there be some mechanism for letting people know if they are being irrational. You should be nice about it and not nasty, but if someone isn't signing up for cryonics for a stupid reason, maintaining a high intellectual standard requires that someone or something identify the reason as stupid.

But I'm talking specifically about assuming that any given argument against cryonics is stupid. Yes, correct people when they're wrong about something, and do so emphatically if need be, but do not assume, just because weak arguments against your idea are more common, that there are no strong arguments your audience might possess.

This is true, but maintaining a good epistemic atmosphere and getting people to take what they see as a "fringe subject" seriously are two entirely separate and to some extent mutually exclusive goals.

If the atmosphere is primarily based on memetics and rhetoric, then yes; but if it is founded in rationality, then the two should go hand in hand. (At least, my intuitions say so, but I could just be plain idealistic about the power of group epistemic rationality here.)

If people are having kids who they can't afford (cryonics is extremely cheap; someone who can't afford cryonics is unlikely to be able to afford even a moderately comfortable life), it probably is, in fact, a stupid decision. Whether we should tell them that it's a stupid decision is a separate question, but it probably is.

It's not a separate question, it's the question I was addressing. You raised the separate question. :P

99% of the world's population is disagreeing with us because they are irrational in simple, obviously flawed ways! This is certainly not always the case, but I can't see a credible argument for why it wouldn't be the case a large percentage of the time.

What about 99% of Less Wrong readers? 99% of the people you're trying to reach with your rhetoric? What about the many people I know at SIAI that have perfectly reasonable arguments against signing up for cryonics and yet consistently contribute to or read Less Wrong? You're not actually addressing the world's population when you write a comment on Less Wrong. You're addressing a group with a reasonably high standard of thinking ability and rationality. You should not assume their possible objections are stupid! I think it should be the duty of the author not to generalize when making in-group out-group distinctions; not to paint things as black and white, and not to fall into (or let readers unnecessarily fall into) groupthink.

Comment author: Gavin 26 May 2010 06:54:42PM *  2 points [-]

This argument basically reduces to, once you remove the aura of philosophical sophistication, "we don't really know whether death is bad, so we should worry less about death". This seems to me absurd. For more, read eg. http://yudkowsky.net/other/yehuda .

Death is bad. The question is whether being revived is good. I'm not sure whether or not I particularly care about the guy who gets unfrozen. I'm not sure how much more he matters to me than anyone else. Does he count as "me?" Is that a meaningful question?

I'm genuinely unsure about this. It's not a decisive factor (it only adds uncertainty), but to me it is a meaningful one.

Comment author: Violet 27 May 2010 08:58:20AM *  4 points [-]

I don't like long-term cryonics, for the following reasons:
1) If an unmodified Violet were revived, she would not be happy in the far future.
2) If a sufficiently modified Violet were revived, she would not be me.
3) I don't place a large value on there being a "Violet" in the far future.
4) There is a risk that my values and the values of the being waking Violet up would be incompatible, and avoiding a possible "fixing" of my brain is a very high priority.
5) Thus I don't want to be revived by the far future, and death without cryonics seems a safe way to ensure that.

Comment author: Jowibou 29 May 2010 11:25:51AM 2 points [-]

Is it so irrational to not fear death?

Comment author: mistercow 31 May 2010 03:43:33AM 9 points [-]

Surely you aren't implying that a desire to prolong one's lifespan can only be motivated by fear.

Comment author: ciphergoth 29 May 2010 02:08:55PM 3 points [-]

No, that could be perfectly rational, but many who claim not to fear death tend to look before crossing the road, take medicine when sick and so on.

Comment author: ata 29 May 2010 11:46:24AM *  2 points [-]

As with most (all?) questions of whether an emotion is rational, it depends on what you value and what situation you're facing. If you can save a hundred lives by risking yours, and there's no less risky way nor (hypothetically) any way for you to save more people by other means while continuing to live, and you want to save lives, and if fear of death would stop you from going through with it, then it's irrational to fear death in that case. But in general, when you're not in a situation like that, you should feel as strongly as necessary whatever emotion best motivates you to keep living and avoid things that would stop you from living (assuming you like living). Whether that's fear of death or love of life or whatever else, feel it.

If you're talking about "fear of death" as in constant paranoia over things that might kill you, then that's probably irrational for most people's purposes. Or if you're not too attached to being alive, then it's not too irrational to not fear death, though that's an unfortunate state of affairs. But for most people, generally speaking, I don't see anything irrational about normal levels of fear of death.

Comment author: Vladimir_Nesov 29 May 2010 12:09:20PM 3 points [-]

Or if you're not too attached to being alive

(Keeping in mind the distinction between believing that you are not too attached to being alive and actually not having a strong preference for being alive, and the possibility of the belief being incorrect.)

Comment author: dripgrind 27 May 2010 08:41:58AM *  2 points [-]

Here's another possible objection to cryonics:

If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.

"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:

Suppose the Singularity develops from an AI that was initially based on a human upload. When it becomes clear that there is a real possibility of uploading and gaining immortality in some sense, many people will compete for upload slots. The winners will likely be the rich and powerful. Billionaires tend not to be known for their public-spirited natures - in general, they lobby to reorder society for their benefit and to the detriment of the rest of us. So, the core of the AI is likely to be someone ruthless and maybe even frankly sociopathic.

Imagine being revived into a world controlled by a massively overclocked Dick Cheney or Vladimir Putin or Marquis De Sade. You might well envy the dead.

Unless you are certain that no Singularity will occur before cryonics patients can be revived, or that Friendly AI will be developed and enforced before the Singularity, cryonics might be a ticket to Hell.

Comment author: humpolec 27 May 2010 10:23:42AM *  3 points [-]

What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?

Comment author: dripgrind 27 May 2010 11:01:26AM 2 points [-]

An unFriendly AI doesn't necessarily care about human values - but I can't see why, if it was based on human neural architecture, it might not exhibit good old-fashioned human values like empathy - or sadism.

I'm not saying that AI would have to be based on human uploads, but it seems like a credible path to superhuman AI.

Why do you think that an evil AI would be harder to achieve than a Friendly one?

Comment author: humpolec 27 May 2010 05:30:59PM 4 points [-]

Agreed, an AI based on a human upload gives no guarantee about its values... actually, right now I have no idea how the Friendliness of such an AI could be ensured.

Why do you think that an evil AI would be harder to achieve than a Friendly one?

Maybe not harder, but less probable - 'paperclipping' seems to be a more likely failure of friendliness than AI wanting to torture humans forever.

I have to admit I haven't thought much about this, though.

Comment author: Baughn 28 May 2010 12:20:31PM 6 points [-]

Paperclipping is a relatively simple failure. The difference between paperclipping and evil is mainly just that - a matter of complexity. Evil is complex; turning the universe into tuna is decidedly not.

On the scale of friendliness, I ironically see an "evil" failure (meaning, among other things, that we're still in some sense around to notice it being evil) becoming more likely as friendliness increases. As we try to implement our own values, failures become more complex, and less likely to be total - thus letting us stick around to see them.

Comment author: wedrifid 02 June 2012 03:21:40AM *  1 point [-]

What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?

"Where in this code do I need to put this "-ve" sign again?"

The two are approximately equal in difficulty, assuming equivalent flexibility in how "Evil" or "Friendly" it would have to be to qualify for the definition.

Comment deleted 26 May 2010 06:34:07PM *  [-]
Comment author: Will_Newsome 26 May 2010 06:43:55PM *  2 points [-]

Correction: not 'you', me specifically. I'm young, physically and psychologically healthy, and rarely find myself in situations where my life is in danger (the most obvious danger is of course car accidents). It should also be noted that I think a singularity is a lot nearer than your average singularitarian does, and I think the chance of me dying a non-accidental/non-gory death is really low.

I'm afraid that 'this discussion' is not the one I originally intended with this post: do you think it is best to have it here? I'm afraid that people are reading my post as taking a side (perhaps due to a poor title choice) when in fact it is making a comment about the unfortunate certainty people seem to consistently have on both sides of the issue. (Edit: Of course, this post does not present arguments for both sides, but simply attempts to balance the overall debate in a more fair direction.)

Comment author: PhilGoetz 26 May 2010 08:06:40PM 1 point [-]

I don't think so - the points in the post stand regardless of the probability Will assigns. Bringing up other beliefs of Will is an ad hominem argument. Ad hominem is a pretty good argument in the absence of other evidence, but we don't need to go there today.

Comment deleted 26 May 2010 11:54:02PM *  [-]
Comment author: Will_Newsome 27 May 2010 12:12:41AM *  3 points [-]

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

Once again, my probability estimate was for myself. There are important subjective considerations, such as age and definition of identity, and important sub-disagreements to be navigated, such as AI takeoff speed or likelihood of Friendliness. If I was 65 years old, and not 18 like I am, and cared a lot about a very specific me living far into the future, which I don't, and believed that a singularity was in the distant future, instead of the near-mid future as I actually believe, then signing up for cryonics would look a lot more appealing, and might be the obviously rational decision to make.

Comment deleted 27 May 2010 10:53:27AM *  [-]
Comment author: Will_Newsome 27 May 2010 09:54:22PM 1 point [-]

What?! Roko, did you seriously not see the two points I had directly after the one about age? Especially the second one?! How is my lack of a strong preference to stay alive into the distant future a false preference? Because it's not a false belief.

Comment deleted 27 May 2010 10:04:30PM *  [-]
Comment author: Will_Newsome 27 May 2010 01:04:38AM 2 points [-]

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

Similar to what I think JoshuaZ was getting at, signing up for cryonics is a decently cheap signal of your rationality and willingness to take weird ideas seriously, and it's especially cheap for young people like me who might never take advantage of the 'real' use of cryonics.

Comment author: JoshuaZ 27 May 2010 12:05:38AM 1 point [-]

Really? Even if you buy into Will's estimate, there are at least three arguments that are not weak:

1) The expected utility argument (I presented above arguments for why this fails, but it isn't completely clear that those rebuttals are valid)

2) One might think that buying into cryonics helps force people (including oneself) to think about the future in a way that produces positive utility.

3) One gets a positive utility from the hope that one might survive using cryonics.

Note that all three of these are fairly standard pro-cryonics arguments that remain valid even with the low probability estimate made by Will.

Comment deleted 27 May 2010 10:55:55AM *  [-]
Comment author: Kevin 26 May 2010 10:39:09AM *  2 points [-]

I think cryonics is a great idea and should be part of health care. However, $50,000 is a lot of money to me and I'm reluctant to spend money on life insurance, which except in the case of cryonics is almost always a bad bet.

I would like my brain to be vitrified if I am dead, but I would prefer not to pay $50,000 for cryonics in the universes where I live forever, where I die in an existential catastrophe, or where cryonics just doesn't work.

What if I specify in my (currently non-existent) cryonics optimized living will that up to $100,000 from my estate is to be used to pay for cryonics? It's not nearly as secure as a real cryonics contract, but it has the benefit of not costing $50,000.

Comment author: khafra 26 May 2010 02:39:40PM *  4 points [-]

Alcor recommends not funding out of your estate, because in the current legal system any living person with the slightest claim will take precedence over the decedent's wishes. Even if the money eventually goes to Alcor, it'll be after 8 months in probate court; and your grey matter's unlikely to be in very good condition for preservation at that point.

Comment author: Kevin 26 May 2010 10:41:39PM 3 points [-]

I know they don't recommend this, but I suspect a sufficiently good will and trust setup would have a significant probability of working, and the legal precedent set by that would be beneficial to other potential cryonauts.

Comment author: Will_Newsome 26 May 2010 10:48:12AM *  1 point [-]

This sounds like a great practical plan if you can pull it off, and, given your values, possibly an obviously correct course of action. However, it does not answer the question of whether being vitrified after death will be seen as correct upon reflection. The distinction here is important.

Comment deleted 26 May 2010 11:59:44AM [-]
Comment author: timtyler 26 May 2010 01:46:45PM *  4 points [-]

Most people already have a reason to care about the future - since it contains their relatives and descendants - and those are among the things that they say they care about.

If you are totally sterile - and have no living relatives - cryonics might seem like a reasonable way of perpetuating your essence - but for most others, there are more conventional options.

Comment deleted 26 May 2010 06:16:41PM [-]
Comment author: taw 26 May 2010 10:28:01PM 3 points [-]

Interest rates over the past 20 years have been about 7%, implying that people's half-life of concern for the future is only about 15 years.

This is plain wrong. Most of that rate is inflation premium (the premium you have to pay for inflation is higher than actual inflation, because you also bear the entire risk of inflation coming in higher than predicted, while it can't really come in much lower than predicted - it's not normally distributed).

Inflation-adjusted US Treasury bonds have had rates of about 1.68% a year over the last 12 years, and never got much higher than 3%.

For most interest rates, like the UK ones you quote, there's non-negligible currency-exchange risk and default risk on top of all that.
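
To make this concrete, here is a minimal sketch (assuming a constant exponential discount rate, which is itself a simplification) of how the implied "half-life of concern" depends on which rate you plug in; the 1.68% figure is the real (TIPS) rate cited above, and both numbers are only illustrative:

import math

def half_life_of_concern(rate):
    # Years until a constant discount rate halves the present value of a future benefit.
    return math.log(2) / math.log(1 + rate)

print(round(half_life_of_concern(0.07), 1))    # ~10.2 years at a 7% nominal rate
print(round(half_life_of_concern(0.0168), 1))  # ~41.6 years at a ~1.68% real rate

Switching from the nominal rate to the real rate roughly quadruples the implied horizon, which is the substance of the objection.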

Comment author: Vladimir_M 26 May 2010 11:04:28PM 1 point [-]

taw:

Inflation-adjusted US Treasury bonds have had rates of about 1.68% a year over the last 12 years, and never got much higher than 3%.

Not to mention that even these figures are suspect. There is no single obvious or objectively correct way to calculate the numbers for inflation-adjustment, and the methods actually used are by no means clear, transparent, and free from political pressures. Ultimately, over a longer period of time, these numbers have little to no coherent meaning in any case.

Comment author: timtyler 26 May 2010 08:53:38PM *  3 points [-]

Levels of concern about the future vary between individuals - whereas interest rates are a property of society. Surely these things are not connected!

High interest rates do not reflect a lack of concern about the future. They just illustrate how much money your government is printing. Provided you don't invest in that currency, that matters rather little.

I agree that cryonics would make people care about the future more. Though IMO most of the problems with lack of planning are more to do with the shortcomings of modern political systems than they are to do with voters not caring about the future.

The problem with cryonics is the cost. You might care more, but you can influence less - because you no longer have the cryonics money. If you can't think of any more worthwhile things to spend your money on, go for it.

Comment author: Will_Newsome 26 May 2010 12:13:32PM *  3 points [-]

Good point: mainstream cryonics would be a big step towards raising the sanity waterline, which may end up being a prerequisite to reducing various kinds of existential risk. However, I think that the causal relationship goes the other way, and that raising the sanity waterline comes first, and cryonics second: if you can get the average person across the inferential distance to seeing cryonics as reasonable, you can most likely get them across the inferential distance to seeing existential risk as really flippin' important. (I should take the advice of my own post here and note that I am sure there are really strong arguments against the idea that working to reduce existential risk is important, or at least against having much certainty that reducing existential risk will have been the correct thing to do upon reflection, at the very least on a personal level.) Nonetheless, I agree further analysis is necessary, though difficult.

Comment deleted 26 May 2010 01:15:58PM *  [-]
Comment author: Will_Newsome 26 May 2010 01:41:34PM *  4 points [-]

Your original point was that "getting cryo to go mainstream would be a strong win as far as existential risk reduction is concerned (because then the public at large would have a reason to care about the future) and as far as rationality is concerned", in which case your above comment is interesting, but tangential to what we were discussing previously. I agree that getting people to sign up for cryonics will almost assuredly get more people to sign up for cryonics (barring legal issues becoming more salient and thus potentially more restrictive as cryonics becomes more popular, or bad stories publicized whether true or false), but "because then the public at large would have a reason to care about the future" does not seem to be a strong reason to expect existential risk reduction as a result (one counterargument being the one raised by timtyler in this thread).

You have to connect cryonics with existential risk reduction, and the key isn't futurism, but strong epistemic rationality. Sure, you could also get interest sparked via memetics, but I don't think the most cost-effective way to do so would be investment in cryonics as opposed to, say, billboards proclaiming 'Existential risks are even more bad than marijuana: talk to your kids.'

Again, my intuitions are totally uncertain about this point, but it seems to me that the option a) 10 million dollars -> cryonics investment -> increased awareness in futurism -> increased awareness in existential risk reduction, is most likely inferior to option b) 10 million dollars -> any other memetic strategy -> increased awareness in existential risk reduction.

Comment deleted 26 May 2010 02:15:50PM *  [-]
Comment author: Will_Newsome 26 May 2010 02:25:17PM *  5 points [-]

And if you continue to spend more than $1 a day on food and luxuries, do you really value your life at less than one Hershey bar a day?

I think the correct question here is instead "Do you really value a very, very small chance at you having been signed up for cryonics leading to huge changes in your expected utility in some distant future across unfathomable multiverses more than an assured small amount of utility 30 minutes from now?" I do not think the answer is obvious, but I lean towards avoiding long-term commitments until I better understand the issues. Yes, a very very very tiny amount of me is dying every day due to freak kitchen accidents, but that much of my measure is so seemingly negligible that I don't feel too horrible trading it off for more thinking time and half a Hershey's bar.

The reasons you gave for spending a dollar a day on cryonics seem perfectly reasonable and I have spent a considerable amount of time thinking about them. Nonetheless, I have yet to be convinced that I would want to sign up for cryonics as anything more than a credible signal of extreme rationality. From a purely intuitive standpoint this seems justified. I'm 18 years old and the singularity seems near. I have measure to burn.

Comment deleted 26 May 2010 02:28:57PM [-]
Comment author: Will_Newsome 26 May 2010 02:40:30PM *  3 points [-]

Perhaps. I think a singularity is more likely to occur before I die (in most universes, anyway). With advancing life extension technology, good genes, and a disposition to be reasonably careful with my life, I plan on living pretty much indefinitely. I doubt cryonics has any effect at all on these universes for me personally. Beyond that, I do not have a strong sense of identity, and my preferences are not mostly about personal gain, and so universes where I do die do not seem horribly tragic, especially if I can write down a list of my values for future generations (or a future FAI) to consider and do with as they wish.

So basically... (far) less than a 1% chance of saving 'me', but even then, I don't have strong preferences for being saved. I think that the technologies are totally feasible and am less pessimistic than others that Alcor and CI will survive for the next few decades and do well. However, I think larger considerations like life extension technology, uFAI or FAI, MNT, bioweaponry, et cetera, simply render the cryopreservation / no cryopreservation question both difficult and insignificant for me personally. (Again, I'm 18, these arguments do not hold equally well for people who are older than me.)

Comment author: Airedale 26 May 2010 07:16:48PM 5 points [-]

a disposition to be reasonably careful with my life

When I read this, two images popped unbidden into my mind: 1) you wanting to walk over the not-that-stable log over the stream with the jagged rocks in it and 2) you wanting to climb out on the ledge at Benton House to get the ball. I suppose one person's "reasonably careful" is another person's "needlessly risky."

Comment author: Will_Newsome 27 May 2010 10:40:28PM 2 points [-]

This comment inspired me to draft a post about how much quantum measure is lost doing various things, so that people can more easily see whether or not a certain activity (like driving to the store for food once a week instead of having it delivered) is 'worth it'.
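
A minimal sketch of the kind of bookkeeping such a post might do, with made-up numbers standing in for real accident statistics:

# All figures below are placeholders for illustration, not actual actuarial data.
deaths_per_mile_driven = 1.5e-8   # assumed fatality risk per mile of ordinary driving
miles_per_store_trip   = 10
trips_per_year         = 52

p_death_per_year = deaths_per_mile_driven * miles_per_store_trip * trips_per_year
print(p_death_per_year)   # ~7.8e-06, i.e. very roughly 8 micromorts per year of weekly store runs

Whether that sliver of measure is 'worth it' then depends on how much you value the convenience, which is exactly the trade-off such a post would try to make explicit.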

Comment author: Will_Newsome 26 May 2010 07:47:07PM 1 point [-]

Ha, good times. :) But being careful with one's life and being careful with one's limb are two very different things. I may be stupid, but I'm not stupid.

Comment deleted 26 May 2010 02:49:36PM *  [-]
Comment author: CarlShulman 26 May 2010 09:37:05PM 2 points [-]

If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying

This only makes sense given large fixed costs of cryonics (but you can just not make it publicly known that you've signed up for a policy, and the hassle of setting one up is small compared to other health and fitness activities) and extreme (dubious) confidence in quick technological advance, given that we're talking about insurance policies.

Comment deleted 26 May 2010 11:48:00PM [-]
Comment author: Will_Newsome 27 May 2010 12:27:55AM *  1 point [-]

Note that I did not make any arguments against the technological feasibility of cryonics, because they all suck. Likewise, and I'm going to be blunt here, all arguments against the feasibility of a singularity that I've seen also suck. Taking into account structural uncertainty around nebulous concepts like identity, subjective experience, measure, et cetera, does not lead to any different predictions about whether or not a singularity will occur (but it probably does have strong implications for what type of singularity will occur!). I mean, yes, I'm probably in a Fun Theory universe and the world is full of decision theoretic zombies, but this doesn't change whether or not an AGI in such a universe looking at its source code can go FOOM.

Comment author: CarlShulman 27 May 2010 03:52:45AM *  2 points [-]

Will, the singularity argument above relies on not just the likely long-term feasibility of a singularity, but the near-certainty of one VERY soon, so soon that fixed costs like the inconvenience of spending a few hours signing up for cryonics defeat the insurance value. Note that the cost of life insurance for a given period scales with your risk of death from non-global-risk causes in advance of a singularity.

With reasonable fixed costs, that means something like assigning 95%+ probability to a singularity in less than five years. Unless one has incredible private info (e.g. working on a secret government project with a functional human-level AI) that would require an insane prior.
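
A rough sketch of that trade-off, where every number is an assumption chosen purely for illustration (ongoing premiums are left out because, as noted above, they scale with the same pre-singularity death risk):

p_death_per_year = 5e-4       # assumed annual death risk for a healthy young adult
p_cryonics_works = 0.05       # assumed probability that preservation and revival succeed
value_of_revival = 1_000_000  # assumed dollar-equivalent value placed on revival
fixed_cost       = 100        # assumed dollar-equivalent of the hours spent signing up

def net_value(years_until_singularity):
    # Expected benefit of being signed up during the pre-singularity window, minus fixed costs.
    p_die_before = 1 - (1 - p_death_per_year) ** years_until_singularity
    return p_die_before * p_cryonics_works * value_of_revival - fixed_cost

for years in (2, 5, 20):
    print(years, round(net_value(years)))   # roughly -50, +25, +400 with these placeholders

With these placeholder numbers the fixed cost only dominates when the window is a few years at most, which is the sense in which the objection requires near-certainty of a very imminent singularity.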

Comment author: Will_Newsome 27 May 2010 04:07:25AM 2 points [-]

Will, the singularity argument above relies on not just the likely long-term feasibility of a singularity, but the near-certainty of one VERY soon, so soon that fixed costs like the inconvenience of spending a few hours signing up for cryonics defeat the insurance value. Note that the cost of life insurance for a given period scales with your risk of death from non-global-risk causes in advance of a singularity.

I never argued that this objection alone is enough to tip the scales in favor of not signing up. It is mostly this argument combined with the idea that loss of measure on the order of 5-50% really isn't all that important when you're talking about multiverse-affecting technologies; no, really, I'm not sure 5% of my measure is worth having to give up half a Hershey's bar every day, when we're talking crazy post-singularity decision theoretic scenarios from one of Escher's worst nightmares. This is even more salient if those Hershey bars (or airport parking tickets or shoes or whatever) end up helping me increase the chance of getting access to infinite computational power.

Comment author: steven0461 27 May 2010 01:30:45AM 1 point [-]

I mean, yes, I'm probably in a Fun Theory universe and the world is full of decision theoretic zombies

How serious 0-10, and what's a decision theoretic zombie?

Comment author: Will_Newsome 27 May 2010 01:39:53AM *  1 point [-]

A being that has so little decision theoretic measure across the multiverse as to be nearly non-existent due to a proportionally infinitesimal number of observer-moment-like things. However, the being may have very high information theoretic measure to compensate. (I currently have an idea that Steve thinks is incorrect arguing for information theoretic measure to correlate roughly to the reciprocal of decision theoretic measure, which itself is very well-correlated with Eliezer's idea of optimization power. This is all probably stupid and wrong but it's interesting to play with the implications (like literally intelligent rocks, me [Will] being ontologically fundamental, et cetera).)

I'm going to say that I am 8 serious (on a 0-10 scale) that things will turn out to really probably not add up to 'normality', whatever your average rationalist thinks 'normality' is. Some of the implications of decision theory really are legitimately weird.

Comment author: steven0461 27 May 2010 01:45:18AM 1 point [-]

What do you mean by decision theoretic and information theoretic measure? You don't come across as ontologically fundamental IRL.

Comment author: Will_Newsome 27 May 2010 01:57:11AM *  2 points [-]

Hm, I was hoping to magically get at the same concepts you had cached but it seems like I failed. (Agent) computations that have lower Kolmogorov complexity have greater information theoretic measure in my twisted model of multiverse existence. Decision theoretic measure is something like the significantness you told me to talk to Steve Rayhawk about: the idea that one shouldn't care about events one has no control over, combined with the (my own?) idea that having oneself cared about by a lot of agent-computations and thus made more salient to more decisions is another completely viable way of increasing one's measure. Throw in a judicious mix of anthropic reasoning, optimization power, ontology of agency, infinite computing power in finite time, 'probability as preference', and a bunch of other mumbo jumbo, and you start getting some interesting ideas in decision theory. Is this not enough to hint at the conceptspace I'm trying to convey?

"You don't come across as ontologically fundamental IRL." Ha, I was kind of trolling there, but something along the lines of 'I find myself as me because I am part of the computation that has the greatest proportional measure across the multiverse'. It's one of many possible explanations I toy with as to why I exist. Decision theory really does give one the tools to blow one's philosophical foot off. I don't take any of my ideas too seriously, but collectively, I feel like they're representative of a confusion that not only I have.

Comment author: steven0461 26 May 2010 07:15:39PM 2 points [-]

Good post. People focus only on the monetary cost of cryonics, but my impression is there are also substantial costs from hassle and perceived weirdness.

Comment author: Torben 27 May 2010 09:28:52AM 1 point [-]

Really? I may be lucky, but I have quite the opposite experience. Of course, I haven't signed up due to my place of residence but I have mentioned it to friends and family and they don't seem to think much about it.

Comment author: cjb 27 May 2010 03:38:05AM 1 point [-]

Hi, I'm pretty new here too. I hope I'm not repeating an old argument, but suspect I am; feel free to answer with a pointer instead of a direct rebuttal.

I'm surprised that no-one's mentioned the cost of cryonics in relation to the reduction in net human suffering that could come from spending the money on poverty relief instead. For (say) USD $50k, I could save around 100 lives ($500/life is a current rough estimate of the cost of lifesaving aid for people in extreme poverty), or could dramatically increase the quality of life of 1000 people (for example, cataract operations to restore sight to a blind person cost around $50).

How can we say it's moral to value such a long shot at elongating my own life as being worth more than 100-1000 lives of other humans who happened to do worse in the birth wealth lottery than I did?

Comment author: knb 27 May 2010 08:23:00AM 6 points [-]

This is also an argument against going to movies, buying coffee, owning a car, or having a child. In fact, this is an argument against doing anything beyond living at the absolute minimum threshold of life, while donating the rest of your income to charity.

How can you say it's moral to value your own comfort as being worth more than 100-1000 other humans? They just did worse at the birth lottery, right?

Comment author: cjb 28 May 2010 01:49:16AM 2 points [-]

It's not really an argument against those other things, although I do indeed try to avoid some luxuries, or to match the amount I spend on them with a donation to an effective aid organization.

What I think you've missed is that many of the items you mention are essential for me to continue having and being motivated in a job that pays me well -- well enough to make donations to aid organizations that accomplish far more than I could if I just took a plane to a place of extreme poverty and attempted to help using my own skills directly.

If there's a better way to help alleviate poverty than donating a percentage of my developed-world salary to effective charities every year, I haven't found it yet.

Comment author: knb 28 May 2010 02:51:06AM 4 points [-]

Ah, I see. So when you spend money on yourself, it's just to motivate yourself for more charitable labor. But when those weird cryonauts spend money on themselves, they're being selfish!

How wonderful to be you.

Comment author: nazgulnarsil 27 May 2010 04:46:54AM *  4 points [-]

like this: I value my subjective experience more than even hundreds of thousands of other similar-but-not-me subjective experiences.

additionally, your argument applies to generic goods you choose over saving people, not just cryonics.

Comment author: Will_Newsome 27 May 2010 11:03:51PM 2 points [-]

One can expect to live a life at least 100-1000 times longer than those other poor people, or a life that has at least 100-1000 times as much positive utility; there are also the points made in the other comments.

Although this argument is a decent one for some people, it's much more often the product of motivated cognition than of carefully looking at the issues, so I did not include it in the post.

Comment author: CronoDAS 26 May 2010 09:34:07PM 0 points [-]

Reason #7 not to sign up: There is a significant chance that you will suffer information-theoretic death before your brain can be subjected to the preservation process. Your brain could be destroyed by whatever it is that causes you to die (such as a head injury or massive stroke) or you could succumb to age-related dementia before the rest of your body stops functioning.

Comment author: JoshuaZ 26 May 2010 09:42:34PM 4 points [-]

With regard to dementia, it isn't at all clear that it will necessarily lead to information-theoretic death. We don't have a good enough understanding of dementia to know if the information is genuinely lost or just difficult to recover. The fact that people with many forms of dementia have more or less lucid periods - times when they can remember who people are and other times when they cannot - is tentative evidence that the information is recoverable.

Also, this isn't that strong an argument: it isn't going to alter the case for signing up by more than about an order of magnitude at the very most (based on the chance of violent death and the chance that one will develop dementia late in life).

Comment author: thezeus18 27 May 2010 06:17:21AM *  1 point [-]

I'm surprised that you didn't bring up what I find to be a fairly obvious problem with cryonics: what if nobody feels like thawing you out? Of course, not having followed this dialogue I'm probably missing some equally obvious counter to this argument.

Comment author: Bo102010 27 May 2010 07:25:36AM *  2 points [-]

If I were defending cryonics, I would say that a small chance of immortality beats sure death hands-down.

It sounds like Pascal's Wager (small chance at success, potentially infinite payoff), but it doesn't fail for the same reasons Pascal's Wager does (Pascal's gambit for one religion would work just as well for any other one) - discussed here a while back.

Comment author: Unnamed 26 May 2010 05:24:48PM *  1 point [-]

Another argument against cryonics is just that it's relatively unlikely to work (= lead to your happy revival) since it requires several things to go right. Robin's net present value calculation of the expected benefits of cryonic preservation isn't all that different from the cost of cryonics. With slightly different estimates for some of the numbers, it would be easy to end up with an expected benefit that's less than the cost.
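
A minimal sketch of that sort of back-of-the-envelope calculation; the step probabilities and values below are placeholders, not Robin's actual estimates:

# Every figure here is an illustrative assumption.
steps = {
    "preserved well enough at death": 0.5,
    "organization survives until revival is possible": 0.4,
    "revival technology is developed and applied": 0.5,
    "revived into a life you would actually want": 0.5,
}
p_success = 1.0
for p in steps.values():
    p_success *= p

value_of_happy_revival = 2_000_000  # assumed dollar-equivalent value of a happy revival
cost = 50_000                       # the cost figure mentioned elsewhere in this thread

print(p_success)                                   # 0.05 with these placeholders
print(p_success * value_of_happy_revival - cost)   # 50000.0: positive, but fragile

Halving any single step probability wipes out the surplus entirely, which is the fragility being pointed at.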

Comment author: Vladimir_Nesov 26 May 2010 11:08:00AM *  1 point [-]

One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.

Harder or not, which is actually right? This is not about signaling one's ability to do the harder thing.

The reasons you listed are not the ones that move most people not to sign up for cryonics. Most people, as you mention at the beginning, simply don't take the possibility seriously enough to even consider it in detail.

Comment author: Will_Newsome 26 May 2010 11:30:54AM 1 point [-]

I agree, but there exists a non-negligible number of people who have not-obviously-illegitimate reasons for not being signed up: not most of the people in the world, and maybe not most of Less Wrong, but at least a sizable portion of Less Wrongers (and most of the people I interact with on a daily basis at SIAI). It seems that somewhere along the line people started to misinterpret Eliezer (or something) and group the reasonable and unreasonable non-cryonauts together.

Comment author: Vladimir_Nesov 26 May 2010 11:39:30AM 1 point [-]

Then state the scope of the claim explicitly in the post.

Comment author: Vladimir_Nesov 26 May 2010 10:59:21AM 1 point [-]

et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are.

This argument from confusion doesn't shift the decision either way, so it could just as well be an argument for signing up as against signing up; similarly for immediate suicide, or against that. On net, this argument doesn't move the decision, because there is no default to fall back to once you get more confused.

Comment author: steven0461 26 May 2010 06:59:05PM *  3 points [-]

I'd say the argument from confusion argues more strongly against benefits that are more inferential steps away. E.g., maybe it supports eating ice cream over cryonics but not necessarily existential risk reduction over cryonics.

Comment author: Will_Newsome 26 May 2010 11:02:41AM *  1 point [-]

Correct: it is simply an argument against certainty in either direction. It is the certainty that I find worrisome, not the conclusion. Now that I look back, I think I failed to duly emphasize the symmetry of my arguments.

Comment author: zero_call 28 May 2010 06:27:46AM *  -2 points [-]

There's a much better, simpler reason to reject cryonics: it isn't proven. There might be some good signs and indications, but it's still rather murky in there. That being said, it's rather clear from prior discussion that most people in this forum believe that it will work. I find it slightly absurd, to be honest. You can talk a lot about uncertainties and supporting evidence and burden of proof and so on, but the simple fact remains the same: there is no proof cryonics will work, either right now, 20, or 50 years in the future. I hate to sound so cynical; I don't mean to rain on anyone's parade, but I'm just stating the facts.

Bear in mind they don't just have to prove it will work. They also need to show you can be uploaded, reverse-aged, or whatever else comes next. (Now awaiting hordes of flabbergasted replies and accusations.)

Comment author: CronoDAS 28 May 2010 06:38:48AM 9 points [-]