
Comment author: Error 11 June 2017 06:05:42PM 6 points [-]

Not all tail risk is created equal. Assume your remaining natural lifespan is L years, and revival tech will be invented R years after that. Refusing to kill yourself is effectively betting that no inescapable worse-than-death future will occur in the next L years; refusing cryonics is effectively betting the same, but for the next L + R years.

Assuming revival tech is invented only after you die, the probability of ending up in some variation of hell is strictly greater with cryonics than without it -- even if both chances are very small -- simply because hell has more time to get started.
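
To make the comparison concrete (purely as an illustration, assuming a constant annual hazard h of an inescapable worse-than-death future arising, independently across years):

    P(hell | no cryonics)   = 1 - (1 - h)^L
    P(hell | with cryonics) = 1 - (1 - h)^(L + R)

For any h > 0 and R > 0 the second probability is strictly larger; for small h the gap is roughly h * R.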

It's debatable how large the difference is between the probabilities, of course. But some risk thresholds legitimately fall between the two.

(upvoting even though I disagree with your conclusion -- I think it's an interesting line of thought)

Comment author: Synaptic 11 June 2017 07:52:17PM 2 points [-]

Upvoted -- I agree that the probability is higher if you do cryonics.

However, a lot of the framing of this discussion is that "if you choose cryonics, you are opening up Pandora's box because of the possibility of worse-than-death outcomes." This triggers all sorts of catastrophic cognitions and causes people to have even more of an ugh field around cryonics. So I wanted to point out that worse-than-death outcomes are certainly still possible even if you don't do cryonics.

Why I think worse-than-death outcomes are not a good reason for most people to avoid cryonics

5 Synaptic 11 June 2017 03:55PM
Content note: torture, suicide, things that are worse than death


TLDR: The world is certainly a scary place if you stop to consider all of the tail-risk events that might be worse than death. It's true that there is a tail risk of experiencing one of these outcomes if you choose to undergo cryonics, but it's also true that you run a similar risk simply by choosing not to kill yourself right now, or before you are incapacitated by a traumatic brain injury (TBI) or a neurodegenerative disease. I think these tail-risk events are extremely unlikely, and I urge you not to kill yourself because you are worried about them; but I also think they are extremely unlikely in the case of cryonics, and I don't think the possibility of them occurring should stop you from pursuing cryonics.

I

Several members of the rationalist community have said that they would not want to undergo cryonics upon their legal deaths because they are worried about a specific tail risk: that they might be revived in a world that is worse than death and that doesn't allow them to kill themselves. For example, lukeprog mentioned this in an LW comment:

> Why am I not signed up for cryonics?
>
> Here's my model.
>
> In most futures, everyone is simply dead.
>
> There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.
>
> What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.
>
> I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.
>
> I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.
>
> Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.

#####

In this post I'm going to explain why I think that, with a few stipulations, the risks of these worse-than-death tail events occurring are close to the risks you already accept by choosing to live out your natural lifespan. Therefore, based on revealed preference, in my opinion they are not a good reason for most people not to undergo cryonics. (There are, of course, several other reasons for which you might choose not to pursue cryonics, but they will not be discussed here.)

II

First, some points about the general landscape of the problem, which you are welcome to disagree with: 

- In most futures, I expect that you will still be able to kill yourself. In these scenarios, it's at least worth seeing what the future world will be like so you can decide whether or not it is worth it for you.  
- Therefore, worse-than-death futures are exclusively ones in which you are not able to kill yourself. Here are two commonly discussed scenarios for this, and why I think they are unlikely:  
-- You are revived as a slave for a future society. This is very unlikely for economic reasons: a society with technology sufficiently advanced to revive cryonics patients can almost certainly extend lifespan indefinitely and create additional humans at low cost. If such a society were evil enough to want slaves, creating new humans for the purpose would be cheaper than reviving old ones with a complicated technology that might not work.
-- You are revived specifically by a malevolent society/AI that is motivated to torture humans. This is unlikely for scope reasons: any society/AI with sufficiently advanced technology to do this could create/simulate additional persons that fit their interests more precisely. For example, an unfriendly AI would likely simulate all possible human/animal/sentient minds until the heat death of the universe, using up all available resources in order to do so. Your mind, and minds very similar to yours, would likely already be included in these simulations many times over. In this case, doing cryonics would not actually make you worse off. (Although of course you would already be quite badly off, and we should definitely try our best to avoid this extremely unlikely scenario!)

If you are worried about a particular scenario, you can stipulate to your cryonics organization that you would like to be removed from preservation if intermediate events occur that make that scenario more likely, thus substantially reducing the risk of it happening to you. For example, you might say:

- If a fascist government that tortures its citizens indefinitely and doesn't allow them to kill themselves seems likely to take over the world, please cremate me. 
- If an alien spaceship with likely malicious intentions approaches the earth, please cremate me. 
- If a sociopath creates an AI that is taking over foreign cities and torturing their inhabitants, please cremate me. 

In fact, you probably wouldn't have to ask: in most of these scenarios, the cryonics organization is likely, out of compassion, to remove you from preservation in order to protect you from these bad outcomes.

But even with such a set of stipulations or compassionate treatment by your cryonics organization, it's still possible that you could be revived in a worse-than-death scenario. As Brian Tomasik puts it:

> Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

However, I would like to make an additional point: there's no guarantee that these bad scenarios won't happen too quickly for you to react today, or at some point in the future before your legal death.

If you're significantly worried about worse-than-death outcomes happening in a possible future in which you are cryopreserved, then it seems like you should also be worried about one of them happening in the relatively near term, while you are still alive. By the same logic, it seems that you should be an anti-natalist, since any children you create would be exposed to the same tail risks.

III

You might argue that this is still your true rejection, and that while it's true that a malevolent agent could take over the world now or in the near future too quickly for you to react, you would rather trust yourself to kill yourself than trust your cryonics organization to take you out of preservation in these scenarios.

This is a reasonable response, but one possibility that you might not be considering is that you might develop a condition that renders you unable to make that decision.

For example, people can live for decades with traumatic brain injuries, with neurodegenerative diseases, in comas, or with other conditions that prevent them from making the decision to kill themselves, while retaining core aspects of the memories and personality that make them "them" (even if these are not accessible because of damage to communication systems in the brain). If aging is slowed, these incapacitating conditions could last for even longer periods of time.

It's possible that while you're incapacitated by one of these unfortunate conditions, a fascist government, evil aliens, or a malevolent AI will take over. 

These incapacitating conditions are each somewhat unlikely to occur, but if we're talking about tail events, they deserve consideration. And they aren't necessarily less likely than being successfully revived from cryostasis, which is of course also far from guaranteed to work.

It might sound like my point here is "cryonics: maybe not that much worse than living for years in a completely incapacitating coma?", which is not necessarily the most ringing endorsement of cryonics, I admit. 

But my main point here is that your revealed preferences might indicate that you are more willing to tolerate some very, very small probability of things going horribly wrong than you realize. 

So if you're OK with the risk that you will end up in a worse-than-death scenario even before you do cryonics, then you may also be OK with the risk that you will end up in a worse-than-death scenario after you are preserved via cryonics (both of which seem very, very small to me). Choosing cryonics doesn't "open up" a very bad tail risk that would never occur otherwise. That risk already exists.
Comment author: shminux 21 February 2015 11:14:47PM 3 points [-]

Easy: if you are worried about worse-than-death life after revival, don't get preserved. It's not like there are too few people in the world and no way to create more. I'll take my chances, if I can. I don't expect it to be a problem to self-terminate later, should I want to. I don't put any stock in the scary scenarios where an evil Omega tortures a gazillion of my revived clones for eternity.

Comment author: Synaptic 21 February 2015 11:25:25PM 5 points [-]

Well, this is certainly a reasonable response. But if there were a mechanism to decrease the probability of a worse-than-death outcome, so that people who have expressed these concerns become more willing to do brain preservation and more people can be a part of the future, that seems like an easy win. I don't think people are particularly fungible.

Comment author: Brian_Tomasik 21 February 2015 11:16:57PM 7 points [-]

A "do not resuscitate" kind of request would probably help with some futures that are mildly bad in virtue of some disconnect between your old self and the future (e.g., extreme future shock). But in those cases, you could always just kill yourself.

In the worst futures, presumably those resuscitating you wouldn't care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.

Comment author: Synaptic 21 February 2015 11:22:51PM 5 points [-]

I think I did not explain my proposal clearly enough. What I'm claiming is that if you could see intermediate steps suggesting that a worst-type future is imminent, or that such a future merely crosses your probability threshold for "too likely", then you could enumerate those steps in advance and request to be removed from biostasis at that point -- before those who would resuscitate you have a chance to do so.

Comment author: lukeprog 06 March 2013 07:52:53PM *  22 points [-]

Why am I not signed up for cryonics?

Here's my model.

In most futures, everyone is simply dead.

There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.

What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.

I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.

I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.

Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.

This does, however, put me into disagreement with both Robin Hanson ("More likely than not, most folks who die today didn't have to die!") and Eliezer Yudkowsky ("Not signing up for cryonics [says that] you've stopped believing that human life, and your own life, is something of value").


Can we decrease the risk of worse-than-death outcomes following brain preservation?

8 Synaptic 21 February 2015 10:58PM

Content note: discussion of things that are worse than death

Over the past few years, a few people have said that they reject cryonics due to concerns that they might be revived into a world that they would prefer less than being dead or not existing. For example, lukeprog pointed this out in an LW comment here, and Julia Galef expressed similar sentiments in a comment on her blog here.

I use "brain preservation" rather than "cryonics" here because these concerns seem to be technology-platform agnostic.

To me, one solution is to have an "out-clause": a specification of circumstances under which you'd prefer to have your preservation/suspension terminated.

Here's how it would work: you specify, prior to entering biostasis, circumstances in which you'd prefer to have your brain/body be taken out of stasis. Then, if those circumstances are realized, the organization carries out your request. 

This almost certainly wouldn't prevent all of the potential bad outcomes, but it ought to help with some. It does require that you enumerate, in advance, the circumstances in which you'd prefer to have your suspension terminated.

While obvious, it seems worth pointing out that there's no way to decrease the probability of worse-than-death outcomes to 0%. This is also the case for currently-living people, though: people whose brains are not preserved could also experience worse-than-death outcomes and/or have their lifespans extended against their wishes.

For people who are concerned about this, I have three main questions: 

1) Do you think that an opt-out clause is a useful-in-principle way to address your concerns?

2) If no to #1, is there some other mechanism that you could imagine which would work?

3) Can you enumerate some specific world-states that you think could lead to revival in a worse-than-death state? (Examples: UFAI is imminent, or a malevolent dictator's army is about to take over the world.) 

In response to You Only Live Twice
Comment author: Jacobian 23 December 2014 04:55:15PM *  0 points [-]

Eliezer, thank you for writing a beautiful post. I do hope that the people of the future value my life more than the people of the present, and the fact that there are at least two people in the present who do (Eliezer and my mom ;-) ) is heartening.

I am quite convinced about cryonics in general, but I am not convinced at all that paying up right now for CI or Alcor is a smart investment. What's the downside of just setting aside enough money for cryopreservation and choosing the best option when death looms?

Consider:

  • I am 27. If I die suddenly (without regaining consciousness even for a day) in the next decade it's likely that I would die in a fashion (shot in the head, car crash) that won't leave much of my brain to be preserved.

  • The chances that I'll be in the US when I die are very far from certain (I'm a foreign citizen living in NYC currently).

  • If I decide a decade from now that I don't want to cryopreserve, the fees would have been money wasted. I can't force me-in-10-years into a decision.

  • Judging by the progress of modern medicine (advances in cancer treatment) and my family history (pretty good from a cardiovascular standpoint) it is very likely that my ticket out will be Alzheimer's or another neurodegenerative disease. In that case, cryopreservation will only make sense if I commit suicide at the very onset of the disease and am frozen right away which may not be possible. If I get Alzheimer's I may as well donate all my money to SIAI or Africa.

  • If the future is going to move in the direction we are hoping for, it's not unlikely that there will be more companies offering cryopreservation with better deals (e.g. lower fees, global coverage, eternal investment trust management).

Basically, what is the upside of signing up for one specific company and paying the fees vs. knowing that I have made the decision to spend the money on cryopreservation instead of life-prolonging treatment and trusting my future cancer-diagnosed self to be brave enough to keep it?

Comment author: Synaptic 18 January 2015 06:38:08PM 0 points [-]

> it is very likely that my ticket out will be Alzheimer's or another neurodegenerative disease. In that case, cryopreservation will only make sense if I commit suicide at the very onset of the disease and am frozen right away which may not be possible. If I get Alzheimer's I may as well donate all my money to SIAI or Africa.

Consider two possibilities:

1) Alzheimer's may break down long-distance communication in the brain more than it destroys the actual information, such as memories. Cf. moments of lucidity. It's not clear how true this is, though.

2) It may in fact be possible, some number of years from now, to undergo controlled legal death at the onset of the disease. See the Oregon laws, which are likely to start being passed elsewhere. See also http://www.evidencebasedcryonics.org/2012/05/09/revisiting-donaldson/

I want you to live too :)

Comment author: Swimmer963 15 January 2015 07:38:52PM 0 points [-]

I actually had a nightmare recently where I was diagnosed with an aggressive cancer and would have preferred not to go through treatment, but felt pressured by other, more aggressively anti-death members of the rationality community. I was afraid people would think I didn't care about them if I didn't try to stay alive longer to be with them, etc. (I'm an ICU nurse; I have a pretty good S1 handle on how horrific a lot of life-saving treatments are, and how much quality of life it's possible to lose.)

I've thought about cryonics, but haven't made a decision either way; right now, my feeling is that I don't have anything against the principle, but that it doesn't seem likely enough to work for the cost-benefit analysis to come out positive.

Comment author: Synaptic 17 January 2015 04:56:18AM 0 points [-]

Can you describe the reasons that make you think it is not likely enough to work? Totally understandable if you can't articulate such reasons, but I'm just curious about what benchmarks you might find useful in informing your probability estimate.

That is to say, it's unlikely that fully reversible cryopreservation will be demonstrated anytime soon; if it were possible, the technique probably wouldn't be called cryonics anymore. So other, more intermediate milestones that you'd find informative might be good to know about.

Comment author: [deleted] 17 December 2012 09:30:43PM *  1 point [-]

"Brain degradation after death" is the key point in this list that I'd be interested in learning about. I'm not sure if it's proper to ask this in a comment now or should I be studying diligently around the issue, but I think it's also an interesting subject so excuse me.

The cryonics process is often compared, by analogy, to a broken hard drive whose data remains retrievable, but brains and hard drives store information in very different ways, and this problem always strikes me as very unnerving. Without going into too much detail, it's easy to see how something that is mostly true for hard drives might turn out not to be true at all for brains.

Personally, I would already be signed up for cryonics if only I had the money, and I think it's very important to discuss the topic. This is very much related, because the few discussions I've had about cryonics have usually stumbled on this particular detail. Can the information in the brain really be preserved via cryonics? Does the brain not deteriorate before the actual event of cryopreservation?

Considering that microinfarctions currently seem to be an irreversible problem even in living human beings, I'm very skeptical about how often the tissue survives to the point where it's finally frozen.

<EDIT> Just to point out: the cited paragraph in the main post did not cover this area exactly, but instead focused on the cryopreservation process itself, and personally I completely disagree with that neuroscientist's skepticism. If you're interested in why: the neuroscientist may be underestimating the capacity of future technologies, concentrating instead on the problem being unsolvable with present technology. As long as the information is stored well enough to be reconstructed in theory, I think it's plausible that reconstruction will be possible in practice later. And the neuroscientist did not seem (from my extremely layman perspective) to approach the issue from the angle of information-theoretic loss, but rather from the practical angle of extracting that warped information. I think the cryopreservation process provides a kind of stable environment where changes to the brain can be traced back, and the damage caused by the process is potentially reversible.

Meanwhile, I think the chemical reactions, damage from microbes, etc. that occur prior to cryopreservation pose the threat of the information being completely lost, with degradation of the brain prior to preservation being the key problem: something missing, as opposed to something being distorted.

That's what I think anyway - which is not much in terms of reliability </EDIT>

Could anyone please be nice and elaborate on this?

In response to comment by [deleted] on More Cryonics Probability Estimates
Comment author: Synaptic 18 December 2012 09:32:14PM 3 points [-]

"Brain degradation after death" is the key point in this list that I'd be interested in learning about. I'm not sure if it's proper to ask this in a comment now or should I be studying diligently around the issue, but I think it's also an interesting subject so excuse me.

Yes, good intuition. This is what Mike Darwin considers the largest problem in cryonics: http://chronopause.com/index.php/2011/02/23/does-personal-identity-survive-cryopreservation/

Comment author: Larks 18 December 2012 03:41:28AM 18 points [-]

Science has moved away from considering memories to be simply long-term structural changes in the brain, toward seeing memories as the products of "continuous enzymatic activity" (Sacktor, 2007). Enzyme activity ceases after death, which could lead to memory destruction.

For instance, in a slightly unnerving study, Sacktor and colleagues taught mice to avoid the taste of saccharin before injecting a PKMzeta-blocking drug called ZIP into their insular cortex. PKMzeta, an enzyme, has been associated with increasing the number of receptors at synapses that fire together during memory recall. Within hours, the mice forgot that saccharin made them nauseous and began guzzling it again. It seems that blocking the activity of PKMzeta destroys memories. Since PKMzeta activity (like all enzyme activity) also happens to be blocked following death, a possible extension of this research is that the brain automatically "forgets" everything after death, so a simulation of your brain after death would not be very similar to you.

http://www.nimh.nih.gov/science-news/2007/memory-sustaining-enzyme-may-help-treat-ptsd-cognitive-decline.shtml

Comment author: Synaptic 18 December 2012 09:29:20PM *  2 points [-]

> simply long-term structural changes in the brain to seeing memories as the products of "continuous enzymatic activity"

Long-term structural maintenance requires continuous enzymatic activity. For example, the average AMPA receptor lasts only around one day: http://www.ncbi.nlm.nih.gov/pubmed/18320299. The actin cytoskeleton, made up of molecules which largely specify the structure of synapses, also requires continuous remodeling. If a structure is visibly the same after vitrification (not trivial), that means the molecules specifying it are likely to not have changed much.
