Content note: torture, suicide, things that are worse than death

Follow-up to: http://lesswrong.com/r/discussion/lw/lrf/can_we_decrease_the_risk_of_worsethandeath/

TLDR: The world is certainly a scary place if you stop to consider all of the tail risk events that might be worse than death. It's true that there is a tail risk of experiencing one of these outcomes if you choose to undergo cryonics, but it's also true that you risk these events by choosing not to kill yourself right now, or before you are incapacitated by a TBI or neurodegenerative disease. I think these tail risk events are extremely unlikely and I urge you not to kill yourself because you are worried about them, but I also think that they are extremely unlikely in the case of cryonics and I don't think that the possibility of them occurring should stop you from pursuing cryonics. 

I

Several members of the rationalist community have said that they would not want to undergo cryonics upon their legal deaths because they are worried about a specific tail risk: that they might be revived in a world that is worse than death, and that doesn't allow them to kill themselves. For example, lukeprog mentioned this in a LW comment:

> Why am I not signed up for cryonics?
>
> Here's my model.
>
> In most futures, everyone is simply dead.
>
> There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.
>
> What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.
>
> I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.
>
> I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.
>
> Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.


In this post I'm going to explain why I think that, with a few stipulations, the risk of these worse-than-death tail events occurring after cryopreservation is close to the risk you already run by living out your natural lifespan. Therefore, based on revealed preference, in my opinion they are not a good reason for most people to forgo cryonics. (Although there are, of course, several other reasons for which you might choose not to pursue cryonics, which will not be discussed here.)

II

First, some points about the general landscape of the problem, which you are welcome to disagree with: 

- In most futures, I expect that you will still be able to kill yourself. In these scenarios, it's at least worth seeing what the future world will be like so you can decide whether or not it is worth it for you.  

- Therefore, worse-than-death futures are exclusively ones in which you are not able to kill yourself. Here are two commonly discussed scenarios for this, and why I think they are unlikely:  

-- You are revived as a slave for a future society. This is very unlikely for economic reasons: a society with sufficiently advanced technology that it can revive cryonics patients can almost certainly extend lifespan indefinitely and create additional humans at low cost. If society is evil enough to do this, then creating additional humans as slaves is going to be cheaper than reviving old ones with a complicated technology that might not work. 

-- You are revived specifically by a malevolent society/AI that is motivated to torture humans. This is unlikely for scope reasons: any society/AI with sufficiently advanced technology to do this can create/simulate additional persons tailored to fit its interests more precisely. For example, an unfriendly AI would likely simulate all possible human/animal/sentient minds until the heat death of the universe, using up all available resources in order to do so. Your mind, and minds very similar to yours, would already likely be included in these simulations many times over. In this case, doing cryonics would not actually make you worse off. (Although of course you would already be quite badly off, and we should definitely try our best to avoid this extremely unlikely scenario!)

If you are worried about a particular scenario, you can stipulate to your cryonics organization that you would like to be removed from preservation if intermediate events occur that make that scenario more likely, thus substantially reducing the risk of it happening to you. For example, you might say:

- If a fascist government that tortures its citizens indefinitely and doesn't allow them to kill themselves seems likely to take over the world, please cremate me. 

- If an alien spaceship with likely malicious intentions approaches the earth, please cremate me. 

- If a sociopath creates an AI that is taking over foreign cities and torturing their inhabitants, please cremate me. 

In fact, you probably wouldn't have to ask: in most of these scenarios, the cryonics organization is likely to remove you from preservation out of compassion, in order to protect you from these bad outcomes.

But even with such a set of stipulations or compassionate treatment by your cryonics organization, it's still possible that you could be revived in a worse-than-death scenario. As Brian Tomasik puts it:

> Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

However, I would add a further point: there's no guarantee that these bad scenarios won't also happen too quickly for you to react today, or at some point in the future before your legal death.

If you're significantly worried about worse-than-death outcomes happening in a possible future in which you are cryopreserved, then it seems like you should also be worried about one of them happening in the relatively near term as well. It also seems that you should be an anti-natalist.

III

You might argue that this is still your true rejection, and that while it's true that a faster-than-you-can-react malevolent agent could take over the world now or in the near future, you would rather trust yourself to kill yourself than trust your cryonics organization to take you out of preservation in these scenarios.

This is a reasonable response, but one possibility that you might not be considering is that you might develop a condition that renders you unable to make that decision.

For example, people can live for decades with traumatic brain injuries, with neurodegenerative diseases, in comas, or with other conditions that prevent them from making the decision to kill themselves while still retaining core aspects of the memories and personality that make them "them" (perhaps inaccessible because of damage to communication systems in the brain). If aging is slowed, these incapacitating conditions could last for even longer periods of time.

It's possible that while you're incapacitated by one of these unfortunate conditions, a fascist government, evil aliens, or a malevolent AI will take over. 

These incapacitating conditions are each somewhat unlikely to occur, but if we're talking about tail events, they deserve consideration. And they aren't necessarily any less likely than being successfully revived from cryostasis, which is of course also far from guaranteed to work.

It might sound like my point here is "cryonics: maybe not that much worse than living for years in a completely incapacitating coma?", which is not necessarily the most ringing endorsement of cryonics, I admit. 

But my main point here is that your revealed preferences might indicate that you are more willing to tolerate some very, very small probability of things going horribly wrong than you realize. 

So if you're OK with the risk that you will end up in a worse-than-death scenario even before you do cryonics, then you may also be OK with the risk that you will end up in a worse-than-death scenario after you are preserved via cryonics (both of which seem very, very small to me). Choosing cryonics doesn't "open up" a very bad tail risk that would otherwise never occur. That risk already exists.

Comments (29)

But killing oneself has a tail risk too. One is that hell exists in our simulation, and suicide is a sin :)

Another is that quantum immortality is true AND that you will survive any suicide attempt but be seriously injured. Personally, I don't think this is a tail outcome; I give it a high probability, though most people give it a very low probability.

Paradoxically, only cryonics could protect you against these tail outcomes: even a small chance that you will be successfully cryopreserved and returned to life (say 1 per cent) dominates the chance that you will be endlessly dying but unable to die (say 0.00001 per cent) across all branches of the multiverse, so you will experience returning from cryostasis 100,000 times more often than suffering endlessly because of quantum immortality.

The chances that you will be resurrected by an evil AI only to be tortured are much smaller, say 1 per cent of all cryo-branches of the multiverse.

This means that if you choose suicide (and believe in QI), you have a 100 times greater chance of eternal suffering than if you choose cryonics.

We already know that most people who attempt suicide survive. This is true even from the standard viewpoint of an external observer. This is already a good reason not to attempt suicide.

[Citation needed]

Do most only survive their first attempt and try again, or do most live a natural lifespan after a failed attempt? What proportion of these suffer debilitating injuries for their efforts?

Without a citation at hand, I recall that there are two types of suicides. One type is "rational" and well planned, and these typically succeed. The others are emotional displays intended to fail, which can be repeated many times as a manipulative instrument, and they distort the statistics.

> One is that hell exists in our simulation, and suicide is a sin :)

Pascal's mugging. One could just as easily imagine a simulation such that suicide is necessary to be saved from hell. Which is more probable? We cannot say.

> Another is that quantum immortality is true AND that you will survive any suicide attempt but be seriously injured. Personally, I don't think this is a tail outcome; I give it a high probability, though most people give it a very low probability.

I also think this is more likely than not. Subjective immortality doesn't even require Many Worlds; a Tegmark I multiverse is sufficient. Assuming we have no immortal souls and our minds are only patterns in matter, then "you" are simultaneously every instantiation of your pattern throughout the multiverse. Attempting suicide will only force you into living in the bad outcomes where you no longer have control over your life, and thus cannot die. But this is exactly what the suicidal are trying to avoid.

Agreed about big world immortality. In the case of Pascal's mugging, there are a lot of messages implanted in our culture that suicide is bad, which increases the chances that the owners of the simulations actually think so.

Also, even if one is not signed up for cryonics but has rationalist friends, there is a 0.01 percent chance of cryopreservation against one's will, which dominates the chances of the other infinite regressions under big world immortality. In other words, QI increases the chance of your cryopreservation to almost 1.

See also my long comment below about an acausal war between evil and benevolent AIs.

Not all tail risk is created equal. Assume your remaining natural lifespan is L years, and revival tech will be invented R years after that. Refusing to kill yourself is effectively betting that no inescapable worse-than-death future will occur in the next L years; signing up for cryonics is effectively betting the same, but for the next L + R years.

Assuming revival tech is invented only after you die, the probability of ending up in some variation of hell is strictly greater with cryonics than without it -- even if both chances are very small -- simply because hell has more time to get started.

It's debatable how large the difference is between the probabilities, of course. But some risk thresholds legitimately fall between the two.
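To make the size of that difference concrete, here is a minimal sketch (illustrative only; the hazard rate and timescales below are made-up numbers, not anything from the thread) comparing the two bets under an assumed constant yearly hazard of an inescapable worse-than-death event:

```python
import math

def p_bad(years, yearly_hazard):
    """Probability that at least one inescapable worse-than-death event
    occurs within `years`, assuming a constant yearly hazard rate."""
    return 1 - math.exp(-yearly_hazard * years)

# Made-up illustrative numbers: 50 remaining years of natural life (L),
# 150 years in cryostasis before revival (R), and a one-in-a-million
# yearly hazard of an inescapable worse-than-death scenario.
L, R, h = 50, 150, 1e-6

p_without_cryonics = p_bad(L, h)       # exposed only during natural lifespan
p_with_cryonics = p_bad(L + R, h)      # exposed during lifespan plus cryostasis

print(f"P(bad | no cryonics) ~ {p_without_cryonics:.2e}")
print(f"P(bad | cryonics)    ~ {p_with_cryonics:.2e}")
print(f"ratio                ~ {p_with_cryonics / p_without_cryonics:.1f}x")
```

Under these assumptions the absolute probabilities stay tiny and the ratio comes out to roughly (L + R) / L; whether that factor matters is exactly the threshold question raised above.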

(upvoting even though I disagree with your conclusion -- I think it's an interesting line of thought)

Upvoted -- I agree that the probability is higher if you do cryonics.

However, a lot of the framing of this discussion is that "if you choose cryonics, you are opening up Pandora's box because of the possibility of worse-than-death outcomes." This triggers all sorts of catastrophic cognitions and causes people to have even more of an ugh field around cryonics. So I wanted to point out that worse-than-death outcomes are certainly still possible even if you don't do cryonics.

I think the argument is more "if I'm going to consider beneficial but unlikely outcomes such as successful cryonic revival, then harmful but unlikely outcomes also come on to the table".

A normal life may have a small probability of a worse-than-death scenario, but we're not told to consider small probabilities when considering how good a normal life is.

Missing from this whole story is that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality data. The problem of the identity of a copy and the original is not solved, but an AI may be able to solve it somehow.

However, similar to the different cardinalities of infinities, there are different types of infinite suffering. An evil AI could constantly upgrade its victim, so that its subjective experience of suffering increases a million times a second forever, and it could convert half a galaxy into suffertronium.

Quantum immortality in a constantly dying body is not optimised for aggressive growth of suffering, so it could be more "preferable".

Unfortunately, such timelines in the space of all possible minds could merge; that is, after death you may find yourself in a very improbable universe where you are resurrected in order to suffer. (I also use here and in the next sentence the thesis that if two observer-moments are identical, their timelines merge, which may require a longer discussion.)

But a benevolent AI could create an enormous number of positive observer-moments following any possible painful observer-moment, so that it effectively rescues any conscious being from the jail of the evil AI. So any painful moment will have a million positive continuations with much higher measure than the measure of the universes owned by the evil AI. (I also assume that benevolent AIs will dominate over suffering-oriented AIs, and will wage an acausal war against them in order to have more observer-moments of human beings.)

After I imagined such an acausal war between evil and benevolent AIs, I stopped worrying about infinite suffering from an evil AI.

I'm glad I'm not the only one who thinks about this kind of stuff.

> Missing from this whole story is that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality data.

Enough of what makes me me hasn't made, and won't make, it into digital expression by accident, short of post-singularity means, that I wouldn't identify such a poor individual as being me. It would be a neuro-sculpture on the theme of me.

You may also not identify with the person who will wake up in your body after a night's sleep tomorrow. There will be large informational and biochemical changes in your brain, as well as a discontinuity of the stream of consciousness during deep sleep.

I mean that an attempt to deny identity with your copies will result in even larger paradoxes.

I don't buy it. Why don't you wake up as Britney Spears instead? Clearly there's some information in common between your mind patterns. She is human after all (at least I'm pretty sure).

Clearly there is a sufficient amount of difference that would make your copy no longer you.

I think it is probable that cryonics will preserve enough information, but I think it is nigh impossible that my mere written records could be reconstructed into me, even by a superintelligence. There is simply not enough data.

But given Many Worlds, a superintelligence certainly could attempt to create every possible human mind by using quantum randomization. Only a fraction of these could be realized in any given Everett branch, of course. Most possible human minds are insane, since their memories would make no sense.

Given the constraint of "human mind" this could be made more probable than Boltzmann Brains. But if the Evil AI "upgrades" these minds, then they'd no longer fit that constraint.

We still don't know how much information is needed for informational identity. It could be a rather small set of core data, which lets me find the difference between me and Britney Spears.

A superintelligence could also exceed us in gathering information about the past, and have radically new ideas. Quantum randomization in MWI is one possible approach, but such randomization might be applied only to "unknown important facts". For example, if it is unknown whether I loved my grandmother, that fact could be replaced by a random bit. So in two branches of the multiverse there will be two people, but neither of them will be insane.

Also, something like acausal trade could take place between different resurrection engines in different parts of the universe. If the me-who-didn't-love-his-grandmother is not the real me, it could correspond to another person who needs to be resurrected but existed in a branch of MWI that split around 1900. As a result, all the dead will be easily resurrected, and no combinatorial explosion, measure problems, or insane minds will appear.

Another option for a superintelligence is to run resurrection simulations, which recreate the whole world from the beginning and sideload into it all available data.

Avoiding cryonics because of possible worse-than-death outcomes sounds like a textbook case of loss aversion.

> You might argue that this is still your true rejection

The idea of "true rejection" should die. There are many things that people believe because of the cumulative weight of evidence or arguments; no single item is a true rejection.

I agree with this. The purpose of the concept is polemical: get people to imagine that there is one single reason for their opinion, so that if you can change that thing, they have to change their opinion.

This is a bit more broad than cryonics, but let's consider more specific possible causes of extreme torture. Here're the ones that occurred to me:

An AI runs or threatens to run torture simulations as a disincentive. This is entirely a manipulation technique and is instrumental to whatever goals it has, whether benevolent or neutral.

  • The programmers may work specifically to prevent this. However -- MIRI's current stance is that it is safer to let the AI design a utility function for itself. I think that this is the most likely and least worrisome way torture simulations could happen (small in scope and for the best).

An AI is programmed to be benevolent, but finds for some reason that suffering is terminally valuable, perhaps due to following a logical and unforeseen conclusion of a human-designed utility function.

  • I think this is a problematic scenario, and much worse than most AI design failures, because it ends with humans being tortured eternally, spending 3% of their existence in hell, or whatever, rather than just paperclips.

An AI is programmed to be malevolent.

  • This seems very very unlikely, given the amount of resources and people required to create an AI and the immense and obvious disutility in such a project.

An AI is programmed to obey someone who is malevolent.

  • Hopefully this will be prevented by, like, MIRI. Ethics boards and screening processes too.

Aliens run torture simulations of humans as punishment for defecting in an intergalactic acausal agreement.

  • This is the bloody AI's problem, not ours.

A country becomes a dystopia that tortures people.

  • Possible but very unlikely for political and economic reasons.

Thoughts? Please add.

There was an error in the AI's goal system, and + is now -.

  • Incredibly unlikely: an AI is not going to structure itself so it works to fulfill the inverse of its utility function as the result of a single bit flip.

Sure it is if it's the right bit, but averting this sort of bug when it's important is a solved problem of software engineering, not value-alignment-complete.
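As a hedged illustration of the kind of standard mitigation the commenter may be alluding to (my sketch; the parameter names and values are hypothetical, not from any real system), a stored objective can be checksummed so that a single flipped bit is detected and rejected rather than silently inverting behavior:

```python
import hashlib
import json

def checksum(params: dict) -> str:
    """SHA-256 digest over a canonical serialization of the parameters."""
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()

# Hypothetical stored "goal" parameters, with a digest recorded at write time.
stored_params = {"reward_sign": 1, "weight": 0.73}
stored_digest = checksum(stored_params)

def load_params(params: dict, expected_digest: str) -> dict:
    """Refuse to use parameters whose checksum no longer matches."""
    if checksum(params) != expected_digest:
        raise ValueError("parameter corruption detected; refusing to run")
    return params

# A single corrupted field (e.g. the sign flipping) fails verification:
corrupted = dict(stored_params, reward_sign=-1)
try:
    load_params(corrupted, stored_digest)
except ValueError as err:
    print(err)  # parameter corruption detected; refusing to run
```

Redundant copies with majority voting, or ECC memory, serve the same purpose; the point is only that detecting random corruption is routine engineering, independent of solving value alignment.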

However, if we assume that everything possible exists, such an AI exists somewhere in the universe and is torturing a copy of me. And that is a disturbing thought.

If everything that's possible exists, so do Boltzmann brains. We need some way to quantify existence, such as by likelihood.

I don't see any problem with BB existence. For each BB, there exists a real world where the same observer-moment is justified.

As I have said before, the BB is a scattered being in your model, but you yourself might be a scattered being in the BB's model. So there are not two worlds, a real one and a BB one. There are just two real ones. A better way to think about it might be like special relativity, where each observer has a resting frame and might be moving relative to other ones. In the same way each observer has a reference frame where they are real.

If there are two AIs, one a paperclip maximiser and the other a benevolent AI, the paperclip maximiser may start to torture humans to get bargaining power over the benevolent AI. Human torture becomes a currency.

  • Possible, but seems unlikely. Requires two AIs with different alignments, and requires the benevolent AI to respond to that sort of threat. Also falls under the first point.