Content note: discussion of things that are worse than death

Over the past few years, several people have said they reject cryonics out of concern that they might be revived into a world they'd prefer less than being dead or not existing. For example, lukeprog pointed this out in a LW comment here, and Julia Galef expressed similar sentiments in a comment on her blog here.

I say "brain preservation" rather than "cryonics" here, because these concerns seem to be technology-platform agnostic.

One possible solution is an "out-clause": a set of circumstances under which you'd prefer to have your preservation/suspension terminated.

Here's how it would work: you specify, prior to entering biostasis, circumstances in which you'd prefer to have your brain/body be taken out of stasis. Then, if those circumstances are realized, the organization carries out your request. 

This almost certainly wouldn't prevent all of the potential bad outcomes, but it ought to help with some. It does, however, require that you enumerate at least some of the circumstances in which you'd prefer to have your suspension terminated.

While obvious, it seems worth pointing out that there's no way to decrease the probability of worse-than-death outcomes to 0%. The same is true for currently-living people: those whose brains are not preserved could also experience worse-than-death outcomes and/or have their lifespans extended against their wishes.

For people who are concerned about this, I have three main questions: 

1) Do you think that an opt-out clause is a useful-in-principle way to address your concerns?

2) If no to #1, is there some other mechanism that you could imagine which would work?

3) Can you enumerate some specific world-states that you think could lead to revival in a worse-than-death state? (Examples: UFAI is imminent, or a malevolent dictator's army is about to take over the world.) 


A "do not resuscitate" kind of request would probably help with some futures that are mildly bad in virtue of some disconnect between your old self and the future (e.g., extreme future shock). But in those cases, you could always just kill yourself.

In the worst futures, presumably those resuscitating you wouldn't care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.

Edit: replies to this comment have changed my mind: I no longer believe the scenario(s) I illustrate below are absurd. That is, I no longer believe they're so unlikely or nonsensical it's not even worth acknowledging them. However, I don't know what probability to assign to the possibility of such outcomes. Also, for all I know, it might make most sense to think the chances are still very low. I believe it's worth considering them, but I'm not claiming it's a big enough deal that nobody should sign up for cryonics.

In the worst futures, presumably those resuscitating you wouldn't care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.

The whole point of this discussion is that incredibly bad outcomes, however unlikely, may happen, so we wish to prepare for them. So I understand why you point out this possibility. Still, that scenario seems very unlikely to me. Yudkowsky's notion of Unfriendly AI is predicated on most possible minds the AI might have not caring about human values, and so simply using our particles to make something else. If the future turns into the sort of Malthusian trap Hanson predicts, it doesn't seem like the minds of that era would care about resuscitating us. I believe they would be indifferent, up until the point they realized that the place where our mind-brains are stored is real estate to be used for their own processing power. At that point, they would obliterate our physical substrates without bothering to revive us.

I'm curious why, or what kinds of, minds would want to resuscitate us without caring about our wishes. Why put us through virtual torture when, if they needed minds to efficiently achieve a goal, they could presumably make new ones that won't object to or suffer through whatever tribulations they must labor through?

Addendum: shminux reasons through it here, concluding it's a non-issue. I understand your concern about possible future minds being made sentient, and forced into torturous labor. As much as that merits concern, it doesn't explain why Omega would bother reviving us of all minds to do it.

I'm not saying it's inevitable, but it's a failure of imagination if you can't think of any way the future could go horribly wrong like that.

My biggest concern is an AI or civilization that decides to create a real hell to punish people for their sins. Humans have pretty strong feelings towards wanting to punish those who did wrong, and our morality and views on punishment are constantly changing.

E.g., if a slaveholder were alive today, some people might want to see them tortured. In the future, perhaps they will want to punish, hypothetically, meat eaters, or people who weren't as altruistic as possible, or something we can't even conceive of.

Yeah, there are plenty of examples of dictators who go to great lengths to inflict tremendous amounts of pain on many people. It's terrifying to think of someone like that in control of an AGI.

Granted, people like that probably tend to be less likely than the average head-of-state to find themselves in control of an AGI, since brutal dictators often have unhealthy economies, and are therefore unlikely to win an AGI race. But it's not like they have a monopoly on revenge or psychopathy either.

I think sociopaths are about 4% of the population, so your scenario isn't really that implausible. I just meant that society's values as a whole change over time. Or an FAI extracting our "true" utility function, which includes all the negative stuff, like the desire for revenge.

Yeah, someone made another reply to my question to that effect. Yudkowsky and MIRI emphasize how, in the space of all possible minds a general machine intelligence might develop, the region containing human-like minds is very small. So, originally, I was thinking the chance a machine mind would torture living humans was conditional on a prior mind (human or otherwise) programming it that way, which itself depended on a machine being built that recognizes human feelings as mattering at all. The chances of all that happening seemed vanishingly small to me.

However, I could be overestimating the likelihood that Yudkowsky's predictions are correct. For example, Robin Hanson believes the outcome could be much different: rather than a superintelligence going 'foom', the transition could be based on human brain emulations (HBEs). I've assumed the Yudkowsky-Hanson AI-Foom debate is over my head, so I haven't read it yet. However, others more knowledgeable than I am apparently see merit in Hanson's position and criticisms, including Luke Muehlhauser when I asked him a couple of years ago. While MIRI may approach safety engineering in a way that doesn't discriminate much between possible forms of the technological singularity, they could still be wrong about it being an intelligence explosion. I don't claim nobody can tell which type of singularity is more likely; I merely mean I'm agnostic on the subject until I (can) examine it better.

Anyway, a singularity more like the one Hanson predicts makes it seem more likely that AGI will notice human values, and could hurt us. For example, HBEs could be controlled by hostile minds, which would care about hurting us much more than an AGI born from an intelligence explosion. I'm not confident the likelihood of such scenarios is high enough that I or others shouldn't sign up for cryonics. I'm still undecided about cryonics myself, and skeptical of aspects of the procedure(s). However, I at first believed this outcome was absurd: I thought the scenario so ludicrous or contrived that it wasn't even worth assigning it a probability. That was indeed a failure of my imagination. I don't know what probability to assign now to outcomes where I or others wake up and suffer immense torture at the hands of a hostile future, but I no longer believe it should be utterly neglected in calculations of the value of 'getting froze', or whatever.

More concerning to me than an outright unfriendly AI is an AI whose creators attempted to make it friendly but only partially succeeded, such that our state is relevant to its utility calculations, but not necessarily in ways we'd like.

I'm curious why or what minds would want to resuscitate us without caring about our wishes.

Experimental material for developing resuscitation technology. Someone has to be the first attempted revival.

I think I did not explain my proposal clearly enough. What I'm claiming is that if you could see intermediate steps suggesting that a worst-type future is imminent, or merely that its probability has crossed a threshold you consider "too likely", then you could enumerate those conditions and request to be removed from biostasis at that point, before those who would resuscitate you had a chance to do so.

Ah, got it. Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

3) Digital blueprints of preserved brains are made available for anyone to download. Large numbers of simulations are run by kids learning how to use the simulation APIs, folks testing poker bots, web search companies making me read every page on the Internet to generate a ranking signal, etc. etc.

Easy, if you are worried about worse-than-death life after revival, don't get preserved. It's not like there are too few people in the world and no way to create more. I'll take my chances, if I can. I don't expect it to be a problem to self-terminate later, should I want to. I don't put any stock in the scary scenarios where an evil Omega tortures a gazillion of my revived clones for eternity.

Well, this is certainly a reasonable response. But if there is a mechanism to decrease the probability that a worse-than-death outcome would occur so that people who had expressed these concerns are more likely to want to do brain preservation and more people could be a part of the future, that seems like an easy win. I don't think people are particularly fungible.

I don't put any stock in the scary scenarios where an evil Omega tortures a gazillion of my revived clones for eternity.

Could you elaborate on this? I'd be curious to hear your reasoning.

Does "don't put any stock" mean P(x) = 0? 0.01? 1e-10?

It means the noise level, down there with Pascal's Wager/Mugger and fairy tales coming true. Assigning a number to it would mean giving in to Pascal's Mugging.

1) Do you think that an opt-out clause is a useful-in-principle way to address your concerns?

Yes. In principle, more precise instructions should let you better achieve your desired outcomes. In principle, this seems sort of obvious.

In practice, I see two problems:

1) The agency might misinterpret your instructions.

2) Ambiguous instructions could facilitate corruption, in the same way that ambiguous laws do.

I'm not sure whether the upsides (more precise instructions allow you to better achieve your outcome) outweigh the downsides (1 and 2). My intuition is that I'm ~65% sure that the upsides outweigh the downsides, but that doesn't reflect much thought.

Practical problem #3: The agency successfully understands your intentions, and is willing to implement them, but not able to implement them.

For example, a fast intelligence explosion removes their capability of doing so before they can pull the plug. Or a change in their legal environment makes it illegal for them to pull the plug (and they aren't willing to put themselves at legal risk to do so).

To me one solution is that it seems possible to have an "out-clause": circumstances under which you'd prefer to have your preservation/suspension terminated.

This runs into a thorny ethical problem. It's like assisted suicide, except you're neither terminally ill, nor in a vegetative state, nor in extreme pain. Since you don't have anything more than a vague idea of the future, you're unable to provide the kind of informed consent necessary for this sort of thing. A friendly future is more likely to revive you and provide you with the appropriate psychiatric resources.

I think that is an unnecessarily limited idea of informed consent. Shouldn't knowing a probability distribution be enough for the consent to be informed?

Shouldn't knowing a probability distribution be enough for the consent to be informed?

You don't know the probability distribution.

A future can bring unexpected benefits. What if some positive event happened that would offset any kind of literal terminal condition? For example in a future that had nuclear war but everybody is telekinetic, how can you have a standing will on what to do if you never seriously considered telekinesis being possible?

The circumstances under which I would opt to be killed are extremely specific. Namely, I would want not to be revived if I were going to be tortured indefinitely. This is actually more specific than it sounds: in order for this to occur, there must exist an entity which would soon possess the ability to revive me, and an incentive to do so rather than just allowing me to die. I find this to be such an extreme edge case that I'm actually uncomfortable with the framing of the conversation. Instead, I'd turn the question around: under what circumstances do you want to be revived?

Trivially, we should want to be revived into a civilization which possesses the technology to revive us at all, and subsequently extend our lives. If circumstances are bad on Earth, we should prefer to defer our revival until those circumstances improve. If they never do, the overwhelming probability is that cryonic remains will simply be forgotten, turned off, and the frozen never revived. But building in a terminal death condition which might be triggered denies us the possibility of waiting out those bad circumstances.

tl;dr Don't choose death, choose deferment.

3) Can you enumerate some specific world-states that you think could lead to revival in a worse-than-death state?

Any sort of Hansonian future, where your mind has positive economic value.

With 1): This may be an obvious problem, but if the singularity occurs, for instance, thousands of years in the future, then whatever language you write your "do not revive" order in, the future civilization may not be able to understand it, and therefore might not respect your wishes.

With 3): Perhaps future civilizations that were not interested in revival for its own sake (why would they want another person from so-many-years ago?) would only revive people when there is a substantial depopulation crisis (e.g., after a nuclear war, asteroid strike, etc.). If so, these conditions are unlikely to be very pleasant. However, one could argue that in those cases you have a moral imperative to stay alive and reproduce rather than commit suicide, because if the crisis is temporary, then all the future utilons of your possible descendants would be lost and the human species as a whole would be more likely to die out.