Eliezer_Yudkowsky comments on Timeless Identity - Less Wrong

Post author: Eliezer_Yudkowsky 03 June 2008 08:16AM

Comment author: Eliezer_Yudkowsky 03 June 2008 09:30:59AM 2 points [-]

Roland, I do not know. There is an organization in Russia. The Cryonics Institute accepts bodies shipped to them packed in ice. I'm not sure about Alcor, which tries to do on-scene suspension. Alcor lists a $25K surcharge (which would be paid out of life insurance) for suspension outside the US/UK/Canada, but I'm not sure how far abroad they'd go. Where are you?

Mitchell: You may be able to individuate atoms within structures by looking at their quantum correlations; you won't be able to say 'this atom has property X, that atom has property Y' but you'll be able to say 'there's an atom with property X, and there's an atom with property Y'.

Certainly. That's how we distinguish Eliezer from Mitchell.

Comment author: someonewrongonthenet 15 August 2012 07:21:23PM *  2 points [-]

Eliezer...the main issue that keeps me from cryonics is not whether the "real me" wakes up on the other side.

The first question is about how accurate the reconstruction will be. When you wipe a hard drive with a magnet, you can recover some of the content, but usually not all of it. Recovering "some" of a human, but not all of it, could easily create a mentally handicapped, broken consciousness.

But let's set that aside, as it is a technical problem. There is a second issue. If and when immortality and AI are achieved, what value would my revived consciousness contribute to such a society?

You've thus far established that death isn't a bad thing when a copy of the information is preserved and later revived. You've explained that you are willing to treat consciousness much like you would a computer file - you've explained that you would be willing to destroy one of two redundant duplicates of yourself.

Tell me, why exactly is it okay to destroy a redundant duplicate of yourself? You can't say that it's okay to destroy it simply because it is redundant, because that also destroys the point of cryonics. There will be countless humans and AIs that will come into existence, and each of those minds will require resources to maintain. Why is it so important that your, or my, consciousness be one among this swarm? Is that not similarly redundant?

For the same reasons that you would be willing to destroy one of two identical copies of yourself because having two copies is redundant, I am wondering just how much I care that my own consciousness survives forever. My mind is not exceptional among all the possible consciousnesses that resources could be devoted to. Keeping my mind preserved through the ages seems to me just as redundant as making twenty copies of yourself and carefully preserving each one.

I'm not saying I don't want to live forever...I do want to. I'm saying that I feel one ought to have a reason for preserving one's consciousness that goes beyond the simple desire for at least one copy of one's consciousness to continue existing.

When we deconstruct the notion of consciousness as thoroughly as we are doing in this discussion, the concepts of "life" and "death" become meaningless over-approximations, much like "free will". Once society reaches that point, we are going to have to deconstruct those ideas and ask ourselves why it is so important that certain information never be deleted. Otherwise, it's going to get a little silly...a "21st century human brain maximizer" is not that much different from a paperclip maximizer, in the grand scheme of things.

Comment author: [deleted] 30 September 2013 04:21:55PM -1 points [-]

The main issue that keeps me from cryonics is not whether the "real me" wakes up on the other side.

How do you go to sleep at night, not knowing if it is the "real you" that wakes up on the other side of consciousness?

Comment author: TheOtherDave 30 September 2013 05:32:15PM 1 point [-]

Your comment would make more sense to me if I removed the word "not" from the sentence you quote. (Also, if I don't read past that sentence of someonewrongonthenet's comment.)

That said, I agree completely that the kinds of vague identity concerns about cryonics which that sentence (with "not" removed) would raise would also arise, were one consistent, about routine continuation of existence over time.

Comment author: [deleted] 30 September 2013 06:37:14PM 0 points [-]

Hrm.. ambiguous semantics. I took it to imply acceptance of the idea but not elevation of its importance, but I see how it could be interpreted differently. And yes, the rest of the post addresses something completely different. But if I can continue for a moment on the tangent, expanding my comment above (even if it doesn't apply to the OP):

You actually continue functioning when you sleep, it's just that you don't remember details once you wake up. A more useful example for such discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a "different you" waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I'm still a cryonics member.

More troubling is the application to uploading. I haven't done this yet, but I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity. I was hoping that “Timeless Identity” would address this point, but sadly it punts the issue.

Comment author: TheOtherDave 30 September 2013 07:01:54PM 3 points [-]

Well, if the idea is unimportant to the OP, presumably that also helps explain how they can sleep at night.

WRT the tangent... my own position wrt preservation of personal identity is that while it's difficult to articulate precisely what it is that I want to preserve, and I'm not entirely certain there is anything cogent I want to preserve that is uniquely associated with me, I'm pretty sure that whatever does fall in that category has nothing to do with either continuity of computation or similarity of physical substrate. I'm about as sanguine about continuing my existence as a software upload as I am about continuing it as this biological system or as an entirely different biological system, as long as my subjective experience in each case is not traumatically different.

Comment author: [deleted] 01 October 2013 05:03:19PM 0 points [-]

I wrote up about a page-long reply, then realized it probably deserves its own posting. I'll see if I can get to that in the next day or so. There's a wide spectrum of possible solutions to the personal identity problem, from physical continuity (falsified) to pattern continuity and causal continuity (described by Eliezer in the OP), to computational continuity (my own view, I think). It's not a minor point though, whichever view turns out to be correct has immense ramifications for morality and timeless decision theory, among other things...

Comment author: TheOtherDave 01 October 2013 05:08:09PM 1 point [-]

When you write up the post, you might want to say a few words about what it means for one of these views to be "correct" or "incorrect."

Comment author: [deleted] 01 October 2013 05:58:35PM 0 points [-]

Ok I will, but that part is easy enough to state here: I mean correct in the reductionist sense. The simplest explanation which resolves the original question and/or associated confusion, while adding to our predictive capacity and not introducing new confusion.

Comment author: TheOtherDave 01 October 2013 06:56:39PM *  2 points [-]

Mm. I'm not sure I understood that properly; let me echo my understanding of your view back to you and see if I got it.

Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.

If it turns out that computational or physical continuity is the correct answer to what preserves personal identity, then I in fact never arrive at my destination, although the thing that gets constructed at the destination (falsely) believes that it's me, knows what I know, etc. This is, as you say, an issue of great moral concern... I have been destroyed, this new person is unfairly given credit for my accomplishments and penalized for my errors, and in general we've just screwed up big time.

Conversely, if it turns out that pattern or causal continuity is the correct answer, then there's no problem.

Therefore it's important to discover which of those facts is true of the world.

Yes? This follows from your view? (If not, I apologize; I don't mean to put up strawmen, I'm genuinely misunderstanding.)

If so, your view is also that if we want to know whether that's the case or not, we should look for the simplest answer to the question "what does my personal identity comprise?" that does not introduce new confusion and which adds to our predictive capacity. (What is there to predict here?)

Yes?

EDIT: Ah, I just read this post where you say pretty much this. OK, cool; I understand your position.

Comment author: pengvado 01 October 2013 06:09:33PM 2 points [-]

What relevance does personal identity have to TDT? TDT doesn't depend on whether the other instances of TDT are in copies of you, or in other people who merely use the same decision theory as you.

Comment author: [deleted] 01 October 2013 06:33:08PM 0 points [-]

It has relevance for the basilisk scenario, which I'm not sure I should say any more about.

Comment author: shminux 30 September 2013 11:00:59PM *  0 points [-]

I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity.

Like TheOtherDave (I presume), I consider my identity to be adequately described by whatever Turing machine can emulate my brain, or at least its prefrontal cortex + relevant memory storage. I suspect that a faithful simulation of just my Brodmann area 10 coupled with a large chunk of my memories would restore enough of my self-awareness to be considered "me". This sim-me would probably lose most of my emotions without the rest of the brain, but it is still infinitely better than none.

Comment author: TheOtherDave 01 October 2013 12:12:21AM 0 points [-]

Like TheOtherDave (I presume), I consider my identity to be adequately described by whatever Turing machine can emulate my brain, or at least its prefrontal cortex + relevant memory storage.

There's a very wide range of possible minds I consider to preserve my identity; I'm not sure the majority of those emulate my prefrontal cortex significantly more closely than they emulate yours, and the majority of my memories are not shared by the majority of those minds.

Comment author: shminux 01 October 2013 12:53:14AM -1 points [-]

Interesting. I wonder what you would consider a mind that preserves your identity. For example, I assume that the total of your posts online, plus whatever other information is available without some hypothetical future brain scanner, all running as a process on some simulator, is probably not enough.

Comment author: TheOtherDave 01 October 2013 02:18:51AM 0 points [-]

At one extreme, if I assume those posts are being used to create a me-simulation by me-simulation-creator that literally knows nothing else about humans, then I'm pretty confident that the result is nothing I would identify with. (I'm also pretty sure this scenario is internally inconsistent.)

At another extreme, if I assume the me-simulation-creator has access to a standard template for my general demographic and is just looking to customize that template sufficiently to pick out some subset of the volume of mindspace my sufficiently preserved identity defines... then maybe. I'd have to think a lot harder about what information is in my online posts and what information would plausibly be in such a template to even express a confidence interval about that.

That said, I'm certainly not comfortable treating the result of that process as preserving "me."

Then again I'm also not comfortable treating the result of living a thousand years as preserving "me."

Comment author: someonewrongonthenet 01 October 2013 03:03:07AM *  0 points [-]

a large chunk of my memories

You'll need the rest of the brain because these other memories would be distributed throughout the rest of your cortex. The hippocampus only contains recent episodic memories.

If you lost your temporal lobe, for example, you'd lose all non-episodic knowledge concerning what the names of things are, how they are categorized, and what the relationships between them are.

Comment author: TheOtherDave 01 October 2013 03:07:43AM 0 points [-]

That said, I'm not sure why I should care much about having my non-episodic knowledge replaced with an off-the-shelf encyclopedia module. I don't identify with it much.

Comment author: someonewrongonthenet 01 October 2013 03:30:51AM *  0 points [-]

If you only kept the hippocampus, you'd lose your non-recent episodic memories too. But technical issues aside, let me defend the "encyclopedia":

Episodic memory is basically a cassette reel of your life, along with a few personalized associations and maybe memories of thoughts and emotions. Everything that we associate with the word knowledge is non-episodic. It's not just verbal labels - that was just a handy example that I happened to know the brain region for. I'd actually care more about the non-episodic memories than the episodic stuff.

Things like "what is your wife's name and what does her face look like" are non-episodic memory. You don't have to think back to a time when you specifically saw your wife to remember what her name and face are, and that you love her - that information is treated as a fact independent of any specific memory, indelibly etched into your model of the world. Cognitively speaking, "I love my wife Stacy, she looks like this" is as much of a fact as "grass is a green plant" and they are both non-episodic memories. Your episodic memory reel wouldn't even make sense without that sort of information. I'd still identify someone with memory loss, but retaining my non-episodic memory, as me. I'd identify someone with only my episodic memories as someone else, looking at a reel of memory that does not belong to them and means nothing to them.

(Trigger Warning: link contains writing in a diary which is sad, horrifying, and nonfiction.): This is what complete episodic memory loss looks like. Patients like this can still remember the names and faces of people they love.

Ironically...that area (Brodmann area 10) might actually be replaceable. I'm not sure whether any personalized memories are kept there - I don't know what that specific region does, but it's in an area that mostly deals with executive function - which is important for personality, but not necessarily individuality.

Comment author: TheOtherDave 01 October 2013 03:46:42AM 0 points [-]

I take it you're assuming that information about my husband, and about my relationship to my husband, isn't in the encyclopedia module along with information about mice and omelettes and your relationship to your wife.

If that's true, then sure, I'd prefer not to lose that information.

Comment author: shminux 01 October 2013 06:28:16AM -1 points [-]

Ironically...that area (Brodmann area 10) might actually be replaceable. I'm not sure whether any personalized memories are kept there - I don't know what that specific region does, but it's in an area that mostly deals with executive function - which is important for personality, but not necessarily individuality.

What's the difference between personality and individuality?

Comment author: [deleted] 01 October 2013 12:14:43AM *  0 points [-]

That said, I agree completely that the kinds of vague identity concerns about cryonics which that sentence (with "not" removed) would raise would also arise, were one consistent, about routine continuation of existence over time.

There are things that, when I go to bed and wake up eight hours later, are very nearly preserved, but that wouldn't be if I woke up sixty years later, e.g. other people's memories of me (see I Am a Strange Loop) or the culture of the place where I live (see Good Bye, Lenin!).

(I'm not saying whether this is one of the main reasons why I'm not signed up for cryonics.)

Comment author: TheOtherDave 01 October 2013 01:48:07AM 0 points [-]

Point.

Comment author: someonewrongonthenet 01 October 2013 02:51:03AM *  0 points [-]

Because the notion of "me" is not an ontologically basic category and the question of whether the "real me" wakes up is a question that ought to be un-asked.

I'm a bit confused at the question...you articulated my intent with that sentence perfectly in your other post.

Hrm.. ambiguous semantics. I took it to imply acceptance of the idea but not elevation of its importance, but I see how it could be interpreted differently.

and, as TheOtherDave said,

presumably that also helps explain how they can sleep at night.

EDIT: Nevermind, I now understand which part of my statement you misunderstood.

I'm not accepting-but-not-elevating the idea that the 'Real me" doesn't wake up on the other side. Rather, I'm saying that the questions of personal identity over time do not make sense in the first place. It's like asking "which color is the most moist"?

You actually continue functioning when you sleep, it's just that you don't remember details once you wake up. A more useful example for such discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a "different you" waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I'm still a cryonics member.

More troubling is the application to uploading. I haven't done this yet, but I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity. I was hoping that “Timeless Identity” would address this point, but sadly it punts the issue.

The root of your philosophical dilemma is that "personal identity" is a conceptual substitution for soul - a subjective thread that connects you over space and time.

No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only "thread" connecting you to your past experiences is your current subjective experience of remembering them. That's all.

Comment author: [deleted] 01 October 2013 05:15:55PM 0 points [-]

The root of your philosophical dilemma is that "personal identity" is a conceptual substitution for soul - a subjective thread that connects you over space and time.

No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only "thread" connecting you to your past experiences is your current subjective experience of remembering them. That's all.

I have a strong subjective experience of moment-to-moment continuity, even if only in the fleeting present. Simply saying “no such thing exists” doesn't do anything to resolve the underlying confusion. If no such thing as personal identity exists, then why do I experience it? What is the underlying insight that eliminates the question?

This is not an abstract question either. It has huge implications for the construction of timeless decision theory and utilitarian metamorality.

Comment author: shminux 01 October 2013 05:24:29PM *  -1 points [-]

"a strong subjective experience of moment-to-moment continuity" is an artifact of the algorithm your brain implements. It certainly exists in as much as the algorithm itself exists. So does your personal identity. If in the future it becomes possible to run the same algorithm on a different hardware, it will still produce this sense of personal identity and will feel like "you" from the inside.

Comment author: [deleted] 01 October 2013 05:53:06PM *  0 points [-]

Yes, I'm not questioning whether a future simulation / emulation of me would have an identical subjective experience. To reject that would be a retreat to epiphenomenalism.

Let me rephrase the question, so as to expose the problem: if I were to use advanced technology to have my brain scanned today, then got hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Yes, I realize that both cases result in a computer simulation of Mark in 2063 claiming to have just woken up in the brain scanner, with a subjective feeling of continuity. But is that belief true? In the two situations there's a very different outcome for the Mark of 2013. If you can't see that, then I think we are talking about different things, and maybe we should taboo the phrase “personal/subjective identity”.

Comment author: shminux 01 October 2013 06:32:09PM *  0 points [-]

if I were to use advanced technology to have my brain scanned today, then got hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Ah, hopefully I'm slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then "wake up in a computer". You are concerned with... I'm having trouble articulating what exactly... something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year "oblivion", you wouldn't be?

Comment author: [deleted] 01 October 2013 06:55:52PM *  0 points [-]

Ah, hopefully I'm slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then "wake up in a computer".

Pretty much correct. To be specific, if computational continuity is what matters, then Mark!2063 has my memories, but was in fact “born” the moment the simulation started, 50 years in the future. That's when his identity began, whereas mine ended when I died in 2013.

This seems a little more intuitive when you consider switching on 100 different emulations of me at the same time. Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.

You are concerned with... I'm having trouble articulating what exactly... something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year "oblivion", you wouldn't be?

No, that would make little difference as it's pretty clear that physical continuity is an illusion. If pattern or causal continuity were correct, then it'd be fine, but both theories introduce other problems. If computational continuity is correct, then a reconstructed brain wouldn't be me any more than a simulation would. However it's possible that my cryogenically vitrified brain would preserve identity, if it were slowly brought back online without interruption.

I'd have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table (until then, it scares the crap out of me). Likewise, an AI or emulation running on a computer that is powered off and then later resumed would also break identity, but depending on the underlying nature of computation & subjective experience, task switching and online suspend/resume may or may not result in cycling identity.

I'll stop there because I'm trying to formulate all these thoughts into a longer post, or maybe a sequence of posts.

Comment author: TheOtherDave 01 October 2013 07:02:09PM 0 points [-]

Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.

I would say that yes, at T1 there's one of me, and at T2 there's 100 of me.
I don't see what makes "there's 101 of me, one of which terminated at T1" more straightforward than that.

Comment author: lavalamp 01 October 2013 07:13:06PM 2 points [-]

Can you taboo "personal identity"? I don't understand what important thing you could lose by going under general anesthesia.

Comment author: shminux 01 October 2013 08:13:50PM -1 points [-]

I'd have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table

Hmm, what about across dreamless sleep? Or fainting? Or falling and hitting your head and losing consciousness for an instant? Would these count as killing one person and creating another? And so be morally net-negative?

Comment author: TheOtherDave 01 October 2013 07:09:33PM *  0 points [-]

Clearly, your subjective experience today is HONK-screech-bam-oblivion, since all the subjective experiences that come after that don't happen today in this example... they happen 50 years later.

It is not in the least bit clear to me that this means those subjective experiences aren't your subjective experiences. You aren't some epiphenomenal entity that dissipates in the course of those 50 years and therefore isn't around to experience those experiences when they happen... whatever is having those subjective experiences, whenever it is having them, that's you.

maybe we should taboo the phrase “personal/subjective identity”.

Sounds like a fine plan, albeit a difficult one. Want to take a shot at it?

EDIT: Ah, you did so elsethread. Cool. Replied there.

Comment author: lavalamp 01 October 2013 07:17:20PM 0 points [-]

Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Non-running algorithms have no experiences, so the latter is not a possible outcome. I think this is perhaps an unspoken axiom here.

Comment author: [deleted] 01 October 2013 07:25:01PM 1 point [-]

Non-running algorithms have no experiences, so the latter is not a possible outcome. I think this is perhaps an unspoken axiom here.

No disagreement here - that's what I meant by oblivion.

Comment author: lavalamp 01 October 2013 07:31:37PM 2 points [-]

OK, cool, but now I'm confused. If we're meaning the same thing, I don't understand how it can be a question-- "not running" isn't a thing an algorithm can experience; it's a logical impossibility.

Comment author: AFinerGrain 03 October 2017 01:54:37AM 0 points [-]

I always wonder how I should treat my future self if I reject the continuity of self. Should I think of him like a son? A spouse? A stranger? Should I let him get fat? Not get him a degree? Invest in stock for him? Give him another child?

Comment author: Elo 03 October 2017 07:48:49AM 0 points [-]

I think it matters insofar as it assists your present trajectory. Otherwise it might as well be an unfeeling entity.

Comment author: hairyfigment 30 September 2013 04:55:39PM 1 point [-]

It seems you place less value on your life than I do on mine. I'm glad we've reached agreement.

Comment author: someonewrongonthenet 01 October 2013 03:14:40AM *  0 points [-]

I agree, it's quite possible that someone might deconstruct "me" and "life" and "death" and "subjective experience" to the same extent that I have and still value never deleting certain information that is computationally descended from themselves more than all the other things that would be done with the resources that are used to maintain them.

Hell, I might value it to that extent. This isn't something I'm certain about. I'm still exploring this. My default answer is to live forever - I just want to make sure that this is really what I want after consideration and not just a kicking, screaming survival instinct (AKA a first-order preference).

Comment author: CynicalOptimist 17 November 2016 08:29:45PM *  0 points [-]

This seems to me like an orthogonal question. (A question that can be entirely extricated and separated from the cryonics question).

You're talking about whether you are a valuable enough individual that you can justify resources being spent on maintaining your existence. That's a question that can be asked just as easily even if you have no concept of cryonics. For instance: if your life depends on getting medical treatment that costs a million dollars, is it worth it? Or should you prefer that the money be spent on saving other lives more efficiently?

(Incidentally, I know that utilitarianism generally favours the second option. But I would never blame anyone for choosing the first option if the money was offered to them.)

I would accept an end to my existence if it allowed everyone else on earth to live for as long as they wished, and experience an existentially fulfilling form of happiness. I wouldn't accept an end to my existence if it allowed one stranger to enjoy an ice cream. There are scenarios where I would think it was worth using resources to maintain my existence, and scenarios where I would accept that the resources should be used differently. I think this is true when we consider cryonics, and equally true if we don't.

The cryonics question is quite different.

For the sake of argument, I'll assume that you're alive and that you intend to keep on living, for at least the next 5 years. I'll assume that if you experienced a life-threatening situation tomorrow, and someone was able to intervene medically and grant you (at least) 5 more years of life, then you would want them to.

There are many different life-threatening scenarios, and many different possible interventions. But for decision making purposes, you could probably group them into "interventions which extend my life in a meaningful way" and interventions that don't. For instance, an intervention that kept your body alive but left you completely brain-dead would probably go in the second category. Coronary bypass surgery would probably go in the first.

The cryonics question here is simply: "If a doctor offered to freeze you, then revive you 50 years later" would you put this in the same category as other "life-saving" interventions? Would you consider it an extension of your life, in the same way as a heart transplant would be? And would you value it similarly in your considerations?

And of course, we can ask the same question for a different intervention, where you are frozen, then scanned, then recreated years later in one (or more) simulations.