Mark_Friedenbach comments on Timeless Identity - Less Wrong

Post author: Eliezer_Yudkowsky 03 June 2008 08:16AM 23 points

You are viewing a single comment's thread.

Comment author: [deleted] 01 October 2013 05:15:55PM 0 points [-]

The root of your philosophical dilemma is that "personal identity" is a conceptual substitution for soul - a subjective thread that connects you over space and time.

No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only "thread" connecting you to your past experiences is your current subjective experience of remembering them. That's all.

I have a strong subjective experience of moment-to-moment continuity, even if only in the fleeting present. Simply saying “no such thing exists” doesn't do anything to resolve the underlying confusion. If no such thing as personal identity exists, then why do I experience it? What is the underlying insight that eliminates the question?

This is not an abstract question either. It has huge implications for the construction of timeless decision theory and utilitarian metamorality.

Comment author: shminux 01 October 2013 05:24:29PM *  -1 points [-]

"a strong subjective experience of moment-to-moment continuity" is an artifact of the algorithm your brain implements. It certainly exists in as much as the algorithm itself exists. So does your personal identity. If in the future it becomes possible to run the same algorithm on a different hardware, it will still produce this sense of personal identity and will feel like "you" from the inside.

Comment author: [deleted] 01 October 2013 05:53:06PM *  0 points [-]

Yes, I'm not questioning whether a future simulation / emulation of me would have an identical subjective experience. To reject that would be a retreat to epiphenomenalism.

Let me rephrase the question, so as to expose the problem: if I were to use advanced technology to have my brain scanned today, then get hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Yes, I realize that both cases result in a computer simulation of Mark in 2063 claiming to have just woken up in the brain scanner, with a subjective feeling of continuity. But is that belief true? In the two situations there's a very different outcome for the Mark of 2013. If you can't see that, then I think we are talking about different things, and maybe we should taboo the phrase “personal/subjective identity”.

Comment author: shminux 01 October 2013 06:32:09PM *  0 points [-]

if I were to use advanced technology to have my brain scanned today, then get hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Ah, hopefully I'm slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then "wake up in a computer". You are concerned with... I'm having trouble articulating what exactly... something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year "oblivion", you wouldn't be?

Comment author: [deleted] 01 October 2013 06:55:52PM *  0 points [-]

Ah, hopefully I'm slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then "wake up in a computer".

Pretty much correct. To be specific, if computational continuity is what matters, then Mark!2063 has my memories, but was in fact “born” the moment the simulation started, 50 years in the future. That's when his identity began, whereas mine ended when I died in 2013.

This seems a little more intuitive when you consider switching on 100 different emulations of me at the same time. Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.

You are concerned with... I'm having trouble articulating what exactly... something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year "oblivion", you wouldn't be?

No, that would make little difference as it's pretty clear that physical continuity is an illusion. If pattern or causal continuity were correct, then it'd be fine, but both theories introduce other problems. If computational continuity is correct, then a reconstructed brain wouldn't be me any more than a simulation would. However, it's possible that my cryogenically vitrified brain would preserve identity, if it were slowly brought back online without interruption.

I'd have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table (until then, it scares the crap out of me). Likewise, an AI or emulation running on a computer that is powered off and then later resumed would also break identity, but depending on the underlying nature of computation & subjective experience, task switching and online suspend/resume may or may not result in cycling identity.

I'll stop there because I'm trying to formulate all these thoughts into a longer post, or maybe a sequence of posts.

Comment author: TheOtherDave 01 October 2013 07:02:09PM 0 points [-]

Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.

I would say that yes, at T1 there's one of me, and at T2 there's 100 of me.
I don't see what makes "there's 101 of me, one of which terminated at T1" more straightforward than that.

Comment author: [deleted] 01 October 2013 07:47:11PM 0 points [-]

I don't see what makes "there's 101 of me, one of which terminated at T1" more straightforward than that.

It's wrapped up in the question over what happened to that original copy that (maybe?) terminated at T1. Did that original version of you terminate completely and forever? Then I wouldn't count it among the 100 copies that were created later.

Comment author: TheOtherDave 01 October 2013 07:54:44PM 2 points [-]

Sure, obviously if it terminated then it isn't around afterwards.
Equally obviously, if it's around afterwards, it didn't terminate.

You said your metric for determining which description is accurate was (among other things) simplicity, and you claimed that the "101 - 1" answer is more straightforward (simpler?) than the "100" answer.
You can't now turn around and say that the reason it's simpler is because the "101-1" answer is accurate.

Either it's accurate because it's simpler, or it's simpler because it's accurate, but to assert both at once is illegitimate.

Comment author: [deleted] 01 October 2013 08:29:23PM *  0 points [-]

I'll address this in my sequence, which hopefully I will have time to write. The short answer is that what matters isn't which explanation of this situation is simpler, requires fewer words, a smaller number, or whatever. What matters is: which general rule is simpler?

Pattern or causal continuity leads to all sorts of weird edge cases, some of which I've tried to explain in my examples here, and in other cases it fails to provide a definitive prediction of subjective experience (a mysterious answer). There may be other solutions, but computational continuity at the very least provides a simpler model, even if it results in the more "complex" 101-1 answer.

It's sorta like wave collapse vs many-worlds. Wave collapse is simpler (single world), right? No. Many worlds is the simpler theory because it requires fewer rules, even though it results in a mind-bogglingly more complex and varied multiverse. In this case I think computational continuity in the way I formulated it reduces consciousness down to a simple general explanation that dissolves the question with no residual problems.

Kinda like how free will is what a decision algorithm feels like from the inside, consciousness / subjective experience is what any computational process feels like from the inside. And therefore, when the computational process terminates, so too does the subjective experience.

Comment author: lavalamp 01 October 2013 07:13:06PM 2 points [-]

Can you taboo "personal identity"? I don't understand what important thing you could lose by going under general anesthesia.

Comment author: [deleted] 01 October 2013 07:40:55PM *  0 points [-]

It's easier to explain in the case of multiple copies of yourself. Imagine the transporter were turned into a replicator - it gets stuck in a loop reconstructing the last thing that went through it, namely you. You step off and turn around to find another version of you just coming out. And then another, and another, etc. Each one of you shares the same memories, but from that moment on you have diverged. Each clone continues life with their own subjective experience until that experience is terminated by that clone's death.
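
As a rough sketch, in made-up Python that models only the bookkeeping of the thought experiment: every clone starts from the same snapshot of your state, and from that moment on each accumulates its own history.

    import copy

    class Mind:
        def __init__(self, memories):
            self.memories = list(memories)

        def experience(self, event):
            self.memories.append(event)

    you = Mind(["stepped into the replicator"])

    # The replicator "stuck in a loop": every clone starts from the same
    # snapshot of your state at the moment of copying...
    clones = [copy.deepcopy(you) for _ in range(100)]

    # ...but from that moment on, each copy accumulates its own experiences.
    you.experience("watched the clones walk out")
    for i, clone in enumerate(clones):
        clone.experience(f"walked out of the replicator as clone #{i}")

    print(you.memories[-1])        # the original's history has diverged
    print(clones[0].memories[-1])  # each clone's history diverged differently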

That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones. You are welcome to suggest a better term.

The replicator / clone thought experiment shows that “subjective experience of identity” is something different from the information pattern that represents your mind. There is something, although at this moment that something is not well defined, which makes you the same “you” that will exist five minutes in the future, but which separates you from the “you”s that walked out of the replicator, or exist in simulation, for example.

The first step is recognizing this distinction. Then turn around and apply it to less fantastical situations. If the clone is “you” but not you (meaning no shared identity, and my apologies for the weak terminology), then who's to say that a future simulation of “you” would also be you? What about cryonics - will your unfrozen brain still be you? That might depend on what they do to repair damage from vitrification. What about general anesthesia? Again, I need to learn more about how general anesthesia works, but if they shut down your processing centers and then restart you later, how is that different from the teleportation or simulation scenario? After all, we've already established that whatever provides personal identity, it's not physical continuity.

Comment author: TheOtherDave 01 October 2013 07:48:45PM 1 point [-]

That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones.

Well, OK. So suppose that, after I go through that transporter/replicator, you ask the entity that comes out whether it has the belief, real or illusory, that it is the same person in this moment that it was at the moment it walked into the machine, and it says "yes".

If personal identity is what creates that belief, and that entity has that belief, it follows that that entity shares my personal identity... doesn't it?

Comment author: [deleted] 01 October 2013 08:17:54PM *  0 points [-]

Well, OK. So suppose that, after I go through that transporter/replicator, you ask the entity that comes out whether it has the belief, real or illusory, that it is the same person in this moment that it was at the moment it walked into the machine, and it says "yes".

If personal identity is what creates that belief, and that entity has that belief, it follows that that entity shares my personal identity... doesn't it?

Not quite. If You!Mars gave it thought before answering, his thinking probably went like this: “I have memories of going into the transporter, just a moment ago. I have a continuous sequence of memories, from then until now. Nowhere in those memories does my sense of self change. Right now I am experiencing the same sense of self I always remember experiencing, and laying down new memories. Ergo, by backwards induction, I am the same person that walked into the teleporter.” However, for that - or any - line of meta reasoning to hold, (1) your memories need to accurately correspond with the true and full history of reality and (2) you need to trust that what occurs in the present also occurred in the past. In other words, it's kinda like saying “my memory wasn't altered because I would have remembered that.” It's not a circular argument per se, but it is a meta loop.
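
To put the meta loop in toy form (a made-up sketch, nothing more): the check You!Mars runs over his own memories returns the same verdict under either history, so by itself it can't settle which history actually happened.

    def introspective_check(memories):
        # "I remember walking in, and nothing in my memories shows a break."
        return memories[0] == "stepped into the teleporter" and "gap" not in memories

    # History A: the original's subjective experience somehow carries over to Mars.
    history_a = ["stepped into the teleporter", "woke up on Mars"]

    # History B: the original terminated on Earth; a new process started on Mars
    # from the scanned state. The resulting memory state is identical.
    history_b = ["stepped into the teleporter", "woke up on Mars"]

    print(introspective_check(history_a))  # True
    print(introspective_check(history_b))  # True - same verdict either way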

The map is not the territory. What happened to You!Earth's subjective experience is an objective, if perhaps not empirically observable fact. You!Mars' belief about what happened may or may not correspond with reality.

Comment author: TheOtherDave 01 October 2013 09:04:40PM 0 points [-]

What if me!Mars, after giving it thought, shakes his head and says "no, that's not right. I say I'm the same person because I still have a sense of subjective experience, which is separate from memories or shared history, which gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and which separates me from my clones"?

Do you take his word for it?
Do you assume he's mistaken?
Do you assume he's lying?

Comment author: lavalamp 01 October 2013 08:21:59PM 0 points [-]

That experiment shows that "personal identity", whatever that means, follows a time-tree, not a time-line. That conclusion also must hold if MWI is true.

So I get that there's a tricky (?) labeling problem here, where it's somewhat controversial which copy of you should be labeled as having your "personal identity". The thing that isn't clear to me is why the labeling problem is important. What observable feature of reality depends on the outcome of this labeling problem? We all agree on how those copies of you will act and what beliefs they'll have. What else is there to know here?

Comment author: [deleted] 01 October 2013 08:44:31PM 0 points [-]

Would you step through the transporter? If you answered no, would it be moral to force you through the transporter? What if I didn't know your wishes, but had to extrapolate? Under what conditions would it be okay?

Also, take the more vile forms of Pascal's mugging and acausal trades. If something threatens torture to a simulation of you, should you be concerned about actually experiencing the torture, thereby subverting your rationalist impulse to shut up and multiply utility?

Comment author: lavalamp 01 October 2013 08:59:11PM 0 points [-]

Would you step through the transporter? If you answered no, would it be moral to force you through the transporter? What if I didn't know your wishes, but had to extrapolate? Under what conditions would it be okay?

I don't see how any of that depends on the question of which computations (copies of me) get labeled with "personal identity" and which don't.

Also, take the more vile forms of Pascal's mugging and acausal trades. If something threatens torture to a simulation of you, should you be concerned about actually experiencing the torture, thereby subverting your rationalist impulse to shut up and multiply utility?

Depending on specifics, yes. But I don't see how this depends on the labeling question. This just boils down to "what do I expect to experience in the future?" which I don't see as being related to "personal identity".

Comment author: shminux 01 October 2013 08:13:50PM -1 points [-]

I'd have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table

Hmm, what about across dreamless sleep? Or fainting? Or falling and hitting your head and losing consciousness for an instant? Would these count as killing one person and creating another? And so be morally net-negative?

Comment author: [deleted] 01 October 2013 08:38:04PM 0 points [-]

If computational continuity is what matters, then no. Just because you have no memory doesn't mean you didn't experience it. There is in fact a continuous experience throughout all of the examples you gave, just no new memories being formed. But from the last point you remember (going to sleep, fainting, hitting your head) to when you wake up, you did exist and were running a computational process. From our understanding of neurology you can be certain that there was no interruption of subjective experience of identity, even if you can't remember what actually happened.

Whether this is also true of general anesthesia depends very much on the biochemistry going on. I admit ignorance here.

Comment author: shminux 01 October 2013 08:46:51PM *  -1 points [-]

OK, I guess I should give up, too. I am utterly unable to relate to whatever it is you mean by "because you have no memory doesn't mean you didn't experience it" or "subjective experience of identity, even if you can't remember what actually happened".

Comment author: TheOtherDave 01 October 2013 07:09:33PM *  0 points [-]

Clearly, your subjective experience today is HONK-screech-bam-oblivion, since all the subjective experiences that come after that don't happen today in this example... they happen 50 years later.

It is not in the least bit clear to me that this means those subjective experiences aren't your subjective experiences. You aren't some epiphenomenal entity that dissipates in the course of those 50 years and therefore isn't around to experience those experiences when they happen... whatever is having those subjective experiences, whenever it is having them, that's you.

maybe we should taboo the phrase “personal/subjective identity”.

Sounds like a fine plan, albeit a difficult one. Want to take a shot at it?

EDIT: Ah, you did so elsethread. Cool. Replied there.

Comment author: lavalamp 01 October 2013 07:17:20PM 0 points [-]

Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Non-running algorithms have no experiences, so the latter is not a possible outcome. I think this is perhaps an unspoken axiom here.

Comment author: [deleted] 01 October 2013 07:25:01PM 1 point [-]

Non-running algorithms have no experiences, so the latter is not a possible outcome. I think this is perhaps an unspoken axiom here.

No disagreement here - that's what I meant by oblivion.

Comment author: lavalamp 01 October 2013 07:31:37PM 2 points [-]

OK, cool, but now I'm confused. If we're meaning the same thing, I don't understand how it can be a question-- "not running" isn't a thing an algorithm can experience; it's a logical impossibility.