TheOtherDave comments on Timeless Identity - Less Wrong

Post author: Eliezer_Yudkowsky 03 June 2008 08:16AM


Comment author: TheOtherDave 30 September 2013 05:32:15PM 1 point [-]

Your comment would make more sense to me if I removed the word "not" from the sentence you quote. (Also, if I don't read past that sentence of someonewrongonthenet's comment.)

That said, I agree completely that the kinds of vague identity concerns about cryonics that the quoted sentence with "not" removed would be raising would also arise, were one consistent, about routine continuation of existence over time.

Comment author: [deleted] 30 September 2013 06:37:14PM 0 points [-]

Hrm... ambiguous semantics. I took it to imply acceptance of the idea but not elevation of its importance, but I see how it could be interpreted differently. And yes, the rest of the post addresses something completely different. But if I can continue for a moment on the tangent, expanding my comment above (even if it doesn't apply to the OP):

You actually continue functioning when you sleep, it's just that you don't remember details once you wake up. A more useful example for such discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a "different you" waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I'm still a cryonics member.

More troubling is the application to uploading. I haven't done this yet, but I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity. I was hoping that “Timeless Identity” would address this point, but sadly it punts the issue.

Comment author: TheOtherDave 30 September 2013 07:01:54PM 3 points [-]

Well, if the idea is unimportant to the OP, presumably that also helps explain how they can sleep at night.

WRT the tangent... my own position wrt preservation of personal identity is that while it's difficult to articulate precisely what it is that I want to preserve, and I'm not entirely certain there is anything cogent I want to preserve that is uniquely associated with me, I'm pretty sure that whatever does fall in that category has nothing to do with either continuity of computation or similarity of physical substrate. I'm about as sanguine about continuing my existence as a software upload as I am about continuing it as this biological system or as an entirely different biological system, as long as my subjective experience in each case is not traumatically different.

Comment author: [deleted] 01 October 2013 05:03:19PM 0 points [-]

I wrote up about a page-long reply, then realized it probably deserves its own posting. I'll see if I can get to that in the next day or so. There's a wide spectrum of possible solutions to the personal identity problem, from physical continuity (falsified), to pattern continuity and causal continuity (described by Eliezer in the OP), to computational continuity (my own view, I think). It's not a minor point, though: whichever view turns out to be correct has immense ramifications for morality and timeless decision theory, among other things...

Comment author: TheOtherDave 01 October 2013 05:08:09PM 1 point [-]

When you write up the post, you might want to say a few words about what it means for one of these views to be "correct" or "incorrect."

Comment author: [deleted] 01 October 2013 05:58:35PM 0 points [-]

OK, I will, but that part is easy enough to state here: I mean correct in the reductionist sense - the simplest explanation which resolves the original question and/or associated confusion, while adding to our predictive capacity and not introducing new confusion.

Comment author: TheOtherDave 01 October 2013 06:56:39PM *  2 points [-]

Mm. I'm not sure I understood that properly; let me echo my understanding of your view back to you and see if I got it.

Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.

If it turns out that computational or physical continuity is the correct answer to what preserves personal identity, then I in fact never arrive at my destination, although the thing that gets constructed at the destination (falsely) believes that it's me, knows what I know, etc. This is, as you say, an issue of great moral concern... I have been destroyed, this new person is unfairly given credit for my accomplishments and penalized for my errors, and in general we've just screwed up big time.

Conversely, if it turns out that pattern or causal continuity is the correct answer, then there's no problem.

Therefore it's important to discover which of those facts is true of the world.

Yes? This follows from your view? (If not, I apologize; I don't mean to put up strawmen, I'm genuinely misunderstanding.)

If so, your view is also that if we want to know whether that's the case or not, we should look for the simplest answer to the question "what does my personal identity comprise?" that does not introduce new confusion and which adds to our predictive capacity. (What is there to predict here?)

Yes?

EDIT: Ah, I just read this post where you say pretty much this. OK, cool; I understand your position.

Comment author: [deleted] 01 October 2013 07:16:05PM 0 points [-]

Yes, that is not only 100% accurate, but describes where I'm headed.

I am looking for the simplest explanation of the subjective continuity of personal identity, which either answers or dissolves the question. Further, the explanation should either explain which teleportation scenario is correct (identity transfer, or murder+birth), or satisfactorily explain why it is a meaningless distinction.

What is there to predict here?

If I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.

Yes, it is perhaps likely that this will never be experimentally observable. That may even be a tautology, since we are talking about subjective experience. But still, a reductionist theory of consciousness could provide a simple, easy-to-understand explanation for the origin of personal identity (e.g., what a computational machine feels like from the inside), one which predicts either identity transfer or murder + birth. That would be enough for me, at least as long as there aren't competing, equally simple theories.

Comment author: TheOtherDave 01 October 2013 07:43:57PM 0 points [-]

What is there to predict here?
If I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.

Well, you certainly won't experience oblivion, more or less by definition. The question is whether you will experience walking on Mars or not.

But there is no distinct observation to be made in these two cases. That is, we agree that either way there will be an entity having all the observable attributes (both subjective and objective; this is not about experimental proof, it's about the presence or absence of anything differentially observable by anyone) that Mark Friedenbach has, walking on Mars.

So, let me rephrase the question: what observation is there to predict here?

Comment author: [deleted] 01 October 2013 07:58:06PM 0 points [-]

So, let me rephrase the question: what observation is there to predict here?

That's not the direction I was going with this. It isn't about empirical observation, but rather aspects of morality which depend on subjective experience. The prediction is under what conditions subjective experience terminates. Even if not testable, that is still an important thing to find out, with moral implications.

Is it moral to use a teleporter? From what I can tell, that depends on whether the person's subjective experience is terminated in the process. From the utility point of view the outcomes are very nearly the same - you've murdered one person, but given "birth" to an identical copy in the process. However, if the original, now-destroyed person didn't want to die, or wouldn't have wanted his clone to die, then it's a net negative.

As I said elsewhere, the teleporter is the easiest way to think of this, but the result has many other implications from general anesthesia, to cryonics, to Pascal's mugging and the basilisk.

Comment author: Eliezer_Yudkowsky 01 October 2013 09:51:37PM 3 points [-]

Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.

I don't know what "computation" or "computational continuity" means if it's considered to be separate from causal continuity, and I'm not sure other philosophers have any standard idea of this either. From the perspective of the Planck time, your brain is doing extremely slow 'computations' right now, it shall stand motionless a quintillion ticks and more before whatever arbitrary threshold you choose to call a neural firing. Or from a faster perspective, the 50 years of intervening time might as well be one clock tick. There can be no basic ontological distinction between fast and slow computation, and aside from that I have no idea what anyone in this thread could be talking about if it's distinct from causal continuity.

Comment author: TheOtherDave 01 October 2013 10:29:46PM 5 points [-]

(shrug) It's Mark's term and I'm usually willing to make good-faith efforts to use other people's language when talking to them. And, yes, he seems to be drawing a distinction between computation that occurs with rapid enough updates that it seems continuous to a human observer and computation that doesn't. I have no idea why he considers that distinction important to personal identity, though... as far as I can tell, the whole thing depends on the implicit idea of identity as some kind of ghost in the machine that dissipates into the ether if not actively preserved by a measurable state change every N microseconds. I haven't confirmed that, though.

Comment author: [deleted] 02 October 2013 01:30:10AM *  -2 points [-]

Hypothesis: consciousness is what a physical interaction feels like from the inside.

Importantly, it is a property of the interacting system, which can have various degrees of coherence - a concept different from quantum coherence, one I am still developing: something along the lines of negative-entropic complexity. There is therefore a deep correlation between negentropy and consciousness. Random thermodynamic motion in a gas is about as minimally conscious as you can get (lots of random interactions, but all short-lived and decoherent). A rock is slightly more conscious due to its crystalline structure, but probably leads a rather boring existence (by our standards, at least). And so on, all the way up to the very negentropic primate brain, which has the high degree of coherent experience that we call "consciousness" or "self."

I know this sounds like making thinking an ontologically basic concept. It's rather the reverse - I am building the experience of thinking up from physical phenomena: consciousness is the experience of organized physical interactions. But I'm not yet convinced of it either. If you throw out the concept of coherent interaction (what I have been calling computational continuity), then it does reduce to causal continuity. But causal continuity does have its problems, which make me suspect it is not the final, ultimate answer...

Comment author: shminux 02 October 2013 03:53:47AM *  -1 points [-]

Hypothesis: consciousness is what a physical interaction feels like from the inside.

I would imagine that consciousness (in the sense of self-awareness) is the ability to introspect into your own algorithm. The more you understand what makes you tick, rather than mindlessly following inexplicable urges and instincts, the more conscious you are.

Comment author: RichardKennaway 02 October 2013 07:00:12AM 0 points [-]

Hypothesis: consciousness is what a physical interaction feels like from the inside.
...
consciousness is the experience of organized physical interactions.

How do you explain the existence of the phenomenon of "feeling like" and of "experience"?

Comment author: pengvado 01 October 2013 06:09:33PM 2 points [-]

What relevance does personal identity have to TDT? TDT doesn't depend on whether the other instances of TDT are in copies of you, or in other people who merely use the same decision theory as you.

Comment author: [deleted] 01 October 2013 06:33:08PM 0 points [-]

It has relevance for the basilisk scenario, which I'm not sure I should say any more about.

Comment author: shminux 30 September 2013 11:00:59PM *  0 points [-]

I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity.

Like TheOtherDave (I presume), I consider my identity to be adequately described by whatever Turing machine can emulate my brain, or at least its prefrontal cortex + relevant memory storage. I suspect that a faithful simulation of just my Brodmann area 10, coupled with a large chunk of my memories, would restore enough of my self-awareness to be considered "me". This sim-me would probably lose most of my emotions without the rest of the brain, but it is still infinitely better than none.

Comment author: TheOtherDave 01 October 2013 12:12:21AM 0 points [-]

Like TheOtherDave (I presume), I consider my identity to be adequately described by whatever Turing machine can emulate my brain, or at least its prefrontal cortex + relevant memory storage.

There's a very wide range of possible minds I consider to preserve my identity; I'm not sure the majority of those emulate my prefrontal cortex significantly more closely than they emulate yours, and the majority of my memories are not shared by the majority of those minds.

Comment author: shminux 01 October 2013 12:53:14AM -1 points [-]

Interesting. I wonder what you would consider a mind that preserves your identity. For example, I assume that the total of your posts online, plus whatever other information is available without some hypothetical future brain scanner, all running as a process on some simulator, is probably not enough.

Comment author: TheOtherDave 01 October 2013 02:18:51AM 0 points [-]

At one extreme, if I assume those posts are being used to create a me-simulation by me-simulation-creator that literally knows nothing else about humans, then I'm pretty confident that the result is nothing I would identify with. (I'm also pretty sure this scenario is internally inconsistent.)

At another extreme, if I assume the me-simulation-creator has access to a standard template for my general demographic and is just looking to customize that template sufficiently to pick out some subset of the volume of mindspace my sufficiently preserved identity defines... then maybe. I'd have to think a lot harder about what information is in my online posts and what information would plausibly be in such a template to even express a confidence interval about that.

That said, I'm certainly not comfortable treating the result of that process as preserving "me."

Then again I'm also not comfortable treating the result of living a thousand years as preserving "me."

Comment author: someonewrongonthenet 01 October 2013 03:03:07AM *  0 points [-]

a large chunk of my memories

You'll need the rest of the brain because these other memories would be distributed throughout the rest of your cortex. The hippocampus only contains recent episodic memories.

If you lost your temporal lobe, for example, you'd lose all non-episodic knowledge concerning what the names of things are, how they are categorized, and what the relationships between them are.

Comment author: TheOtherDave 01 October 2013 03:07:43AM 0 points [-]

That said, I'm not sure why I should care much about having my non-episodic knowledge replaced with an off-the-shelf encyclopedia module. I don't identify with it much.

Comment author: someonewrongonthenet 01 October 2013 03:30:51AM *  0 points [-]

If you only kept the hippocampus, you'd lose your non-recent episodic memories too. But technical issues aside, let me defend the "encyclopedia":

Episodic memory is basically a cassette reel of your life, along with a few personalized associations and maybe memories of thoughts and emotions. Everything that we associate with the word knowledge is non-episodic. It's not just verbal labels - that was just a handy example that I happened to know the brain region for. I'd actually care more about the non-episodic memories than the episodic stuff.

Things like "what is your wife's name and what does her face look like" are non-episodic memory. You don't have to think back to a time when you specifically saw your wife to remember what her name and face are, and that you love her - that information is treated as a fact independent of any specific memory, indelibly etched into your model of the world. Cognitively speaking, "I love my wife Stacy, she looks like this" is as much of a fact as "grass is a green plant", and they are both non-episodic memories. Your episodic memory reel wouldn't even make sense without that sort of information. I'd still identify someone with memory loss, but retaining my non-episodic memory, as me. I'd identify someone with only my episodic memories as someone else, looking at a reel of memory that does not belong to them and means nothing to them.

(Trigger Warning: link contains writing in a diary which is sad, horrifying, and nonfiction.) This is what complete episodic memory loss looks like. Patients like this can still remember the names and faces of people they love.

Ironically... area 10 might actually be replaceable. I'm not sure whether any personalized memories are kept there - I don't know what that specific region does, but it's in an area that mostly deals with executive function - which is important for personality, but not necessarily individuality.

Comment author: TheOtherDave 01 October 2013 03:46:42AM 0 points [-]

I take it you're assuming that information about my husband, and about my relationship to my husband, isn't in the encyclopedia module along with information about mice and omelettes and your relationship to your wife.

If that's true, then sure, I'd prefer not to lose that information.

Comment author: someonewrongonthenet 01 October 2013 04:05:26AM *  1 point [-]

I take it you're assuming

Well...yeah, I was. I thought the whole idea of having an encyclopedia was to eliminate redundancy through standardization of the parts of the brain that were not important for individuality?

If your husband and my husband, your omelette and my omelette, are all stored in the encyclopedia, it wouldn't be an "off-the-shelf encyclopedia module" anymore. It would be an index containing individual people's non-episodic knowledge. At that point, it's just an index of partial uploads. We can't standardize that encyclopedia to everyone: if the thing that stores your omelette and your husband went around viewing my episodic reel and knowing all the personal stuff about my omelette and husband... that would be weird, and the resulting being would be very confused (let alone if the entire human race was in there - I'm not sure how that would even work).

(Also, going back into the technical stuff, there may or may not be a solid dividing line between very old episodic memory and non-episodic memory.)

Comment author: TheOtherDave 01 October 2013 05:07:07AM *  0 points [-]

Sure, if your omelette and my omelette are so distinct that there is no common data structure that can serve as a referent for both, and ditto for all the other people in the world, then the whole idea of an encyclopedia falls apart. But that doesn't seem terribly likely to me.

Your concept of an omelette probably isn't exactly isomorphic to mine, but there's probably a parametrizable omelette data structure we can construct that, along with a handful of parameter settings for each individual, can capture everyone's omelette. The parameter settings go in the representation of the individual; the omelette data structure goes in the encyclopedia.

And, in addition, there's a bunch of individualizing episodic memory on top of that... memories of cooking particular omelettes, of learning to cook an omelette, of learning particular recipes, of that time what ought to have been an omelette turned into a black smear on the pan, etc. And each of those episodic memories refers to the shared omelette data structure, but is stored with and is unique to the uploaded agent. (Maybe. It may turn out that our individual episodic memories have a lot in common as well, such that we can store a standard lifetime's memories in the shared encyclopedia and just store a few million bits of parameter settings in each individual profile. I suspect we overestimate how unique our personal narratives are, honestly.)

Similarly, it may be that our relationships with our husbands are so distinct that there is no common data structure that can serve as a referent for both. But that doesn't seem terribly likely to me. Your relationship with your husband isn't exactly isomorphic to mine, of course, but it can likely similarly be captured by a common parameterizable relationship-to-husband data structure.

As for the actual individual who happens to be my husband, well, the majority of the information about him is common to all kinds of relationships with any number of people. He is his father's son and his stepmother's stepson and my mom's son-in-law and so on and so forth. And, sure, each of those people knows different things, but they know those things about the same person; there is a central core. That core goes in the encyclopedia, and pointers to what subset each person knows about him goes in their individual profiles (along with their personal experiences and whatever idiosyncratic beliefs they have about him).

So, yes, I would say that your husband and my husband and your omelette and my omelette are all stored in the encyclopedia. You can call that an index of partial uploads if you like, but it fails to incorporate whatever additional computations create first-person experience. It's just a passive data structure.
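(To make the encyclopedia-plus-profiles architecture concrete, here is a minimal sketch in Python. Every name in it - ConceptTemplate, UploadProfile, concept_of - is hypothetical, invented purely for illustration; this is one possible shape for "shared defaults plus per-person parameter settings", not a definitive design.)

```python
from dataclasses import dataclass, field

@dataclass
class ConceptTemplate:
    """A shared entry in the encyclopedia, e.g. 'omelette'."""
    name: str
    defaults: dict  # parameter names -> values shared by everyone

@dataclass
class EpisodicMemory:
    """A private memory that points at a shared concept instead of copying it."""
    description: str
    concept: ConceptTemplate  # pointer to the common library representation

@dataclass
class UploadProfile:
    """One individual: small parameter overrides plus private episodic memories."""
    name: str
    parameter_settings: dict = field(default_factory=dict)
    episodic_memories: list = field(default_factory=list)

    def concept_of(self, template: ConceptTemplate) -> dict:
        """This person's version of a concept: shared defaults + personal tweaks."""
        return {**template.defaults, **self.parameter_settings.get(template.name, {})}

# One copy of 'omelette' lives in the encyclopedia; each profile stores only deltas.
encyclopedia = {"omelette": ConceptTemplate("omelette", {"eggs": 3, "folded": True})}

dave = UploadProfile("Dave", parameter_settings={"omelette": {"eggs": 2}})
dave.episodic_memories.append(
    EpisodicMemory("the time it became a black smear on the pan",
                   encyclopedia["omelette"]))

print(dave.concept_of(encyclopedia["omelette"]))  # {'eggs': 2, 'folded': True}
```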

Incidentally and unrelatedly, I'm not nearly as committed as you sound to preserving our current ignorance of one another's perspective in this new architecture.

Comment author: someonewrongonthenet 01 October 2013 09:04:39AM *  0 points [-]

I'm really skeptical that parametric functions which vary on dimensions concerning omelettes (Egg species? Color? Ingredients? How does this even work?) are a more efficient or more accurate way of preserving what our wetware encodes, compared to simulating the neural networks devoted to dealing with omelettes. I wouldn't even know how to start working on the problem of mapping a conceptual representation of an omelette into parametric functions (unless we're just using the parametric functions to model the properties of individual neurons - that's fine).

Can you give an example concerning what sort of dimension you would parametrize so I have a better idea of what you mean?

Incidentally and unrelatedly, I'm not nearly as committed as you sound to preserving our current ignorance of one another's perspective in this new architecture.

I was more worried that it might break stuff (as in, the resulting beings would need to be built quite differently in order to function) if one another's perspectives overlapped. Also, that brings us back to the original question I was raising about living forever - what exactly is it that we value and want to preserve?

Comment author: CynicalOptimist 17 November 2016 08:08:01PM *  0 points [-]

I think I've got a good response for this one.

My non-episodic memory contains the "facts" that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd aren't an interesting band. My boyfriend's non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd's music is transcendentally good).

Objectively, these are opinions, not facts. But we experience them as facts. If I want to preserve my sense of identity, then I would need to retain the facts that were in my non-episodic memory. More than that, I would also lose my sense of self if I gained contradictory memories. I would need to have my non-episodic memories and not have the facts from my boyfriend's memory.

That's the reason why "off the shelf" doesn't sound suitable in this context.

Comment author: TheOtherDave 18 November 2016 10:22:01PM *  0 points [-]

So, on one level, my response to this is similar to the one I gave a few years ago (http://lesswrong.com/lw/qx/timeless_identity/9trc)... I agree that there's a personal relationship with BtVS, just like there's a personal relationship with my husband, that we'd want to preserve if we wanted to perfectly preserve me.

I was merely arguing that the bitlength of that personal information is much less than the actual information content of my brain, and there's a great deal of compression leverage to be gained by taking the shared memories of BtVS out of both of your heads (and the other thousands of viewers), replacing them with pointers to a common library representation of the show, and then having your personal relationship refer to the common library representation rather than your private copy.

The personal relationship remains local and private, but it takes up way less space than your mind currently does.

That said... coming back to this conversation after three years, I'm finding I just care less and less about preserving whatever sense of self depends on these sorts of idiosyncratic judgments.

I mean, when you try to recall a BtVS episode, your memory is imperfect... if you watch it again, you'll uncover all sorts of information you either forgot or remembered wrong. If I offered to give you perfect eidetic recall of BtVS -- no distortion of your current facts about the goodness of it, except insofar as those facts turn out to be incompatible with an actual perception (e.g., you'd have changed your mind if you watched it again on TV, too) -- would you take it?

I would. I mean, ultimately, what does it matter if I replace my current vague memory of the soap opera Spike was obsessively watching with a more specific memory of its name and whatever else we learned about it? Yes, that vague memory is part of my unique identity, I guess, in that nobody else has quite exactly that vague memory... but so what? That's not enough to make it worth preserving.

And for all I know, maybe you agree with me... maybe you don't want to preserve your private "facts" about what kind of tie Giles was wearing when Angel tortured him, etc., but you draw the line at losing your private "facts" about how good the show was. Which is fine, you care about what you care about.

But if you told me right now that I'm actually an upload with reconstructed memories, and that there was a glitch such that my current "facts" about BtVS being a good show for its time are mis-reconstructed, and Dave before he died thought it was mediocre... well, so what?

I mean, before my stroke, I really disliked peppers. After my stroke, peppers tasted pretty good. This was startling, but it posed no sort of challenge to my sense of self.

Apparently (Me + likes peppers) ~= (Me + dislikes peppers) as far as I'm concerned.

I suspect there's a million other things like that.

Comment author: shminux 01 October 2013 06:28:16AM -1 points [-]

Ironically... area 10 might actually be replaceable. I'm not sure whether any personalized memories are kept there - I don't know what that specific region does, but it's in an area that mostly deals with executive function - which is important for personality, but not necessarily individuality.

What's the difference between personality and individuality?

Comment author: someonewrongonthenet 01 October 2013 09:29:20AM *  1 point [-]

In my head:

Personality is a set of dichotomous variables plotted on a bell curve. "Einstein was extroverted, charismatic, nonconforming, and prone to absent-mindedness" describes his personality. We all have these traits in various amounts. You can turn some of these personality knobs really easily with drugs. I can't specify Einstein out of every person in the world using only his personality traits - I can only specify individuals similar to him.

Individuality is stuff that's specific to the person. "Einstein's second marriage was to his cousin and he had at least 6 affairs. He admired Spinoza, and was a contemporary of Tagore. He was a socialist and cared about civil rights. He had always thought there was something wrong about refrigerators." Not all of these are bell-curve variables - you either spoke to Tagore or you didn't. And it makes no sense to put people on a "satisfaction with refrigerators" spectrum, even though I suppose you could if you wanted to. And all this information together points specifically to Einstein, and no one else in the world. Everyone in the world has a set of unique traits like fingerprints - and it doesn't even make sense to ask what the "average" is, since most of the variables don't exist on the same dimension.
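(A toy sketch of that distinction, in Python; all the traits, numbers, and facts below are invented for illustration. Personality lives on shared, comparable dimensions; individuality is a bag of propositions with no common axis to average over.)

```python
# Personality: everyone can be scored on the same dimensions (here, rough
# z-scores), so any two people are directly comparable on each axis.
einstein_personality = {
    "extraversion": 1.2,
    "nonconformity": 2.0,
    "absent_mindedness": 1.5,
}

# Individuality: arbitrary facts specific to one person. There is no meaningful
# "average" over these, and together they pick out exactly one individual.
einstein_individuality = {
    "second marriage was to his cousin",
    "admired Spinoza",
    "was a contemporary of Tagore",
    "thought there was something wrong about refrigerators",
}

# You can compare personalities dimension by dimension...
other = {"extraversion": -0.3, "nonconformity": 2.1, "absent_mindedness": 0.2}
distances = {k: abs(einstein_personality[k] - other[k]) for k in einstein_personality}
print(distances)

# ...but individuality facts either hold or they don't - there is no spectrum.
print("admired Spinoza" in einstein_individuality)  # True
```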

And...well, when it comes to Area 10, just intuitively, do you really want to define yourself by a few variables that influence your executive function? Personally I define myself partially by my ideas, and partially by my values...and the former is definitely in the "individuality" territory.

Comment author: shminux 01 October 2013 04:49:14PM -1 points [-]

OK, I understand what you mean by personality vs individuality. However, I doubt that the functionality of BA10 can be described "by a few variables that influence your executive function". Then again, no one knows anything definite about it.

Comment author: [deleted] 01 October 2013 12:14:43AM *  0 points [-]

That said, I agree completely that the kinds of vague identity concerns about cryonics that the quoted sentence with "not" removed would be raising would also arise, were one consistent, about routine continuation of existence over time.

There are things that are very nearly preserved when I go to bed and wake up eight hours later, but that wouldn't be if I woke up sixty years later, e.g. other people's memories of me (see I Am a Strange Loop) or the culture of the place where I live (see Good Bye, Lenin!).

(I'm not saying whether this is one of the main reasons why I'm not signed up for cryonics.)

Comment author: TheOtherDave 01 October 2013 01:48:07AM 0 points [-]

Point.