TheOtherDave comments on Timeless Identity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Your comment would make more sense to me if I removed the word "not" from the sentence you quote. (Also, if I don't read past that sentence of someonewrongonthenet's comment.)
That said, I agree completely that the vague identity concerns about cryonics that the quoted sentence would raise (with "not" removed) would also arise, were one consistent, about routine continuation of existence over time.
Hrm... ambiguous semantics. I took it to imply acceptance of the idea but not elevation of its importance, but I see how it could be interpreted differently. And yes, the rest of the post addresses something completely different. But if I can continue for a moment on the tangent, expanding my comment above (even if it doesn't apply to the OP):
You actually continue functioning when you sleep; it's just that you don't remember the details once you wake up. A more useful example for such discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a "different you" waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I'm still a cryonics member.
More troubling is the application to uploading. I haven't done this yet, but I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity. I was hoping that “Timeless Identity” would address this point, but sadly it punts the issue.
Well, if the idea is unimportant to the OP, presumably that also helps explain how they can sleep at night.
WRT the tangent... my own position wrt preservation of personal identity is that while it's difficult to articulate precisely what it is that I want to preserve, and I'm not entirely certain there is anything cogent I want to preserve that is uniquely associated with me, I'm pretty sure that whatever does fall in that category has nothing to do with either continuity of computation or similarity of physical substrate. I'm about as sanguine about continuing my existence as a software upload as I am about continuing it as this biological system or as an entirely different biological system, as long as my subjective experience in each case is not traumatically different.
I wrote up about a page-long reply, then realized it probably deserves its own posting. I'll see if I can get to that in the next day or so. There's a wide spectrum of possible solutions to the personal identity problem, from physical continuity (falsified) to pattern continuity and causal continuity (described by Eliezer in the OP), to computational continuity (my own view, I think). It's not a minor point though, whichever view turns out to be correct has immense ramifications for morality and timeless decision theory, among other things...
When you write up the post, you might want to say a few words about what it means for one of these views to be "correct" or "incorrect."
OK, I will, but that part is easy enough to state here: I mean correct in the reductionist sense — the simplest explanation which resolves the original question and/or associated confusion, while adding to our predictive capacity and not introducing new confusion.
Mm. I'm not sure I understood that properly; let me echo my understanding of your view back to you and see if I got it.
Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.
If it turns out that computational or physical continuity is the correct answer to what preserves personal identity, then I in fact never arrive at my destination, although the thing that gets constructed at the destination (falsely) believes that it's me, knows what I know, etc. This is, as you say, an issue of great moral concern... I have been destroyed, this new person is unfairly given credit for my accomplishments and penalized for my errors, and in general we've just screwed up big time.
Conversely, if it turns out that pattern or causal continuity is the correct answer, then there's no problem.
Therefore it's important to discover which of those facts is true of the world.
Yes? This follows from your view? (If not, I apologize; I don't mean to put up strawmen, I'm genuinely misunderstanding.)
If so, your view is also that if we want to know whether that's the case or not, we should look for the simplest answer to the question "what does my personal identity comprise?" that does not introduce new confusion and which adds to our predictive capacity. (What is there to predict here?)
Yes?
EDIT: Ah, I just read this post where you say pretty much this. OK, cool; I understand your position.
Yes, that is not only 100% accurate, but describes where I'm headed.
I am looking for the simplest explanation of the subjective continuity of personal identity, which either answers or dissolves the question. Further, the explanation should either explain which teleportation scenario is correct (identity transfer, or murder+birth), or satisfactorily explain why it is a meaningless distinction.
That is, whether I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.
Yes, it is perhaps likely that this will never be experimentally observable. That may even be a tautology, since we are talking about subjective experience. But still, a reductionist theory of consciousness could provide a simple, easy-to-understand explanation for the origin of personal identity (e.g., what a computational machine feels like from the inside) which predicts either identity transfer or murder + birth. That would be enough for me, at least as long as there are no competing, equally simple theories.
Well, you certainly won't experience oblivion, more or less by definition. The question is whether you will experience walking on Mars or not.
But there is no distinct observation to be made in these two cases. That is, we agree that either way there will be an entity having all the observable attributes (both subjective and objective; this is not about experimental proof, it's about the presence or absence of anything differentially observable by anyone) that Mark Friendebach has, walking on Mars.
So, let me rephrase the question: what observation is there to predict here?
That's not the direction I was going with this. It isn't about empirical observation, but rather aspects of morality which depend on subjective experience. The prediction is under what conditions subjective experience terminates. Even if not testable, that is still an important thing to find out, with moral implications.
Is it moral to use a teleporter? From what I can tell, that depends on whether the person's subjective experience is terminated in the process. From the utility point of view the outcomes are very nearly the same — you've murdered one person, but given "birth" to an identical copy in the process. However, if the original, now-destroyed person didn't want to die, or wouldn't have wanted his clone to die, then it's a net negative.
As I said elsewhere, the teleporter is the easiest way to think of this, but the result has many other implications, from general anesthesia, to cryonics, to Pascal's mugging and the basilisk.
I don't know what "computation" or "computational continuity" means if it's considered to be separate from causal continuity, and I'm not sure other philosophers have any standard idea of this either. From the perspective of the Planck time, your brain is doing extremely slow 'computations' right now: it will stand motionless for a quintillion ticks and more before whatever arbitrary threshold you choose to call a neural firing. Or from a faster perspective, the 50 years of intervening time might as well be one clock tick. There can be no basic ontological distinction between fast and slow computation, and aside from that I have no idea what anyone in this thread could be talking about if it's distinct from causal continuity.
(shrug) It's Mark's term and I'm usually willing to make good-faith efforts to use other people's language when talking to them. And, yes, he seems to be drawing a distinction between computation that occurs with rapid enough updates that it seems continuous to a human observer and computation that doesn't. I have no idea why he considers that distinction important to personal identity, though... as far as I can tell, the whole thing depends on the implicit idea of identity as some kind of ghost in the machine that dissipates into the ether if not actively preserved by a measurable state change every N microseconds. I haven't confirmed that, though.
Hypothesis: consciousness is what a physical interaction feels like from the inside.
Importantly, it is a property of the interacting system, which can have various degrees of coherence — a concept distinct from quantum coherence, and one I am still developing: something along the lines of negative-entropic complexity. There is therefore a deep correlation between negentropy and consciousness. Random thermodynamic motion in a gas is about as minimally conscious as you can get (lots of random interactions, but all short-lived and decoherent). A rock is slightly more conscious due to its crystalline structure, but probably leads a rather boring existence (by our standards, at least). And so on, all the way up to the very negentropic primate brain, which experiences the high degree of coherent experience that we call "consciousness" or "self."
I know this sounds like making thinking an ontologically basic concept. It's rather the reverse — I am building the experience of thinking up from physical phenomena: consciousness is the experience of organized physical interactions. But I'm not yet convinced of it either. If you throw out the concept of coherent interaction (what I have been calling computational continuity), then it does reduce to causal continuity. But causal continuity does have its problems, which make me suspect it is not the final, ultimate answer...
I would imagine that consciousness (in the sense of self-awareness) is the ability to introspect into your own algorithm. The more you understand what makes you tick, rather than mindlessly following inexplicable urges and instincts, the more conscious you are.
How do you explain the existence of the phenomenon of "feeling like" and of "experience"?
What relevance does personal identity have to TDT? TDT doesn't depend on whether the other instances of TDT are in copies of you, or in other people who merely use the same decision theory as you.
It has relevance for the basilisk scenario, which I'm not sure I should say any more about.