
Mark_Friedenbach comments on Timeless Identity - Less Wrong

23 Post author: Eliezer_Yudkowsky 03 June 2008 08:16AM


Comment author: [deleted] 01 October 2013 05:58:35PM 0 points [-]

OK, I will, but that part is easy enough to state here: I mean correct in the reductionist sense - the simplest explanation which resolves the original question and/or associated confusion, while adding to our predictive capacity and not introducing new confusion.

Comment author: TheOtherDave 01 October 2013 06:56:39PM *  2 points [-]

Mm. I'm not sure I understood that properly; let me echo my understanding of your view back to you and see if I got it.

Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.

If it turns out that computational or physical continuity is the correct answer to what preserves personal identity, then I in fact never arrive at my destination, although the thing that gets constructed at the destination (falsely) believes that it's me, knows what I know, etc. This is, as you say, an issue of great moral concern... I have been destroyed, this new person is unfairly given credit for my accomplishments and penalized for my errors, and in general we've just screwed up big time.

Conversely, if it turns out that pattern or causal continuity is the correct answer, then there's no problem.

Therefore it's important to discover which of those facts is true of the world.

Yes? This follows from your view? (If not, I apologize; I don't mean to put up strawmen, I'm genuinely misunderstanding.)

If so, your view is also that if we want to know whether that's the case or not, we should look for the simplest answer to the question "what does my personal identity comprise?" that does not introduce new confusion and which adds to our predictive capacity. (What is there to predict here?)

Yes?

EDIT: Ah, I just read this post where you say pretty much this. OK, cool; I understand your position.

Comment author: [deleted] 01 October 2013 07:16:05PM 0 points [-]

Yes, that is not only 100% accurate, but it also describes where I'm headed.

I am looking for the simplest explanation of the subjective continuity of personal identity, which either answers or dissolves the question. Further, the explanation should either explain which teleportation scenario is correct (identity transfer, or murder+birth), or satisfactorily explain why it is a meaningless distinction.

What is there to predict here?

If I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.

Yes, it is perhaps likely that this will never be experimentally observable. That may even be a tautology, since we are talking about subjective experience. But still, a reductionist theory of consciousness could provide a simple, easy-to-understand explanation for the origin of personal identity (e.g., what a computational machine feels like from the inside) which predicts identity transfer or murder + birth. That would be enough for me, at least as long as there are no competing, equally simple theories.

Comment author: TheOtherDave 01 October 2013 07:43:57PM 0 points [-]

What is there to predict here?
If I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.

Well, you certainly won't experience oblivion, more or less by definition. The question is whether you will experience walking on Mars or not.

But there is no distinct observation to be made in these two cases. That is, we agree that either way there will be an entity having all the observable attributes (both subjective and objective; this is not about experimental proof, it's about the presence or absence of anything differentially observable by anyone) that Mark Friedenbach has, walking on Mars.

So, let me rephrase the question: what observation is there to predict here?

Comment author: [deleted] 01 October 2013 07:58:06PM 0 points [-]

So, let me rephrase the question: what observation is there to predict here?

That's not the direction I was going with this. It isn't about empirical observation, but rather aspects of morality which depend on subjective experience. The prediction is under what conditions subjective experience terminates. Even if not testable, that is still an important thing to find out, with moral implications.

Is it moral to use a teleporter? From what I can tell, that depends on whether the person's subjective experience is terminated in the process. From a utility point of view the outcomes are very nearly the same - you've murdered one person, but given "birth" to an identical copy in the process. However, if the original, now-destroyed person didn't want to die, or wouldn't have wanted his clone to die, then it's a net negative.

As I said elsewhere, the teleporter is the easiest way to think about this, but the result has many other implications, from general anesthesia to cryonics to Pascal's mugging and the basilisk.

Comment author: TheOtherDave 01 October 2013 08:00:48PM 0 points [-]

OK. I'm tapping out here. Thanks for your time.

Comment author: Eliezer_Yudkowsky 01 October 2013 09:51:37PM 3 points [-]

Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.

I don't know what "computation" or "computational continuity" means if it's considered to be separate from causal continuity, and I'm not sure other philosophers have any standard idea of this either. From the perspective of the Planck time, your brain is doing extremely slow 'computations' right now, it shall stand motionless a quintillion ticks and more before whatever arbitrary threshold you choose to call a neural firing. Or from a faster perspective, the 50 years of intervening time might as well be one clock tick. There can be no basic ontological distinction between fast and slow computation, and aside from that I have no idea what anyone in this thread could be talking about if it's distinct from causal continuity.
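The point that a computation's result is indifferent to its speed, or to pauses during which no computation takes place, can be sketched with a toy Python example (my own illustration, not anything from the thread): a computation is halted mid-run, its state serialized as inert data - the analogue of scanning and transmitting - and later resumed, yielding exactly the same answer as an uninterrupted run.

```python
import pickle

def step(state):
    """One tick of a toy computation: accumulate squares of successive integers."""
    i, total = state
    return (i + 1, total + i * i)

# Uninterrupted run: 10 ticks.
state = (0, 0)
for _ in range(10):
    state = step(state)

# Interrupted run: pause after 4 ticks, serialize the state
# (during the pause, no computation occurs; only data persists),
# then "reconstruct" and resume for the remaining 6 ticks.
s = (0, 0)
for _ in range(4):
    s = step(s)
blob = pickle.dumps(s)        # computation halts here
resumed = pickle.loads(blob)  # reconstruction from transmitted data
for _ in range(6):
    resumed = step(resumed)

assert resumed == state  # the pause is invisible to the computation itself
```

Nothing inside the computation distinguishes the paused run from the continuous one, which is why a distinction between "computational continuity" and causal continuity is hard to cash out.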

Comment author: TheOtherDave 01 October 2013 10:29:46PM 5 points [-]

(shrug) It's Mark's term and I'm usually willing to make good-faith efforts to use other people's language when talking to them. And, yes, he seems to be drawing a distinction between computation that occurs with rapid enough updates that it seems continuous to a human observer and computation that doesn't. I have no idea why he considers that distinction important to personal identity, though... as far as I can tell, the whole thing depends on the implicit idea of identity as some kind of ghost in the machine that dissipates into the ether if not actively preserved by a measurable state change every N microseconds. I haven't confirmed that, though.

Comment author: [deleted] 02 October 2013 01:30:10AM *  -2 points [-]

Hypothesis: consciousness is what a physical interaction feels like from the inside.

Importantly, it is a property of the interacting system, which can have various degrees of coherence (a different concept from quantum coherence). This is an idea I am still developing: something along the lines of negative-entropic complexity. There is therefore a deep correlation between negentropy and consciousness. Random thermodynamic motion in a gas is about as minimally conscious as you can get (lots of random interactions, but all short-lived and decoherent). A rock is slightly more conscious due to its crystalline structure, but probably leads a rather boring existence (by our standards, at least). And so on, all the way up to the very negentropic primate brain, which experiences the high degree of coherent experience that we call "consciousness" or "self."

I know this sounds like making thinking an ontologically basic concept. It's rather the reverse - I am building the experience of thinking up from physical phenomena: consciousness is the experience of organized physical interactions. But I'm not yet convinced of it either. If you throw out the concept of coherent interaction (what I have been calling computational continuity), then it does reduce to causal continuity. But causal continuity does have its problems, which make me suspect it is not the final, ultimate answer...

Comment author: shminux 02 October 2013 03:53:47AM *  -1 points [-]

Hypothesis: consciousness is what a physical interaction feels like from the inside.

I would imagine that consciousness (in the sense of self-awareness) is the ability to introspect into your own algorithm. The more you understand what makes you tick, rather than mindlessly following inexplicable urges and instincts, the more conscious you are.

Comment author: RichardKennaway 02 October 2013 07:00:12AM 0 points [-]

Hypothesis: consciousness is what a physical interaction feels like from the inside.
...
consciousness is the experience of organized physical interactions.

How do you explain the existence of the phenomenon of "feeling like" and of "experience"?

Comment author: Kawoomba 02 October 2013 07:54:43AM 0 points [-]

I agree that the grandparent has circumvented addressing the crux of the matter; however, I feel (heh) that the notion of "explain" often comes with unrealistic expectations. It bears remembering that we merely describe relationships as succinctly as possible; that description is then the "explanation".

While we would, e.g., expect/hope for there to be some non-contradictory set of descriptions applying to both gravity and quantum phenomena (for which we'd eat a large complexity penalty, since complex but accurate descriptions always beat out simple but inaccurate descriptions; Occam's Razor applies only to choosing among fitting, not-yet-falsified descriptions), as soon as we've found some pinned-down description in some precise language, there's no guarantee - or, strictly speaking, need - of an even simpler explanation.

A world running according to currently en-vogue physics, plus a box which cannot be described as an extension of said physics, but only in some other way, could in fact be fully explained, with no further explanans for the explanandum.

It seems pretty straightforward to note that there's no way to "derive" phenomena such as "feeling like" in the current physics framework, except of course to describe which states of matter/energy correspond to which qualia.

Such a description could be the explanation, with nothing further to be explained:

If it empirically turned out that a specific kind of matter needs to be arranged in the specific pattern of a vertebrate brain to correlate to qualia, that would "explain" consciousness. If it turned out (as we all expect) that the pattern alone suffices, then certain classes of instantiated algorithms (regardless of the hardware/wetware) would be conscious. Either way, whichever description turned out to be empirically sound would be the explanation.

I also wonder, what could any answer within the current physics framework possibly look like, other than an asterisk behind the equations with the addendum of "values n1 ... nk for parameters p1 ... pk correlate with qualia x"?

Comment author: [deleted] 02 October 2013 07:56:23AM -1 points [-]

How do you explain "feeling like" and "experience" in general? This is LW, so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc. But ultimately all of that reduces down to a big collection of quarks, each taking part in mostly random interactions on the scale of femtoseconds. The apparent organization of the brain is in the map, not the territory. So if subjective experience reduces down to neurons, and neurons reduce down to molecules, and molecules reduce to quarks and leptons, where then does the consciousness reside? "Information patterns" alone is an inadequate answer - that's at the level of the map, not the territory. Quarks and leptons combine into molecules, molecules into neural synapses, and the neurons connect into the 3 lb information-processing network that is my brain. Somewhere along the line, the subjective experience of "consciousness" arises. Where, exactly, would you propose that happens?

We know (from our own subjective experience) that something we call "consciousness" exists at the scale of the entire brain. If you assume that the workings of the brain are fully explained by its parts and their connections, and those parts explained by their sub-components and designs, etc., you eventually reach the ontologically basic level of quarks and leptons. Fundamentally, the brain is nothing more than the interaction of a large number of quarks and leptons. So what precise interaction of fundamental particles is the basic unit of consciousness? What level of complexity is required before mere organic matter becomes a conscious mind?

It sounds ridiculous, but if you assume that quarks and leptons are "conscious," or rather that consciousness is the interaction of these various ontologically primitive, fundamental particles, a remarkably consistent theory emerges: one which dissolves the mystery of subjective consciousness by explaining it as the mere aggregation of interdependent interactions. Besides being simple, this is also predictive: it allows us to assert for a given situation (e.g., a teleporter or a halted simulation) whether loss of personal identity occurs, which has implications for the morality of real situations encountered in the construction of an AI.

Comment author: RichardKennaway 02 October 2013 08:52:12AM 0 points [-]

How do you explain "feeling like" and "experience" in general? This is LW so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc.

I indeed have a reductionist background, but I offer no explanation, because I have none. I do not even know what an explanation could possibly look like; but neither do I take that as proof that there cannot be one. The story you tell surrounds the central mystery with many physical details, but even in your own account of it the mystery remains unresolved:

Somewhere along the line, the subjective experience of "consciousness" arises.

However much you assert that there must be an explanation, I see here no advance towards actually having one. What does it mean to attribute consciousness to subatomic particles and rocks? Does it predict anything, or does it only predict that we could make predictions about teleporters and simulations if we had a physical explanation of consciousness?

Comment author: lavalamp 02 October 2013 05:51:50PM 2 points [-]

The apparent organization of the brain is in the map, not the territory.

What do you mean by this? Are fMRIs a big conspiracy?

Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons.

This description applies equally to all objects. When you describe the brain this way, you leave out all its interesting characteristics, everything that makes it different from other blobs of interacting quarks and leptons.

Comment author: [deleted] 02 October 2013 07:57:33PM -1 points [-]

What I'm saying is that the high-level organization is not ontologically primitive. When we talk about organizational patterns of the brain, or the operation of neural synapses, we're talking about very high-level abstractions. Yes, they are useful abstractions, primarily because they ignore unnecessary detail. But that detail is how they are actually implemented. The brain is a soup of organic particles with very high rates of particle interaction due simply to thermodynamic noise. At the nanometer and femtosecond scale there is very little signal relative to noise, but at the micrometer and millisecond scale general trends start to emerge - phenomena which form the substrate of our computation. But these high-level abstractions don't actually exist - they are just averaged approximations, over time, of lower-level, noisy interactions.

I assume you would agree that a normal adult human brain experiences a subjective feeling of consciousness that persists from moment to moment. I also think it's a fair bet that you would not think a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or the construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is it a hard cutoff? Is it gradual?

Speaking of gradations, certain animals can't recognize themselves in a mirror. If you use self-awareness as a metric, as was argued elsewhere, does that mean they're not conscious? What about insects, which operate with a more distributed neural system? Dung beetles seem to accomplish most tasks by innate reflex response. Do they have at least a little, tiny subjective experience of consciousness? Or is their existence no more meaningful than that of a stapler?

Yes, this objection applies equally to all objects. That's precisely my point. Brains are not made of any kind of “mind stuff” - that's substance dualism which I reject. Furthermore, minds don't have a subjective experience separate from what is physically explainable - that's epiphenomenalism, similarly rejected. "Minds exist in information patterns" is a mysterious answer - information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.

I see only two reductionist paths forward: (1) posit a new, fundamental law by which, at some aggregate level of complexity or organization, a computational substrate becomes conscious. How and why is not explained, and as far as I can tell there is no experimental way to determine where this cutoff is. But assume it is there. Or (2) accept that, like everything else in the universe, consciousness reduces down to the properties of fundamental particles and their interactions (it is the interaction of particles). A quark and a lepton exchanging a photon is some minimal, Planck-level quantum of conscious experience. Yes, that means that even a rock and a stapler have some level of conscious experience - barely distinguishable from thermal noise, but nonzero - but the payoff is a more predictive reductionist model of the universe. In terms of biting bullets, I think accepting many-worlds took more gumption than this.

Comment author: lavalamp 02 October 2013 08:19:41PM 2 points [-]

I also think it's a fair bet that you would not think a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or the construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is it a hard cutoff? Is it gradual?

This is a Wrong Question. Consciousness, whatever it is, is (P=.99) a result of a computation. My computer exhibits a microsoft word behavior, but if I zoom in to the electrons and transistors in the CPU, I see no such microsoft word nature. It is silly to zoom in to quarks and leptons looking for the true essence of microsoft word. This is the way computations work-- a small piece of the computation simply does not display behavior that is like the entire computation. The CPU is not the computation. It is not the atoms of the brain that are conscious, it is the algorithm that they run, and the atoms are not the algorithm. Consciousness is produced by non-conscious things.

"Minds exist in information patterns" is a mysterious answer - information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.

Minds exist in some algorithms ("information pattern" sounds too static for my taste). Your desire to reduce things to forces on elementary particles is misguided, I think, because you can do the same computation with many different substrates. The important thing, the thing we care about, is the computation, not the substrate. Sure, you can understand microsoft word at the level of quarks in a CPU executing assembly language, but it's much more useful to understand it in terms of functions and algorithms.
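The substrate-independence claim - that what matters is the computation, not the physical carrier running it - can be illustrated with a toy Python sketch (my own example, not lavalamp's): two very different "substrates" realizing the same function are behaviorally indistinguishable, even though the underlying physical processes differ completely.

```python
# Two implementations ("substrates") of one and the same computation: Fibonacci.

def fib_recursive(n):
    """Substrate A: realized via call-stack recursion."""
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_table(n):
    """Substrate B: realized via an iterative lookup-table build."""
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

# Behaviorally identical for every input, though the physical realizations
# (a tree of stack frames vs. a growing array in memory) share nothing.
assert all(fib_recursive(n) == fib_table(n) for n in range(15))
```

Zooming in on either implementation's machine instructions reveals no "Fibonacci nature," just as zooming in on quarks reveals no Microsoft Word nature; the computation is only visible at the level of functions and algorithms.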

Comment author: [deleted] 02 October 2013 09:20:41PM 1 point [-]

You've completely missed / ignored my point, again. Microsoft Word can be functionally reduced to electrons in transistors. The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.

Just as computation can be brought down to the atomic scale (or smaller, with quantum computing), so too can conscious experiences be constructed out of such computational events. Indeed, they are one and the same thing, just viewed from different perspectives.