RichardKennaway comments on Timeless Identity - Less Wrong

Post author: Eliezer_Yudkowsky 03 June 2008 08:16AM





Comment author: [deleted] 02 October 2013 01:30:10AM, -2 points

Hypothesis: consciousness is what a physical interaction feels like from the inside.

Importantly, it is a property of the interacting system, which can have various degrees of coherence (a different concept from quantum coherence) that I am still developing: something along the lines of negative-entropic complexity. There is therefore a deep correlation between negentropy and consciousness. Random thermodynamic motion in a gas is about as minimally conscious as you can get (lots of random interactions, but all short-lived and decoherent). A rock is slightly more conscious due to its crystalline structure, but probably leads a rather boring existence (by our standards, at least). And so on, all the way up to the very negentropic primate brain, which experiences the high degree of coherent experience that we call "consciousness" or "self."

I know this sounds like making thinking an ontologically basic concept. It's rather the reverse - I am building the experience of thinking up from physical phenomena: consciousness is the experience of organized physical interactions. But I'm not yet convinced of it either. If you throw out the concept of coherent interaction (what I have been calling computation continuity), then it does reduce to causal continuity. But causal continuity has its own problems, which make me suspect it is not the final, ultimate answer...

Comment author: RichardKennaway 02 October 2013 07:00:12AM, 0 points

Hypothesis: consciousness is what a physical interaction feels like from the inside.
...
consciousness is the experience of organized physical interactions.

How do you explain the existence of the phenomenon of "feeling like" and of "experience"?

Comment author: Kawoomba 02 October 2013 07:54:43AM, 0 points

I agree that the grandparent has sidestepped the crux of the matter; however, I feel (heh) that the notion of "explain" often comes with unrealistic expectations. It bears remembering that we merely describe relationships as succinctly as possible, and that description is then the "explanation".

While we would, for example, expect or hope for some non-contradictory set of descriptions applying to both gravity and quantum phenomena (for which we'd eat a large complexity penalty, since complex but accurate descriptions always beat out simple but inaccurate ones; Occam's Razor applies only to choosing among fitting, not-yet-falsified descriptions), once we've found some pinned-down description in some precise language, there is no guarantee - or strictly speaking, no need - of an even simpler explanation.

A world running according to currently en-vogue physics, plus a box which cannot be described as an extension of said physics, but only in some other way, could in fact be fully explained, with no further explanans for the explanandum.

It seems pretty straightforward to note that there's no way to "derive" phenomena such as "feeling like" within the current physics framework, except of course to describe which states of matter/energy correspond to which qualia.

Such a description could be the explanation, with nothing further to be explained:

If it empirically turned out that a specific kind of matter needs to be arranged in the specific pattern of a vertebrate brain to correlate with qualia, that would "explain" consciousness. If it turned out (as we all expect) that the pattern alone suffices, then certain classes of instantiated algorithms (regardless of the hardware/wetware) would be conscious. Regardless, either description (if it turned out to be empirically sound) would be the explanation.

I also wonder: what could any answer within the current physics framework possibly look like, other than an asterisk behind the equations with the addendum "values n1 ... nk for parameters p1 ... pk correlate with qualia x"?

Comment author: [deleted] 02 October 2013 07:56:23AM, -1 points

How do you explain "feeling like" and "experience" in general? This is LW so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc. But ultimately all of that reduces down to a big collection of quarks, each taking part in mostly random interactions on the scale of femtoseconds. The apparent organization of the brain is in the map, not the territory. So if subjective experience reduces down to neurons, and neurons reduce down to molecules, and molecules reduce to quarks and leptons, where then does the consciousness reside? "Information patterns" alone is an inadequate answer - that's at the level of the map, not the territory. Quarks and leptons combine into molecules, molecules into neural synapses, and the neurons connect into the 3lb information processing network that is my brain. Somewhere along the line, the subjective experience of "consciousness" arises. Where, exactly, would you propose that happens?

We know (from our own subjective experience) that something we call "consciousness" exists at the scale of the entire brain. If you assume that the workings of the brain are fully explained by its parts and their connections, and those parts by their sub-components and designs, etc., you eventually reach the ontologically basic level of quarks and leptons. Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons. So what precise interaction of fundamental particles is the basic unit of consciousness? What level of complexity is required before simple organic matter becomes a conscious mind?

It sounds ridiculous, but if you assume that quarks and leptons are "conscious," or rather that consciousness is the interaction of these various ontologically primitive, fundamental particles, a remarkably consistent theory emerges: one which dissolves the mystery of subjective consciousness by explaining it as the mere aggregation of interdependent interactions. Besides being simple, this is also predictive: it allows us to assert for a given situation (e.g. a teleporter or halted simulation) whether loss of personal identity occurs, which has implications for morality of real situations encountered in the construction of an AI.

Comment author: RichardKennaway 02 October 2013 08:52:12AM, 0 points

How do you explain "feeling like" and "experience" in general? This is LW so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc.

I indeed have a reductionist background, but I offer no explanation, because I have none. I do not even know what an explanation could possibly look like; but neither do I take that as proof that there cannot be one. The story you tell surrounds the central mystery with many physical details, but even in your own account of it the mystery remains unresolved:

Somewhere along the line, the subjective experience of "consciousness" arises.

However much you assert that there must be an explanation, I see here no advance towards actually having one. What does it mean to attribute consciousness to subatomic particles and rocks? Does it predict anything, or does it only predict that we could make predictions about teleporters and simulations if we had a physical explanation of consciousness?

Comment author: lavalamp 02 October 2013 05:51:50PM, 2 points

The apparent organization of the brain is in the map, not the territory.

What do you mean by this? Are fMRIs a big conspiracy?

Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons.

This description applies equally to all objects. When you describe the brain this way, you leave out all its interesting characteristics, everything that makes it different from other blobs of interacting quarks and leptons.

Comment author: [deleted] 02 October 2013 07:57:33PM, -1 points

What I'm saying is that the high-level organization is not ontologically primitive. When we talk about organizational patterns of the brain, or the operation of neural synapses, we're talking about very high-level abstractions. Yes, they are useful abstractions, primarily because they ignore unnecessary detail. But that detail is how they are actually implemented. The brain is a soup of organic particles with very high rates of particle interaction due simply to thermodynamic noise. At the nanometer and femtosecond scale there is very little signal to noise, but at the micrometer and millisecond scale general trends start to emerge - phenomena which form the substrate of our computation. But these high-level abstractions don't actually exist - they are just averaged approximations over time of lower-level, noisy interactions.

I assume you would agree that a normal adult human brain experiences a subjective feeling of consciousness that persists from moment to moment. I also think it's a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or the construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is there a hard cutoff? Is it gradual?

Speaking of gradations, certain animals can't recognize themselves in a mirror. If you use self-awareness as a metric, as was argued elsewhere, does that mean they're not conscious? What about insects, which operate with a more distributed neural system? Dung beetles seem to accomplish most tasks by innate reflex response. Do they have at least a little, tiny subjective experience of consciousness? Or is their existence no more meaningful than that of a stapler?

Yes, this objection applies equally to all objects. That's precisely my point. Brains are not made of any kind of “mind stuff” - that's substance dualism which I reject. Furthermore, minds don't have a subjective experience separate from what is physically explainable - that's epiphenomenalism, similarly rejected. "Minds exist in information patterns" is a mysterious answer - information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.

I see only two reductionist paths forward: (1) posit a new, fundamental law by which, at some aggregate level of complexity or organization, a computational substrate becomes conscious. How and why is not explained, and as far as I can tell there is no experimental way to determine where this cutoff is. But assume it is there. Or (2) accept that, like everything else in the universe, consciousness reduces down to the properties of fundamental particles and their interactions (it is the interaction of particles). A quark and a lepton exchanging a photon is some minimal, Planck-level quantum of conscious experience. Yes, that means that even a rock and a stapler have some level of conscious experience - barely distinguishable from thermal noise, but nonzero - but the payoff is a more predictive reductionist model of the universe. In terms of biting bullets, I think accepting many-worlds took more gumption than this.

Comment author: lavalamp 02 October 2013 08:19:41PM, 2 points

I also think it's a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or construction of an AI does it become conscious? At what point does it mere dead matter transform into sentience? Is this a hard cutoff? Is it gradual?

This is a Wrong Question. Consciousness, whatever it is, is (P=.99) a result of a computation. My computer exhibits a Microsoft Word behavior, but if I zoom in to the electrons and transistors in the CPU, I see no such Microsoft Word nature. It is silly to zoom in to quarks and leptons looking for the true essence of Microsoft Word. This is the way computations work - a small piece of the computation simply does not display behavior that is like the entire computation. The CPU is not the computation. It is not the atoms of the brain that are conscious, it is the algorithm that they run, and the atoms are not the algorithm. Consciousness is produced by non-conscious things.

"Minds exist in information patterns" is a mysterious answer - information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.

Minds exist in some algorithms ("information pattern" sounds too static for my taste). Your desire to reduce things to forces on elementary particles is misguided, I think, because you can do the same computation with many different substrates. The important thing, the thing we care about, is the computation, not the substrate. Sure, you can understand microsoft word at the level of quarks in a CPU executing assembly language, but it's much more useful to understand it in terms of functions and algorithms.

Comment author: [deleted] 02 October 2013 09:20:41PM, 1 point

You've completely missed / ignored my point, again. Microsoft Word can be functionally reduced to electrons in transistors. The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.

Just as computation can be brought down to the atomic scale (or smaller, with quantum computing), so too can conscious experiences be constructed out of such computational events. Indeed they are one and the same thing, just viewed from different perspectives.

Comment author: lavalamp 02 October 2013 09:37:51PM, 1 point

The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.

I thought dualism meant you thought there was ontologically basic consciousness stuff separate from ordinary matter?

I think the mind should be reduced to algorithms, and biochemistry is an implementation detail. This may make me a dualist by your usage of the word.

I think that it's equally silly to ask, "where is the microsoft-word-ness" about a subset of transistors in your CPU as it is to ask "where is the consciousness" about a subset of neurons in your brain. I see this as describing how non-ontologically-basic consciousness can be produced by non-conscious stuff.

You've completely missed / ignored my point, again.

Apologies; does the above address your point? If not I'm confused about your point.

Comment author: [deleted] 02 October 2013 10:04:16PM, -1 points

I'm arguing that if you think the mind can be reduced to algorithms implemented on a computational substrate, then it is a logical consequence of our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine to its parts. After all, the algorithms themselves are also reducible down to stepwise axiomatic logical operations, implemented as transistors or interpretable machine code.

The only way to preserve the common intuition that “it takes (simulation of) a brain or equivalent to produce a mind” is to posit some form of dualism. I don't think it is silly to ask “where is the microsoft-word-ness” about a subset of a computer - you can for example point to the regions of memory and disk where the spellchecker is located, and say “this is the part that matches user input against tables of linguistic data,” just like we point to regions of the brain and say “this is your language processing centers.”

The experience of having a single, unified me directing my conscious experience is an illusion - it's what the integration process feels like from the inside, but it does not correspond to reality (we have psychological data to back this up!). I am in fact a society of agents, each simpler but also relying on an entire bureaucracy of other agents in an enormous distributed structure. Eventually, though, things reduce down to individual circuits, then ultimately to the level of individual cell receptors and chemical pathways. At no point along the way is there a clear division where it is obvious that conscious experience ends and what follows is merely mechanical, electrical, and chemical processes. In fact, as I've tried to point out, the divisions between higher-level abstractions and their messy implementations are in the map, not the territory.

To assert that "this level of algorithmic complexity is a mind, and below that are mere machines" is a retreat to dualism, though you may not yet see it that way. What you are asserting is that there is this ontologically basic mind-ness which spontaneously emerges when an algorithm has reached a certain level of complexity, but which is not the aggregation of smaller phenomena.

Comment author: lavalamp 02 October 2013 10:31:33PM, 1 point

I think we have really different models of how algorithms and their sub-components work.

it is a logical consequence from our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine down to its parts.

Suppose I have a computation that produces the digits of pi. It has subroutines which multiply and add. Is it an accurate description of these subroutines that they have a scaled down property of computes-pi-ness? I think this is not a useful way to understand things. Subroutines do not have a scaled-down percentage of the properties of their containing algorithm, they do a discrete chunk of its work. It's just madness to say that, e.g., your language processing center is 57% conscious.
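The pi example can be made concrete with a short sketch (hypothetical code, not from the thread; the Leibniz series is one arbitrary choice of pi-computing algorithm). Each subroutine does a discrete chunk of the work, and nothing about any subroutine individually has a computes-pi property:

```python
def add(a, b):
    # Subroutine: plain addition. Nothing here resembles computing pi.
    return a + b

def multiply(a, b):
    # Subroutine: plain multiplication. Equally pi-free.
    return a * b

def leibniz_pi(terms):
    # Only the whole computation exhibits "computes-pi-ness":
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    total = 0.0
    for k in range(terms):
        sign = 1.0 if k % 2 == 0 else -1.0
        total = add(total, multiply(sign, 1.0 / (2 * k + 1)))
    return multiply(4.0, total)

approx = leibniz_pi(100_000)  # approaches pi, though no subroutine "knows" pi
```

Asking whether `add` has a scaled-down fraction of the computes-pi property is exactly the confusion being pointed at: the property belongs to the algorithm as a whole, not proportionally to its parts.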

The experience of having a single, unified me directing my conscious experience is an illusion...

I agree with all this. Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of consciousness. But yes, I maintain that eventually you'll get to an algorithm that is conscious while none of its subroutines are.

If this makes me a dualist then I'm a dualist, but that doesn't feel right. I mean, the only way you can really explain a thing is to show how it arises from something that's not like it in the first place, right?