Mitchell_Porter comments on It's not like anything to be a bat - Less Wrong

Post author: Yvain 27 March 2010 02:32PM

Comment author: Mitchell_Porter 08 January 2011 03:34:45AM 2 points

If I've understood you, you claim the <red> is due in part to color qualia in some way associated with O, which are distinct from the set of things happening inside my skull.

No. I think that in reality, <red> is in the head. But our current physical ontology contains no such entity. That is why I say that if you accept our current physical ontology, you're either an eliminativist or a dualist.

Comment author: TheOtherDave 08 January 2011 07:50:24AM 0 points

I'm not in the least bit interested in the labels. But yes, if we're agreed that <red> is constructed by my brain, rather than being a property of my environment, then I don't understand what grounds you have for believing that <red> isn't explicable by entities in our current physical ontology.

Comment author: Mitchell_Porter 08 January 2011 09:19:29AM 3 points

Just imagine if you were having a discussion with someone who said that the world is made of numbers. And you picked up a rock and said, so, this rock is made of numbers? And they said, sure. And you said, that's absurd. How could a rock be equal to 1+1, for example? They're completely different kinds of things. And they went off on a riff about how science has shown that all is number, and whenever you tried to point out the non-numerical aspects of reality, they'd just subsume that back into the all-is-number reductionism, and they'd stubbornly insist that, even if the rock was not equal to 1+1, it might be equal to some other numbers, and besides, what other sort of things could there be, besides numbers?

For me, the idea that <red> is identical to some arrangement of particles in space is just like saying that 1+1 is a rock. The gulf between the nature of the allegedly identical entities is so great that the problem with the assertion ought to be obvious. In a sprinkling of point objects throughout space, where is the color? It's really that simple. It's just not there. It's not intrinsically there, anyway. You might propose that redness is a property of certain special configurations, but when you say that, you've embarked upon a form of dualism, property dualism. It's a dualism because on the one side, you have properties which are intrinsic to a geometrically defined situation, like distances and angles and shapes; and on the other side, you have properties which are logically independent of the geometric facts and have to be posited separately. For example, the existence of color experiences, or indeed any kind of experiences, in a brain.

In other words, the onus is on you to explain just what you think the connection is between arrangements of particles in space (e.g. a brain), and experiences of color. I have my own answer, but I want to hear yours first.

Comment author: TheOtherDave 08 January 2011 10:24:54AM 2 points

You won't find my answer interesting, but since you asked: I think experiences of color are among the states that particles in space can get into, just as the impulse to blink is a state particles in space can get into, just as a predisposition to generate meaningful English but not German sentences is a state that particles in space can get into, just as an appreciation for 17th-century Romanian literature is a state that particles in space can get into, just as a contagious head cold is a state that particles in space can get into. (Which is not to say that all of those are the same kinds of states.)

We can certainly populate our ontologies with additional entities related to those various things if we wish... color qualia and motor-impulse qualia and English qualia and German qualia and 17th-century Romanian literary qualia and contagious head cold qualia and so forth. I have no problem with that in and of itself, if positing these entities is useful for something.

But before I choose to do so, I want to understand what use those entities have to offer me. Populating my ontology with useless entities is silly.

I understand that this hesitation seems to you absurd, because you believe it ought to seem obvious to me that arrangements of matter simply aren't the kind of thing that can be an experience of color, just like it should seem obvious that numbers aren't the kind of thing that can be a rock, just as it seems obvious to Searle that formal rules aren't the kind of thing that can be an understanding of Chinese, just as it seemed obvious to generations of thinkers that arrangements of matter aren't the kind of thing that can be an infectious living cell.

These things aren't, in fact, obvious to me. If you have reasons for believing any of them other than their obviousness, I might find those reasons compelling, but repeated assertions of their obviousness are not.

Comment author: Mitchell_Porter 08 January 2011 12:04:06PM -1 points

An arrangement of particles in space can embody a blink reflex with no problems, because blinking is motion, and so it just means they're changing position in space.

Generating meaningful sentences - here we begin to run into problems, though not so severe as the problem with color. If the sentences are understood to be physical objects, such as sequences of sound waves or sequences of letter-shapes, then they can fit into physical ontology. We might even be able to specify a formal grammar of allowed sentences, and a combinatorial process which only produces physical sentences from that grammar. But meaning per se, like color, is not a physical property as ordinarily understood. (I know I'll get into extra trouble here, because some people are with me on the color qualia being a problem, but believe that causal theories of reference can reduce meaning to a conjunction of known physical properties. However, so far as I can see, intrinsic meaning is a property only of certain constituents of mental states - the meaning of sentences and all other intersubjective signs is not intrinsic and derives from a shared interpretive code - and the correct ontology of meaning is going to be bound up with the correct ontology of consciousness in general.)
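
As a throwaway illustration of the combinatorial process just described (a minimal sketch of my own, not anything from the thread), here is a toy context-free grammar in Python whose expansion procedure can only ever emit token sequences licensed by the grammar. Nothing in it touches meaning; it manipulates "physical" tokens only.

    import random

    # A toy context-free grammar: each nonterminal maps to a list of
    # alternative expansions (sequences of terminals and nonterminals).
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["rock"], ["brain"], ["color"]],
        "V":  [["sees"], ["contains"]],
    }

    def expand(symbol):
        """Recursively expand a symbol into a sequence of terminal tokens."""
        if symbol not in GRAMMAR:  # terminal: emit it as-is
            return [symbol]
        production = random.choice(GRAMMAR[symbol])
        return [tok for part in production for tok in expand(part)]

    # Every output is a well-formed "physical sentence" (a token sequence),
    # but the process assigns no meaning to any of them.
    print(" ".join(expand("S")))  # e.g. "the brain contains the color"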

Anyway, you say it's not obvious to you that "arrangements of matter simply aren't the kind of thing that can be an experience of color". Okay. Let's suppose there is an arrangement of matter in space which is an experience of color. Maybe it's a trillion particles in a certain arrangement executing a certain type of motion. Now, we can think about progressively simpler arrangements and motions of particles - subtracting one particle at a time from the scenario, if necessary... progressively simpler until we get all the way back to empty space. Somewhere in that conceptual progression we stopped having an experience of color there. Can you give me the faintest, slightest hint of where the magic transition occurs - where we go from "arrangement of particles that's an experience of color" to "arrangement of particles that's not an experience of color"?

I could also simply ask for you to indicate where in the magic arrangement of particles the color is. That is, assuming that you agree that one aspect of the existence of an experience of color is that something somewhere actually is that color. If it turns out that, according to you, brain state X is an experience of <red> only because the brain in question outputs the word "red" when queried, or only because a neural network somewhere is making the categorization "red" - then that is eliminativism. There's no actual <red>, no actual color, just color words or color categories.

The reason it is obvious that there is no color inherently inhabiting an arrangement of particles in space is because it's easy to see what the available ontological ingredients are, and it's easy to see what you can and cannot make by combining them. If we include dynamics and a notion of causality, then the ingredients are position, time, and causal dependence. What can you construct from such ingredients? You can make complicated structures; you can make complicated motions; you can make complicated causal dependencies among structures and motions. As you can see, it's no mystery that such an ontological scheme can encompass something like a blink reflex, which is a type of motion with a specified causal dependency.
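
One way to make the "available ingredients" point vivid (my own illustrative analogy, offered tentatively): model the ontology as a data structure. Position, time, and causal dependence are representable, and a blink reflex is definable from them, but there is no field anywhere from which an intrinsic color could be read off.

    from dataclasses import dataclass, field

    @dataclass
    class Particle:
        # The only intrinsic ingredients this ontology offers:
        position: tuple       # where it is, e.g. (x, y, z)
        time: float           # when
        causes: list = field(default_factory=list)  # causal dependencies

    # A "blink reflex" is constructible: a motion (position change over
    # time) with a specified causal dependency on a stimulus.
    stimulus = Particle(position=(0.0, 0.0, 0.0), time=0.0)
    eyelid = Particle(position=(0.0, 0.0, 1.0), time=0.1, causes=[stimulus])

    # But no attribute here is, or yields, "redness"; any color property
    # would have to be posited separately, as an extra kind of fact.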

With respect to the historical case of vitalism, it's interesting that what the vitalists posited was a "vital force". That's not an objection to the logical possibility of reducing life, and especially replication, to matter in motion. They just didn't believe that the known forces were capable of producing the right sort of motion, so they felt the need to postulate a new, complicated form of causal interaction, capable of producing the complexly orchestrated motion which must be occurring for living things to take shape. As it turned out, there was no need to postulate a special vital force to do that; the orchestration can be produced by the same forces which are at work in nonliving matter.

I'm emphasizing the way in which the case of vitalism differs from the case of qualia, because it is so often cited as a historical precedent. The vitalists - at least, the ones who talked about vital forces - were not saying that life is not material. They just postulated an extra force; in that respect, they were proposing only a conservative extension to the physical ontology of their time. But the observation that consciousness presents a basic ontological problem, in a universe consisting of nothing but matter in motion through space, has been around for a very long time. Democritus took note of this objection. I think Leibniz stated it in a recognizably modern form. It is an old insight, and it has not gone away just because the physical sciences have been so successful. Celia Green writes that this success actually sharpens the problem: the clearer our conception of material ontology and our causal account of the world becomes, the more obvious it becomes that this concept and this account do not contain the "secondary qualities" like your <red>.

Even at the dawn of modern physical science, in the time of Galileo, there was some discussion as to how these qualities were being put aside, in favor of an exclusive focus on space, time, motion, extension. It's quite amazing that from humble beginnings like Kepler's laws, we've come as far as quantum mechanics, string theory, molecular biology, all the time maintaining that exclusion. Some new ontological factors did enter the set of ingredients that physical ontology can draw upon, especially probability, but those elementary sensory qualities remain absent from the physical conception of reality. The 20th-century revolution in thought regarding information, communication, and computation goes just a little way towards bringing them back, but in the end it's nowhere near enough, because when you ask, what are these information states really, you end up having to reduce them to statistical properties of particles in space, because that's still all that the physical ontology gives you to work with.

I'm probably an idiot for responding at such length on this topic, because all my experience to date suggests that doing so changes nothing fundamentally. Some people get that there's a problem, but don't know how to solve it and can only hope that the future does so, or they embrace a fuzzy idea like emergence dualism or panpsychism out of intellectual desperation. Some people don't get that there's a problem - don't perceive, for example, that "what it feels like to be a bat" is an extra new property on top of all the ordinary physical properties that make up a bat - and are happy with a philosophical formula like "thought is computation".

I believe there is a problem to be solved, a severe problem, a problem of the first order, whose solution will require a change of perspective as big as the one which introduced us to the problem. Once, we had naive realism. The full set of objects and properties which experience reveals to us were considered equally real. They all played a part in the makeup of reality, to which the human mind had a partial but mysteriously direct access. Now, we have physics; ontological atomism, plus calculus. Amazingly, it predicts the behavior of matter with incredible precision, so it's getting something right. But mind, and everything that is directly experienced, has vanished from the model of reality. It hasn't vanished in reality; everything we know still comes to us through our minds, and through that same multi-sensory experience which was once naively identified with the world itself, and which we now call conscious experience. The closest approximation within the physical ontology to all of that is computation within the nervous system. But when you ask what neural computation is, physically, it once again reduces to matter in motion through space, and the same mismatch between the apparent character of experience, and the physical character of the brain, recurs. Since denying that experience has this distinct character is false and therefore hopeless, the only way out must be to somehow reconceive physical ontology so that it contains, by construction, consciousness as it actually is, and so that it preserves the causal structural relations (between fundamental entities whose inner nature is opaque and therefore undetermined by the theory) responsible for the success of quantitative predictions.

I imagine my manifesto there is itself opaque, if you're one of those people who don't get the problem to begin with. Nonetheless, I believe that is the principle which has to be followed in order to solve the problem of consciousness. It's still only the barest of beginnings, you still have to step into darkness and guess which way to turn, many times over, in order to get anywhere, and if my private ideas about how to proceed are right, then you have to take some really big leaps in the darkness. But that's the kernel of my answer.

Comment author: Will_Sawin 10 January 2011 02:14:34AM 0 points

Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.

Let's try to communicate through intuition pumps:

Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses - they had to be, in addition, the colors of pixels.

Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap <red> and <blue> in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn't be able to tell the difference - your behavior would be the same either way.
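
The swap can be made concrete as a computation (my construction, not Dennett's): permute an agent's internal color codes consistently in both perception and memory, and its input-output behavior is unchanged, because behavior depends only on the relations between the codes.

    SWAP = {"RED": "BLUE", "BLUE": "RED"}  # the qualia-inverting permutation

    def agent(percept, memory, encode):
        """Report whether the current percept matches a remembered color."""
        code = encode(percept)  # internal code for the incoming percept
        return "match" if code == memory["stop light"] else "no match"

    identity_encode = lambda color: color
    swapped_encode = lambda color: SWAP.get(color, color)

    # Agent A uses normal codes; agent B has its codes swapped in
    # perception AND in memory.
    memory_a = {"stop light": "RED"}
    memory_b = {"stop light": SWAP["RED"]}

    for percept in ["RED", "BLUE"]:
        assert (agent(percept, memory_a, identity_encode)
                == agent(percept, memory_b, swapped_encode))

No behavioral test distinguishes the two agents, which is exactly the relational point.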

Two meditations on an optical illusion: I heard, possibly on lesswrong, that in illusions like this one: http://www.2dorks.com/gallery/2007/1011-illusions/12-kanizsatriangle.jpg your edge-detecting neurons fire at both the real and the fake edges.

  1. Doesn't that image look exactly like what neurons detecting edges between neurons detecting white and neurons detecting white should look like?

  2. Doesn't the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?

Comment author: Mitchell_Porter 10 January 2011 11:35:33AM 1 point

My latest comment might clarify a few things. Meanwhile,

Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.

No-one's telling me that a heap of sand has an "inside". It's a fuzzy concept and the fuzziness doesn't cause any problems because it's just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren't it, so in a physical ontology it has to correspond to a hard-edged concept.

Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses - they had to be, in addition, the colors of pixels.

Consider Cyc. Isn't one of the problems of Cyc that it can't distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.

In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its "experience" can't be made of physical entities. It's just a matter of ontological presuppositions.
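
A minimal sketch of the self-symbol point (hypothetical; this is not Cyc's actual architecture): in a plain store of triples, the "Cyc" symbol is handled exactly like any other, and only a hard-coded special case makes it "self".

    # A toy knowledge base of (subject, relation, object) triples.
    kb = [
        ("Cyc", "is-a", "program"),
        ("Paris", "is-a", "city"),
    ]

    def facts_about(symbol):
        """Generic lookup: the "Cyc" symbol gets no special treatment."""
        return [t for t in kb if t[0] == symbol]

    # Self-reference only appears via an explicit special case in the
    # algorithm; the symbol itself is not intrinsically "self".
    SELF = "Cyc"
    def facts_about_self():
        return facts_about(SELF)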

As I've attempted to clarify in the new comment, my problem is not with subsuming consciousness into physics per se, it is specifically with subsuming consciousness into a particular physical ontology, because that ontology does not contain something as basic as perceived color, either fundamentally or combinatorially. To consider that judgement credible, you must believe that there is an epistemic faculty whereby you can tell that color is actually there. Which leads me to your next remark--

Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap <red> and <blue> in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn't be able to tell the difference - your behavior would be the same either way.

--and so obviously I'm going to object to the assumption that I'm not aware of my qualia. If you performed the swap as described, I wouldn't know that it had occurred, but I'd still know that <red> and <blue> are there and are real; and I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don't.

Doesn't that image look exactly like what neurons detecting edges between neurons detecting white and neurons detecting white should look like?

A neuron is a glob of trillions of atoms doing inconceivably many things at once. You're focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you're neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between "staring at a few homogeneous patches of color" and "billions of ions cascading through a membrane".

Doesn't the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?

It's more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don't get there by saying that day is just night by another name.

Comment author: Will_Sawin 10 January 2011 10:13:09PM 0 points

No-one's telling me that a heap of sand has an "inside". It's a fuzzy concept and the fuzziness doesn't cause any problems because it's just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren't it, so in a physical ontology it has to correspond to a hard-edged concept.

Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.

However, my new response to your argument is that, if you're not denying current physics, but just ontologically reorganizing it, then you're vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We're all in the same boat.

Consider Cyc. Isn't one of the problems of Cyc that it can't distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.

  1. Do you think Cyc could not be programmed to treat itself differently from others without use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?

  2. Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.

In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its "experience" can't be made of physical entities. It's just a matter of ontological presuppositions.

Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?

I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don't.

No you wouldn't. People can't tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can't have relations of reduction for other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
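
If I read the gensym remark correctly (a sketch under that assumption), the idea is that perceptual tokens are opaque symbols: the system can compare them and record relations between them, but any attempt to decompose one finds nothing, so introspection reports them as primitive.

    import itertools

    _counter = itertools.count()

    def gensym(prefix="q"):
        """Mint a fresh opaque token: it has identity but no structure."""
        return f"{prefix}{next(_counter)}"

    RED, BLUE = gensym(), gensym()

    # The system can compare tokens and relate them to one another...
    relations = {(RED, RED): "same", (RED, BLUE): "different"}

    # ...but a token has no parts to reduce to, so it presents itself
    # as an irreducible "quale" from the inside.
    def decompose(token):
        return None  # opaque by construction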

A neuron is a glob of trillions of atoms doing inconceivably many things at once. You're focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you're neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between "staring at a few homogeneous patches of color" and "billions of ions cascading through a membrane".

My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I'm typing in is based on regularities the size of a transistor. I wouldn't expect to notice if my images were, really, fundamentally, completely different. I wouldn't expect to notice if something physical happened - the number of ions was cut by a factor of a million and made the opposite charge - but the functions from impulses to impulses computed by neurons were the same.
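
A sketch of the browser/transistor analogy (my illustration, with made-up physical details): two units whose internals differ wildly compute the same impulse-to-impulse function, so nothing built on top of that function could register the difference.

    def neuron_a(inputs):
        """One substrate: sum-and-threshold computed directly."""
        return 1 if sum(inputs) > 1.0 else 0

    def neuron_b(inputs):
        """A physically different substrate: same function, with the
        internal quantities negated ("opposite charge", so to speak)."""
        negated = [-x for x in inputs]
        return 1 if -sum(negated) > 1.0 else 0

    # The impulse-to-impulse function is identical, so any computation
    # layered on these units behaves identically on either substrate.
    for spikes in ([0.5, 0.6], [0.2, 0.3], [1.5]):
        assert neuron_a(spikes) == neuron_b(spikes)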

It's more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don't get there by saying that day is just night by another name.

Uniform color and edgeness are as different as night and day.

Comment author: Mitchell_Porter 14 January 2011 11:37:57AM *  1 point

(part 1 of reply)

No-one's telling me that a heap of sand has an "inside". It's a fuzzy concept and the fuzziness doesn't cause any problems because it's just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren't it, so in a physical ontology it has to correspond to a hard-edged concept.

Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.

However, my new response to your argument is that, if you're not denying current physics, but just ontologically reorganizing it, then you're vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We're all in the same boat.

This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping - many exact physical states correspond to the same conscious state - then that's property dualism.
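
Stated formally (my notation, not from the thread): a many-to-one map picks out equivalence classes of physical states, and an identity claim would require the map to be one-to-one.

    % f maps exact physical states P to conscious states C
    f : P \to C, \qquad \exists\, p_1 \neq p_2 \in P : f(p_1) = f(p_2) = c
    % c then corresponds to the equivalence class
    f^{-1}(c) = \{\, p \in P : f(p) = c \,\}
    % a property shared by many physical states rather than any single
    % state - property attribution, not identity; identity would require
    % f to be injective (one-to-one).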

When you say, later on, that your consciousness "is a computation based mainly or entirely on regularities the size of a single neuron or bigger", that implies dualism or eliminativism, depending on whether you accept that qualia exist. Believe what I quoted, and that qualia exist, and you're a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn't really exist, even as appearance), and you're an eliminativist. This is because a many-to-one mapping isn't an identity.

"Degrees of existence", by the way, only makes sense insofar as it really means "degrees of something else". Existence, like truth, is absolute.

Consider Cyc. Isn't one of the problems of Cyc that it can't distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.

  1. Do you think Cyc could not be programmed to treat itself differently from others without use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?

  2. Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.

My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming. Because I prefer the monistic alternative to the dualistic one, and because the program Cyc is definitely "based on regularities the size of a transistor", I would normally say that Cyc does not and cannot have thoughts, perceptions, beliefs, or other mental properties at all. All those things require consciousness, consciousness is only a property of a physical ontological unity, the computer running Cyc is a causal aggregate of many physical ontological unities, ergo it only has these mentalistic properties because of the imputations of its users, just as the words in a book only have their meanings by convention. When you introduced your original thought-experiment--

Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses - they had to be, in addition, the colors of pixels.

--maybe I should have gone right away to the question of whether these "perceptions" are actually perceptions, or whether they are just informational states with certain causal roles, and how this differs from true perception. My answer, by the way, is that being an informational state with a causal role is necessary but not sufficient for something to be a perceptual state. I would add that it also has to be "made of qualia" or "be a state of a physical ontological unity" - both these being turns of phrase which are a little imprecise, but which hopefully foreshadow the actual truth. It comes down to what ought to be a tautology: to actually be a perception of <red>, there has to be some <red> actually there. If there isn't, you just have a simulation.

Just for completeness, I'll say again that I prefer the monistic alternative, but it does seem to imply that consciousness is to be identified with something fundamental, like a set of quantum numbers, rather than something mesoscopic and semiclassical, like a coarse-grained charge distribution. If that isn't how it works, the fallback position is an informational property dualism, and what I just wrote would need to be modified accordingly.

Back to your questions about Cyc. Rather than say all that, I countered your original thought-experiment with an anecdote about Douglas Lenat's Cyc program. The anecdote (as conveyed, for example, in Eliezer's old essay "GISAI") is that, according to Lenat, Cyc knows about Cyc, but it doesn't know that it is Cyc. But then Lenat went and said to Wired that Cyc is self-aware. So I don't know the finer details of his philosophical position.

What I was trying to demonstrate was the indeterminate nature of machine experience, machine assertions about ontology as based upon experience, and so on. Computation is about behavior and about processes which produce behavior. Consciousness is indeed a process which produces behavior, but that doesn't define what it is. However, the typical discussion of the supposed thoughts, beliefs, and perceptions of an artificial intelligence breezes right past this point. Specific computational states in the program get dubbed "thoughts", "desires" and so on, on the basis of a loose structural isomorphism to the real thing, and then the discussion about what the AI feels or wants (and so on) proceeds from there. The loose basis on which these terms are used can easily lead to disagreements - it may even have led Lenat to disagree with himself.

In the absence of a rigorous theory of consciousness it may be impossible to have such discussions without some loose speculation. But my point is that if you take the existence of consciousness seriously, it renders very problematic a lot of the identifications which get made casually. The fact that there is no <red> in physical ontology (or current physical ontology); the fact that from a fundamental perspective these are many-to-one mappings, and a many-to-one mapping can't be an identity - these facts are simple but they have major implications for theorizing about consciousness.

So, finally answering your questions: 1. yes, it could be programmed to treat itself as something special, and 2. sense data would surely be processed differently, but there's a difference between implicit and explicit categorizations (see remarks about ontology, below). But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness. And my argument is that the usual position - a casual version of identity theory - is not tenable. Either it's dualism, or it's a monism made possible by exotic neurophysics.

(continued)

Comment author: Will_Sawin 14 January 2011 03:38:52PM *  0 points

This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping - many exact physical states correspond to the same conscious state - then that's property dualism.

Since there's a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)

[this point has low relevance]
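
To make the temperature analogy above concrete (a sketch with made-up numbers): many distinct microstates share one temperature, while a full serialization of the microstate plays the role of the one-to-one "really long string".

    import json

    def temperature(velocities):
        """Temperature as (proportional to) mean squared speed: many-to-one."""
        return sum(v * v for v in velocities) / len(velocities)

    def long_string(velocities):
        """Full serialization of the microstate: a one-to-one description."""
        return json.dumps(velocities)

    a = [1.0, -2.0, 2.0]
    b = [2.0, 1.0, -2.0]  # a different microstate...
    assert temperature(a) == temperature(b)  # ...with the same temperature
    assert long_string(a) != long_string(b)  # ...but a distinct long string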

Believe what I quoted, and that qualia exist, and you're a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn't really exist, even as appearance), and you're an eliminativist.

It seems like we can cash out the statement "It appears to X that Y" as a fact about an agent X that builds models of the world which have the property Y. It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence for the existence of qualia.
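
That "cashing out" can be sketched directly (my construction): "it appears to X that Y" becomes "X's world-model contains Y", a purely computational fact that holds whether or not Y is true of the world.

    class Agent:
        def __init__(self, name):
            self.name = name
            # The agent's world-model: propositions it represents as true.
            self.model = {"qualia exist": True}

    def appears_to(agent, proposition):
        """Read "it appears to X that Y" as a fact about X's model."""
        return agent.model.get(proposition, False)

    you, me = Agent("you"), Agent("me")
    # Both models contain the proposition, so both brains report it - a
    # fact about the models, not evidence about the world itself.
    assert appears_to(you, "qualia exist") and appears_to(me, "qualia exist")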

"Degrees of existence", by the way, only makes sense insofar as it really means "degrees of something else". Existence, like truth, is absolute.

Degrees of existence come from what is almost certainly a harder philosophical problem about which I am very confused.

My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming.

Facts about your phenomenology are facts about your programming! If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain. There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.

But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness.

My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.

If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I've made a judgement about an ontology both at a logical and an empirical level. That's what I was talking about, when I said that if you swapped <red> and <blue>, I couldn't detect the swap, but I'd still know empirically that color is real, and I'd still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.

A: "The universe is made out of nothing but love"

B: "What are the properties of ontologically fundamental love?"

A: "[The equations that define the standard model of quantum mechanics]"

B: "I have no evidence to falsify that theory."

A: "Or balloons. It could be balloons."

B: "What are the properties of ontologically fundamental balloons?"

A: "[the standard model of quantum theory expressed using different equations]"

B: "There is no evidence that can discriminate between those theories."

... if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.

I'm a reductive materialist for statements - I don't see the problem with reading statements about consciousness as statements about quarks. Ontologically I suppose I'm an eliminative materialist.

Comment author: Mitchell_Porter 14 January 2011 11:38:45AM *  0 points

(part 2 of reply)

In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its "experience" can't be made of physical entities. It's just a matter of ontological presuppositions.

Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?

See next section.

I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don't.

No you wouldn't. People can't tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can't have relations of reduction for other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.

We are talking at cross-purposes here. I am talking about an ontology which is presented explicitly to my conscious understanding. You seem to be talking about ontologies at the level of code - whatever that corresponds to, in a human being.

If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I've made a judgement about an ontology both at a logical and an empirical level. That's what I was talking about, when I said that if you swapped <red> and <blue>, I couldn't detect the swap, but I'd still know empirically that color is real, and I'd still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.

Your sentence about gensyms is interesting as a proposition about the computational side of consciousness, but...

A neuron is a glob of trillions of atoms doing inconceivably many things at once. You're focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you're neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between "staring at a few homogeneous patches of color" and "billions of ions cascading through a membrane".

My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I'm typing in is based on regularities the size of a transistor. I wouldn't expect to notice if my images were, really, fundamentally, completely different. I wouldn't expect to notice if something physical happened - the number of ions was cut by a factor of a million and made the opposite charge - but the functions from impulses to impulses computed by neurons were the same.

... if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.

It's more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don't get there by saying that day is just night by another name.

Uniform color and edgeness are as different as night and day.

They are, but I was actually talking about the difference between colorness/edgeness and neuronness.

Comment author: TheOtherDave 09 January 2011 06:33:30AM 0 points

A few thoughts in response:

  • I agree with you that if my experience of red can't be constructed of matter, then my understanding of a sentence also can't be. And I agree with you that we don't have a reliable account of how to construct such things out of matter, and without such an account we can't rule out the possibility that, as you suggest, such an account is simply not possible. I agree with you that this objection to physicalism has been around for a long time.

  • I agree with you that insofar as we understand vitalism to be an account of how particular arrangements of matter move around, it is a different sort of thing from the kind of "sentientism" you are talking about. That said, I think that's a misrepresentation of historical vitalism; I think when the vitalists talked about elan vital being the difference between living and unliving matter, they were also attributing sentience (though not sapience) to elan vital, as well as simple animation.

  • I don't equate the experience of red with the tendency to output the word "red" when queried, both in the sense that it's easy for me to imagine being unable to generate that output while continuing to experience red, and in the sense that it's easy for me to imagine a system that outputs the word "red" when queried without having an experience of red. Lexicalization is neither necessary nor sufficient for experience.

  • I don't equate the experience of red with categorization... it is easy to imagine categorization without experience. It's harder to imagine experience without categorization, though. Categorization might be necessary, but it certainly isn't sufficient, for experience.

  • Like you, I can't come up with a physical account of sentience. I have little faith in the power of my imagination, though. Put another way: it isn't easy for me to see what one can and can't make out of particles. But I agree with you that any such account would be surprising, and that there is a phenomenon there to explain. So I think I fall somewhere in between your two classes of people who are a waste of time to talk to: I get that there's a problem, but it isn't obvious to me that the properties that comprise what it feels like to be a bat must be ontologically basic and nonphysical. Which I think still means I'm wasting your time. (I did warn you in the grandparent comment that you won't find my answer interesting.)

  • If it turns out that a particular sensation is perfectly correlated with the presence of a particular physical structure, and that disrupting that structure always triggers a disruption of the sensation, and that disrupting the sensation always triggers a disruption of the structure... well, at that point, I'm pretty reluctant to posit a nonphysical sensation. Sure, it might be there, but if I posit it I need to account for why the sensation is so tightly synchronized with the physical structure, and it's not at all clear that that task is any simpler than identifying one with the other, counterintuitive as that may be.

  • At the other extreme, if the nonphysical structure makes a difference, demonstrating that difference would make me inclined to posit a nonphysical sensation. For example, if we can transmit sensation without transmitting any physical signal, I'd be strongly inclined to posit a nonphysical structure underlying the sensation. Looking for such a demonstrable difference might be a useful way to start getting somewhere.

Comment author: Mitchell_Porter 10 January 2011 11:08:02AM 0 points

Perhaps we are closer to mutual understanding than might have been imagined, then. A crucial point: I wouldn't talk about the mind as something "nonphysical". That's why I said that the problem is with our current physical ontology. The problem is not that we have a model of the world in which events outside our heads are causally connected to events inside our heads via a chain of intermediate events. The problem is that when we try to interpret physics ontologically (and not just operationally), the available frameworks are too sparse and pallid (those are metaphors of course) to produce anything like actual moment-to-moment experience. The dance of particles can produce something isomorphic to sensation and thought, but not identical. Therefore, what we might think of as a dance of particles actually needs to be thought of in some other way.

So I'm actually very close in spirit to the reductionist who wants to think of their experience in terms of neurons firing and so forth, except I say it's got to be the other way around. Taken literally, that would mean that we need to learn to think of what we now call neurons firing, as being fundamentally - this - moment-to-moment experience, as is happening to you right now. Except, the physical nature of whole neurons I don't believe plausibly allows such an ontological reinterpretation. If consciousness really is based on mesoscopic-level informational states in neurons, then I'd favor property dualism rather than the reverse monism I just advocated. But I'm going for the existence of a Cartesian theater somewhere in the brain whose physical implementation is based on exact quantum states rather than collective coarse-grained classical ones, quantum states which in our current understanding would look more algebraic than geometric. And the succession of abstract algebraic state transitions in that Cartesian theater is the deracinated mathematical description of what, in reality, is the flow of conscious experience.

If that is the true interior reality of one quantum island in the causal network of the world, it might be anticipated that every little causal nexus has its own inside too - its own subjectivity. The non-geometric, localized, algebraic side of physics would turn out to actually be a description of the local succession of conscious states, and the spatial, geometric aspect of physics would in fact describe the external causal interactions between these islands of consciousness. Except I suspect that the term consciousness is best reserved for a very rare and highly involuted type of state, and that most things count as islands of "being" but not as islands of "experiencing" (at least, not as islands of reflective experiencing).

I should also distinguish this philosophy from the sort which sees mind wherever there is distributed computation - so that the hierarchical structure of classical interaction in the world gets interpreted as a set of minds made of minds made of minds. I would say that the ontological glue of individual consciousness is not causal interaction - it's something much tighter. The dependence of elements of a state of consciousness on the whole state of consciousness is more like the way that the face of a cube is part of the cube, though even that analogy is nowhere near strong enough, because the face of a cube is a square and a square can have independent existence, though when it's independent it's no longer a face. However we end up expressing it, the world is fundamentally made of these logical ontological unities, most of which are very simple and correspond to something like particles, and a few of which have become highly complex - with waking states of consciousness being extremely complex examples of these - and all of these entities interact causally and quasi-locally. These interactions bind them into systems and into systems of systems, but systems themselves are not conscious, because ontologically they are multiplicities, and consciousness is always a property of one of those fundamental physical unities whose binding principle is more than just causal association.

An ontology of physics like that is one where the problem of consciousness might be solved in a nondualistic way. But its viability does seem to require that something like quantum entanglement is found to be relevant to conscious cognition. As I said, if that isn't borne out, I'll probably fall back on some form of property dualism, in which there's a many-to-one mapping between big physical states (like ion concentrations on opposite sides of axonal membranes) and distinct possible states of consciousness. But physical neuroscience has quite a way to go yet, so I'm very far from giving up on the monistic quantum theory of mind.

Comment author: TheOtherDave 10 January 2011 03:12:49PM 0 points

So, getting back to my original question about what your alternate ontology has to offer...

If I'm understanding you (which is far from clear), while you are mostly concerned with being ontologically correct rather than operationally useful, you do make a falsifiable neurobiological prediction having something I didn't follow to do with quantum entanglement.

Cool. I approve of falsifiable predictions; they are a useful thing that a way of thinking about the world can offer.

Anything else?

Comment author: Mitchell_Porter 14 January 2011 11:43:02AM *  0 points

I think you ought to be more interested in what this shows about the severity of the problem of consciousness. See my remarks to William Sawin, about color and about many-to-one mappings, and how they lead to a choice between this peculiar quantum monism (which is indeed difficult to understand at first encounter), and property dualism. While I like my own ideas (about quantum monads and so forth), the difficulties associated with the usual approaches to consciousness matter in their own right.

Comment author: TheOtherDave 14 January 2011 04:43:24PM 1 point

(nods) I understand that you do; I have from the beginning of this exchange been trying to move forward from that bald assertion into a clarification of why I ought to be... that is, what benefits there are to be gained from channeling my interest as you recommend.

Put another way: let us suppose you're right that there are aspects of consciousness (e.g., subjective experience/qualia) that cannot be adequately explained by mainstream ontology.

Suppose further that tomorrow we encounter an entity (an isolated group of geniuses working productively on the problem, or an alien civilization with a different ontological tradition, or spirit beings from another dimension, or Omega, or whatever) that has worked out an ontology that does adequately explain it, using quantum monads or something else, to roughly the same level of refinement and practical implementation that we have worked out our own.

What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience?

Or, to ask the question a different way: suppose we encounter an entity that claims to have worked out such an ontology, but won't show it to us. What properties ought we look for in that entity that provide evidence that their claim is legitimate?

The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements. (I may have misunderstood that, in which case I would appreciate clarification.) So I should not expect them to have a superior understanding of behavior that would manifest in various detectable ways. Nor should I expect them to have a superior understanding of physics.

I'm not really sure what I should expect them to have a superior understanding of, though, or what capabilities I should expect such an understanding to entail. Surely there ought to be something, if this branch of knowledge is, as you claim, worth pursuing.

Thus far, I've gotten that they ought to be able to make predictions about neurobiological structures that relate to certain kinds of quantum structures. I'm wondering what else.

Because if it's just about being right about ontology for the sake of being right about ontology when it entails no consequences, then I simply disagree with you that I ought to be more interested.

Comment author: Mass_Driver 08 January 2011 09:27:21AM 0 points

I find this argument irresistibly compelling, and would appreciate a post or a private message letting me know what your answer is. I don't have one; it's all I can do here to notice that I am confused.