(ETA: I've created three threads - color, computation, meaning - for the discussion of three questions posed in this article. If you are answering one of those specific questions, please answer there.)

I don't know how to make this about rationality. It's an attack on something which is a standard view, not only here, but throughout scientific culture. Someone else can do the metalevel analysis and extract the rationality lessons.

The local worldview reduces everything to some combination of physics, mathematics, and computer science, with the exact combination depending on the person. I think it is manifestly the case that this does not work for consciousness. I took this line before, but people struggled to understand my own speculations and this complicated the discussion. So the focus is going to be much more on what other people think - like you, dear reader. If you think consciousness can be reduced to some combination of the above, here's your chance to make your case.

The main exhibits will be color and computation. Then we'll talk about reference; then time; and finally the "unity of consciousness".

Color was an issue last time. I ended up going back and forth fruitlessly with several people. From my perspective it's very simple: where is the color in your theory? Whether your physics consists of fields and particles in space, or flows of amplitude in configuration space, or even if you think reality consists of "mathematical structures" or Platonic computer programs, or whatever - I don't see anything red or green there, and yet I do see it right now, here in reality. So if you intend to tell me that reality consists solely of physics, mathematics, or computation, you need to tell me where the colors are.

Occasionally someone says that red and green are just words, and they don't even mean the same thing for different cultures or different people. True. But that's just a matter of classification. It's a fact that the individual shades of color exist, however it is that we group them - and your ontology must contain them, if it pretends to completeness.

Then, there are various other things which have some relation to color - the physics of surface reflection, or the cognitive neuroscience of color attribution. I think we all agree that the first doesn't matter too much; you don't even need blue light to see blue, you just need the right nerves to fire. So the second one seems a lot more relevant, in the attempt to explain color using the physics we have. Somehow the answer lies in the brain.

There is one last dodge comparable to focusing on color words, namely, focusing on color-related cognition. Explaining why you say the words, explaining why you categorize the perceived object as being of a certain color. We're getting closer here. The explanation of color, if there is such, clearly has a close connection to those explanations.

But in the end, either you say that blueness is there, or it is not there. And if it is there, at least "in experience" or "in consciousness", then something somewhere is blue. And all there is in the brain, according to standard physics, is a bunch of particles in various changing configurations. So: where's the blue? What is the blue thing?

I can't answer that question. At least, I can't answer that question for you if you hold with orthodoxy here. However, I have noticed maybe three orthodox approaches to this question.

First is faith: "I don't understand how it could be so, but I'm sure one day it will make sense."

Second, puzzlement plus faith: "I don't understand how it could be so, and I agree that it really, really looks like an insurmountable problem, but we overcame great problems in the past without having to overthrow the whole of science. So maybe if we stand on our heads, hold our breath, and think different, one day it will all make sense."

Third, dualism that doesn't notice it's dualism. This comes from people who think they have an answer. The blueness is the pattern of neural firing, or the von Neumann entropy of the neural state compared to that of the light source, or some other particular physical entity or property. If one then asks, okay, if you say so, but where's the blue... the reactions vary. But a common theme seems to be that blueness is a "feel" somehow "associated" with the entity, or even associated with being the entity. To see blue is how it feels to have your neurons firing that way.

This is the dualism which doesn't know it's dualism. We have a perfectly sensible and precise physical description of neurons firing: ions moving through macromolecular gateways in a membrane, and so forth. There's no end of things we can say about it. We can count the number of ions in a particular spatial volume, we can describe how the electromagnetic fields develop, we can say that this was caused by that... But you'll notice - nothing about feels. When you say that this feels like something, you're introducing a whole new property to the physical description. Basically, you're constructing a dual-aspect materialism, just like David Chalmers proposed. Technically, you're a property dualist rather than a substance dualist.

Now dualism is supposed to be beyond horrible, so what's the alternative? You can do a Dennett and deny that anything is really blue. A few people go there, but not many. If the blueness does exist, and you don't want to be a dualist, and you want to believe in existing physics, then you have to conclude that blueness is what the physics was about all along. We represented it to ourselves as being about little point-particles moving around in space, but all we ever actually had was mathematics and correct predictions, so it must be that some part of the mathematics was actually talking about blueness - real blueness - all along. Problem solved!

Except, it's rather hard to make this work in detail. Blueness, after all, does not exist in a vacuum. It's part of a larger experience. So if you take this path, you may as well say that experiences are real, and part of physics must have been describing them all along. And when you try to make some part of physics look like a whole experience - well, I won't say the m word here. Still, this is the path I took, so it's the one I endorse; it just leads you a lot further afield than you might imagine.

Next up, computation. Again, the basic criticism is simple; it's the attempt to rationalize things that makes the discussion complicated. People like to attribute computational states, not just to computers, but to the brain. And they want to say that thoughts, perceptions, etc., consist of being in a certain computational state. But a physical state does not correspond inherently to any one computational state.

There's also a problem with semantics - saying that the state is about something - which I will come to in due course. But first up, let's just look at the problems involved in attributing a non-referential "computational state" to a physical entity. 

Physically speaking, an object, like a computer or a brain, can be in any of a large number of exact microphysical states. When we say it is in a computational state, we are grouping those microphysically distinct states together and saying, every state in this group corresponds to the same abstract high-level state, every microphysical state in this other group corresponds to some other abstract high-level state, and so on. But there are many many ways of grouping the states together. Which clustering is the true one, the one that corresponds to cognitive states? Remember, the orthodoxy is functionalism: low-level details don't matter. To be in a particular cognitive state is to be in a particular computational state. But if the "computational state" of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?
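(A toy sketch of the grouping problem, just to make it concrete; the "microstates", the history, and the two partitions below are pure inventions for illustration.)

```python
# The same physical history is assigned different high-level state sequences
# depending on which grouping of microstates you adopt.

# Suppose the device's exact microstates are the integers 0..7, and its actual
# history is this sequence of microstates:
history = [5, 2, 7, 1, 4, 6]

# Partition A: group microstates by parity -> two macrostates, "EVEN"/"ODD".
partition_a = {m: ("EVEN" if m % 2 == 0 else "ODD") for m in range(8)}

# Partition B: group microstates by magnitude -> two macrostates, "LOW"/"HIGH".
partition_b = {m: ("LOW" if m < 4 else "HIGH") for m in range(8)}

print([partition_a[m] for m in history])  # ['ODD', 'EVEN', 'ODD', 'ODD', 'EVEN', 'EVEN']
print([partition_b[m] for m in history])  # ['HIGH', 'LOW', 'HIGH', 'LOW', 'HIGH', 'HIGH']

# Both partitions are exhaustive and exclusive; both turn the same microphysical
# history into a lawful sequence of "computational states". The physics alone
# does not say which grouping is the one that carries cognitive states.
```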

We didn't have this discussion before, so I won't try to anticipate the possible defenses of functionalism. No-one will be surprised, I suppose, to hear that I don't believe this either. Instead, I deduce from this problem that functionalism is wrong. But here's your chance, functionalists: tell the world the one true state-clustering which tells us the computation being implemented by a physical object!

I promised a problem with semantics too. Again I think it's pretty simple. Even if we settle on the One True Clustering of microstates - each such macrostate is still just a region of a physical configuration space. Thoughts have semantic content, they are "about" things. Where's the aboutness?

I also promised to mention time and unity-of-consciousness in conclusion. Time I think offers another outstanding example of the will to deny an aspect of conscious experience (or rather, to call it an illusion) for the sake of insisting that reality conforms entirely to a particular scientific ontology. Basically, we have a physics that spatializes time; we can visualize a space-time as a static, completed thing. So time in the sense of flow - change, process - isn't there in the model; but it appears to be there in reality; therefore it is an illusion.

Without trying to preempt the debate about time, perhaps you can see by now why I would be rather skeptical of attempts to deny the obvious for the sake of a particular scientific ontology. Perhaps it's not actually necessary. Maybe, if someone thinks about it hard enough, they can come up with an ontology in which time is real and "flows" after all, and which still gives rise to the right physical predictions. (In general relativity, a world-line has a local time associated with it. So if the world-line is that of an actually and persistently existing object, perhaps time can be real and flowing inside the object... in some sense. That's my suggestion.)

And finally, unity of consciousness. In the debate over physicalism and consciousness, the discussion usually doesn't even get this far. It gets stuck on whether the individual "qualia" are real. But they do actually form a whole. All this stuff - color, meaning, time - is drawn from that whole. It is a real and very difficult task to properly characterize that whole: not just what its ingredients are, but how they are joined together, what it is that makes it a whole. After all, that whole is your life.

Nonetheless, if anyone has come this far with me, perhaps you'll agree that it's the ontology of the subjective whole which is the ultimate challenge here. If we are going to say that a particular ontology is the way that reality is, then it must not only contain color, meaning, and time, it has to contain that subjective whole. In phenomenology, the standard term for that whole is the "lifeworld". Even cranky mistaken reductionists have a lifeworld - they just haven't noticed the inconsistencies between what they believe and what they experience.

The ultimate challenge in the science of consciousness is to get the ontology of the lifeworld right, and then to find a broader scientific ontology which contains the lifeworld ontology. But first, as difficult as it may seem, we have to get past the partial ontologies which, for all their predictive power and their seductive exactness, just can't be the whole story.

Consciousness

232 comments

Projecting the ontology of your (flawed) internal representations onto reality is a bad idea. "Doing a Dennett" is also not dealt with, except by incredulity.

It's a fact that the individual shades of color exist, however it is that we group them - and your ontology must contain them, if it pretends to completeness.

This is simply not the case. The fact that we can compare two stimuli more accurately than we can identify a single stimulus merely means that internally we represent reality with less fidelity than our senses theoretically can achieve. On a reductionist view, at most you've established that "greater than" and "round to nearest" are implemented in neurons. You do not need to have colour.
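A small simulation of the "greater than" / "round to nearest" point (the noise level, stimulus spacing, and category scheme are arbitrary illustrative choices, not a model of the visual system):

```python
import random

NOISE = 0.5   # std. dev. of internal noise on the represented value
DELTA = 0.3   # how far apart the two stimuli are
TRIALS = 100_000

def perceive(x):
    """Noisy internal representation of a stimulus of magnitude x."""
    return x + random.gauss(0.0, NOISE)

compare_correct = 0  # "which of the two is more?"  (greater-than)
label_correct = 0    # "name each one, then compare the names" (round-to-nearest)

for _ in range(TRIALS):
    a = random.uniform(0.0, 9.0)
    b = a + DELTA
    if perceive(b) > perceive(a):
        compare_correct += 1
    if round(perceive(b)) > round(perceive(a)):
        label_correct += 1

print("side-by-side comparison:", compare_correct / TRIALS)  # roughly 0.66 here
print("comparison via labels:  ", label_correct / TRIALS)    # noticeably lower
```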

Let's unpack "blueness". It's a property we ascribe to objects, yet it's trivial to "conceive of" blueness independent of an object. Neurologically, we process colour, motion, edge-finding and so on in parallel; the linking of them together occurs at a higher level. Furthermore, the brain fakes much of the data, giving the perception of colour vision, for example, in regions of the visual field where no ability to discriminate colour exists, and case...

3PhilGoetz
Re. blueness: Mitchell is talking about qualia. Google the hard problem of consciousness.
2PhilGoetz
Just a note - I don't disagree with your point; but the claim that we can't discriminate color in our peripheral vision is simply false. I've done some informal experiments with this, because I was puzzled that textbooks say that our peripheral vision is primarily due to rods, which can't detect color; yet I see color in my peripheral vision. If I stand with my nose and forehead pressed against the wall, holding a stack of shuffled yellow and red sheets of origami paper behind my back, close my eyes, and then hold one sheet up in each of my outstretched arms, and open my eyes, so that the sheets are each 90 degrees out from my central vision and I see them both at the same time, I can distinguish the two colors 100% of the time. There's a serious problem with resolution; but color doesn't seem to be affected at all in any way that I can detect by central vs. peripheral vision.
6Joanna Morningstar
Of the same apparent intensity to a rod? If they're not, you'll guess correctly based on apparent brightness, and your brain fills in the colour based on memory of which colours of paper are around. There are low levels of cones out to the periphery, but at such low density as to be unreliable sources. For example, this notes that some monochromatic light is misidentified peripherally but not foveally, and that frequency discrimination drops by a factor of 50 or so.
3Bo102010
Would be interesting to see you do this on video with a second person shuffling and displaying the cards.
0RobinZ
Noting Jonathan_Lee's remarks, a suggestion for an experiment: place a monitor in the peripheral vision of the experimental subject which, at regular intervals, shows a random RGB color. The subject is to press a key indicating perceived color (e.g. [R]ed, [Y]ellow, [B]lue, [O]range, [G]reen, [P]urple, [W]hite, [B]lack) each time the color changes (perhaps with an audio cue?). Compare results to the same experiment with the monitor directly in front.
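A minimal sketch of what the stimulus-and-logging side of that might look like (Python/tkinter; the colour set, interval, and key bindings are illustrative assumptions, and the central-vs-peripheral comparison is left to the experimenter):

```python
import random
import time
import tkinter as tk

# Illustrative colour set and key bindings, chosen to avoid key clashes.
COLOURS = {"r": "red", "y": "yellow", "b": "blue",
           "g": "green", "p": "purple", "w": "white"}
INTERVAL_MS = 3000  # show a new random colour every 3 seconds

class ColourProbe:
    """Shows a random colour at regular intervals and logs the keyed response."""

    def __init__(self):
        self.root = tk.Tk()
        self.root.geometry("300x300")
        self.current = None
        self.shown_at = None
        self.log = []  # (shown_colour, reported_colour, reaction_time_seconds)
        self.root.bind("<Key>", self.on_key)
        self.next_colour()
        self.root.mainloop()

    def next_colour(self):
        self.current = random.choice(list(COLOURS.values()))
        self.shown_at = time.time()
        self.root.configure(bg=self.current)
        self.root.bell()  # audio cue that the colour has changed
        self.root.after(INTERVAL_MS, self.next_colour)

    def on_key(self, event):
        reported = COLOURS.get(event.char.lower())
        if reported is not None:
            entry = (self.current, reported, time.time() - self.shown_at)
            self.log.append(entry)
            print(entry)

if __name__ == "__main__":
    ColourProbe()
```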
1Mitchell_Porter
It seems you agree that colors, the flow of time, meanings, and the unity of experience all appear to be there. The general import of your remarks is that reality isn't actually like that, it only appears to be like that. You state how things are, and then you state what's happening in the brain in order to create certain appearances. Color and time are external appearances, meaning and unity are internal appearances.

Some of what you say about the imperfections of conscious representations is not an issue for me. The fidelity of the mapping between external states and conscious states only has an incidental bearing on the nature of the conscious states themselves. Whether color is sometimes hallucinated is not the issue. Whether a color is nothing but an equivalence class is the issue.

In this regard I have observed a number of positions taken. Some people are at the stage of saying, color is a neural classification and I don't see any further problem. Some people say that color is how it "feels" to make such a classification. Since Dennett takes care to deny that there is anything that is actually colored, even in the mind, and to say that there are only words, dispositions to classify, and so forth, he arguably wishes to deny even that there is a "color feeling", though it's hard to be sure.

My position is very simple. All these things (color, time, meaning, unity) exist in consciousness; which means that they exist in at least one part of reality. The elements and the modes of combination offered by today's scientific ontology do not suffice to generate them. Therefore, today's scientific ontology is wrong, incomplete, however you want to put it.

So if we are to have a discussion, you need to say less about the imperfections of consciousness as a medium of representation, and more about the medium itself. Do you agree that color, time, meaning, unity exist in consciousness? If so, can you identify the physical or computational property which supposedly corresponds
5RobinZ
Citation or he didn't say it. Daniel Dennett coined the phrase "greedy reductionism" - partially to emphasize that he does not deny the existence of color, consciousness, etc. Unless you know of some place where he reversed his position, I would argue that you have misinterpreted his remarks. My understanding is that his position is that color is an idiosyncratic property of the human visual perception system with no simple referent in physics, not no referent at all. (I normally wouldn't make such a big deal of it, but Dennett is one of the major figures on the physicalist side of this debate, and a mischaracterization of his views impedes the ability of bystanders to perform a fair comparison.)
0PhilGoetz
In Consciousness Explained, chapter 2, p. 28, Dennett says there is no purple in the brain when we see purple. That may be what he means. I also heard Dennett quoted as saying there is no such thing as qualia, allegedly in "The Taboo of Subjectivity", p. 139, which I don't have.
3anonym
Here is a full quote that makes clear in exactly what sense he doesn't believe in qualia: Originally in Quining Qualia, 1988, by Dennett, and quoted on Multiple-Drafts Model.
1anonym
The Taboo of Subjectivity is a book by B. Alan Wallace. It appears that Dennett wrote a review for that work, but I couldn't find it online. Are you referring to that review, or to something else?
0anonym
I see what you meant now. Dennett was quoted in Wallace's book, on p. 139. Sorry for the misunderstanding. The quote, with some context, is: 18. Paul M. Churchland, 1990, Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind, p. 41 & 48. 19. Daniel C. Dennett, 1991, Consciousness Explained, p. 74.
0RobinZ
Belatedly:

* I think the reference on p. 28 is pointing out that the brain doesn't turn purple (and a purple brain wouldn't help anyway, as there are no eyes in the brain to see the purple). The remainder of the page is extending the example to further elaborate the problem of subjective experience.
* I cannot find the reference to qualia quoted in The Taboo of Subjectivity at all - p. 74 is before Dennett even defines qualia, and p. 374 does not have those exact words - only the conclusion of a thought experiment illuminating his rejection of the concept.
0RobinZ
Thanks for the page number - I'll see if I can find it in my copy when I get home.
0Bo102010
Upvoted. I searched Google for about 15 seconds looking for the quote and didn't find it, but I remember seeing or hearing Dennett say once how flabbergasted he is about being "oh yeah, that guy who thinks we don't see color."
-1Mitchell_Porter
He does not use the expression "color feeling", but here's a direct quote from Consciousness Explained, chapter 12, part 4: He explicitly denies that there is any such thing as a "private shade of homogeneous pink" - which I would consider a reasonably apt description of the phenomenological reality. He also says there is something real, a "complex of dispositions". And, he also says that when we refer to color, we think we're referring to the former, but we're really referring to the latter. So, subjective color does not exist, but references to color do exist. That still leaves room for there to be "appearances of pink". No actual pink, but also more than a mere belief in pink; some actual phenomenon, appearance, component of experience, which people mistakenly think is pink. But I see no trace of this. The thing which he is prepared to call real, the "complex of dispositions", is entirely cognitive (in the previous paragraph he refers to "innate and learned associations and reactive dispositions"). There is no reference to appearance, experience, or any other aspect of subjectivity. Therefore, I conclude that not only does Dennett deny the existence of color (yes, I know he still uses the word, but he explicitly defines it to refer to something else), he denies that there is even an appearance of color, a "color feeling". In his account of color phenomenology, there are just beliefs about nonexistent things, and that's it.
3byrnema
The references to red together definitely form a physical network in my brain, right? I have a list of 10,000 things in my memory that are vividly red, some more vivid than others, and they're all potentially connected under this label 'red'. When that entire network is stimulated (say, by my seeing something red or imagining what "red" is), might I not also give that a label? I could call the stimulation of the entire network the "essence of red" or "redness" and have a subjective feeling about it. I'm certain this particular theory about what "redness" is occurs frequently. My question is, what's missing in this explanation from the dualist point of view? Why can't the subjective experience of red just be the whole network of red associations being simultaneously excited as an entity? Above you wrote:

So I guess I'm just asking, what's the further problem? (If you've already answered, would you please link to it?)
2Tyrrell_McAllister
What are in those ellipses? In what you quote, I see that he's denying that it's "a private, ineffable something-or-other in your mind's eye". From what else I've read of Dennett, I'm sure that he has a problem with the "private" and "ineffable" part. Is it so clear that he has a problem with the "component of experience" part?
1Mitchell_Porter
In the book, a character called Otto advocates the position that qualia exist. The full passage is Dennett making his case to Otto once again:
0Morendil
And how would you answer that passage of Dennett's ?
-4Mitchell_Porter
"Dear Dan - the shade of pink is real. In denying its existence, you are getting things backwards. The important methodological maxim to remember is that appearances are real. This does not mean that every time there is an appearance of an apple, there is an apple. It just means that every time there is an appearance of an apple, there is an appearance of an apple. It also does not mean that every time someone thinks there is an appearance of an apple, there is one. People can be mistaken in their auto-phenomenology - but not as mistaken as you would have us believe." Husserl, who was only concerned with getting phenomenology right and not with any underlying ontology, had a "principle of principles" which expresses the first half of what I mean by "appearances are real": In Husserl, every mode of awareness is a form of intuition, including sense perception. He's saying that every appearance has an element of certainty, but only an element. Appealing to Husserl may be overkill, but the point is, there is a limit to the degree one can plausibly deny appearance. Denying the existence of color in the way Dennett appears to be doing is like saying that 0 = 1 or that nothing exists - it's only worth doing as an exercise in cognitive extremism; try believing something impossible and see what happens. However, people do end up believing weird things out of apparent philosophical necessity. I think this is what is going on with Dennett; he does understand that there is nothing like that shade of pink in standard physical ontology, so rather than engage in a spurious identification of pinkness with some neural property, he just says there is no pink. It's just a word. It's there to denote a bundle of cognitive and behavioral dispositions. But there is no pink as such, outside or inside the head. He's willing to take this drastic step because the truth of physics seems so nailed down, so indisputable. However, there is a sense in which we do not know what physics is abou
4Morendil
Husserl couldn't know what Dennett knows about the biology, psychology and evolutionary history of color perception. Time and again you sweep aside the "bundle of cognitive and behavioral dispositions" Dennett refers to in his reply to Otto, in your appeal to the primacy of "redness" or "pinkness". This has some intuitive appeal, because "red" and "pink" are short words and refer to something we experience as simple. Your position would be much harder to defend if you were looking for "the private, ineffable feeling of reading Lesswrong.com" as one commenter suggested: people would have an easier time denying the existence of that. Yet - even though I'm not entirely sure that's what this commenter had in mind - I would say there is only a difference of degree, not of kind, between "the feeling of redness" and "the feeling of reading Lesswrong". The feeling of seeing the color red really is a complex of dispositions, something cobbled together from many parts over our long evolutionary history. The more we learn about color, the more complex it turns out to be. It only feels simple because it's a human universal.
0Mitchell_Porter
The "feeling of reading LessWrong" can be analysed in great detail. There's a classic work of phenomenology, Roman Ingarden's The Literary Work of Art, which goes into the multiple "strata" of meaning which turn the examination of small black shapes on white paper into the imagination of a possible world. Participating in a discussion like this involves a stream of complex intentional experiences against a steady background of embodied sensation. Color experience is certainly not beyond further analysis, even at the phenomenological level. The three-dimensional model of hue, saturation, and intensity is a statement about the nature of subjective color. The idea that experiences are ineffable is just wrong. We're all describing them every day. No amount of intricate new knowledge about the way that color perception varies or the functions that it performs can actually abolish the phenomenon. And most materialists don't try to abolish it, they try to identify it with something material. I think Dennett is trying to abolish phenomena as realities, in favor of a cognitive behaviorism, but that is really a topic for Dennett interpreters. Instead, I want to know about your phenomenology of color. I assume that in fact you have it. But I'm curious to know, first, whether you'll admit to having it, or whether you prefer to talk about your experience in some other way; and second, how you describe it. Do you look at color and think "I'm seeing a bundle of dispositions"? Do you tell yourself "I'm not actually seeing it, I'm just associating the perceptual object with a certain abstract class"?
0Morendil
I'm not sure I ever "look at color" in isolation. There are colors and arrangements of color that I like and that I'll go out of my way to experience; I'm looking forward to an exhibition of Soulages' work in Paris, for instance. When I look at a Soulages painting my inner narrative is probably something like "Wow, this is black... a luminous black which emphasizes straight, purposive brushstrokes in a way that's quite different from any other painter's use of color I've seen; how puzzling and delightful." It's different from the reflective black of my coffee cup nearby, the matte black of my phone handset or the black I see when I close my eyes. When I see my coffee cup I'm mostly seeing the reflections, when I see the handset it's the texture that stands out, when I close my eyes the black is a background to a dance of random splotches and blobs. When I think about my perception of black in all the above instances I am certainly thinking in terms of dispositions and of abstract tags. There isn't a unitary "feeling of black" that persists after these various experiences of things I now call black.
4Joanna Morningstar
External only in that wetware is modelling something outside of the skull, rather than its internal state. The intent was to state that merely because you perceive reality along certain ontological lines does not imply that reality has the same ontology. This should be particularly obvious when your internal sense fails to correspond to reality; if conscious states are an imperfect guide to external states then why should the apparent ontology of consciousness be accurate? None of which you refute here or in the OP, especially those who deny that "blueness" is a veridical property of reality. No; it means that something referencing them exists in some part of reality (your skull). An equivalence relation; an internal tag that this object is blue. To counter the realism, consider mathematicians, who consciously deal in infinite sets, or all theorems provable under some axioms (model theory). Just because something appears plainly to you does not mean it exists. Kant says it better than I can. Not if you mean more than perception by consciousness. Even in perception, they're just the ontology imposed by our neurology, and have neural correlates that suffice. Consciousness isn't prior to perception or action; it's after it. There isn't a homunculus in there for experience to "appear to". If anything, there's a compressed model of your own behaviour into which experience is fed; that's the "you" in the primate - a model of that same primate for planning and counterfactual reasoning.
0Mitchell_Porter
Let's suppose I have a hallucinatory perception of a banana. So, there's no yellow object outside my skull - we can both agree on that. It seems we also agree that I'm having a yellow perception. But we part ways on the meaning of that. Apparently you think that even my hallucination isn't really yellow. Instead, there's some neural thing happening which has been tagged as yellow - whatever that means. I really wonder about how you interpret your own experience. I suppose you experience colors just like I do, but (when you think about it) you tell yourself that what naively seems to be a matter of seeing a yellow object is actually experiencing what it's like to have a perception tagged as yellow. But how does that translate, subjectively? When you see yellow, do you tell yourself you're seeing the tag? Do you just semi-visualize a bunch of neurons firing in a certain way?
8SilasBarta
We went over this issue a bit in the previous discussion. My response (following Drescher) was: "To experience [yellow] is to feel your cognitive architecture assigning a label to sensory data." As I elaborated: The point being: I can't give a complete answer now, but I can tell you what the solution will look like. It will involve describing how a cognitive architecture works, then looking at the distinctions it has to make, then looking at what constraints these distinctions operate under (e.g. color being orthogonal to sound [unless you have synaesthesia], etc.), then identifying what parts of the process can access each other. Out of all of that, only certain data representations are possible, and one of these (perhaps, hopefully, the only one) is the one with the same qualities as our perception of color. You know you're at the solution, when you say, Aha! If I had to express what information I receive, under all those constraints, that is what qualities it would need to have. To that, you replied: Though you object to the comparison, this is the same kind of error as demanding that there be a fundamental "chess thing" in Deep Blue. There is no fundamental color, just as there is no fundamental chess. There is only a regularity the system follows, compressible by reference to the concept of color or chess.
4RobinZ
I am intrigued by your wording, here. I suppose I experience colors just like you do, but - when I think about it - I tell myself that what is, in fact, seeing a yellow object is, in fact, the same thing as experiencing what it's like to have a perception tagged as yellow. I believe these descriptions to be equivalent in the same sense that "breaking of hydrogen bonds between dihydrogen monoxide molecules, leading to those molecules traveling in near-independent trajectories outside the crystalline structure" is equivalent to "ice sublimating".
4Joanna Morningstar
The relevant part of the optical cortex which fires on yellow objects has fired; the rest of your brain behaves as if there were a yellow banana out in front of it. "Tagging" seemed like the best high-level term for it. A collection of stimuli are being collected together as an atomic thing. There's a neural thing happening, and part of that neural thing is normally caused by yellow things in the visual field. The most obvious point where it has subjective import is when things change[1]. I probably experience colours as you do; when I introspect on colour, or time, I cannot find good cause to distinguish it from "visualising" an infinite set or a function. The only apparent difference is that reality isn't under conscious control. I don't assume that the naive ontology that is presented to me is a true ontology.

[1] There is a pair of coloured mugs (blue and purple) that I can't distinguish in my peripheral vision, for example. When I see one in my peripheral vision, it is coloured (blue, say); when I look at it directly, there is a period in which it is both blue and purple, as best I can describe, before definitively becoming purple. Head MRIs do this too.

Edit: The problem is that there isn't an easy way to introspect on the processes leading to perceptions; they are presented ex nihilo. As best I can tell, there's no good distinguisher of my senses from "experiencing what it's like to have a perception tagged as yellow"

You really need to link the previous post -- and important subthreads, 1, 2, 3 -- when you make a post like this. Other people need to be able to easily access the discussions you refer to. (Yes, that compilation may be biased toward what I was involved in.)

There were several notable replies that you also should have accounted for here, including:

1) How the "where is color?" question is turned around to "where is the chess in Deep Blue?"

2) The Gary Drescher approach of equating qualia with generated symbols.

3Mitchell_Porter
I thought of linking, but I wanted a fresh start, something self-contained. It's a debatable choice. What I really wish is that when I posted this two days ago, I'd thought of creating in advance a thread for each major question. It will be difficult to migrate the discussion there now, but I would like to try. Responding briefly:

1) I think "where is color?" and "where is chess?" are just different sorts of questions. The latter is an instance of "where is meaning?" or "where is computation?". Because meaning and computation can be imputed to symbols and artefacts by convention, the where-is-chess discussion needs to keep the human original in view as well. A systematic answer should first say where chess is when humans play each other. Then we can talk about chess computers playing each other, and whether that situation contains chess only by convention, or intrinsically, or whether machine chess is a mixture of intrinsic and attributed meaning and computation.

2) If you wish to speak of there being symbols in the brain, again, you have to take a position on computation and meaning. Then, if you've managed to identify an actual physical thing or property which you think can be called a symbol, you need to explain how to get color out of that particular physical thing.

Let me try coming at this another way. What would you not expect in a Turing-implementable Universe?

  • Life
  • Life that perceives, eg, threats (ie has organs adapted to be sensitive to things like light, and reacts adaptively when these organs get signals correlated with the presence of a threat)
  • Life that perceives threats and reacts to them in such a way that other closely related living things react to their reaction as if the threat was there (ie, some form of communication)
  • Life whose later actions are adaptively changed by earlier perceptions (in the sense of perceptions above), ie memory
  • Life that communicates the perceptions it remembers
  • Life whose communication has grammar, so it can say things like "I saw a tiger yesterday" or "I saw a red thing"
  • Life that asks what is red about red
  • something else?

EDIT NB: I'm asking what you see that you would not expect to see if you were looking into a Turing-universe from the outside. If your position is that there's nothing in this Universe visible to an external observer that shows it to be non-Turing, including our utterances, please make that explicit.

1Mitchell_Porter
By life I assume you mean replicators. Turing computability is not much of an issue for me. It amounts to asking whether the state transitions in an entity can be abstractly mimicked by the state transitions in a Turing machine. For everything in your list, the answer ought to be, yes it can. However, that is a very limited and abstract resemblance. You could represent the mood of a person, changing across time, with a binary string. But to say that their sequence of moods is a binary string is descriptive rather than definitional.

It sounds like you do want to reduce all mental or conscious phenomena to strictly computational properties. Not just to say that the mind has certain computational properties, but that it has nothing but such properties; that the very definition of a mind is the possession of certain computational properties or capacities.

To do this, first you will need to provide an objective criterion for the attribution of computational properties, such as states. You can chop up a physical state space in many ways, so as to define "high-level" states; which such clustering of physical states, out of all the possibilities, is the one that you will use, and why? Then, you may need to explain what is computational about these states. If you want to say that they have representational content, for example, you will need to say what they are representing and on what basis you attribute this meaning to them. And finally, if you also wish to say that sensory qualities like colors are nothing but computational properties, you will need to say which computational properties, and something about why they "feel" that way.

All of this assumes that you agree that color and meaning do exist in experience. If they are there, they need to be explained. If they do not need to be explained, it can only be because they are not there.
0Paul Crowley
So your position is that there's no problem accounting for everything we observe in human behaviour, including behaviour like saying "where does subjective experience come from?", with a physics much like standard physics; but to account for the actual subjective experience itself we need a new physics? That current physics leaves us with a world of what I term M-zombies, who talk about subjective experience but don't have any?
1Larks
I imagine you would expect all those; one would simply not expect the subjective experience of the colour blue.
0Paul Crowley
You seem to have given exactly the reply that my "EDIT NB", added before your reply, was designed to forestall. Can you state that in terms of what you would see looking in from the outside? For example, do you think you would not see life that used phrases such as "the subjective experience of the colour blue"?
1Larks
I meant I did agree with you, and that externally everything would appear exactly the same. However, from what I think is Mitchell Porter's point of view, the one thing you would not expect from such a universe is the possibility of being inside it. P-Zombies, I suppose. EDIT: Also, sorry for not being clear re your NB.
0Paul Crowley
Ah, OK! I don't know whether anyone's going to try to mount a zombie-based defence of Porter's position. These are the articles it would need to reply to. M-zombies would be distinct from P-zombies in that Porter believes that physics can account for our non-zombieness, but M-zombies would still write articles asking where subjective experience comes from, even though they don't have any.
-2Paul Crowley
EDIT: commenters below have caused me to think better of my impatient tone below. Please imagine strikethrough through the following: "I don't know" isn't an acceptable answer either. The question isn't "what will happen in such a Universe", it's "at what point do you balk at the possibility". You balk before the end of "it could be just like our Universe" and after the beginning (which is, say, the game of Life) so you have to be able to identify a balk point somewhere on the scale. EDIT: would appreciate downvote explanation - thanks! EDIT: [*] to any comments in this thread, not just to my comments - thanks Alicorn for prompting me to clarify
6Alicorn
This is an asynchronous medium, and Mitchell_Porter is not obliged to address your inquiry anyway. It's possible he hasn't even seen your comment. Perhaps you could send him a PM, which would be harder for him to miss, and ask him if he'd have a look at your question without accusing him of being rude for not having done so already. Edit: This comment also serves as your downvote explanation.
2Paul Crowley
Thanks, it's good of you to explain. Not my enquiry specifically - he's made no comments at all since posting the article. I think that if you make a top level post you do have an obligation to take part in the subsequent discussion.

If I'd written a post that'd gotten downvoted into the negative that decisively, I'd take a day or two off to avoid posting extremely defensive comments. I have no idea if that's what Mitchell is doing, but while he probably should make some attempt to field comments on his post, chiding him for being untimely is not nice.

4Paul Crowley
From the votes, it looks like people agree with you rather than me on this, which I take seriously. If anyone else wants to downvote me on this one, I'd slightly prefer they downvote the grandparent comment rather than my one above that, so I know it's the chiding rather than the argument that's getting downvoted.
5Morendil
Needling your interlocutor for a prompt reply makes it sound as if you're more interested in "winning the debate" than in getting a considered reply from them. If it takes someone a couple of days to let the dust settle, consider possible counter-arguments or lines of retreat, and frame a careful reply, don't begrudge them that.
1Paul Crowley
I'd like to think about this more, but what you say sounds convincing just now. I've been ill this week, which is why I've been online so much, which may be affecting my judgement.
0SilasBarta
If it makes you feel any better, in the last discussion, several posters referenced my explanation, which you would think would bump me up on his reply priority list. It didn't.
1Paul Crowley
While I'm hoping for my comments to receive a reply, I'm looking forward to all his replies. We enjoy such a high standard of debate here that it makes me impatient for more.

Asking about the blueness of blue, or anything to do with color, is deliberately misleading. You admit that seeing blue is an event caused by the firing of neurons; the fact that blue light stimulating the retina causes this firing of neurons is largely beside the point. The question is, simply, "How does neurons firing cause us to have a subjective experience?"

The best answer that I think can be given at this point is, "We don't really know, but it's extremely unlikely it involves magic, and if we knew enough to build something that worked almost exactly like our brain, it'd have subjective experience too."

As for the "type" distinction, the idea that blueness in the brain must emerge from some primal blueness, or whatever exactly you're trying to argue, seems like a serious mistake in categories. As another commenter said, there's no chess in Deep Blue. There's no sense of temperature in a thermostat, no concept of light in a photoreceptor, no sense of lesswrong.com in my CPU. The demand that blueness cannot be merely physical seems to be the mere protestations of a brain contemplating that which it did not evolve to contemplate.

1Technologos
I wonder if "How does neurons firing cause us to have a subjective experience?" might be unintentionally begging Mitchell_Porter's question. Best I can tell, neurons firing is having a subjective experience, as you more or less say right afterwards.

But a common theme seems to be that blueness is a "feel" somehow "associated" with the entity, or even associated with being the entity. To see blue is how it feels to have your neurons firing that way.

This is the dualism which doesn't know it's dualism.

As a reductionist who disagrees with your overall critique of reductionism, I have to say that you hit the nail on the head here. Some self-styled reductionists do seem prone to "explaining" subjective experience by saying that it's nothing more than what certain algorithms feel like from the inside. As you say, that's really a dualist account if you leave it there.

6[anonymous]
My problem is I don't see how you can avoid a "that's how an algorithm feels from the inside" explanation somewhere down the line. Even if you create some theory that purports to account for the (say) mysterious redness of red, isn't there still a gap to bridge between that account and whatever your subjective perception - your feeling - of red is? I'm confused as to what an 'explanation' for the mysterious redness of red would even look like.
8Tyrrell_McAllister
If you can't even imagine what an answer would look like, you should doubt that you've successfully asked a question. That's not supposed to be a conversation-stopper. It's just that the first step in the conversation should be to make the question clear.
1[anonymous]
This is a useful heuristic, but if anything it seems to dissolve the initial question of "Where's the qualia?" As DanArmak and RobinZ channeling Dennett point out elsewhere in the thread, questions about qualia don't appear to be answerable.
0LauraABJ
What I think Mitchell is looking for (and he can correct me if I'm wrong) as an explanation of experience is some model that describes the elements necessary for experience and how they interact in some quantitative way. For example, let's pretend that flesh brains are not the only modules capable of experience, and that we can build experiences out of other materials. A theory of experience would help to answer: what materials can be used, what processing speeds are acceptable (i.e., can experience exist in stasis), what cpus/processors/algorithms must be implemented, and what outputs will convince us that experience is taking place (vs. creating a Chinese room).

Now, I don't think we will have any way of answering these questions before uploading/AI, but I can conceive of ways of testing many variables in experience once a mind has been uploaded. We could change one variable, ask the subject to describe the change, change it back and ask the subject what his memory of the experience is, etc., etc. We can run simulations that are deliberately missing normal algorithms until we find which pieces of a mind are the bare-bones essentials of experience.

To me this is just another question for the neuroscientists and information theorists, once our technology is advanced enough to actually experiment on it. It is only a 'problem' if you believe p-zombies are possible, and that we might create entities that describe experience without having it.
3Liron
Traceback:

I don't understand where this perceived confusion comes from (despite, or because of, having read much of the relevant literature).

If we have an electronic device that emits light at 450THz and another that detects light and reports what "color" it is (red), then we can build/execute all of that without accounting for "redness" (except of course in the step where it decides what to call the "color"). Is there a problem here?
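A minimal sketch of that pipeline (the frequency bands and names are crude illustrative choices); the only place "red" enters is the final labelling step:

```python
def emitter():
    """Pretend source: light at 450 THz (red, in human terms)."""
    return 450e12  # Hz

def detector(frequency_hz):
    """Map a frequency onto a colour word, crudely."""
    if 400e12 <= frequency_hz < 480e12:
        return "red"
    if 480e12 <= frequency_hz < 510e12:
        return "orange"
    if 510e12 <= frequency_hz < 610e12:
        return "yellow/green"
    if 610e12 <= frequency_hz < 670e12:
        return "blue"
    if 670e12 <= frequency_hz < 790e12:
        return "violet"
    return "not in the visible band"

print(detector(emitter()))  # -> "red"
```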

Is color a special topic here? Do we have the same issues in phenomenology of sound?

If we have an electronic d...

4MrHen
I would like to see a better reply to this comment. Why doesn't this address the problem of Color from the OP? Is it because the jump from wavelength to the label of "Blue" hasn't been defined? From the OP: I'm not trying to be tricksy or smart. I am trying to understand the question and why the above isn't an answer. In essence, all confusion would be lifted if you replaced the words "red" and "green" with something else in the following paragraph: As in, Words A, B, and C all belong to some category and that category is not "seen" in physics but is "seen" in reality. Thomblake is trying to use "wavelengths": This makes no sense to me, so something must be wrong in my translations. What is it?
1Mitchell_Porter
Physics contains waves and physics contains lengths, so if someone were to say "physics contains wavelengths!", I wouldn't object, because I can see the wavelengths in the ontology of physics. But if someone says "physics contains colors", I don't see them and I have a problem; and if someone says "colors are wavelengths", I also have a problem, because I don't see what's color-like about a wavelength. How does being 650 nanometers long make an object red? Most people here aren't saying red is a wavelength anyway. They're saying red is an aspect of a brain state normally caused by light of a certain wavelength arriving at the eye. But the problem is the same, it's just that the physical property supposedly identical with "being red" is far more complex and not completely specified.
1MrHen
Thank you for answering. I guess my only response is that if you change the wavelength, its redness disappears. If you return the wavelength to the right value, redness returns. Similar experiments can be done for each color in turn. Presumably, experiments can also be done to damage the eye so that it doesn't respond to certain frequencies and the ability to perceive redness disappears. Do these things imply some connection between redness and wavelengths? If not, then I feel I still am not understanding what you mean by Color. What is Color? Or is that the whole point of the question?

ETA: After thinking a little more, I may have gotten closer to understanding. The relationship between wavelengths and color may only go one way. Wavelengths may turn into Colors via the eye, but not every experience of a Color implies wavelengths hitting the eye. Examples of the latter are hallucinations and dreams. So the question remains, "Where is the color?" If the answer is "Wavelengths," where are the Wavelengths when I am dreaming? Am I close?

ETA2: Further thoughts on perceiving redness: If you were never able to perceive wavelengths that correlate to redness, would you know redness? If your eyes were damaged to stop seeing red you would probably continue to dream with red. But if you have never seen red, would you dream in red? This is relevant to discovering the source of redness in non-wavelength-related experiences, which is slightly different from the question, "Where is the color?"
4RobinZ
I don't know if colorblind people dream in color, but colorblind synesthetes can experience colors their eyes don't register.
4AdeleneDawner
I wish they had a better description of that. I'm synesthetic, with normal color vision, but sometimes get sensations of colors that seem impossible to experience. 'A kind of greenish-purple', for example - no, I don't mean blue, and it's not a purple pattern with green bits, it's green and purple at the same time. I also get 'colors' that make even less sense. For example, I'll occasionally get a color that 'looks' grey but doesn't 'feel' grey; it feels like it should have a separate label, and my mind refuses to categorize the stimulus with things that evoke 'truly grey' reactions. That makes me wonder if I'm experiencing the same effect as the one mentioned in the article.
2MrHen
All this makes me want to do is go find a way to hack eye hardware so I can experience the weird colors too...
0Kevin
Give it ~20 years and we will calculate the way to consistently hack your brain to experience the weird colors and other synesthetic sensations.
0RobinZ
That's incredibly interesting - I recall that the article mentioned colorsighted synesthetes observing synesthetic colors that felt different from similar real colors, without going into any particular detail.
0AdeleneDawner
I don't know how much more detail could be given, really. I don't think I can do any better job of describing it than I just did, and I like to think I'm pretty good at that kind of thing.
0RobinZ
This is true - and from a heterophenomenological standpoint, I don't see that more needs to be said. Your remarks were perfectly clear despite their brevity.
3MrHen
From the article linked (synesthete is a keyword explained in the article): And that, I guess, answers that question. Awesome.
0Dustin
I voted this comment up, because I too do not see how the root comment isn't an answer and I'd really like to know why the OP doesn't think it is an answer. I don't understand what he means when he says:

It seems like the first sentence answers the questions asked.
1MatthewB
There is also the fact that what we describe as "redness" is purely by virtue of our anatomy and the ranges at which our eyes' structure receives the light that is then interpreted by our brain as red or blue. Red or blue are just words that we use to describe a state that exists in the universe. What would happen to these colors if our eyes were made of a type of structure that picked up EM radiation all the way from gamma rays to long-wavelength radio waves? What "color" would we think something in the microwave spectrum was? What about the X-ray-colored objects?
0SilasBarta
FYI: Unlike electronic devices, your visual system doesn't detect absolute color, hence these illusions.
0thomblake
Indeed. But one wouldn't want to suggest anything mysterious is responsible.
-1Nubulous
Since we can presumably generate the appropriate signals in the optic nerve from scratch if we choose, light and its wavelength have nothing whatsoever to do with color.
2Blueberry
Downvoted for strange non sequitur. We could theoretically pipe in the appropriate electrical impulses to the part of your brain responsible for auditory processing, but that doesn't mean hearing has "nothing whatsoever" to do with sound.
3AdeleneDawner
The upvote was mine; I agree that 'nothing whatsoever' was too strong, but thought that the point about qualia observably having more to do with brain states than the stimuli that evoke them was useful.

What dire consequences should we expect if we do, in fact, deny that there is anything that is blue?

For my money, the discussion in p.375 and onwards of Consciousness Explained says all there is to say (in addition to theories of electromagnetism, optics and so on) about the experience of color.

I can't really do justice to that section in a comment here, but I will note its starting point:

Many have noticed that it is curiously difficult to say just what properties of things in the world colors could be. [...] What is beyond dispute is that there is no s

...
6Eliezer Yudkowsky
Um, even I would have to judge that saying anything whatsoever about apple trees, in answer to the question Mitchell is asking, is blatantly running away from the scary and hence interesting part of the problem, which will concern itself solely with matters in the interior part of the skull. Anyone talking about apple trees is running away from their ignorance of the inside of the skull, and finding something they understand better to talk about. Reductionist or not, I cannot defend that.
5Morendil
Your response puzzles me. What I take Dennett to be saying is that apple trees and the insides of skulls are deeply entangled. "Perception" is a term that will recur often when we seek to explain the entangled history of apple trees and mobile fructivores. And I'd be rather surprised to find Dennett running away from hard questions. OK, you've said where the scary part of the problem is. Can you say more about what is scary about "doing a Dennett", or about what you take to be the scary part of the problem? There are some unsettling, if not scary, things that come out of considering apple trees, that do not come out of considering only the insides of skulls and simplifying color as being all about light wavelengths. For instance, if "redness" is as gerrymandered a category as Dennett's view implies, then it would be in practice impossible to design from scratch a mind that has the same "qualia" of redness, for lack of a better word, that we have.
-1RobinZ
Your first line ("What dire consequences should we expect if we do, in fact, deny that there is anything that is blue?") is an appeal to the consequences of a belief about a matter of fact, and therefore irrelevant. What remains without that is good.
4Morendil
"What dire conceptual consequences", if you prefer. Mitchell says "you can do a Dennett" as if that was enough to scare away any reasonable person. I'd like to know what is so scary about Dennett's conclusions.
3RobinZ
Ah, that's clearer. I retract my implications.

Thomas Nagel's classic essay What is it like to be a bat? raises the question of a bat's qualia:

Our own experience provides the basic material for our imagination, whose range is therefore limited. It will not help to try to imagine that one has webbing on one's arms, which enables one to fly around at dusk and dawn catching insects in one's mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one's feet in an attic. In so far as

... (read more)
3Mitchell_Porter
Being a bat shouldn't be incomprehensible (and in fact Nagel makes some progress in his essay). You still have a body and a sensorium, they're just different. Getting your sense of space by yelling at the world and listening to the echoes - it's weird, but it's not beyond imagining. The absence of higher cognition might be the hardest thing for a human to relate to, but everyone has experienced some form of mindless behavior in themselves, dominated by sensation, emotion, and physical activity. You just have to imagine being like that all the time.

Being a quantum holist[*] and all that, when it comes to consciousness, I don't believe in qualia for Deep Blue because I don't think consciousness arises in that way. If it's like something to be a rock, then maybe the separate little islands of silicon and metal making up Deep Blue's processors still had that. But I'm agnostic regarding how to speak about the being of the very simplest things, and whether it should be regarded as lying on a continuum with the being of conscious beings.

Anyway, I answer both your questions yes, and I think other people may as well be optimistic too, even if they have a different theoretical approach. We should expect that it will all make sense one day.

[*] ETA: What I mean by this is the hypothesis that quantum entanglement creates local wholes, that these are the fundamental entities in nature, and that the individual consciousness inhabits a big one of these. So it's a brain-as-quantum-computer hypothesis, with an ontological twist thrown in.

Another thread for answers to specific questions.

Second question: Where is computation?

People like to attribute computational states, not just to computers, but to the brain. And they want to say that thoughts, perceptions, etc., consist of being in a certain computational state. But a physical state does not correspond inherently to any one computational state... To be in a particular cognitive state is to be in a particular computational state. But if the "computational state" of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?

9HalFinney
I don't think your question is well represented by the phrase "where is computation". Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer's hardware.

For example, consider a program that repeatedly increments the value in a register. Now we could alternatively focus on just the lowest bit of the register and see a program that repeatedly complements that bit. Which is right? Or perhaps we can see it as a program that counts through all the even numbers by interpreting the register bits as being concatenated with a 0. There is a famous argument that we can in fact interpret this counting program as enumerating the states of any arbitrarily complex computation. Chalmers in the previous link aims to resolve the ambiguity by certain rules; basically some interpretations count and some don't. And maybe there is an unresolved ambiguity in the end.

But in practice it seems likely that we could take brain activity and create a neural network simulation which runs accurately and produces the same behavioral outputs as the brain: the same speech, the same movements. At least, if you were to deny this possibility, that would be interesting.

In summary, although one can theoretically map any computation onto any physical system, for a system like we believe the brain to be, with its simultaneous complexity and organizational unity, it seems likely that one could come up with a computational program that would capture the brain's behavior, claim to have qualia, and pose the same hard questions about where the color blue lay among the electronic circuits.
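A minimal sketch, in Python, of the interpretation ambiguity HalFinney describes (the 4-bit register and the specific interpretation maps are illustrative assumptions, not anything from the thread): the very same sequence of state transitions can be read as an increment loop, as a program that repeatedly complements one bit, or as a counter through the even numbers, depending on which reading an observer applies.

```python
# Sketch of observer-dependent attribution of computation (illustrative only).
# One "physical" process: a 4-bit register repeatedly incremented.
states = []
reg = 0
for _ in range(8):
    states.append(reg)
    reg = (reg + 1) % 16  # the underlying state transition

# Interpretation 1: the register enumerates successive integers.
as_counter = states

# Interpretation 2: look only at the lowest bit -- a program that
# repeatedly complements a single bit.
as_bit_flipper = [s & 1 for s in states]

# Interpretation 3: read the register as concatenated with a trailing 0 --
# a program that counts through the even numbers.
as_even_counter = [s << 1 for s in states]

print(as_counter)       # [0, 1, 2, 3, 4, 5, 6, 7]
print(as_bit_flipper)   # [0, 1, 0, 1, 0, 1, 0, 1]
print(as_even_counter)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Nothing in the physical trace privileges one of these readings over the others; the disagreement in the thread is over whether that matters once the system is as richly organized as a brain.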
0Mitchell_Porter
If people want to say that consciousness is computation, they had better be able to say what computation is, in physical terms. Part of the problem is that computational properties often have a representational or functional element, but that's the problem of meaning. The other part of the problem is that computational states are typically vague, from a microphysical perspective. Using the terminology from thermodynamics of microstates and macrostates - a microstate is a complete and exact description of all the microphysical details, a macrostate is an incomplete description - computational states are macrostates, and there is an arbitrariness in how the microstates are grouped into macrostates. There is also a related but distinct sorites problem: what defines the physical boundary of the macro-objects possessing these macrostates? How do you tell whether a given elementary particle needs to be included, or not?

I don't detect much sympathy for my insistence that aspects of consciousness cannot be identified with vague entities or properties (and possibly it's just not understood), so I will try to say why. It follows from insisting that consciousness and its phenomena do actually exist. To be is to be something, something in particular. Vaguely defined entities are not particular enough. Every perception that ever occurs is an actual thing that briefly exists. (Just to be clear, I'm not saying that the object of every perception exists - if that were true, there would be no such thing as perceptual error - but I'm saying that perceptions themselves do exist.)

But computational macrostates are not exactly defined from a micro level. So they are either incompletely specified, or else, to become completely specified, the fuzziness must be filled out in a way that is necessarily arbitrary and can be done in many ways. The definitional criteria for computational or functional states are simply not strict enough to compel a unique micro completion. Also, macrostates
2Vladimir_Nesov
In other words, ontologically fundamental mental entities. Could we move on please?
0Mitchell_Porter
A thing doesn't have to be fundamental in order to be exact. If individual electrons are fundamental, an "entity" consisting of one electron in a definite location, and the other electron in another definite location, is not a vague entity. The problem is not reduction per se. The problem discussed here is the attempt to identify definitely existing entities with vaguely defined entities.

All your questions come down to: why does our existence feel like something? Why is there subjective, personal, conscious experience? And why does it feel the way it does and not some other way?

In the following, I assume that your position about qualia deserving an explanation is correct. I don't have a fully formed opinion yet myself - I defer an explanation - but here's what I came up with by assuming your position.

First, I propose that we both accept the Materialistic Hypothesis as regards minds. In the following text I will use the abbreviation MP for ... (read more)

3LauraABJ
I agree with your interpretation of our current physical and experiential evidence. I believe the perceived dualistic problem arises from imperfections in our current modeling of brain states and control of our own. We cannot easily simulate experiential brain states, reconfigure our own brains to match, and try them out ourselves. We cannot make adjustments of these states on a continuum that would allow us to say physical state A corresponds exactly to experience B and here's the math. We cannot create experience on a machine and have it tell us that it is experiencing. Without internal access to our source-code, our experiences come into our consciousness fully formed and appear magical. That being said, the blunt tools we do have--descriptions of others' experiences, drugs, brain stimulation, fMRI, and psychophysics--do seem to indicate that experience follows directly from physical states of the brain without the need for a dualist explanation. Perhaps the problem will dissolve itself once uploading is possible and individual experiences are more tradeable and malleable.
0Mitchell_Porter
I certainly think about things differently:

1'. There is a world, which includes subjective experiences, and (presumably) things which are not subjective experiences.
2'. All information I have about the world, including the subjective experiences of other people, comes through my subjective experiences.
3'. I possess mathematical/physical theories which appear adequate to describe much of the posited world to varying degrees, but which do not refer to subjective experiences.
4'. Subjective experiences are causally consequential; they are affected by sensation and they affect behavior, among other things.
5'. The way the world actually is and the way the world actually works is a little more complicated than any theory I currently possess.
4Paul Crowley
This is really frustrating. When you ask questions of us who disagree with you, we tend to say "I don't think the question is well posed". But when we ask questions of you, you won't say yes, or no, or explicitly reject the question - you just return to your own questions. If you don't think the questions you're being asked are well-posed enough to answer, could you say more about why? Otherwise we're not engaging, we're just talking past each other.
1Mitchell_Porter
It can take a long time to say what the problem is. I just spent several hours trying to do this in Dan's case, and I'm not sure I succeeded. The questions aren't ill-posed, but the whole starting point was problematic. In effect I wanted to demonstrate the possibility of an alternative starting point. Dan managed to respond, and now I have replied to that, and even this comment of yours contributed, but it took a lot of time and consideration of context even to produce an imperfect further reply. It's a tradeoff between responding adequately and responding promptly. There's been an improvement in communication since last time, but it can still get better.
0[anonymous]
I respect that.
0RobinZ
Clarify 5', please: do you intend to say that the base rules of the world are more complicated than the current physics - e.g. how a creature in a Conway's Game of Life board might say, "I know that any live cell with two or three live neighbours lives on to the next generation, but I'm missing how cells become live"?
0Mitchell_Porter
The basic ingredients and their modes of combination (not interaction, but things like part-whole relations) need to be different. See descriptions of the second type and I want to be a monist.
2RobinZ
What are "part-whole relations"? That doesn't sound like a natural category in physics.
0Mitchell_Porter
In physics, if A is part of B, it means it's a spatial part. I think the "parts" of a conscious experience are part of it in some other way. I say this very metaphorically, and only metaphorically, but it's more like the way that polyhedra have faces. The components of a conscious experience, I would think, don't even occur independently of conscious experiences.

There's a whole sub-branch of ontology concerning part-whole relations, called mereology. It potentially encompasses not only spatial parts, but also subsets, "logical parts", "metaphysical parts" (e.g. the property is part of the thing with the property), the "organic wholes" of various holisms, and so on. Of course, this is philosophy, so you have people with sparse ontologies, who think most of this is not really real, and then you have the people who are realists about various abstract or exotic relations.

I think I've invented a name for my own ontological position, by the way - reverse monism. I'll have to explain what that means somewhere...
1RobinZ
Before I respond to this: how much physics have you studied? Just high school, or the standard three semesters of college work? How well did you do in those classes? Have you read any popular-science discussions of physics, etc. outside of the classes you took? Have you studied any particular field of physics-related problems (e.g. materials science/engineering)? I'm asking this because your discussion of part-whole relations doesn't sound like something a scientist would invoke. If you are an expert, I'll back off, but I have to wonder if you've ever used Newton's Laws on a deeper level than cannonballs fired off cliffs.
0Mitchell_Porter
I come from theoretical physics. I've trashed my career several times over, but I've always remained engaged with the culture. However, I've also studied philosophy, and that's where all this talk of ontology comes from.
0RobinZ
Fair enough - I will read through the thread and make a new response.
1RobinZ
Can you explain that in terms of physics? According to my understanding, 'part-whole relations' are never explicitly described in the models; they are only implicit in the solutions to common special cases. For example, quantum mechanics includes no description of temperature; we derive temperature in quantum mechanics through statistical mechanics, without ever invoking additional laws.
-4Mitchell_Porter
Certainly there's no fundamental physical law which talks about part-whole relations. "Spatial part" is a higher-order concept. But it's still an utterly basic one. If I say "the proton is part of that nucleus", that's a physically meaningful statement.

We might have avoided this digression if, instead of part-whole relations, I'd mentioned something like "spatial and temporal adjacency" as an example of the "modes of combination" of fundamental entities which exist in physical ontology. If you take the basic physical reality to be "something at a point in space-time" (where something might be a particle or a bit of field), and then say, how do I conceptually build up bigger, more complicated things? - you do that by putting other somethings at the space-time points "next door" - locations adjacent in space, or upstream/downstream in time.

There are other perspectives on how to make complexity out of simplicity in physics. A more physical perspective would look at interaction, and separate objects becoming dynamically bound in some way. This is the basis of Mario Bunge's philosophy of systems (Bunge was a physicist before he became a philosopher); it's causal interaction which binds subsystems into systems.

So, trying to sum up, we can say that the modes of combination of basic entities in physics have a non-causal aspect - connectedness, being next to each other, in space and time - and a causal aspect - interaction, the state of one affecting the state of another. And these aspects are even related, in that spatiotemporal proximity is required for causal interaction to occur.

Finally, returning to your question - how do I expect physical ontology to change - part of the answer is that I expect the elementary non-causal bindings between things to include options besides spatial adjacency. Spatial proximity builds up spatial geometry and spatially extended objects. I think there will be ontological complexes where the relational glue is something other than spatial proximity.
-1DanArmak
Let's talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only "presumable"? Do you have in mind an experiment to falsify it?

And these subjective experiences are all caused by, and contain the same information as, objective events in the MP world. Therefore all information you have about the MP world is also contained in the MP world. Do you agree? Do you agree with my expectation that even with future refinements of these theories, the MP world's theories will remain "closed on MP-ness" and are not likely to lead to descriptions of subjective experiences?

Sensation and behaviour are MP, not subjective. Each subjective experience has an objective, MP counterpart which ultimately contains the same information (expanding on my point (2)). They have the same correlations with other events, and the same causative and explanatory power, as the subjective experiences they cause (or are identical to). Therefore, in a causal theory, it is possible to assign causative power only to MP phenomena without loss of explanatory power. Such a theory is better, because it's simpler and also because we have theories of physics to account for causation, but we cannot account for subjective phenomena causing MP events. Do you agree with the above?

I can put this another way, as per my item (5): to say that sensation affects (or causes) subjective experience is to imply the logical possibility of a counterfactual world where sensation affects experience differently or not at all. However, if we define sensation as the total of all relevant MP events - the entire state of your brain when sensing something - then I claim that sensation cannot, logically, lead to any subjective experience different from the one it does lead to. IOW, sensation does not cause experience, it is identical with experience. This theory appears consistent with all we know to date. Do you expect it to be falsified in the future?

This doesn't seem rela
0Mitchell_Porter
I think I must try one more largely indirect response and see if that leaves anything unanswered.

Reality consists, at least in part, of entities in causal interaction. There will be some comprehensive and correct description of this. Then, there will be descriptions which leave something out. For example, descriptions which say nothing about the states of the basic entities beyond assigning each state a label, and which then describe those causal interactions in terms of state labels. The fundamental theories we have are largely of this second type.

The controversial aspects of consciousness are precisely those aspects which are lost in passing to a description of the second type. These aspects of consciousness are not causally inert, or else conscious beings wouldn't be able to notice them and remark upon them; but again, all the interesting details of how this works are lost in the passage to a description of the second type, which by its very nature can only describe causality in terms of arbitrary laws acting on entities whose natures and differences have been reduced to a matter of labels.

What you call "MP theories" only employ these inherently incomplete descriptions. However, these theories are causally closed. So, even though we can see that they are ontologically incomplete, people are tempted to think that there is no need to expand the ontology; we just need to find a way to talk about life and everything we want to explain in terms of the incomplete ontology. Since ontological understandings can develop incrementally, in practice such a program might develop towards ontologically complete theories anyway, as people felt the need to expand what they mean by their concepts. But that's an optimistic interpretation, and clearly a see-no-evil approach also has the potential to delay progress.

I have trouble calling this a "world". The actual world contains consciousness. We can talk about the parts of the actual world that don't include consciousness. W
2DanArmak
If they are causally closed, then our conscious experience cannot influence our behaviour. Then our discussion about consciousness is logically and causally unconnected to the fact of our consciousness (the zombie objection). This contradicts what you said earlier. So which is correct?

Also, I don't understand your distinction between the two types of theories or of phenomena. Leaving causality aside, what do you mean by: If those entities are basic, then they're like electrons - they can't be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?

About-ness is tricky. If consciousness is acausal and not logically necessary, then zombies would talk about it too, so the fact that we talk about it, and anything we say about it, proves nothing. If consciousness is acausal but logically necessary, then the things we actually say about it may not be true, due to acausality, and it's not clear how we can check if they're true or not (I see no reason to believe in free will of any kind). Finally, if consciousness is causal, then we should be able to have causally-complete physical theories that include it. But you agree that the "MP theories" that don't include consciousness are causally closed.

Here's what I meant. If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.

I emphatically do not know what consciousness is ontologically. I think understanding this question (which may or may not be legitimate) is half the problem. All I know is that I feel things, experience things, and I have no idea how to treat this ontologically. Part of the reason is a clash of levels: the old argument that all my knowledge of physical laws and ontology etc. is a part of my experience, so I should treat experienc
-3Mitchell_Porter
There are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell's equations in a vacuum are causally closed, but that theory doesn't even describe atoms, let alone consciousness. The other possibility (much more relevant) is that you have an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.

Let's take a specific aspect of conscious experience - color vision. For the sake of argument (since the reality is much more complicated than this), let's suppose that the totality of conscious visual sensation at any time consists of a filled disk, at every point in which there is a particular shade of color. If an individual shade of color is completely specified by hue, saturation, and intensity, then you could formally represent the state of visual sensory consciousness by a 3-vector-valued function defined on the unit disk in the complex plane.

Now suppose you had a physical theory in which an entity with such a state description is part of cause and effect within the brain. It would be possible to study that theory, and understand it, without knowing that the entity in question is the set of all current color sensations. Alternatively, the theory could be framed that way - as being about color, etc - from the beginning.

What's the difference between the two theories, or two formulations of the theory? Much and maybe all of it would come back to an understanding of what the terms of the theory refer to. We do have a big phenomenological vocabulary, whose meanings are ultimately grounded in personal experience, and it seems that to fully understand a hypothetical MP theory containing consciousness, you have to link the theoretical terms with your private phenomenological vocabulary, experience, and understanding. Otherwise, there will only be an incomplete, more objectified understandi
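The toy formalization in the comment above can be written down explicitly; this is only a restatement of the "filled disk" example under its stated simplifying assumptions, with hue, saturation, and intensity each normalized to the unit interval:

V : D \to [0,1]^3, \quad V(z) = (h(z), s(z), i(z)), \quad D = \{ z \in \mathbb{C} : |z| \le 1 \}.

The point of the example is that a theory could manipulate V as a state variable in its causal story without ever flagging that V is the set of current color sensations.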
-1DanArmak
Apologies for the late and brief reply. My web presence has been and will continue to be very sporadic for another two weeks.

If it was wrong, how could it be causally closed? No subset of our physical theories (such as Maxwell's equations) is causally disconnected from the rest of them. They all describe common interacting entities.

Our MP theory has a short closed list of fundamental entities and forces which are allowed to be causative. Consciousness definitely isn't one of these. You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to "consciousness". And that has to start with giving a better definition of what consciousness is. Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is "consciousness"? It needn't be.

Here's a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness. Relevant facts: we believe in and report conscious experience even though we can't define in words what it is or what its absence would be like. (Sounds like a mental glitch to me.) This self-reporting falls apart when you look at the brain closely, as you can observe that experiences, actions, etc. are not only spatially but also temporally distributed (as they must be); but people discussing consciousness try to explain our innate feelings rather than build a theory on those facts - IOW, without the innate feeling we wouldn't even be talking about this. Different people vary in their level of support for this idea, and rational argument (as in this discussion) is weak at changing it. We know our cognitive architecture reliably gives rise to some ideas and behaviors, which are common to practically
-3Mitchell_Porter
Causal closure in a theory is a structural property of the theory, independent of whether the theory is correct. We are probably not living in a Game-of-Life cellular automaton, but you can still say that the Game of Life is causally closed. Consider the Standard Model of particle physics. It's an inventory of fundamental particles and forces and how they interact. As a model it's causally closed in the sense of being self-sufficient. But if we discover a new particle (e.g. a supersymmetric partner), it will have been incomplete and thus "wrong".

I totally agree that good definitions are important, and would be essential in justifying the identification of a theoretical C-term or property with consciousness. For example, one ambiguity I see coming up repeatedly in discussions of consciousness is whether only "self-awareness" is meant, or all forms of "awareness". It takes time and care to develop a shared language and understanding here.

However, there are two paths to a definition of consciousness. One proceeds through your examination of your own experience. So I might say: "You know how sometimes you're asleep and sometimes you're awake, and how the two states are really different? That difference is what I mean by consciousness!" And then we might get onto dreams, and how dreams are a form of consciousness experienced during sleep, and so the starting point needs to be refined. But we'd be on our way down one path.

The other path is the traditional scientific one, and focuses on other people, and on treating them as objects and as phenomena to be explained. If we talk about sleep and wakefulness here, we mean states exhibited by other people, in which certain traits are observed to co-occur: for example, lying motionless on a bed, breathing slowly and regularly, and being unresponsive to mild stimuli, versus moving around, making loud structured noises, and responding in complex ways to stimuli. Science explains all of that in terms of physiological and cognitive chang

This article contains three simple questions which I want to see answered. To organize the discussion, I'm creating a thread for each question, so people with an answer can state it or link to it. If you link, please provide a brief summary of your answer here as well.

First question: Where is color?

I see a red apple. The redness, I grant you, is not a property of the thing that grew on the tree, the object outside my skull. It's the sensation or perception of the apple which is red. However, I do insist that something is red. But if reality is nothing but particles in space, and empty space is not red, and the particles are not red, then what is? What is the red thing; where is the redness?

However, I do insist that something is red.

Why?

That aside: Red is something that arises out of things that are not themselves red. Right now I'm wearing a sweater, which is made out of things that are not themselves sweaters (plastic, cotton with a little nylon and spandex; or on a higher level, buttons, sleeves, etc.) A sweater came into existence when the sleeves were sewn onto the rest of it (or however it was pieced together); no particles, however, came into existence. My sweater just is a spatial relationship between a vague set of particles (I say "vague" because it could lose a button and still be a sweater, but unlike a piece of lint, the button is really part of the sweater). If I put it through the shredder, no particles would be destroyed, but it would not be a sweater.

My sweater is also red. When I look at it, I experience the impression of looking at a red object. No particles come into existence when I start to have this experience, but they do arrange differently. Some of the ways my brain can be arranged are such that when they are instantiated, I experience the impression of looking at something red - such as the ones that come into play when ... (read more)

-1Mitchell_Porter
Why do I believe it, or why do I insist on it? I believe it because I see it, and I insist on it because other people keep telling me that redness is actually some thing which doesn't sound red.

When you say a "sweater just is a spatial relationship between a vague set of particles", it is not mysterious how that set of particles manages to have the properties that define a sweater. If we ignore the aspect of purpose (meant to be worn), and consider a sweater to be an object of specified size, shape, and substance - I know and can see how to achieve those properties by arranging atoms appropriately. But when you say that experiencing redness is also a matter of atoms in your brain arranging themselves appropriately, I see that as an expression of physicalist faith. Not only are the details unknown, but it is a complete mystery as to how, even in principle, you would make redness by combining the entities and properties available in physical ontology. And this is not the case for size, shape, and substance; it is not at all mysterious how to make those out of basic physics.

As I say in the main article, the specific proposals to get subjective color out of physics are actually property dualisms. They posit an identity between the actual color, that we see in experience, and some complicated functional, computational, or other physical property of part of the brain. My position is that the color experience, the thing we are trying to understand, is nothing like the thing on the other side of the alleged identity; so if that's your theory, you should be a property dualist. I want to be a monist, but that is going to require a new physical ontology, in which things that do look like experiences are among the posited entities.
6Alicorn
Sound red? If nothing sounds red, that means you are free of a particular sort of synesthesia. :P

Anyway: Suppose somebody ever-so-carefully saws open your skull and gives your brain a little electric shock in just exactly the right place to cause you to taste key lime pie. The shock does something we understand in terms of physics. It encourages itty bits of your brain to migrate to new locations. The electric shock does not have a mystical secondary existence on a special experiences-only plane that's dormant until it gets near your brain but suddenly springs into existence when it's called upon to generate pie taste, does it?

We don't know enough about the brain to do this manually, as it were, for all possible experiences; or even anything very specific. fMRIs will tell us the general neighborhood of brain activity as it happens; and hormone levels will tell us some of the functions of some of the soup in which the organ swims; and electrical brain stimulation experiments will let us investigate more directly. Of course we aren't done yet. The brain is fantastically complicated and there's this annoying problem where if we mess with it too much we get charged with murder and thrown in jail. But they have yet to run into a wall; why do you think that, when they've found where pain lives and can tell you what you must have injured if you display behavior pattern X, they're suddenly going to stop making progress?

So your problem is that the two things lack a primitive resemblance? My sweater's unproblematic because it and the yarn that was used to make it are... what, the same color? Both things you already associate with sweaters? If somebody told you that the brain doesn't handle emotions, the heart does - the literal, physical heart - would you buy that more readily because the heart is traditionally associated with emotion and that sounds right, so there's some kind of resemblance going on? If that's what's going on, I hope having it pointed out helps you di
-1Mitchell_Porter
They lack any resemblance at all. A trillion tiny particles moving in space is nothing like a "private shade of homogeneous pink", to use the phrase from Dennett (he has quite a talent for describing things which he then says aren't there). And yet one is supposed to be the same thing as the other. The evidence from neuroscience is, originally, data about correlation. Color, taste, pain have some relationship with physical brain events. The correlation itself is not going to tell you whether to be an eliminativist, a property dualist, or an identity theorist. I am interested in the psychological processes contributing to this philosophical choice but I do not understand them yet. What especially interests me is the response to this "lack of resemblance" issue, when a person who insists that A is B concedes that A does not "resemble" B. My answer is to say that B is A - that physics is just formalism, implying nothing about the intrinsic nature of the entities it describes, and that conscious experience is giving us a glimpse of the intrinsic nature of at least a few of those entities. Physics is actually about pink, rather than about particles. But what people seem to prefer is to deny pinkness in favor of particles, or to say that the pinkness is what it's like to be those particles, etc.
6Psychohistorian
A trillion tiny particles moving in space is like a "private shade of homogeneous pink" in that it reflects light that stimulates nerves that generate a private shade of homogeneous pink. If you forbid even this relationship, you've assumed your conclusion. If not, you use "nothing" too freely. If this is a factual claim, and not an assumption, I'd like to see the research and experiments corroborating it, because I doubt they exist, or, indeed, are even meaningfully possible at this time. To use my previous example, the electrical impulses describing a series of ones and zeroes are "nothing like" lesswrong.com, yet here we are.
0Mitchell_Porter
I'm referring to the particles in the brain, some aspect of which is supposed to be the private shade of color.
0Psychohistorian
I don't see how this is meaningfully distinct from Alicorn's sweater. Sweater-ness is not a property of cloth fibers or buttons. I think the real problem here is that consciousness is so dark and mysterious. Because the units are so small and fragile, we can't really take it apart and put it back together again, or hit it with a hammer and see what happens. Our minds really aren't evolved to think about it, and, without the ability to take it apart and put it back together and make it happen in a test tube - taking good samples seems to rather break the process - it's extremely difficult to force our minds to think about it. By contrast, we're quite used to thinking about sweaters or social organization or websites. We may not be used to thinking about, say, photosynthesis or the ATP cycle, but we can take them apart and put them back together again, and recreate them in a test tube.
0thomblake
It might behoove you to examine Luciano Floridi's treatment of "Levels of Abstraction" - he seems to be getting at much the same thing, if I'm understanding you correctly. To read it in a pragmatist light: there's a certain sense in which we want to talk about particles, and a sense in which we want to talk about pinkness, and on the face of it there's no reason to prefer one over another. It does make sense to assert that Physics is trying to explain "pinkness" via particles, and is therefore about pinkness, not about particles.

Where is lesswrong.com? "On the internet" would be the naive answer, but there's no part of the internet we could naively recognize as being lesswrong.com. A bunch of electrical impulses get interpreted as ones and zeroes which get translated in a certain language, which converts them into another language (English), which each mind interacting with the site translates in its own way. At the base level, before minds get involved, there's nothing more complex than a bunch of magnets and electric signals and some servers and so on (I'm not a computer person, so cut me some slack on the details). Yet, out of all of that emerges your post, this comment, and so on.

I know that it is in principle possible to understand how all of this comes together, but I also know that I do not in fact understand it. If I were really to look at how complex this site is - down to the level of the chemist who makes the fertilizer to supply the farmer who feeds the truck driver who delivers the petroleum that gets refined into the plastic that makes the keyboard of the engineer who maintains the power plant that keeps the server running - I have absolutely no idea what's going on, and probably ne... (read more)

-1Mitchell_Porter
If I were to say to you that negative numbers can be made by adding together positive numbers, you just have to add them together in the right way - that would sound strange and wrong, yes? If you start at 1, and keep adding 1, you do not expect your sum to equal -1 (or the square root of -1, or an apple) at any stage. When people say that they do not see how piling up atoms can give rise to color, meaning, consciousness, etc., they are engaged in this sort of reasoning. They're saying: I may not know every property that very large numbers / very large piles of atoms would exhibit, but it would be magic to get that property from those ingredients.

The problem with the analogy is that we know a whole lot about numbers - math is an artificial language which we created and whose axioms we decided upon. How do you know enough about matter and neurons to know that they relate to consciousness in the way that adding positive numbers relates to negative numbers or apples? But I've made this point before.

What I would find more interesting is an explanation of what magic would do here. It seems obvious that our perception of a homogeneous shade of pink is, in some significant way, related to lightwave frequencies, retinas, and neurons. Let's assume there is some "magic" involved that in turn converts these physical phenomena into an experience. Wouldn't it have to interact with neurons and such, so that it generates an experience of pink and not an experience of strawberry-rhubarb pie? If it's epiphenomenal, how could it accomplish this, and how could it be meaningful? If it's not epiphenomenal, how does it interact with actual matter? Why can't we detect it?

It's quite clear that when it comes to how consciousness works, the current best answer is, "We don't get it, but it has something to do with the brain and neurons."... (read more)

2Jack
This is perfect and I'm not sure there is much more to say.
0Mitchell_Porter
It's our theories of matter which are the problem - and which are clear enough for me to say that something is missing. My position as stated here actually is an identity theory. Experiences are a part of the brain and are causally relevant. But the ontology of physics is wrong, and the attempted reduction of phenomenology to that ontology is also wrong. Instead, phenomenology is giving us a glimpse of the true ontology. All that we see directly is the inner ontology of the conscious experience itself, but one supposes that there is some relationship to the ontology of everything else.
3wnoise
\sum_{n=0}^{\infty} 2^n "=" -1. That is a bit tongue in cheek, but there are divergent sums that are used in serious physical calculations.
1Blueberry
I'm curious about this. More details please!
1wnoise
These mostly crop up in quantum field theory, where various formal expressions have infinite values. These can often be "regularized" to give finite results, or at least turned into a form that, while still infinite, can be "renormalized" by such means as considering various terms as referring to observed values, rather than the "bare values", which are carefully tweaked (often taking limits as they go to zero) in a coordinated way, so that the observed values remain okay.

Letting s be the sum above, in some sense what we're "really" saying is that s = 1 + 2s, which can be seen by formal manipulation. This has two solutions in the (one-point compactification of the) complex numbers: infinity, and -1.

When doing things like summing Feynman diagrams, we can have similar things where a physical propagator is essentially described as a bare propagator plus perturbative terms that should be written in terms of products of propagators, leading again to infinite series that diverge (several interlocked infinite series, actually -- the photon propagator should include terms with each charged particle, the electron should include terms with photon intermediates, etc.). IIRC, the Casimir effect can be explained by using Zeta function regularization to sum up contributions of an infinite number of vacuum modes, though it is certainly not the only way to perform the calculation.

http://cornellmath.wordpress.com/2007/07/28/sum-divergent-series-i/ and the next two posts are a nice introduction to some of these methods. Wikipedia has a fair number of examples:

* http://en.wikipedia.org/wiki/1_−_2_%2B_3_−_4_%2B_·_·_·
* http://en.wikipedia.org/wiki/1_−_2_%2B_4_−_8_%2B_·_·_·
* http://en.wikipedia.org/wiki/1_%2B_1_%2B_1_%2B_1_%2B_·_·_·
* http://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_·_·_·

Explicit physics calculations I do not have at the ready.

EDIT: please do not take the descriptions of the physics above too seriously. It's not quite what people actually do, but
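Spelling out the formal manipulation wnoise alludes to (purely formal, since the series diverges): letting s denote the sum,

s = \sum_{n=0}^{\infty} 2^n = 1 + 2(1 + 2 + 4 + \dots) = 1 + 2s, \quad \text{so} \quad s = -1,

which is also the value the geometric-series formula 1/(1-x) assigns at x = 2.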
0Bo102010
wnoise hits it out of the park!
3Cyan
Can you clarify why that does not also apply to the piling up of degrees of freedom in a quantum monad? I have another question, which I expect someone has already asked somewhere, but I doubt I'll be able to find your response, so I'll just ask again. Would a simulation of a conscious quantum monad by a classical computation also be conscious?
-5Mitchell_Porter

Suppose it turned out that the part of the brain devoted to experiencing (or processing) the color red actually was red, and similarly for the other colors. Would this explain anything?

Wouldn't we then wonder why the part of the brain devoted to smelling flowers did not smell like flowers, and the part for smelling sewage didn't stink?

Would we wonder why the part of the brain for hearing high pitches didn't sound like a high pitch? Why the part which feels a punch in the nose doesn't actually reach out and punch us in the nose when we lean close?

I can't help feeling that this line of questioning is bizarre and unproductive.

-2Mitchell_Porter
Hal, what would be more bizarre - to say that the colors, smells, and sounds are somewhere in the brain, or to say that they are nowhere at all? Once we say that they aren't in the world outside the brain, saying they are inside the brain is the only place left, unless you're a dualist.

Most people here are saying that these things are in the brain, and that they are identical with some form of neural computation. My objection is that the brain, as currently understood by physics, consists of large numbers of particles moving in space, and there is no color, smell, or sound in that. I think the majority response to that is to say that color, smell, sound is how the physical process in question "feels from the inside" - to which I say that this is postulating an extra property not actually part of physics, the "feel" of a physical configuration, and so it's property dualism.

If the redness, etc, is in the brain, that doesn't mean that the brain part in question will look red when physically examined from outside. Every example of redness we have was part of a subjective experience. Redness is interior to consciousness, which is interior to the thing that is conscious. How the thing that is conscious looks when examined by another thing that is conscious is a different matter.
5AndyWood
Red is in your mind. It's a sensation. It's what it feels like inside when you're looking at something you call red. Nothing is actually red; it's just a verbal symbol you assign to a particular inner sensation. I will very, very happily grant that we do not have a good explanation for how the brain creates such subjective, inner sensations. Notice I wrote "how" and not "why".

There is no such thing as a zombie as they are usually defined, because every time you make a brain, so long as it is physically indistinguishable from a regular brain, it's going to be conscious. That's what happens when you make a brain that way (you can try it out by having a child). On the other hand, if we allow the definition of a zombie to be changed just a little bit, so that it includes unconscious things that are merely behaviorally indistinguishable from conscious people, then I see no problem at all with that kind of zombie. But their insides would be physically, detectably different. Their brains would not work the same way as ours. If they did, then they would be conscious.

Regular brains produce private, inner sensation, just as surely as the sun produces radiation. The right question is not "why should this be so?" but "how does it do it?" And I grant that this is an unanswered question. Heterophenomenology is just fine as far as it goes, but it doesn't go this far. All that stuff about only asking why an agent verbalizes or believes that it sees red is profoundly unsatisfying as an explanation for redness, and for a very good reason. It's like behaviorism in psychology - maybe fine for some purposes, but inherently limited, in the negative sense. It ignores the inner sensation that we all know is there.

Now, that's no reason to suppose that the red sensation is not explainable or reducible. We just don't understand the brain well enough yet, so we'll just have to keep thinking about it and wait until we do.
5Wei Dai
Did anyone else notice the similarity of Mitchell's arguments in this post, and the one in his comment to one of my posts? Here he says that there is no color in a purely physical description of a mind, and in his comment to my post he said that there is no utility function in a purely physical description of a mind. I think this argument actually works better here (with color), because my counter-argument to his comment doesn't work. What I said was that in principle we know how to go from a utility function to a physical description of an object (by creating an AI) and so in principle we also know how to go from a physical description to a utility function. Here, we don't know how to go from a color to a physical description of a mind that can experience that color, nor can we tell what color a mind is experiencing or capable of experiencing, given a physical description of it. But I'm not sure we should expect this state of affairs to continue forever.
3Vladimir_Nesov
From the sequence on reductionism:

* Hand vs. Fingers
* Angry Atoms

Of course, this only answers where the behavior of seeing color is -- but then the correspondence between (introspective) behavior and experience is strict, even if the way in which it is achieved may be nontrivial:

* Philosophical zombie
* How an algorithm feels
2FAWS
Reposting my question from upthread: I'm not quite sure I understand the problem with blueness as you see it. Suppose neuroscience was advanced enough that it could manipulate your perception of colors in any arbitrary way just by manipulating your neurons. For example, they could make you perceive blue as you previously perceived red and the other way round, induce synaesthesia, and make you perceive the smell of roses, the taste of salt, the note C or other things as blue. They could change your perception of color completely, leaving your new perception of colors only as similar to your old one as that one was to your perception of smells, flavor or sounds. If all of this was true, would that be enough for you to accept that blueness is sufficiently explicable by the behaviour of neurons alone? Or would you argue that while neurons are enough to induce the sensation of blueness, this sensation itself is still something beyond the mere behaviour of neurons?
0Cyan
What would be your reply to
-3Mitchell_Porter
The redness must be in the monad. The point of postulating monads is to have an ontological correlate to the phenomenological unity of consciousness. Redness is interior to consciousness, which is interior to the thing that is conscious. A theory of monads might have an intermediate stage of incomplete descriptions, in which it is described purely formally, mathematically, and causally, but the objective is to have a theory in which there is something recognizably identical with a total instantaneous conscious experience. This is also the point of reverse monism.
0Jonii
Simple "Our experience is what world looks like if you happen to be a bunch of neurons firing inside an apes head" is surprisingly strong reply to the questions you raised. If we take neurons that are put together like they are in brain, and give it sensory input that resembles blue ball, we can ask that brain what it thinks it's seeing. It'll answer "blue ball", and there's nothing weird happening here. Here, answering, thinking or even seeing the blue ball is simply a brain state, purely physical phenomenon. The mystery, as far as I can tell, happens when we notice that we could theoretically try to see the world as that brain would it see, and suddenly we are actually experiencing that blue ball and all the qualias that come with it. Now the magic is in a much smaller space: There is no magic happening when brain claims it sees blue things, and there's nothing mysterious when we take brains point of view and try to understand how we'd see the world if we were just a brain. So the mystery of consciousness seems to hide in "in a world with no observers, can we sensibly talk about what would the world be like if seen from non-sentient things perspective". If we can, the qualia seem to be in every aspect identical to that perspective. So what's the ontology of perspective? I have no idea, but perspective seems to go hand in hand with something physical even existing, so we could be strictly materialistic while still acknowledging the existence of qualia.
-1Kyro
Color is in the wavelength of the photon. Blue is a label we use to identify a particular frequency range within the electromagnetic spectrum.
0pjeby
This is a bit of a simplification, actually... some of the colors we can see are not a single wavelength. That's why a rainbow doesn't contain every color we can see.
0wedrifid
Like brown, for example?

A final thread for answers to specific questions.

Third question: Where is meaning?

Thoughts are about things. What aspect of the physical state of an object makes that state "about" something in particular, or about anything at all?

1RobinZ
(I answer this question, because the discussion of color in the prior post was hopeless from a communication standpoint.)

Consider a simple device, consisting of a chamber containing a measured amount of mercury and a very narrow tube rising from this chamber. As the temperature of the mercury changes, the volume changes as a simple function (roughly linear, but more importantly monotonic). (As the mercury is highly thermally conductive, this temperature is roughly uniform.) This change in volume causes a small amount of the mercury to expand into the narrow tube - the precise amount linearly proportional to the change in volume. It is mathematically clear, therefore, that the height of mercury in the tube is a monotonic function of the temperature of the mercury.

The net result of creating this device is a stable, predictable, reliable correlation in the universe between two things - and the tube, therefore, can be marked at intervals (the first thing) corresponding to particular temperatures (the second thing). We call this a thermometer, of course. And when the mercury is next to the "76" label on the thermometer, we say that this means that the temperature is 76 degrees.

Does this make sense? It would be useful to know whether this sounds like a "wretched subterfuge", as Kant called compatibilist theories of free will.
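A minimal sketch, in Python, of the correlation described above; all constants are hypothetical stand-ins (the expansion coefficient is roughly right for mercury, the geometry is invented). The only point is that the column height is a monotonic, hence invertible, function of temperature, so a mark on the tube can stand for a temperature.

```python
# Illustrative model of the thermometer (all constants are made up for the sketch).
# The mercury's volume grows roughly linearly with temperature; the excess volume
# goes up a narrow tube, so column height is monotonic in temperature.

V0 = 1.0e-6         # bulb volume at 0 degrees, m^3 (hypothetical)
beta = 1.8e-4       # volumetric expansion coefficient per degree (approximate for mercury)
tube_area = 1.0e-7  # cross-sectional area of the tube, m^2 (hypothetical)

def column_height(temp_deg):
    """Height of mercury in the tube as a function of temperature."""
    excess_volume = V0 * beta * temp_deg
    return excess_volume / tube_area  # metres

def read_temperature(height_m):
    """Invert the monotonic map: recover the temperature a mark stands for."""
    return height_m * tube_area / (V0 * beta)

h = column_height(76.0)
print(round(read_temperature(h)))  # 76 -- the mark next to this height "means" 76 degrees
```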
3Alicorn
Just to play with the idea: is cricket chirping about the temperature in Fahrenheit?
0RobinZ
Let us be precise: the frequency of cricket chirping is reliably correlated with the temperature in Fahrenheit - to be specific, the Fahrenheit temperature is approximately the number of chirps in 13 seconds plus 40 - and therefore a particular frequency of cricket chirping means a particular temperature. The cricket chirping itself usually means other things.
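Written as a formula, the rule of thumb quoted above (a common simplified form of Dolbear's law):

T_F \approx N_{13} + 40,

where N_{13} is the number of chirps counted in 13 seconds and T_F is the temperature in degrees Fahrenheit.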
1PhilGoetz
I think this is a big distraction. As you pointed out in a comment, this is the purpose of the "Where is the chess in Deep Blue?" question, and it has nothing to do with the question you're posing about qualia. That way lies madness and John Searle.

I defer to Wittgenstein: the limits of our language are the limits of the world. We can literally ask the questions above, but I cannot find meaning in them. Blueness, computational states, time, and aboutness do not seem to me to have any implementation in the world beyond the ones you reject as inadequate, and I simply don't see how we can speak meaningfully (that is, in a way that allows justification or pursues truth) about things outside the observable universe.

I don't believe that confabulating a confused topic is a useful activity that is expected to advance understanding. We'd all be better off avoiding this mode of thinking, and building on better-understood concepts instead.

1Tyrrell_McAllister
What do you mean by "confabulating"? Do you just mean that people here aren't really confused about this topic?
0Vladimir_Nesov
They are, which is why constructing one more confused argument in terms of confused concepts is no use.

Given that you accept heterophenomenology, I wish you'd put this in explicitly heterophenomenological terms - in terms of accounting for the utterances that people make, in other words. The reason I keep banging on about this is that I think that it is the key move in defusing the confusions you exhibit here.

0Mitchell_Porter
I accept heterophenomenology only in the sense that people can indeed be mistaken in describing their experiences. On those occasions, you only need to account for the description. But I would say "folk phenomenology" is correct about the basics.
1Paul Crowley
Accepting heterophenomenology means accepting that if a theory successfully accounts for everything you can observe from the outside, there is no further work to do. I hope to do a top-level post about this soon.
[-][anonymous]00

Simple "Our experience is what world looks like if you happen to be a bunch of neurons firing inside an apes head" is surprisingly strong reply to the questions you raised. If we take neurons that are put together like they are in brain, and give it sensory input that resembles blue ball, we can ask that brain what it thinks it's seeing. It'll answer "blue ball", and there's nothing weird happening here. Here, answering, thinking or even seeing the blue ball is simply a brain state, purely physical phenomenon.

The mystery, as far as I ca... (read more)

[-][anonymous]00

Did anyone else notice the similarity of Mitchell's arguments in this post, and the one in his comment to one of my posts? Here he says that there is no color in a purely physical description of a mind, and in his comment to my post he said that there is no utility function in a purely physical description of a mind.

I think this argument actually works better here (with color), because my counter-argument to his comment doesn't work. What I said was that in principle we know how to go from a utility function to a physical description of an object (by creat... (read more)

[-]FAWS00

I'm not quite sure I understand the problem with blueness as you see it.

Suppose neuroscience were advanced enough that it could manipulate your perception of colors in any arbitrary way just by manipulating your neurons. For example, they could make you perceive blue as you previously perceived red and the other way round, induce synaesthesia, and make you perceive the smell of roses, the taste of salt, the note C or other things as blue. They could change your perception of color completely, leaving your new perception of colors only as similar to your old ... (read more)

[-][anonymous]00

No offense, but...

Your previous article on the subject got downvoted to -15, and yet you posted a second article anyway? Why did you do that? Did you perform further research to determine whether all of us were confused, or only you? Did you try to determine whether there was any question to answer or not? Did you try to figure out why it seemed like there was a question to answer?

I don't know you very well at all, but it appears that you're an intelligent and useful person. I'm guessing that seeing the response to your articles will be leaving you disappo... (read more)

0Mitchell_Porter
Thank you for the concern, but things have been fairly mellow this time around anyway. When I was a teenager, I thought about the mind as people here do, at least some of the time. I was happy to think of consciousness as something like a video camera aimed at its own output. But I know that by the time I was 20, I was thinking differently, and I do not expect to ever turn back.

It's clear to me that computer science and mathematical physics only address a subset of the world's ontology, and that the reductionisms we have consist at best of partial descriptions, and at worst of misidentifications. Also, in the study of phenomenology, especially Husserl's transcendental phenomenology, I've had a glimpse of how to think rigorously about the rest of ontology. This is my larger agenda.

The problem of the Singularity is being approached within the existing scientific ontology, which is incomplete, and the solutions being developed, like CEV and TDT, are also stated in terms of that ontology. To really know what you're doing, when attempting to initiate a Friendly Singularity, you'd need to understand those solutions, or their analogues, in terms of the true ontology. But to do that requires knowledge of the true ontology.

So, while trying to figure out a better ontology, I have an interest in understanding the thought processes of people who are satisfied with the existing one, because such people dominate the Singularity enterprise. Ideally I'd be able to provoke some sense of philosophical crisis and inadequacy, but obviously that isn't happening. However, I think there has been minor progress.

I intend to let the current discussion wind down - to reply where there's more to be said, but not to get into "Yes it is, no it isn't" exchanges - and to get on with the larger enterprise, once it's over. These discussions have all already occurred, at a higher level of sophistication on all sides, in the philosophical literature, and I should relocate the ontological compon
0timtyler
Not a karma junkie maybe?

Honestly, I read this:

Someone else can do the metalevel analysis and extract the rationality lessons.

And noticed that the post is currently rated at -2. All signs are telling me to not bother reading this post. I probably will anyway, but I felt like reminding my future self why the karma system is here. :P

EDIT:

Color was an issue last time.

Where is "last time"?

4Paul Crowley
The thing is, Mitchell Porter is clearly a very intelligent and thoughtful person, who seems to be sinking huge amounts of his cognitive resources into this pointless, meaningless, doomed project. If we could persuade him of the futility and folly of it, it would probably be worth it.
5byrnema
On the other hand, this idea of qualia -- whatever it is actually about -- is a sticking point for the dualists. We should try to understand what they're talking about instead of just asserting they're not talking about anything. If we can look at dualist arguments and identify the exact location of our different thinking, then we own the argument and have a chance of explaining it to them. If we only understand the problem on the level that "well, we understand the reductionist view and it doesn't present any problems about qualia" then we don't actually understand anything about dualism. Otherwise the message is: dualists just need to become reductionists in order to get over their qualia problem. Personally, I can't relate to dualism either and I am curious about why I can't.
3Paul Crowley
Consciousness Explained does try to explain why people have the idea of qualia. The next post Porter needs to do on this is one explicitly addressing the position Dennett sets out in Consciousness Explained. That position is certainly popular enough here on LW that I don't see how we're going to have a useful discussion until he makes that post. I'm disappointed that that wasn't the conclusion he drew from the previous discussion.
0whpearson
That is going to be a while; he has dropped to 26 karma and is not a frequent poster.
0MrHen
Fair enough. I did end up reading the post but was confused. I got the feeling I was jumping into the middle of a topic/conversation and missed all of the setup. I will read the link from RobinZ and see if it fills in the gaps. Although, one clarification would be nifty: I am assuming that the discussion about Color has really little to do with Color itself and more to do with the representation of Colorness in our "head", hence the whole topic of dualism. Am I even close?

EDIT: Actually, thinking more about what you said, I find your comment extremely valuable. Not so much in that I feel I should persuade anyone of anything, but more in that there are more reasons to read posts than I was initially considering. :P
3RobinZ
Last time was "How to think like a quantum monadologist". Having read both, I consider this one superior, thanks to it containing less substance to hold its confusions. Edit: See SilasBarta's links.

Thought I accounted for aboutness already, in The Simple Truth. Please explain what aspect of aboutness I failed to account for here.

5Tyrrell_McAllister
That is a 6,777 word dialogue that covers many things. Can you summarize the part that is an account of aboutness specifically? Skimming it, you seem to me to be saying that a physical system A is about a physical system B if each state that B is in (up to some equivalence relation) causes A to be in a distinct state (up to some equivalence relation). Hence, the pebbles in the bucket are "about" the sheep in the field because the number of sheep in the field causes the number of pebbles in the bucket to take on a certain value. I write that summary knowing that it probably misses something crucial in your account. As I say, I only skimmed the essay, trying to skip jokes and blatant caricatures (e.g., when your foil says, "Now, I’d like to move on to the issue of how logic kills cute baby seals -"). My summary is just to give you a launching point from which to correct potential misunderstandings, should you care to.
3Eliezer Yudkowsky
The whole dialogue is targeted specifically at decomposing the mysteriously opaque concepts of "truth" and "semantics" and "aboutness" for people who are having trouble with it. I'm not sure there's a part I could slice off for this question, given that someone is asking the question at all. Maybe I'd ask, "In what sense are the pebbles not about the sheep? If the pebbles are about the sheep, in what sense is this at all mysterious?" I make no claims about aboutness. Rather, I understand how the pebble-and-bucket system works. If you want to claim that there is a thing called "aboutness" which remains unresolved, it's up to you to define it.
6Tyrrell_McAllister
Then, to call this an "account of aboutness", you should explain what it is about the human mind that makes it feel as though there is this thing called "aboutness" that feels so mysterious to so many. As you put so well here: "Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument." If you did this in your essay, it was too dispersed for me to see it as I skimmed. What I saw was a caricature of the rationalizations people use to justify their beliefs. I didn't see the origin of the intuitions standing behind their beliefs.
1thomblake
Yes, I think this was the part that was missing in the initial reply.
3SilasBarta
Agree with Tyrrell_McAllister. You need to be a lot more specific when you make a claim like this.
2MatthewB
I see that Dr. Searle stopped by and loaned you some Special Causal Powers™ for your bucket(s) in that story. I shall have to make a note of this post to use in my Searle work.
0Mitchell_Porter
The story only addresses how representation works in a simple mechanism exterior to a mind (i.e., the pebble method works because the number of pebbles is made to track the number of sheep). The position one usually takes in these matters is that the semantics of artefacts (like the meaning of a sound) are contingent and dependent upon convention, but that the semantics of minds (like the subject matter of a thought) are intrinsic to their nature. It is easy to see that the semantics of the pebbles has some element of contingency - they might have been counting elephants rather than sheep. It is also easy to see that the semantics of the pebbles derives from the shepherd's purposes and actions. So there is no challenge here to the usual position, stated above. But what you don't address is the semantics of minds. Do you agree with the distinction between intrinsic and mind-dependent representation? If so, how does intrinsic representation come about? What is it about the physical aspect of a thought that connects it to its meaning?
4Eliezer Yudkowsky
What's in the shepherd that's not in the pebbles, exactly? Let's move to the automated pebble-tracking system where a curtain twitches as the sheep passes, causing a pebble to fall into the bucket (the fabric is called Sensory Modality, from a company called Natural Selections). What is in the shepherd that is not in the automated, curtain-based sheep-tracking system?
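For concreteness, here is a toy sketch of the automated curtain-and-bucket counter described above; the class and method names are illustrative assumptions, not anything from the original essay:

```python
class AutomatedSheepTracker:
    """Toy model of the curtain-and-bucket system: each twitch of the curtain
    (a sheep brushing past) drops one pebble into the bucket, so the number of
    pebbles covaries with the number of sheep that have passed."""

    def __init__(self) -> None:
        self.pebbles_in_bucket = 0

    def curtain_twitch(self) -> None:
        # A sheep passing the curtain causes one pebble to fall into the bucket.
        self.pebbles_in_bucket += 1

tracker = AutomatedSheepTracker()
for _ in range(7):                # seven sheep pass the curtain
    tracker.curtain_twitch()
print(tracker.pebbles_in_bucket)  # 7
```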
1Mitchell_Porter
Do you agree that there is a phenomenon of subjective meaning to be accounted for? The question of meaning does not originate with problems like "why does pebble-tracking work?". It arises because we attribute semantic content both to certain artefacts and to our own mental states. If we view the number of pebbles as representing the number of sheep, this is possible because of the causal structure, but it actually occurs because of "human interpretation". Now if we go to mental states themselves, do you propose to explain their representational semantics in exactly the same way – human interpretation; which creates foundationless circularity – or do you propose to explain the semantics of human thought in some other way – and if so in what way – or will you deny that human thoughts have a semantics at all?
1Tyrrell_McAllister
Even as a reductionist, I'll point out that the shepherd seems to have something in him that singles out the sheep specifically, as opposed to all other possible referents. The sheep-tracking system, in contrast, could just as well be counting sheep-noses instead of sheep. Or it could be counting sheep-passings - not the sheep themselves, but rather just their act of passing past the fabric. It's only when the shepherd is added to the system that the sheep-out-in-the-field get specified as the referents of the pebbles.

ETA: To expand a bit: The issue I raise above is basically Quine's indeterminacy of translation problem. One's initial impulse might be to say that you just need "higher resolution". The idea is that the pebble machine just doesn't have a high-enough resolution to differentiate sheep from sheep-passings or sheep-noses, while the shepherd's brain does. This then leads to questions such as: How much resolution is enough to make meaning? Does the machine (without the shepherd) fail to be a referring thing altogether? Or does its "low resolution" just mean that it refers to some big semantic blob that includes sheep, sheep-noses, sheep-passings, etc.?

Personally, I don't think that this is the right approach to take. I think it's better to direct our energy towards resolving our confusion surrounding the concept of a computation.
[-]JanetK-10

“The local worldview reduces everything to some combination of physics, mathematics, and computer science, with the exact combination depending on the person. I think it is manifestly the case that this does not work for consciousness.” No it doesn't work because you have left out BIOLOGY. You cannot just jump from physics and algorithms to how brains function. Here is the outline of a possible path: 1.We know that consciousness has an important function because it consumes a great deal of energy – that's how evolution works. 2.Animals move – therefore th... (read more)

7RobinZ
Two carriage returns between paragraphs, please - it's hard to read in this format.

You can do a Dennett and deny that anything is really blue.

I'd like to see what he'd do if presented with a blue and a red ball and given a task: "Pick up the blue ball and you'll receive 3^^^3 dollars".

Even though many claim to be confused about these common words, their actual behaviour betrays them. Which raises the question: what is the benefit of this wondering about "blueness"? What does it help anyone actually do?

4RobinZ
I believe you are confused about what Dennett asserts. Quining Qualia would probably be the most obviously relevant essay easily located online, if you want to read him in his own words. If you don't, the key point is that Dennett maintains that qualia, as commonly described, are necessarily:

1. ineffable
2. intrinsic
3. private
4. directly or immediately apprehensible in consciousness

...and that nothing actually exists with these properties. You see blue things, but there is no pure experience of blue behind your seeing blue things.

Edit: Allow me to emphasize that I do not consider the confusion to reflect poorly upon yourself - yours was a reasonable reading of Mitchell_Porter's characterization of Dennett's remarks. A better wording for the opening of my reply would be: "I think the quote doesn't reflect what Dennett believes."
4sharpneli
It seems I was wrong about Dennett's claims and misinterpreted the relevant sentence. However, the original question remains and can be rephrased: what predictions follow from a world containing some intrinsic blueness? The topmost cached thought I have is that this is exactly the same kind of confusion as presented in Excluding the Supernatural. Basically, qualia are assumed to be an ontologically basic thing, instead of a neural firing pattern. The big question is therefore (as presented in this thread already in various forms): what would you predict if you found yourself in a world with distinct blueness, compared to a world without?
1RobinZ
Ah, I apologize - I had not realized you had the other point in your comment. That strikes me as a key angle, and one of the reasons why I upvoted ciphergoth's question.

Can't we just define "blue" or "blueness" or what have you to be an equivalence class and be done with it?

4thomblake
Well we wouldn't want to "just define" a word that's supposed to refer to something in the world, without figuring out what that thing is yet.
-1Sniffnoy
OK, but it's not too hard to describe what makes a thing blue. The only obvious sticking point is whose standard of blueness we're using. Perhaps a "blueness function" would be better than an equivalence class of all things blue, then. Regardless, determining whether or not a given thing is blue doesn't seem to be what the OP is asking about; I'm suggesting that this suffices.
-1PhilGoetz
I think you're missing the point of the post. The problem is about blue qualia, not the category blue. There would still be blue in a world of p-zombies, but not blue qualia.