Previous articles: Personal research update, Does functionalism imply dualism?, State your physical account of experienced color.
In phenomenology, there is a name for the world of experience, the "lifeworld". The lifeworld is the place where you exist, where time flows, and where things are actually green. One of the themes of the later work of Edmund Husserl is that a scientific image of the real world has been constructed, on the basis of which it is denied that various phenomena of the lifeworld exist anywhere, at any level of reality.
When I asked, in the previous post, for a few opinions about what color is and how it relates to the world according to current science, I was trying to gauge just how bad the eclipse of the lifeworld by theoretical conceptions is, among the readers of this site. I'd say there is a problem, but it's a problem that might be solved by patient discussion.
Someone called Automaton has given us a clear statement of the extreme position: nothing is actually green at any level of reality; even green experiences don't involve the existence of anything that is actually green; there is no green in reality, there is only "experience of green" which is not itself green. I see other responses which are just a step or two away from this extreme, but they don't deny the existence of actual color with that degree of unambiguity.
A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green. Which returns us to the dilemma: either "experiences" exist and part of them is actually green, or you have to say that nothing exists, in any sense, at any level of reality, that is actually green. Either the lifeworld exists somewhere in reality, or you must assert, as does the philosopher quoted by Automaton, that all that exists are brain processes and words. Your color sensations aren't really there, you're "having a sensation" without there being a sensation in reality.
What about the other responses? kilobug seems to think that pi actually exists inside a computer calculating the digits of pi, and that this isn't dualist. Manfred thinks that "keeping definitions and referents distinct" would somehow answer the question of where in reality the actual shades of green are. drethelin says "The universe does not work how it feels to us it works" without explaining in physical terms what these feelings about reality are, and whether any of them is actually green. pedanterrific asks why wrangle about color rather than some other property (the answer is that the case of color makes this sort of problem as obvious as it ever gets). RomeoStevens suggests I look into Jeff Hawkins. Hawkins mentions qualia once in his book "On Intelligence", where he speculates about what sort of neural encoding might be the physical correlate of a color experience; but he doesn't say how or whether anything manages to be actually colored.
amcknight asks which of 9 theories of color listed in the SEP article on that subject I'm talking about. If you go a few paragraphs back from the list of 9 theories, you will see references to "color as it is in experience" or "color as a subjective quality". That's the type of color I'm talking about. The 9 theories are all ways of talking about "color as in physical objects", and focus on the properties of the external stimuli which cause a color sensation. The article gets around to talking about actual color, subjective or "phenomenal" color, only at the end.
Richard Kennaway comes closest to my position; he calls it an apparently impossible situation which we are actually living. I wouldn't put it quite like that; the only reason to call it impossible is if you are completely invested in an ontology lacking the so-called secondary qualities; if you aren't, it's just a problem to solve, not a paradox. But Richard comes closest (though who knows what Will Newsome is thinking). LW user "scientism" bites a different bullet to the eliminativists, and says colors are real and are properties of the external objects. That gets a point for realism, but it doesn't explain color in a dream or a hallucination.
Changing people's minds on this subject is an uphill battle, but people here are willing to talk, and most of these subjects have already been discussed for decades. By drawing on the work of others, there is ample opportunity to dissolve, not the problem itself, but the false solutions which only obscure it; preferably before the future Rationality Institute starts mass-producing people who have the vice of quale-blindness as well as the virtues of rationality. Some of those people will go on to work on Friendly AI. So it's highly desirable that someone should do this. However, that would require time that I no longer have.
In this series of posts, I certainly didn't set out to focus on the issue of color. The first post is all about Friendly AI, the ontology of consciousness, and a hypothetical future discipline of quantum neurobiology. It may still be unclear why I think evidence for quantum computing in the brain could help with the ontological problems of consciousness. I feel that the brief discussion this week has produced some minor progress in explaining myself, which needs to be consolidated into something better. But see my remarks here about being able to collapse the dualistic distinction between mental and physical ontology in a tensor network ontology; also earlier remarks here about mathematically representing the phenomenological ontology of consciousness. I don't consider myself dogmatic about what the answer is, just about the inadequacy of all existing solutions, though I respect my own ideas enough to want to pursue them, and to believe that doing so will be usefully instructive, even if they are wrong.
However, my time is up. In real life, my ability to continue even at this inadequate level hangs by a thread. I don't mean that I'm suicidal, I mean that I can't eat air. I spent a year getting to this level in physics, so I could perform this task. I have considerable momentum now, but it will go to waste unless I can keep going for a little longer - a few weeks, maybe a few months. That should be enough time to write something up that contains a result of genuine substance, and/or enough time to secure an economic basis for my existence in real life that permits me to keep going. I won't go into detail here about how slim my resources really are, or how adverse my conditions, but it has been the effort that you would want from someone who has important contributions to make, and nowhere to turn for direct assistance.[*] I've done what I can, these posts are the end of it, and the next few days will decide whether I can keep going, or whether I have to shut down my brain once again.
So, one final remark. Asking for donations doesn't seem to work yet. So what if I promise to pay you back? Then the only cost you bear is the opportunity cost and the slight risk of default. Ten years ago, Eliezer lent me the airfare to Atlanta for a few days of brainstorming. It took a while, but he did get that money back. I honor my commitments and this one is highly public. This really is the biggest bargain in existential risk mitigation and conceptual boundary-breaking that you'll ever get: not even a gift, just a loan is required. If you want to discuss a deal, don't do it here, but mail me at mitchtemporarily@hotmail.com. One person might be enough to make the difference.
[*] Really, I can't say that; that's an emotional statement. There has been lots of assistance, large and small, from people in my life. But it's been a struggle conducted at subsistence level the whole way.
ETA 6 Feb: I get to keep going.
Colors are just the most vivid example. Smells and feelings are definitely part of consciousness - that is, part of the same phenomenal gestalt as color - so they are definitely on the same ontological level. A few comments up the thread, I talked about color as a three-dimensional property associated with visual regions. Smell is similarly a sensory quale embedded in a certain way in the overall multimodal sensory gestalt. Feelings are even harder to pin down; they seem to be a complex of bodily sensation, sensations called "moods" that aren't phenomenally associated with a body region, and even some element of willed intentionality. Alertness itself isn't a quale; it's a condition of hyperattentiveness. But it is possible to notice that you are attending intently to things, so alertness is a possible predicate of a reflective judgment made about oneself on the basis of phenomenal evidence. In other words, it's a conceptual posit made as part of a higher-order intentional state.
These discussions are bringing back to me the days when I made a serious attempt to develop a phenomenological ontology. All the zeroth-order objects of an experience were supposed to be part of a "total instantaneous phenomenal state of affairs", and then you had higher-order reflective judgments made on top of that, which themselves could become parts of still-higher-order judgments. Cognitive scientists and AI theorists do talk about intentionality, but only functionally, not phenomenologically. Even philosophers of consciousness sometimes hesitate to say that intentional states are part of consciousness - they're happier to focus on sensation, because it's so obvious, not just that it's there, but that you know it's there.
However, it's also clear, not only that we think, but that we know we are thinking - even if this awareness is partly mediated by a perceptual presentation to oneself of a stream of symbols encoding the thought, such as a subvocalization - and so I definitely say intentionality is part of consciousness, not just sensation. Another way to see this is to notice that we see things as something. There's a "semantics" to perception, the conceptual ingredient in the phenomenal gestalt. Therefore, it's not enough to characterize conscious states as simply a blob of sensory qualia - colors varying across the visual field, other sense-data varying across the other sensory modalities. The whole thing is infused, even at the level of consciousness, with interpretation and conceptual content. How to express this properly - how to state accurately the ontology of this conceptual infusion into the phenomenal - is another delicate issue, though plenty has been written about it, for example in Kant and Husserl.
So everything that is a part of experience is part of the problem. Experiences have structure (for example, the planar structure of a depthless visual field), concepts have logical structure and conditions of application, thoughts also have a combinatorial structure. The key to computational materialism is a structural and causal isomorphism between the structure of conscious and cognitive states, and the structure of physical and computational states. The problem is that the isomorphism can't be an identity if we use ordinary physical ontology or even physically coarse-grained computational states in any ontology.
Empirically, we do not know in any very precise way what the brain locus of consciousness is. It's sort of spread around; the brain contains multiple copies of data. One of the strong reasons for presuming that speculations about the physical correlate of consciousness being an "exact quantum-tensor-factor state machine" rather than a "coarse-grained synapse-and-ion-gate state machine" are bogus and irrelevant, is the presumption that the physical locus of consciousness is already known to be something like the latter. But it isn't; that is just a level of analysis that we happen to be comfortable with. The question is still empirically open, which is one reason why I would hold out hope for a quantum monism, rather than a functionalist dualism, being the answer.