Previous articles: Personal research update, Does functionalism imply dualism?, State your physical account of experienced color.
In phenomenology, there is a name for the world of experience, the "lifeworld". The lifeworld is the place where you exist, where time flows, and where things are actually green. One of the themes of the later work of Edmund Husserl is that a scientific image of the real world has been constructed, on the basis of which it is denied that various phenomena of the lifeworld exist anywhere, at any level of reality.
When I asked, in the previous post, for a few opinions about what color is and how it relates to the world according to current science, I was trying to gauge just how bad the eclipse of the lifeworld by theoretical conceptions is, among the readers of this site. I'd say there is a problem, but it's a problem that might be solved by patient discussion.
Someone called Automaton has given us a clear statement of the extreme position: nothing is actually green at any level of reality; even green experiences don't involve the existence of anything that is actually green; there is no green in reality, there is only "experience of green" which is not itself green. I see other responses which are just a step or two away from this extreme, but they don't deny the existence of actual color with that degree of unambiguity.
A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green. Which returns us to the dilemma: either "experiences" exist and part of them is actually green, or you have to say that nothing exists, in any sense, at any level of reality, that is actually green. Either the lifeworld exists somewhere in reality, or you must assert, as does the philosopher quoted by Automaton, that all that exists are brain processes and words. Your color sensations aren't really there, you're "having a sensation" without there being a sensation in reality.
What about the other responses? kilobug seems to think that pi actually exists inside a computer calculating the digits of pi, and that this isn't dualist. Manfred thinks that "keeping definitions and referents distinct" would somehow answer the question of where in reality the actual shades of green are. drethelin says "The universe does not work how it feels to us it works" without explaining in physical terms what these feelings about reality are, and whether any of them is actually green. pedanterrific asks why wrangle about color rather than some other property (the answer is that the case of color makes this sort of problem as obvious as it ever gets). RomeoStevens suggests I look into Jeff Hawkins. Hawkins mentions qualia once in his book "On Intelligence", where he speculates about what sort of neural encoding might be the physical correlate of a color experience; but he doesn't say how or whether anything manages to be actually colored.
amcknight asks which of 9 theories of color listed in the SEP article on that subject I'm talking about. If you go a few paragraphs back from the list of 9 theories, you will see references to "color as it is in experience" or "color as a subjective quality". That's the type of color I'm talking about. The 9 theories are all ways of talking about "color as in physical objects", and focus on the properties of the external stimuli which cause a color sensation. The article gets around to talking about actual color, subjective or "phenomenal" color, only at the end.
Richard Kennaway comes closest to my position; he calls it an apparently impossible situation which we are actually living. I wouldn't put it quite like that; the only reason to call it impossible is if you are completely invested in an ontology lacking the so-called secondary qualities; if you aren't, it's just a problem to solve, not a paradox. But Richard comes closest (though who knows what Will Newsome is thinking). LW user "scientism" bites a different bullet to the eliminativists, and says colors are real and are properties of the external objects. That gets a point for realism, but it doesn't explain color in a dream or a hallucination.
Changing people's minds on this subject is an uphill battle, but people here are willing to talk, and most of these subjects have already been discussed for decades. By drawing on the work of others, there's ample opportunity to dissolve, not the problem itself, but the false solutions which only obscure the real problem; preferably before the future Rationality Institute starts mass-producing people who have the vice of quale-blindness as well as the virtues of rationality. Some of those people will go on to work on Friendly AI. So it's highly desirable that someone should do this. However, that would require time that I no longer have.
In this series of posts, I certainly didn't set out to focus on the issue of color. The first post is all about Friendly AI, the ontology of consciousness, and a hypothetical future discipline of quantum neurobiology. It may still be unclear why I think evidence for quantum computing in the brain could help with the ontological problems of consciousness. I feel that the brief discussion this week has produced some minor progress in explaining myself, which needs to be consolidated into something better. But see my remarks here about being able to collapse the dualistic distinction between mental and physical ontology in a tensor network ontology; also earlier remarks here about mathematically representing the phenomenological ontology of consciousness. I don't consider myself dogmatic about what the answer is, just about the inadequacy of all existing solutions, though I respect my own ideas enough to want to pursue them, and to believe that doing so will be usefully instructive, even if they are wrong.
However, my time is up. In real life, my ability to continue even at this inadequate level hangs by a thread. I don't mean that I'm suicidal, I mean that I can't eat air. I spent a year getting to this level in physics, so I could perform this task. I have considerable momentum now, but it will go to waste unless I can keep going for a little longer - a few weeks, maybe a few months. That should be enough time to write something up that contains a result of genuine substance, and/or enough time to secure an economic basis for my existence in real life that permits me to keep going. I won't go into detail here about how slim my resources really are, or how adverse my conditions, but it has been the effort that you would want from someone who has important contributions to make, and nowhere to turn for direct assistance.[*] I've done what I can, these posts are the end of it, and the next few days will decide whether I can keep going, or whether I have to shut down my brain once again.
So, one final remark. Asking for donations doesn't seem to work yet. So what if I promise to pay you back? Then the only cost you bear is the opportunity cost and the slight risk of default. Ten years ago, Eliezer lent me the airfare to Atlanta for a few days of brainstorming. It took a while, but he did get that money back. I honor my commitments and this one is highly public. This really is the biggest bargain in existential risk mitigation and conceptual boundary-breaking that you'll ever get: not even a gift, just a loan is required. If you want to discuss a deal, don't do it here, but mail me at mitchtemporarily@hotmail.com. One person might be enough to make the difference.
[*] Really, I can't say that, that's an emotional statement. There has been lots of assistance, large and small, from people in my life. But it's been a struggle conducted at subsistence level the whole way.
ETA 6 Feb: I get to keep going.
An ontology is a theory about what it is that exists. I have to speak of "physical ontology" and not just of physics, because so many physicists take an anti-ontological or positivistic attitude, and say that physical theory just has to produce numbers which match the numbers coming from experiment; it doesn't have to be a theory about what it is that exists. And by standard physical ontology I mean one which is based on what Galileo called primary properties, possibly with some admixture of new concepts from contemporary mathematics, but definitely excluding the so-called secondary properties.
So a standard physical ontology may include time, space, and objects in space, and the objects will have size, shape, and location, and then they may have a variety of abstract quantitative properties on top of that, but they don't have color, sound, or any of those "feels" which get filed under qualia.
Asking "where is the experienced color in the physical brain?" exposes the hidden problem here. We know from experience that reality includes things that are actually green, namely certain parts of experiences. If we insist that everything is physical, then that means that experiences and their parts are also physical entities of some kind. And if the actually green part of an experience is a physical entity, then there must be a physical entity which is actually green.
For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location - the property of always being at some point in space - and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn't actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles - e.g. "location of center of mass" or "having a part at location x0 and another part at x1". We can even extend to counterfactual properties, e.g. "the property of flying apart if a heavy third particle were to fly past on a certain trajectory".
To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that's a little absurd. It is exactly like saying that if you count through the natural numbers, all of the numbers after 5 x 10^37 are blue. The properties that are intrinsically available in standard physical ontology are much like arithmetic properties, but with a few additional "physical" predicates that can also enter into the definition.
I presume that most modern people don't consider linguistic behaviorism an adequate account of anything to do with consciousness. Linguistic behaviorism is where you say there are no "minds" or "psychological states", there are just bodies that speak. It's the classic case of accounting for experience by only accounting for what people say about experience.
Cognitive theories of consciousness are considered an advance on this because they introduce a causal model with highly structured internal states which have a structural similarity to conscious states. We see the capacity of neurons to encode information e.g. in spiking rates, we see that there are regions of cortex to which visual input is mapped point by point, and so we say, maybe the visual experience of a field of color is the same thing as a sheet of visual neurons spiking at different rates.
But I claim they can't be the same thing because of the ontological mismatch. A visual experience contains actual green, a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior which has a structural and causal role rather close to the structural and causal role played by actual greenness, as inferred from psychology and phenomenology.
Here I say there are two choices. Either you say that on top of the primary properties out of which standard physical ontology is built, there are secondary properties, like actual green, which are the building blocks of conscious experiences, and you say that the experiences dualistically accompany the causally isomorphic physical processes. Or you say that somewhere there is a physical object which is genuinely identical to the conscious experience - it is the experience - and you say that these neuronal sheets which behave like the parts of an experience still aren't the thing itself, they are just another stage in the processing of input (think of the many anatomical stages to the pathways that begin at the optic nerve and lead onward into the brain).
There are two peculiarities to this second option. First, haven't we already argued that the base properties available in physical ontology, considered either singly or in conjunction, just can't be identified with the constituent properties of conscious states? How does positing this new object help, if it is indeed a physical object? And second, doesn't it sound like a soul - something that's not a network of neurons, but a single thing; the single place where the whole experience is localized?
I propose to deal with the second peculiarity by employing a quantum ontology in which entanglement is seen as creating complex single objects (and not just correlated behaviors in several objects which remain ontologically distinct), and with the first peculiarity by saying that, yes, the properties which make up a conscious state are elementary physical properties, and noting that we know nothing about the intrinsic character of elementary physical properties, only their causal and structural relations to each other (so there's no reason why the elementary internal properties of an entangled system can't literally and directly be the qualia). I take the structure of a conscious state and say, that is the structure of some complex but elementary entity - not the structure of a collective behavior (as when we talk about the state of a neuron as "firing" or "not firing", a description which passes over the intricate microscopic detail of the exact detailed state).
The rationale of this move is that identifying the conscious state machine with a state machine based on averaged collective behaviors is really what leads to dualism. If we are instead dealing with the states of an entity which is complex but "fundamental", in the sense of being defined in terms of the bottom level of physical description (e.g. the Hilbert spaces of these entangled systems), then it's not a virtual machine.
Maybe that's the key concept in order to get this across to computer scientists: the idea is that consciousness is not a virtual state machine, it's a state machine at the "bottom level of implementation". If consciousness is a virtual state machine - so I argue - then you have dualism, because the states of the state machine of consciousness have to have a reality which the states of a virtual machine don't intrinsically have.
If you are just making a causal model of something, there's no necessity for the states of your model to correspond to anything more than averaged behaviors and averaged properties of the real system you're modeling. But consciousness isn't just a model or a posited concept, it is a thing in itself, a definite reality. States of consciousness must exist in the true ontology, they can't just be heuristic approximate concepts. So the choice comes down to: conscious states are dualistically correlated with the states of a virtual state machine, or conscious states are the physical states of some complex but elementary physical entity. I take the latter option and posit that it is some entangled subsystem of the brain with a large but finite number of elementary degrees of freedom. This would be the real physical locus of consciousness, the self, and you; it's the "Cartesian theater" where diverse sensory information all shows up within the same conscious experience, and it is the locus of conscious agency, the internally generated aspect of its state transitions being what we experience as will.
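The sense in which a virtual state machine's states are "averaged behaviors" rather than things in themselves can be sketched in code. The following toy example (my own illustration, not from the argument above; the majority-vote coarse-graining is an arbitrary choice) defines a virtual one-bit state as a coarse-graining of an eight-bit substrate, and counts how many distinct substrate microstates realize each virtual state:

```python
# Toy illustration: a "virtual" state is a coarse-graining of substrate
# states. Many distinct microstates realize the same virtual state, which
# is the sense in which virtual states are averaged behaviors rather than
# elementary things in themselves.

from itertools import product

def virtual_state(substrate):
    """Coarse-grain an 8-bit substrate state into a 1-bit virtual state:
    the virtual machine only 'sees' whether a majority of bits are set."""
    return int(sum(substrate) > len(substrate) / 2)

# Count how many substrate microstates realize each virtual state.
counts = {0: 0, 1: 0}
for substrate in product([0, 1], repeat=8):
    counts[virtual_state(substrate)] += 1

print(counts)  # every virtual state is multiply realized
```

Nothing at the substrate level *is* the virtual state; the virtual state exists only relative to the coarse-graining function. That multiple realizability is exactly what the argument above says forces a choice between dualism and identifying conscious states with bottom-level physical states.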
(That is, the experience of willing is awareness of a certain type of causality taking place. I'm not saying that the will is a quale; the will is just the self in its causal role, and there are "qualia of the will" which constitute the experience of having a will, and they result from reflective awareness of the self's causal role and causal power... Or at least, these are my private speculations.)
I'll guess that my prose got a little difficult again towards the end, but that's how it will be when we try to discuss consciousness in itself as an ontological entity. But hopefully the road towards the dilemma between dualism and quantum monism is a little clearer now.