Essential Background: Dissolving the Question
How could we fully explain the difference between red and green to a colorblind person?
Well, we could of course draw the analogy between colors of the spectrum and tones of sound; have them learn which objects are typically green and which are typically red (or better yet, give them a video camera with a red filter to look through); explain many of the political, cultural and emotional associations of red and green, and so forth... but the actual difference between our experience of redness and our experience of greenness seems much harder to convey. If we focus on that aspect of experience, we end up with the classic philosophical concept of qualia, and the famous thought experiment known as Mary's Room[1].
Mary is a brilliant neuroscientist who has been colorblind from birth (due to a retina problem; her visual cortex would work normally if it were given the color input). She’s an expert on the electromagnetic spectrum, optics, and the science of color vision. We can postulate, since this is a thought experiment, that she knows and fully understands every physical fact involved in color vision; she knows precisely what happens, on various levels, when the human eye sees red (and the optic nerve transmits particular types of signals, and the visual cortex processes these signals, etc).
One day, Mary gets an operation that fixes her retinas, so that she finally sees in color for the first time. And when she wakes up, she looks at an apple and exclaims, "Oh! So that's what red actually looks like."[2]
Now, this exclamation poses a challenge to any physical reductionist account of subjective experience. For if the qualia of seeing red could be reduced to a collection of basic facts about the physical world, then Mary would have learned those facts earlier and wouldn't learn anything extra now. But of course it seems that she really does learn something when she sees red for the first time. This is not merely the god-of-the-gaps argument that we haven't yet found a full reductionist explanation of subjective experience, but an intuitive proof that no such explanation would be complete.
In academic philosophy, the argument over Mary's Room remains unsettled to this day (though it has an interesting history, including a change of mind on the part of its originator). If we set aside the topic of subjective experience, the arguments for reductionism appear overwhelming; so why does this objection, in a domain where our ignorance is so vast[3], seem so difficult for reductionists to convincingly reject?
Veterans of this blog will know where I'm going: a question like this needs to be dissolved, not merely answered.
That is, rather than just rehashing the philosophical arguments about whether and in what sense qualia exist[4], as plenty of philosophers have done without reaching consensus, we might instead ask where our thoughts about qualia come from, and search for a simplified version of the cognitive algorithm behind (our expectation of) Mary's reaction. The great thing about this alternative query is that it's likely to actually have an answer, and that this answer can help us in our thinking about the original question.
Eliezer introduced this approach in his discussion of classical definitional disputes and later on in the sequence on free will, and (independently, it seems) Gary Drescher relied on it in his excellent book Good and Real to account for a number of apparent paradoxes, but it seems that academic philosophers haven't yet taken to the idea. Essentially, it brings to the philosophy of mind an approach that is standard in the mathematical sciences: if there's a phenomenon we don't understand, it usually helps to find a simpler model that exhibits the same phenomenon, and figure out how exactly it arises in that model.
Modeling Qualia
Our goal, then, is to build a model of a mind that would have an analogous reaction for a genuine reason[5] when placed in a scenario like Mary's Room. We don't need this model to encapsulate the full structure of human subjective experience, just enough to see where the Mary's Room argument pulls a sleight of hand.
What kinds of features might our model require in order to qualify? Since the argument relies on the notions of learning and direct experience, we will certainly need to incorporate these. Another feature, which may not seem immediately relevant but which I argue is vital, is that our model must designate some smaller part of itself as the "conscious" mind, and have much of its activity take place outside of that part.
Now, why should the conscious/unconscious divide matter to the experience of qualia? Firstly, we note that our qualia feel ineffable to us: that is, it seems like we know their nature very well but could never adequately communicate or articulate it. If we're thinking like a cognitive scientist, we might hypothesize that an unconscious part of the mind knows something more fully while the conscious mind, better suited to using language, lacks access to the full knowledge[6].
Secondly, there's an interesting pattern to our intuitions about qualia: we only get this feeling of ineffability about mental events that we're conscious of, but which are mostly processed subconsciously. For example, we don't experience the feeling of ineffability for something like counting, which happens consciously (above a threshold of five or six). If Mary had never counted more than 100 objects before, and today she counted 113 sheep in a field, we wouldn't expect her to exclaim "Oh, so that's what 113 looks like!"
In the other direction, there's a lot of unconscious processing involved in digestion, but unless we get sick, the intermediate steps don't generally rise to conscious awareness. If Mary had never had pineapple before, she might well extol the qualia of its taste, but not that of its passage through her small intestine. You could think of these as hidden qualia, perhaps, but it doesn't intuitively feel like there's something extra to be explained the way there is with redness.
Of course, there are plenty of other features we might nominate for inclusion in our model, but as it turns out, we can get a long way with just these two. In the next post, I'll introduce Martha, a simple model of a learning mind with a conscious/unconscious distinction, and in the third post I'll show how Martha reacts in the situation of Mary's Room, and how this reaction arises in a non-mysterious way. Even without claiming that Martha is a good analogue of the human mind, this will suffice to show why Mary's Room is not a logically valid argument against reductionism, since if it were then it would equally apply to Martha. And if we start to see a bit of ourselves in Martha after all, so much the better for our understanding of qualia...
TO BE CONTINUED
Disclaimer
One could reasonably ask what makes my attempt special on such a well-argued topic, given that I'm not credentialed as a philosopher. First, I'd reiterate that academic philosophers really haven't started to use the concept of dissolving a question; I don't think Daniel Dennett, for instance, ever explored this train of thought. Second, of those who do try to map cognitive algorithms within the philosophy of mind, Eliezer hasn't tackled qualia in this way, while Gary Drescher gives them short shrift in Good and Real. (The latter essentially makes Dennett's argument that with enough self-knowledge qualia wouldn't be ineffable, but to my mind this fails to really dissolve the question; see my footnote 4.)
Footnotes:
1. The argument is called "Mary’s Room" because the original version (due to Frank Jackson) posited that Mary had perfectly normal vision but happened to be raised and educated in a perfectly grayscale environment, and one day stepped out into the colorful world like Dorothy in The Wizard of Oz. I prefer the more plausible and philosophically equivalent variant discussed above, although it drifts away from the etymology of the argument’s name.
2. Ironically, it was a green apple rather than a red one, but Mary soon realized and rectified her error. The point stands.
3. In general, an important rationalist heuristic is to not draw far-reaching conclusions from an intuitively plausible argument about a subject (like subjective experience) which you find extremely confusing.
4. Before we move on, though, one key reductionist reply to Mary’s Room is that either qualia have physical effects (like causing Mary to say "Oh!") or they don't. If they do, then either they reduce to ordinary physics or you could expect to find violations of physical law in the human brain, which few modern philosophers would dare to bet on. And if they don't have any physical effects, then somehow whatever causes her to say "Oh!" has nothing to do with her actual experience of redness, which is an exceptionally weird stance if you ponder it for a moment; read the zombie sequence if you're curious.
Furthermore, one could object (as Dennett does) that Mary's Room, like Searle's Chinese Room, plays sleight of hand with levels of knowledge that are impossible for a human, and that an agent who could genuinely handle such massive quantities of information wouldn't learn anything new when finally having the experience. But to me this is an unsatisfying objection, because we don't expect to see the effect of the experience diminish significantly as we increase her level of understanding within human bounds; and at most, this objection provides a plausible escape from the argument rather than a refutation.
5. (and not, for instance, because we programmed that specific reaction in directly)
6. Indeed, the vast majority of visual processing (estimating distances, distinguishing objects, even identifying colors) is done subconsciously; that's why knowing that something is an optical illusion doesn't make you stop seeing the illusion. Steven Pinker's How the Mind Works contains a treasure trove of examples on this subject.