Mercurial comments on Human consciousness as a tractable scientific problem - Less Wrong
I think you're confusing me and Yvain. I'll take that as a compliment, though!
I agree with pretty much everything you've said here - but it's posed as though to stand as an argument against what I think, so I'm a little bit concerned that we're not talking about the same thing. For instance, you say:
I agree, dualism is unnecessary as far as we know. It's hard for me to conceive of a type of evidence that would ever suggest that we need dualism.
However, the existence of qualia does not immediately require dualism. The term "qualia" just points to the experiences we have that currently seem to sit on the "other side" of the hard problem of consciousness with respect to our current empirical knowledge. Presumably, we will eventually find a reductionist answer to the hard problem of consciousness. In the meantime, though, we still need a way of talking about the phenomenon in question. Qualia don't play the same role in this question that vis vitalis did with vitalism; it isn't that we're trying to answer the hard problem by saying "subjective experience is made up of qualia", but instead we're trying to describe how subjective experience presents itself to us. We see red, and this experience of redness seems to have a certain character to it, so we tag it with the descriptor of being the quale of red. The question is, how is it that there is a conscious experience induced by neurons firing in response to stimulation of the optic nerve? We know how visual perception works, but as far as I know, we don't yet understand how the quale of red arises from that. It's a statement of the question, not a phlogiston-class proclamation masquerading as an answer.
Does that clarify where I'm coming from on this? It's not dualism (or at least I'm pretty darn sure it's not!); it's just naming a confusion in as much detail as possible.
Oops, I think you may be right, I'm sorry and/or you're welcome. Heh.
Anyway, oddly enough, I understand the details of your argument, but I don't see the big picture that you're presenting. You reject the proposal that qualia are dualistic in nature, so we're definitely on the same page here. But then you ask,
I agree that this is a hard question (seeing as it hasn't been fully answered yet), but I don't see this question as categorically different from questions such as "how is our blood flow regulated?" or "how does visual perception work in humans?". Presumably, a sleepwalker's brain, or a robot's circuitry, or a zombie's... er... goo or whatever it is zombies have, would implement this functionality in different ways than normal human brains do; and we could tell whether the sleepwalker/robot/zombie implements this functionality or not by talking to them (as you have pointed out in your thought experiment).
So, would you agree that the question "how does consciousness work" is no different from "how does blood flow work"? If not (as I suspect is the case), then what's the difference?
By the way, when people talk about qualia, they usually claim that we all share the same ones. Thus, for example, when I see something that I experience as "red", and you see something else (or maybe even the same object) that you experience as "red", we are both using the same exact quale to experience that stuff with. There's pretty much nowhere to go from this premise other than toward dualism, which is why I'd originally assumed you were heading down that route. But now I think that you'd reject the premise just as I do -- is that correct?
Ah, then perhaps I'm more confused than I thought! I still haven't identified the source of my confusion, though.
Er... Yes and no. I agree that eventually we should be able to find an answer that sounds as reduced as an answer to "How does blood flow work?" does. But from where we currently stand, they seem to be really, incredibly fundamentally different questions - as long as you understand the question "How does consciousness work?" to be in the hard sense rather than in the easy one.
I think you get near to the crux of the matter in this statement:
Yes, presumably that's the case, and eventually we'll nail that down. But from what we can currently tell, there doesn't seem to be even an in-principle plausible mechanism for adding qualia to a computer's way of processing things. A computer receives input, does some well-defined manipulations, and offers output. Where do qualia come into play? How is it we get the subjective impression of there being a "someone" who is "watching" what's going on in the Cartesian theater? The very concept is internally inconsistent (e.g., how does the homunculus experience?), but the point is the same: there doesn't seem to be any plausible way that we have currently thought of to get from neurons firing to qualia.
I guess the categorical difference is that when asking about blood flow, there's someone who experiences the question and the data and the subsequent answer; but when asking about consciousness, it's the very process of being able to understand the question in the first place that we're asking about. I'm not sure that's entirely equivalent to the hard problem, though.
You might find it helpful to read the Wikipedia page on the hard problem. That might help to explain some of the nuances better than I've been able to thus far. (In particular, it helps to point out that by "hard problem" I don't mean "a challenging problem" but rather "a problem whose potential to be answered even in theory seems in question.")
Again, I think that was Yvain.
Ok, that makes sense. I understand now that this is what you believe, but I still don't see why. You say:
This, to me, sounds like a circular argument at worst, and a circular analogy (if there is such a thing) at best. You are trying to illustrate your belief that qualia are categorically different from visual perception (just f.ex.), by introducing a computer which possesses visual perception but not qualia, because, due to the qualia being so different from visual perception, there is no way to grant qualia to the computer even in principle. So, "qualia are hard because qualia are hard", which is a tautology. Your next paragraph makes a lot more sense to me:
I think that, if you go this route, you arrive at a kind of solipsism. You know for a fact that you personally have a consciousness, but you don't know this about anyone else, myself included. You can only infer that other beings are conscious based on their behavior. Ok, to be fair, the fact that they are biologically human and therefore possess the same kind of a brain that you do can count as supporting evidence; but I don't know if you want to go that route (Searle does, AFAIK). Anyway, let's assume that your main criterion for judging whether anyone else besides yourself is conscious is their behavior (if that's not the case, I can offer some arguments for why it should be), and that you reject the solipsistic proposition that you are the only conscious being around (ditto). In this case, a perfect sleepwalker or a qualia-less computer that perfectly simulates having qualia, etc., is actually less parsimonious than the alternative, and therefore the concept of qualia buys you nothing (assuming that dualism is false, as always). And then, the "hard question" becomes one of those "mysterious questions" to which you could give a "mysterious answer", as per the Sequences.
I'd actually read that page earlier, and it (along with associated links) seemed to imply that either dualism offers the best answer to the "hard question", or the "hard question" is meaningless as per Dennett -- which is why I took the time to slam dualism in my previous posts.
Darn, again, I'm sorry. But nevertheless, I think it's a good thought experiment.
Mmm. Yes, I think you're right. As I've chewed on this, I've come to wonder if that's part of where I've been getting the impression that there's a hard problem in the first place. As I've tried to reduce the question enough to notice where reduction seems to fail or at least get a bit lost, my confusion confuses me. I don't know if that's progress, but at least it's different!
I'm afraid I'm a bit slow on the uptake here. Why does this require solipsism? I agree that you can go there with a discussion of consciousness, but I'm not sure how it's necessarily tied into the fact that consciousness is how you know there's a question in the first place. Could you explain that a bit more?
Well... Yes, I think I agree in spirit. The term "behavior" is a bit fuzzy in an important way, because a lot of the impression I have that others are conscious comes from a perception that, as far as I can tell, is every bit as basic as my ability to identify a chair by sight. I don't see a crying person and consciously deduce sadness; the sadness seems self-evident to me. Similarly, I sometimes just get a "feel" for what someone's emotional state is without really being able to pinpoint why I get that impression. But as long as we're talking about a generalized sense of "behavior" that includes cues that go unnoticed by the conscious mind, then sure!
It's not a matter of what qualia buy you. The oddity is that they're there at all, in anything. I think you're pointing out that it'd be very odd to have a quale-free but otherwise perfect simulation of a human mind. I agree, that would be odd. But what's even more odd is that even though we can be extremely confident that there's some mechanism that goes from firing neurons to qualia, we have no clue what it could be. Not just that we don't yet know what it is, but as far as I know we don't know what could possibly play the role of such a mechanism.
It's almost as though we're in the position of early 19th century natural philosophers who are trying to make sense of magnetism: "Surely, objects can't act at a distance without a medium, so there must be some kind of stuff going on between the magnets to pull them toward one another." Sure, that's close enough, but if you focus on building more and more powerful microscopes to try to find that medium, you'll be SOL. The problem in this context is that there are some hidden assumptions that are being brought to bear on the question of what magnetism is that keep us from asking the right questions.
Mind you, I don't know if understanding consciousness will actually turn out to yield that much of a shift in our understanding of the human mind. But it does seem to be slippery in much the same way that magnetism from a billiard-balls-colliding perspective was, as I understand it. I suspect in the end consciousness will turn out to be no more mysterious than magnetism, and we'll be quite capable of building conscious machines someday.
In case this adds some clarity: My personal best proto-guess is that consciousness is a fuzzy term that applies to both (a) the coordination of various parts of the mind, including sensory input and our sense of social relationships; and (b) the internal narrative that accompanies (a). If this fuzzily stated guess is in the right ballpark, then the reason consciousness seems like such a hard problem is that we can't ever pin down a part of the brain that is the "seat of consciousness", nor can we ever say exactly when a signal from the optic nerve turns into vision. Similarly, we can't just "remove consciousness", although we can remove parts of it (e.g., cutting out the narrator or messing with the coordination, as in meditation or alcohol).
I wouldn't be at all surprised if this guess were totally bollocks. But hopefully that gives you some idea of what I'm guessing the end result of solving the consciousness riddle might look like.
Well, there's exactly one being in existence that you know for sure is conscious and experiences qualia: yourself. You suspect that other beings (such as myself) are conscious as well, based on available evidence, though you can't be sure. This, by itself, is not a problem. What evidence could you use, though? Here are some options.
You could say, "I think other humans are conscious because they have the same kind of brains that I do", but then you'd have to exclude other potentially conscious beings, such as aliens, uploaded humans, etc., and I'm not sure if you want to go that route (let me know if you do). In addition, it's still possible that any given human is not a human at all, but one of those perfect emulator-androids, so this doesn't buy you much.
You could put the human under a brain scanner, and demonstrate that his brain states are similar to your own brain states, which you have identified as contributing to consciousness. If you could do that, though, then you would've reduced consciousness down to physical brain states, and the problem would be solved, and we wouldn't be having this conversation (though you'd still have a problem with aliens and uploaded humans and such).
You could also observe the human's behavior, and say, "this person behaves exactly as though he was conscious, therefore I'm going to assume that he is, until proven otherwise". However, since you postulate the existence of androids/zombies/etc. that emulate consciousness perfectly without experiencing, you can't rely on behavior, either.
Basically, try as I might, I can't think of any piece of evidence that would let you distinguish between a being -- other than yourself -- who is conscious and experiences qualia, and a being who pretends to be conscious with perfect fidelity, but does not in fact experience qualia. I don't think that such evidence could even exist, given the existence of perfect zombies (since they would be imperfect if such evidence existed). Thus, you are forced to conclude that the only being who is conscious is yourself, which is a kind of solipsism (though not the classic, existential kind).
It seems like we agree on this point, then -- yay! Of course, I would go one step further, and argue that there's nothing special about our subconscious mind. We know how some parts of it work, we have mapped them down to physical areas of the brain, and our maps are getting better every day.
I don't just think it would be odd, I think it would be logically inconsistent, as long as you're willing to assume that people other than yourself are, in fact, conscious. If you're not willing to assume that, then you arrive at a kind of solipsism, which has its own problems.
Right, which is why I reject the existence of qualia as an independent entity altogether. As per your magnetism analogy:
Right, and the problem here is not that your microscopes aren't powerful enough, but that your very idea of a magnetic attraction medium is flawed. In reality, there are (probably) no such things as "magnets" at all; there are just collections of waveforms of various kinds (again, probably). You choose to call some of them "magnets" and some others "apples", but those words are just grossly simplified abstractions that you have created in order to talk about the world -- because if you had to describe every single quark of it, you'd never get anywhere.
Similarly, "qualia" and "consciousness" are just abstractions that you've created in order to talk about human brains -- including your own brain. I understand that you can observe your own consciousness "from the inside", which is not true of magnets, but I don't see this as an especially interesting fact. After all, you can observe gravity "from the inside", as well (your body is heavy, and tends to fall down a lot), but that doesn't mean that your own gravity is somehow different from my gravity, or a rock's gravity, because as far as gravity is concerned, you aren't special.
I don't think that we need to necessarily pin down a single part of the brain that is the "seat of consciousness". We can't pin down a single part that constitutes the "seat of vision", either, but human vision is nonetheless fairly well understood by now. The signal from the optic nerve is just part of the larger mechanism which includes the retina, the optic nerve, the visual cortex, and ultimately a large portion of the brain. There's no point at which electrochemical signals turn into vision, because these signals are a part of vision. Similarly, there isn't a single "seat of blood flow" within the human body, but blood flow is likewise fairly well understood.
I'm not sure I follow your reasoning here. What do you mean by "removing consciousness" and "cutting out the narrator", and why is it important? Drunk (or meditating) people are still conscious, after a fashion.
Ah! Okay. Three points:
Er... Except that we're not conscious of it! I'd say that's pretty special - as long as we agree that "special" means "different" rather than "mysterious".
Sorry, I meant "odd" in the artistically understated sense. We agree on this.
So here, I think, is a source of our miscommunication. I also reject qualia as being independent.
I think part of the problem we're running into here is that by naming qualia as nouns and talking about whether it's possible to add or remove them, we've inadvertently employed our parietal cortices to make sense of conscious experience. It's like how people talk about "government" as though it's a person when, really, they're just reifying complex social behavior (and as a result often hiding a lot of complexity from themselves).
"Quale" is a name that has been, sadly, agreed upon to capture the experience of blueness, or the sense of a melody, or what-have-you. We needed some kind of word to distinguish these components of conscious experience from the physical mechanisms of perception because there is a difference, just like there's a difference between a software program and the physical processes that result in the program running. Yes, as far as the universe is concerned, it's just quarks quarking about. But just like it's helpful to talk about chairs and doors, it's helpful to talk about qualia in order to understand what our experience consists of.
I suspect in the future we'll be able to agree that "qualia" was actually a really bad term to use, with the benefit of hindsight. I suspect consciousness will turn out to be a reification, and thus talking about its components as though they're things just throws us off the track and creates confusion in the guise of a mystery. But even if we dump the term "qualia", we're still stuck with the fact that we experience, and there's a qualitative sense in which experience doesn't seem like it's even in-principle describable in terms of firing neurons. If you told me that it was discovered that there's actually a region of the brain that's responsible for adding qualia to vision (pardoning the horrid implicit metaphor), I would feel like hardly anything had been explained. So you found circuitry that, when monkeyed with, makes all yellow vanish from my conscious awareness. But how did yellow appear in the first place, as opposed to being just neuronal signals bouncing around? Pointing to a region of the brain and saying "That does it" still leaves me baffled as to how. I don't see how explaining the circuitry of that brain region in perfect synapse-level detail could answer that question.
However, I could totally see consciousness turning out to have this "hard problem" because it's like trying to describe where Mario is in terms of the transistors in a game console.
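The Mario analogy can be made concrete with a toy sketch (all addresses and encodings here are made up for illustration): the same state admits a hardware-level description (raw bytes) and an abstraction-level description (a character's position), and only the latter mentions "Mario" at all.

```python
# Toy model of a console: "Mario's position" exists only as an
# interpretation of raw memory, not as any single physical component.

ram = bytearray(256)          # stand-in for the console's memory
X_ADDR, Y_ADDR = 0x10, 0x11   # hypothetical addresses for Mario's coordinates

def move_mario(dx, dy):
    # At the hardware level this is just two byte writes;
    # "Mario moved right" is a fact only at the abstraction level.
    ram[X_ADDR] = (ram[X_ADDR] + dx) % 256
    ram[Y_ADDR] = (ram[Y_ADDR] + dy) % 256

def mario_position():
    # Fully determined by the bytes, yet no byte "is" Mario.
    return ram[X_ADDR], ram[Y_ADDR]

move_mario(3, 1)
print(mario_position())  # (3, 1)
```

Asking "which transistor is Mario in?" is a category error even though nothing non-physical is going on; the suggestion above is that the hard problem might have the same shape.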
On this point, I think we might just be frozen in disagreement. You seem to be taking as practically axiomatic that there's nothing significantly different about consciousness as compared to anything else, like gravity. To me, that view of consciousness is internally incoherent. You can make sense of gravity as an outside observer, but you can't make sense of your own consciousness as an outside observer. That's hugely relevant for any attempt to approach consciousness with the same empirical eye as used on gravity, or magnetism, or any other physical phenomenon. We can look at those phenomena from a position that largely doesn't interact with them in a relevant way, but I cannot fathom a comparable place to stand in order to be conscious of consciousness while not interacting with it.
This is not to say that consciousness is intrinsically more mysterious than gravity. I'm just utterly dumbfounded that you can think that your ability to be aware of anything is somehow no more interesting than any other random phenomenon in the universe.
I don't think so either.
...
We seem to keep doing this. I agree, because that's part of the point I was making.
Removing consciousness is exactly the process that would turn a person into a p-zombie, yes? So what I've suggested as a general direction to consider for how consciousness appears passes the sanity test of not allowing p-zombies.
As for the narrator... Well, you know how there's a kind of running commentary going on in your mind? It's possible to stop that narration, and if you do so it changes the quality of consciousness by quite a lot.
Meditation, alcohol, and quite a number of other things can all monkey with the way parts of the mind coordinate and also get the narrator to stop narrating (or at least not become an implicit center of attention anymore). And I'm not claiming that doing these things removes consciousness. Quite the opposite, I'm pointing out that drunk and meditating people have a different kind of conscious experience.
True, but you can carry the reasoning one step further. The claim "other people are conscious" is a positive claim. As such, it requires positive evidence (unless it's logically necessary, which in this case it's not). If your concept of qualia/consciousness precludes the possibility of evidence, you'd be justified in rejecting the claim.
Fair enough.
Well, it depends on what you mean by "perception". If you mean, for example, "light hitting my retina and producing a signal in my optic nerve", then yes, experience is different -- because the aforementioned process is a component of it. The overall process of experience involves your visual cortex, and ultimately your entire brain, and there's a lot more stuff that goes on in there.
Hmm, I don't know, is there such a difference? As far as I understand, when Firefox is running, we can (plus or minus some engineering constraints) reduce its functionality down to the individual electrons inside the integrated circuits of my computer (plus or minus some quantum physics constraints). Where does the difference come in?
I lack this sense, apparently :-(
As it happens, there's a real neurological phenomenon called "blindsight" which is similar to what you're describing. It's relatively well understood (AFAIK), and, in this specific case, we can indeed point to a specific region of the brain that causes it. So, at least in case of vision, we can actually map the presence or absence of conscious visual experience to a specific area of the brain. I suspect that there are scientists who are even now busily pursuing further explanations.
The word "axiomatic" is perhaps too strong of a word. I just don't think that it's possible to treat consciousness as being categorically different from other phenomena, such as gravity, while still maintaining a logically and epistemically (if that's a word) consistent, non-solipsistic worldview.
Ok, let me temporarily grant you this premise. What about the consciousness of other people? Can I make sense of those consciousnesses as an outside observer? If the answer is "no", then consciousness becomes totally mysterious, because I can only observe other people's consciousness from the outside. If the answer is "yes", then you end up saying, "my own consciousness is categorically different from anyone else's", which seems unlikely to be true, since you're just a regular human like the rest of us.
I agree, but I don't think this means that you can't "make sense" of your consciousness regardless. In a way, this entire site is a toolkit for making sense of your own consciousness -- specifically, its biases -- and for using this understanding to alter it.
Ah, ok, I get it, and I agree, but I'm still not sure how this relates to the point you're making. If anything, it offers tangential evidence against it -- because the existence of a relatively simple physical mechanism (such as alcohol) that can alter your consciousness points the way to reducing your own consciousness down to a collection of strictly physical interactions.
You know, I think we're getting lost in the little details here, and we keep communicating past one another.
First, let me emphasize that I do think we'll eventually be able to explain consciousness in a reductionist way. I've tried to make that clear, but some of your arguments make me wonder if I've failed to convey that.
Second, remember that this whole discussion arose because you questioned the value of trying to answer the hard problem of consciousness. I now suspect what you originally meant was that you don't think there is a hard problem, so there wasn't anything to answer. And in an ultimate sense, I think you're right: I think people like Thomas Nagel are trying to argue that we need a complete paradigm shift in order to explain how qualia exist, and I think they're wrong. Eventually it almost certainly comes down to brain behavior. Even if it's not clear what that pathway could be, that's a description of human creativity and not of the intrinsic mysteriousness of the phenomenon.
But what you said was this:
This, to me, really sounds like you're saying we can't detect qualia, so we might as well assume there are no qualia, so we shouldn't worry about how qualia arise. Maybe that wasn't your point. But if it was, I stand in firm disagreement because I think that qualia are the only things we can care about!
For some reason I can't seem to convey why I think that. I feel rather like I'm pointing at the sun and saying "Look! Light!" and you're responding with "We don't have a way of detecting the light, so we might as well assume it isn't there." (Please excuse the flaw in the analogy in that we can detect light. Pretend for the moment that we can't.) All I can do is blink stupidly and point again at the sun. If I can't get you to acknowledge that you, too, can see, then no amount of argumentation is going to get the point across.
So all I'm left with is an insistence that if my understanding of the universe is completely off and it turns out to be possible to remove conscious experience from people, I most certainly would not want that done to me - not that I could care afterwards, but I absolutely would care beforehand! So to me, the presence or absence of qualia matters a lot.
But if you cannot relate to that at all, I don't think I'll ever be able to convey why I feel that way. I'm completely at a loss as to how this could possibly be a topic of disagreement.
Sorry, you're right, I tend to do that a lot :-(
That's correct, I think; though obviously I'm all for acquiring a better understanding of consciousness.
I think it's not entirely clear what that pathway is, but there are some very good clues regarding what that pathway could be, since certain aspects of consciousness (such as vision, f.ex.) are reasonably well understood.
Pretty much, but I think we should make a distinction between a person's own qualia, as experienced by the person, and the qualia of other people, from the point of view of that same person. Let's call the person's own qualia "P" and everyone else's qualia (from the point of view of the person) "Q".
Obviously, each person individually can detect P. Until some sort of telepathy gets developed (assuming that such a thing is possible in principle), no person can detect Q (at least, not directly).
You seem to be saying -- and I could be wrong about this, so I apologize in advance if that's the case -- that, in order to build a general theory of consciousness, we need to figure out a way to study P in an objective way. This is hard (I would say, impossible), since P is by its nature subjective, and thus inaccessible to anyone other than yourself.
I, on the other hand, am arguing that a general theory of consciousness can be built based solely on the same kind of evidence that compels us to believe that other people experience things -- i.e., that Q exists and is reducible to brain states. Let's say that we built some sort of a statistical model of consciousness. We can estimate (with a reasonably high degree of certainty) what any given person will experience in any situation, by using this model and plugging in a whole bunch of parameters (representing the person and the situation). I think you would agree that such a model can, in principle, exist (though please correct me if I'm wrong). Then, would you agree that this model can also predict what you, yourself, will experience in a given situation? If not, then why not? If yes, then how is P any different from Q?
I agree, but I believe that removing a person's consciousness will necessarily alter his behavior; in most cases, this alteration would be quite drastic. Thus, I definitely wouldn't want this done to me, or to anyone else, for that matter.
However, I think you are contemplating a situation where we remove a person's consciousness, and yet his behavior (which includes talking about his consciousness) remains exactly the same. I argue that, if such a thing is possible, then consciousness is a null concept, since it has literally no effect on anything we could ever detect. As far as I understand, you agree with me with respect to Q, but disagree with respect to P. But then, you must necessarily believe that P is categorically different from Q, somehow... mustn't you?
If you do believe this, then you must also believe that any model of consciousness that we could possibly build will work correctly for anyone other than yourself. This seems highly unlikely to me, however -- what makes you such an outlier? You are a human like the rest of us, after all. And if you are not an outlier, and yet you believe that the model won't function for you, then you must believe that such a model cannot be built in principle (i.e., it won't function for anyone else, either), and yet I think you would deny this. As I see it, the only way to reconcile these contradictions is to reject the idea that P is categorically different from Q, and thus there's nothing special about your own qualia, and thus the problem of consciousness isn't any harder than the problem of, say, unifying gravity with the other fundamental forces (which is pretty hard, admittedly).
As I discussed here - see also this comment for clarification - we should in theory be able to discover if other beings have qualia if we were to learn about their brains in such microscopic detail that we are performing approximately the same computations in our brains that their brains are running; we then "get their qualia" first-hand.
As for arguing about qualia verbally, I hold qualia to be both entirely indefinable (implying that the concept is irreducible, if it exists) and something that the vast majority of humans apprehend directly and believe very strongly to exist. There is little to be gained by arguing about whether qualia exist, because of this problem - the best that can be achieved through argument is that both of you accept the consensus regarding the existence of this indefinable thing that nonetheless needs to be given a name.
Ok, I read your article as well as your comment, and found them very confusing. More on this in a minute.
How is that different from saying, "I found qualia to be a meaningless concept"? I may as well say, "I think that human consciousness can best be explained by asdfgh, where asdfgh is an undefinable concept". That's not much of an explanation. In addition, this makes it impossible to discuss qualia at all (with anyone other than yourself, that is), which once again hints at a kind of solipsism.
This is weak evidence at best. The vast majority of humans apprehend all kinds of stuff directly (or so they believe), including gods, demons, honest politicians, etc. At least some of these things have a very low probability of existing, so how are qualia any different? In addition, regardless of what the vast majority of people believe, I personally disagree with this "consensus regarding the existence of this indefinable thing", so you'll need to convince me some way other than stating the consensus.
Note that I agree with the statement, "humans appear to act as though they believe that they experience things, just as I do" -- a statement which we may reduce to something like, "humans experience things" (with the usual understanding that there's some non-zero probability of this being false). I just don't see why we need a special name for these experiences, and why we have to treat them any differently from anything else that humans do (or that rocks do, for that matter).
Which brings me back to your article (and comment). In it, you describe qualia as being indefinable. You then proceed to discuss them at great length, which means that you must have some sort of a definition in mind, or else your article would be meaningless (or perhaps it would be meaningless to everyone other than yourself, which isn't much better). Your central argument appears to rest on the assumption that qualia are irreducible, but I still don't understand why you'd assume that in the first place.
In short, qualia appear to be a "mysterious answer to a mysterious question": they are impossible to define, irreducible, and totally inexplicable -- and thus impossible to study or even discuss. They are a kind of elan vital, and therefore not terribly useful as a concept.