I encounter many intelligent people (not usually LWers, though) who say that despite our recent scientific advances, human consciousness remains a mystery and currently intractable to science. This is wrong. Empirically distinguishable theories of consciousness have been around for at least 15 years, and the data are beginning to favor some theories over others. For a recent example, see this August 2011 article from Lau & Rosenthal in Trends in Cognitive Sciences, one of my favorite journals. (Review articles, yay!)

Abstract:

Higher-order theories of consciousness argue that conscious awareness crucially depends on higher-order mental representations that represent oneself as being in particular mental states. These theories have featured prominently in recent debates on conscious awareness. We provide new leverage on these debates by reviewing the empirical evidence in support of the higher-order view. We focus on evidence that distinguishes the higher-order view from its alternatives, such as the first-order, global workspace and recurrent visual processing theories. We defend the higher-order view against several major criticisms, such as that prefrontal activity reflects attention but not awareness, and that prefrontal lesions do not abolish awareness. Although the higher-order approach originated in philosophical discussions, we show that it is testable and has received substantial empirical support.

103 comments

[anonymous]13y150

Can someone please tell me (or tell me where to learn) what is meant by 'first order' and 'higher order' in this context? I am familiar with the terms from logic, but I don't think that's what they mean here.

The definition from http://plato.stanford.edu/entries/consciousness-higher is too circular for me to understand:

Higher-order theories of consciousness try to explain the distinctive properties of consciousness in terms of some relation obtaining between the conscious state in question and a higher-order representation of some sort (either a higher-order perception of that state, or a higher-order thought or belief about it).

edit: I think the necessary background material is all at http://davidrosenthal.jottit.com/. I will update this after reading it.

edit: Here are my notes so far:

From The Higher-Order Model of Consciousness I gather the following terms: mental state seems to be synonymous with thought; I am treating this term as roughly undefined and trying to fill it in as I read. First-order thoughts are those coming directly from the sensory modalities, whereas higher-order thoughts are those which observe thoughts. The example of being hungry is given: this is a first orde... (read more)

Aren't most of the people who say consciousness is a mystery talking about the hard problem, whereas global-workspace theory and higher-order theory and the like address the easy problem?

0lukeprog13y
Perhaps, though some - including Dennett - think that the hard problem will end up being solved by solving the easy problems. I tend to take this 'deflationary' view about the problems of consciousness.
7[anonymous]13y
Maybe it's antisocial to keep asking these sorts of questions; I hope not. Do either of you have any idea where I can find a LessWrong-friendly description of this "hard problem"? Everything I've read in the past that tries to describe it is full of snippets like "something that it's like to be" and "subjective qualitative" and other such things I have no ability to understand...
3Scott Alexander13y
It's kind of supposed to be hard to explain, but...hmmmm...maybe something like "why is there a subjectively perceived difference between sleepwalking through your life and being awake?" If we imagine a sort of "perfect sleepwalker" who, while sleepwalking, could hold conversations, go to work, write poetry, and do anything else that people do while awake exactly as waking people do it - even to the point where if we ask her "Are you awake?" she answers "Yes." - then it might be necessarily impossible for us outsiders to distinguish her sleep from her waking. But we have an intuitive belief that she should be able to do so easily. If she's awake, she can notice her awakeness and all the sensations she's feeling and experiences she's having. If she's sleeping, then it doesn't even make sense to "experience" not being awake, because there's no one "at home" to do the experiencing.

An equivalent interpretation of the problem revolves around qualia. Suppose that your experience of "red" was everyone else's experience of "blue". You would never be able to confirm this by talking to other people - you would say things like "blue is the color of the sky and the sea and short-wavelength light" and they would agree with you, but you would be thinking of red when you said it, and everyone else would be thinking of blue. This "experience of blue" which is separate from statements about blue or concepts surrounding blue is the "quale" (plural "qualia") of blue.

Intuition tells us one difference between the sleepwalker and the awake person is that if you ask the sleepwalker what color a stop sign is, the light rays would hit her eyes, go through a chain of neurons in her brain, and produce the response "it's red". The same thing would happen in the awake person, but she'd also have the conscious visual experience qualia thing where she "sees" a certain color in her "mind's eye". The hard problem is whether there's a difference between awake people with qualia and perfect sleepwalkers.
0Bugmaster13y
Is the answer even relevant ? As far as I understand, there currently exists no "qualia-detector", and building one may be impossible in principle. Thus, in the absence of any ability to detect qualia, and given the way you'd set up your thought experiment about the sleepwalker, there's absolutely no way to tell a perfect sleepwalker from an awake person. As far as everyone -- including the potential sleepwalker -- is concerned, the two cases are completely functionally equivalent. Thus, it doesn't matter who has qualia and who doesn't, since these qualia do not affect anything that we can detect. They are kind of like souls or Saganesque teapots that way.
1Mercurial13y
It certainly matters to the subject! I sure wouldn't want to lose my ability to experience regardless of whether others can ever notice (or whether it's possible).

Your objection here strikes me as a little bit like behaviorism. Yes, there are valuable things to be gotten most of the time from such an approach, but behaviorism suffered from an unwillingness to acknowledge that people had thoughts. After all, thoughts didn't demonstrate themselves in behavior beyond being talked about, in which case it was the talk that was part of the scientific domain, not the thinking. The thing is, I know I think, and my strong impression is that others talk about "thoughts" for the same reason I do: they think. The fact that behaviorism didn't have a clear empirical approach to exploring these subjective experiences tagged "thoughts" didn't mean that they were uninteresting or irrelevant. This was the main reason why we switched away from behaviorism in the second half of the 20th century.

(Quick note: Yes, I know that not all behaviorism was like this. Some behaviorists simply said that they didn't want to make claims about what thinking entailed because they didn't know how to approach the matter empirically. However, it was common if not dominant among behaviorists to take the "If we can't study it then it doesn't exist or doesn't matter" approach.)

In exactly the same way, quale-type experience seems to be present for me, and my own impression is that everything I talk about in terms of empiricism gets filtered through qualia. There are no data that transmit information to my mind that I am aware of without that awareness taking on qualia. I'm under the strong impression that others experience the world similarly. The fact that we don't know what that means in reductionistic terms doesn't mean that it's irrelevant or that qualia don't exist. It just means that we don't know how to approach the question unambiguously as yet.
0Bugmaster13y
But according to the thought experiment you'd set up, you wouldn't notice: Granted, you go on to say, But that to me seems like a contradiction. We asked the sleepwalker if she was awake, and she said "Yes", after all. If she could in fact determine that she was sleeping, she'd say "No". Which brings me to your next point:

As far as I understand, a hardcore behaviorist would actually claim that humans have no internal state and are basic reflex agents; that's obviously silly, so that is not my position. Instead, I actually agree almost completely with everything you'd said above. You think thoughts, and your thoughts affect your actions. Unlike a simple reflex agent, you are able to actually think about your thoughts; this ability affects your actions even further. For example, when someone asks you, "are you awake ?", you can think about it for a moment, and say "Yes, probably", or "Most likely, not". You can also hold a lively discourse on the subject of your own thoughts. Thus, your actions -- including your speech -- are, in fact, evidence for your consciousness. A perfect sleepwalker, then, would have to perfectly emulate being conscious, as well; and I'm willing to stick my neck out and say that a perfect emulation of X is, in fact, X.

You say that you are "under the strong impression that others experience the world similarly" to yourself -- well, why is that ? I would argue that your "strong impression" is actually based on evidence (well, that, and possibly some biologically programmed response, but mostly evidence). You converse with others and they respond in certain ways that are consistent with the hypothesis that they, like yourself, are conscious. Sure, they could be perfect sleepwalkers, but that is a less parsimonious hypothesis.

Thus, again, I see no need for dualistic assumptions of any kind, which includes qualia. Besides, dualism doesn't actually explain anything, it just replaces one mystery with another, because now you have to explain how
1Mercurial13y
I think you're confusing me and Yvain. I'll take that as a compliment, though! I agree with pretty much everything you've said here - but it's posed as though to stand as an argument against what I think, so I'm a little bit concerned that we're not talking about the same thing. For instance, you say: I agree, dualism is unnecessary as far as we know. It's hard for me to conceive of a type of evidence that would ever suggest that we need dualism. However, the existence of qualia does not immediately require dualism. The term "qualia" just points to the experiences we have that currently seem to sit on the "other side" of the hard problem of consciousness with respect to our current empirical knowledge. Presumably, we will eventually find a reductionist answer to the hard problem of consciousness. In the meantime, though, we still need a way of talking about the phenomenon in question. Qualia don't play the same role in this question that vis vitalis did with vitalism; it isn't that we're trying to answer the hard problem by saying "subjective experience is made up of qualia", but instead we're trying to describe how subjective experience presents itself to us. We see red, and this experience of redness seems to have a certain character to it, so we tag it with the descriptor of being the quale of red.

The question is, how is it that there is a conscious experience induced by neurons firing in response to stimulation of the optic nerve? We know how visual perception works, but as far as I know we don't very well know how the quale of red appears from that. It's a statement of the question, not a phlogiston-class proclamation masquerading as an answer. Does that clarify where I'm coming from on this? It's not dualism (or at least I'm pretty darn sure it's not!); it's just naming a confusion in as much detail as possible.
0Bugmaster13y
Oops, I think you may be right, I'm sorry and/or you're welcome. Heh. Anyway, oddly enough, I understand the details of your argument, but I don't see the big picture that you're presenting. You reject the proposal that qualia are dualistic in nature, so we're definitely on the same page here. But then you ask, I agree that this is a hard question (seeing as it hasn't been fully answered yet), but I don't see this question as categorically different from questions such as "how is our blood flow regulated ?" or "how does visual perception work in humans ?". Presumably, a sleepwalker's brain, or a robot's circuitry, or a zombie's... er... goo or whatever it is zombies have, would implement this functionality in different ways than normal human brains do; and we could tell whether the sleepwalker/robot/zombie implements this functionality or not by talking to them (as you have pointed out in your thought experiment). So, would you agree that the question "how does consciousness work" is no different from "how does blood flow work" ? If not (as I suspect is the case), then what's the difference ? By the way, when people talk about qualia, they usually claim that we all share the same ones. Thus, for example, when I see something that I experience as "red", and you see something else (or maybe even the same object) that you experience as "red", we are both using the same exact quale to experience that stuff with. There's pretty much nowhere to go from this premise other than toward dualism, which is why I'd originally assumed you were going toward that route. But now I think that you'd reject the premise just as I do -- is that correct ?
1Mercurial13y
Ah, then perhaps I'm more confused than I thought! I still haven't identified the source of my confusion, though. Er... Yes and no. I agree that eventually we should be able to find an answer that sounds as reduced as an answer to "How does blood flow work?" does. But from where we currently stand, they seem to be really, incredibly fundamentally different questions - as long as you understand the question "How does consciousness work?" to be in the hard sense rather than in the easy one. I think you get near to the crux of the matter in this statement: Yes, presumably that's the case, and eventually we'll nail that down. But from what we can currently tell, there doesn't seem to be even an in-principle plausible mechanism for adding qualia to a computer's way of processing things. A computer receives input, does some well-defined manipulations, and offers output. Where do qualia come into play? How is it we get the subjective impression of there being a "someone" who is "watching" what's going on in the Cartesian theater? The very concept is internally inconsistent (e.g., how does the homunculus experience?), but the point is the same: there doesn't seem to be any plausible way that we have currently thought of to get from neurons firing to qualia. I guess the categorical difference is that when asking about blood flow, there's someone who experiences the question and the data and the subsequent answer; but when asking about consciousness, it's the very process of being able to understand the question in the first place that we're asking about. I'm not sure that's entirely equivalent to the hard problem, though. You might find it helpful to read the Wikipedia page on the hard problem. That might help to explain some of the nuances better than I've been able to thus far. (In particular, it helps to point out that by "hard problem" I don't mean "a challenging problem" but rather "a problem whose potential to be answered even in theory seems in question.") Again
0Bugmaster13y
Ok, that makes sense. I understand now that this is what you believe, but I still don't see why. You say: This, to me, sounds like a circular argument at worst, and a circular analogy (if there is such a thing) at best. You are trying to illustrate your belief that qualia are categorically different from visual perception (just f.ex.), by introducing a computer which possesses visual perception but not qualia, because, due to the qualia being so different from visual perception, there is no way to grant qualia to the computer even in principle. So, "qualia are hard because qualia are hard", which is a tautology. Your next paragraph makes a lot more sense to me: I think that, if you go this route, you arrive at a kind of solipsism. You know for a fact that you personally have a consciousness, but you don't know this about anyone else, myself included. You can only infer that other beings are conscious based on their behavior. Ok, to be fair, the fact that they are biologically human and therefore possess the same kind of a brain that you do can count as supporting evidence; but I don't know if you want to go that route (Searle does, AFAIK). Anyway, let's assume that your main criterion for judging whether anyone else besides yourself is conscious is their behavior (if that's not the case, I can offer some arguments for why it should be), and that you reject the solipsistic proposition that you are the only conscious being around (ditto). In this case, a perfect sleepwalker or a qualia-less computer that perfectly simulates having qualia, etc., is actually less parsimonious than the alternative, and therefore the concept of qualia buys you nothing (assuming that dualism is false, as always). And then, the "hard question" becomes one of those "mysterious questions" to which you could give a "mysterious answer", as per the Sequences. I'd actually read that page earlier, and it (along with associated links) seemed to imply that either dualism offers the best answer to
0Mercurial12y
Mmm. Yes, I think you're right. As I've chewed on this, I've come to wonder if that's part of where I've been getting the impression that there's a hard problem in the first place. As I've tried to reduce the question enough to notice where reduction seems to fail or at least get a bit lost, my confusion confuses me. I don't know if that's progress, but at least it's different! I'm afraid I'm a bit slow on the uptake here. Why does this require solipsism? I agree that you can go there with a discussion of consciousness, but I'm not sure how it's necessarily tied into the fact that consciousness is how you know there's a question in the first place. Could you explain that a bit more? Well... Yes, I think I agree in spirit. The term "behavior" is a bit fuzzy in an important way, because a lot of the impression I have that others are conscious comes from a perception that, as far as I can tell, is every bit as basic as my ability to identify a chair by sight. I don't see a crying person and consciously deduce sadness; the sadness seems self-evident to me. Similarly, I sometimes just get a "feel" for what someone's emotional state is without really being able to pinpoint why I get that impression. But as long as we're talking about a generalized sense of "behavior" that includes cues that go unnoticed by the conscious mind, then sure! It's not a matter of what qualia buy you. The oddity is that they're there at all, in anything. I think you're pointing out that it'd be very odd to have a quale-free but otherwise perfect simulation of a human mind. I agree, that would be odd. But what's even more odd is that even though we can be extremely confident that there's some mechanism that goes from firing neurons to qualia, we have no clue what it could be. Not just that we don't yet know what it is, but as far as I know we don't know what could possibly play the role of such a mechanism. It's almost as though we're in the position of early 19th century natural philosophers
2Bugmaster12y
Well, there's exactly one being in existence that you know for sure is conscious and experiences qualia: yourself. You suspect that other beings (such as myself) are conscious as well, based on available evidence, though you can't be sure. This, by itself, is not a problem. What evidence could you use, though ? Here are some options.

You could say, "I think other humans are conscious because they have the same kind of brains that I do", but then you'd have to exclude other potentially conscious beings, such as aliens, uploaded humans, etc., and I'm not sure if you want to go that route (let me know if you do). In addition, it's still possible that any given human is not a human at all, but one of those perfect emulator-androids, so this doesn't buy you much.

You could put the human under a brain scanner, and demonstrate that his brain states are similar to your own brain states, which you have identified as contributing to consciousness. If you could do that, though, then you would've reduced consciousness down to physical brain states, and the problem would be solved, and we wouldn't be having this conversation (though you'd still have a problem with aliens and uploaded humans and such).

You could also observe the human's behavior, and say, "this person behaves exactly as though he was conscious, therefore I'm going to assume that he is, until proven otherwise". However, since you postulate the existence of androids/zombies/etc. that emulate consciousness perfectly without experiencing, you can't rely on behavior, either.

Basically, try as I might, I can't think of any piece of evidence that would let you distinguish between a being -- other than yourself -- who is conscious and experiences qualia, and a being who pretends to be conscious with perfect fidelity, but does not in fact experience qualia. I don't think that such evidence could even exist, given the existence of perfect zombies (since they would be imperfect if such evidence existed). Thus, you a
0Mercurial12y
Ah! Okay. Three points:

* I think you're arguing for something I agree with anyway. I don't think of qualia as being inherently independent of everything else. I think of qualia as self-evident. I don't think my experience of green can be entirely separated from the physical process of perceiving light of a certain wavelength, but I do think it's fair to say that I'm conscious of the green color of the "Help" link below this text box.
* Even if I did think qualia were divisible from the physical processes involved in perception (which I think would force dualism), I wouldn't be able to conclude that I'm the only one who is conscious. I would have to conclude that as far as I currently know, I have no way of knowing who else is or isn't conscious. So solipsism would then be a possibility, but not a logical necessity.
* I'm not arguing that p-zombies can exist. I seriously doubt they can. If this is a point you've been trying to argue me into agreeing, please note that we started out agreeing in the first place!

Er... Except that we're not conscious of it! I'd say that's pretty special - as long as we agree that "special" means "different" rather than "mysterious". Sorry, I meant "odd" in the artistically understated sense. We agree on this.

So here, I think, is a source of our miscommunication. I also reject qualia as being independent. I think part of the problem we're running into here is that by naming qualia as nouns and talking about whether it's possible to add or remove them, we've inadvertently employed our parietal cortices to make sense of conscious experience. It's like how people talk about "government" as though it's a person when, really, they're just reifying complex social behavior (and as a result often hiding a lot of complexity from themselves). "Quale" is a name that has been, sadly, agreed upon to capture the experience of blueness, or the sense of a melody, or what-have-you. We needed some kind of word to distinguish these components of
0Bugmaster12y
True, but you can carry the reasoning one step further. The claim "other people are conscious" is a positive claim. As such, it requires positive evidence (unless it's logically necessary, which in this case it's not). If your concept of qualia/consciousness precludes the possibility of evidence, you'd be justified in rejecting the claim. Fair enough. Well, it depends on what you mean by "perception". If you mean, for example, "light hitting my retina and producing a signal in my optic nerve", then yes, experience is different -- because the aforementioned process is a component of it. The overall process of experience involves your visual cortex, and ultimately your entire brain, and there's a lot more stuff that goes on in there. Hmm, I don't know, is there such a difference ? As far as I understand, when Firefox is running, we can (plus or minus some engineering constraints) reduce its functionality down to the individual electrons inside the integrated circuits of my computer (plus or minus some quantum physics constraints). Where does the difference come in ? I lack this sense, apparently :-( As it happens, there's a real neurological phenomenon called "blindsight" which is similar to what you're describing. It's relatively well understood (AFAIK), and, in this specific case, we can indeed point to a specific region of the brain that causes it. So, at least in case of vision, we can actually map the presence or absence of conscious visual experience to a specific area of the brain. I suspect that there are scientists who are even now busily pursuing further explanations. The word "axiomatic" is perhaps too strong of a word. I just don't think that it's possible to treat consciousness as being categorically different from other phenomena, such as gravity, while still maintaining a logically and epistemically (if that's a word) consistent, non-solipsistic worldview. Ok, let me temporarily grant you this premise. What about the consciousness of other people
1Mercurial12y
You know, I think we're getting lost in the little details here, and we keep communicating past one another. First, let me emphasize that I do think we'll eventually be able to explain consciousness in a reductionist way. I've tried to make that clear, but some of your arguments make me wonder if I've failed to convey that. Second, remember that this whole discussion arose because you questioned the value of trying to answer the hard problem of consciousness. I now suspect what you originally meant was that you don't think there is a hard problem, so there wasn't anything to answer. And in an ultimate sense, I think you're right: I think people like Thomas Nagel are trying to argue that we need a complete paradigm shift in order to explain how qualia exist, and I think they're wrong. Eventually it almost certainly comes down to brain behavior. Even if it's not clear what that pathway could be, that's a description of human creativity and not of the intrinsic mysteriousness of the phenomenon. But what you said was this: This, to me, really sounds like you're saying we can't detect qualia, so we might as well assume there are no qualia, so we shouldn't worry about how qualia arise. Maybe that wasn't your point. But if it was, I stand in firm disagreement because I think that qualia are the only things we can care about! For some reason I can't seem to convey why I think that. I feel rather like I'm pointing at the sun and saying "Look! Light!" and you're responding with "We don't have a way of detecting the light, so we might as well assume it isn't there." (Please excuse the flaw in the analogy in that we can detect light. Pretend for the moment that we can't.) All I can do is blink stupidly and point again at the sun. If I can't get you to acknowledge that you, too, can see, then no amount of argumentation is going to get the point across. So all I'm left with is an insistence that if my understanding of the universe is completely off and it turns out to be po
0Bugmaster12y
Sorry, you're right, I tend to do that a lot :-( That's correct, I think; though obviously I'm all for acquiring a better understanding of consciousness. I think it's not entirely clear what that pathway is, but there are some very good clues regarding what that pathway could be, since certain aspects of consciousness (such as vision, f.ex.) are reasonably well understood.

Pretty much, but I think we should make a distinction between a person's own qualia, as experienced by the person, and the qualia of other people, from the point of view of that same person. Let's call the person's own qualia "P" and everyone else's qualia (from the point of view of the person) "Q". Obviously, each person individually can detect P. Until some sort of telepathy gets developed (assuming that such a thing is possible in principle), no person can detect Q (at least, not directly).

You seem to be saying -- and I could be wrong about this, so I apologize in advance if that's the case -- that, in order to build a general theory of consciousness, we need to figure out a way to study P in an objective way. This is hard (I would say, impossible), since P is by its nature subjective, and thus inaccessible to anyone other than yourself. I, on the other hand, am arguing that a general theory of consciousness can be built based solely on the same kind of evidence that compels us to believe that other people experience things -- i.e., that Q exists and is reducible to brain states.

Let's say that we built some sort of a statistical model of consciousness. We can estimate (with a reasonably high degree of certainty) what any given person will experience in any situation, by using this model and plugging in a whole bunch of parameters (representing the person and the situation). I think you would agree that such a model can, in principle, exist (though please correct me if I'm wrong). Then, would you agree that this model can also predict what you, yourself, will experience in a given situation?
0Mercurial12y
Apparently my reply is "too long", so I'll reply in two parts. PART 1: Hey, apparently I do too! Excellent. Um... Sure, let's go with that. There's a nuance here that's disregarding the hard problem, but I don't think we'll get much mileage repeating the same kind of detail-focusing we've been doing. :-P Sure, agreed. I should warn you, though, that I'm not sure that this distinction is coherent. There's some reason to suspect that our perception of others as conscious is part of how we construct our sense of self. So, it might not make sense to talk about "my" conscious experience as distinct from "your" conscious experience as though we start with a self and then grant it consciousness. It might be the other way around. I emphasize this because explaining Q without ever touching P might not tell us much about P. If we start with conscious experience and then define the line between "my" experience and "others'" experience by the distinction between P and Q, all we do by detailing Q is explain our impression that others are conscious. We might think we're addressing others' P, but we never actually address our P (which, it seems, is the only P we can ever have access to - which might be because we define "me" in part by "that which has access to P" and "not me" by "that which doesn't have access to P"). So with that warning, I'll just run with the intuitive distinction between P and Q that I believe you're suggesting. I agree, and I would go just a little bit farther: I would argue that it's not possible even in principle to detect Q as a kind of P. If I experience another person's experience from a first-person perspective, it's not their experience anymore. It's mine. Sure, we might share it, like two people watching the same movie. But the P I have access to is still my own, and the Q that I'm supposedly accessing as a kind of P is still removed: I still have to assume that the person sitting next to me is also experiencing the movie. Yeah, I think tha
0Mercurial12y
PART 2: Yep. I believe that's Eliezer's argument (the "anti-zombie principle" I think it was called), and I agree. That's why I prefaced it with saying that my understanding of the universe would have to be pretty far off in order for my self-zombification to even be possible. So, given the highly improbable event that p-zombies are possible, I sure wouldn't want to become one! Ergo, my own qualia matter a great deal to me regardless of anyone else's ability to detect them. ... I'm not sure what it would mean for me to agree in terms of Q but not P. I'm not quite sure what you're suggesting I'm saying. So maybe you're right, but I honestly don't know! Mmm... I'm not saying that I, personally, am special. I'm saying that an experiencing subject is special from the point of view of the experiencing subject, precisely because P is not the same as Q. It so happens that I'm an experiencing subject, so from my point of view my perspective is extremely special. Remember that science doesn't discover anything at all. Scientists do. Scientists explore natural phenomena and run experiments and experience the results and come to conclusions. So it's not that exploring Q would just happen and then a model emerges from the mist. Instead, people explore Q and people develop a model that people can see predicts their impressions of Q. That's what empiricism means! I emphasize this because every description is always from some point of view. For most phenomena, we've found a way to take a point of view that doesn't make the difference between P and Q all that relevant. A passive-voice description of gravity seems to hold from both P and Q, for instance. But when we're trying to explore what makes P and Q different, we can't start by modulating their difference. We have to decide what the point of view we're taking is, and since part of what we're studying is the phenomenon of there being points of view in the first place, that decision is going to matter a lot. I think that
0Bugmaster12y
Bah ! Curse you, machine overlords ! shakes fist I did not mean to imply that. In fact, I agree with you in principle when you say, Sure, it might be, or something else might be the case; my P and Q categories were meant to be purely descriptive, not explanatory. Your conscious experience, of whose existence you are certain, and which you are experiencing at this very minute, is P. Other people's conscious experience, whose existence you can never personally experience, but can only infer based on available evidence, intuition, or whatever, is Q. That's all I meant. Thus, when you say, "...we might think we're addressing others' P, but we never actually address our P", you are confusing the terminology; there's no such thing as "other people's P", there's only P and Q. You may suspect that other people have conscious experiences, but the best you can do as lump them into Q. You move on to say several things which, I believe, reinforce my argument (my apologies if I seem to be quote-mining you out of context, please let me know if I'd done so on accident): You appear to be very committed to the idea that your own experience is categorically different from anyone else's, and that a general model of consciousness -- assuming it was even possible to create such a thing -- may not tell you anything about your own experience. The problem with this statement, though, is that there exists one, and only one, "experiencing subject" in this Universe: yourself. As I said above, you suspect that other people (such as your wife, for example) are experiencing things, but you aren't sure of it; and you don't know if they experience things the same way that you do, or whether it even makes sense to ask that latter question. There are two possible corollaries to this fact (well, there are two that I can think of): 1). Other people in this world are categorically similar to yourself, and thus a general model of consciousness can never be developed, in principle, because such a mo
0Mercurial12y
It's nice to see this discussion converging! I was afraid we'd get mired in confusing language forever and have to give up at some point. :-( :-D

Ah, okay. I thought you meant, "Given a subject, that subject's experience is P, and others' is Q." The above distinction seems more coherent. Let's do away with possessive pronouns when referring to P and Q, then. We'll say P is phenomenal experience (what I'm tempted to call "my experience" but am explicitly avoiding assigning to a particular subject since my sense of myself as a subject might well arise from the existence of P), and Q is the part of P that gives the impression that we describe as "Others seem to be conscious." I think we can agree that those two phenomena are different, even if Q seems to be a part of P. (I have a hard time conceiving of a kind of experience that's not part of P, for that matter!) Sound good?

Sorry about that. I see what you mean. It doesn't look that way to me at first blush. Thanks for the consideration, though. :-)

I think here is where the use of possessive pronouns betrays us. What I'm very committed to is that P is more than Q, so a priori knowing everything about Q doesn't necessarily tell us anything about why P arises in the first place. The only reason we seem to think this is likely, as far as I know, is that Q is specifically the impression that P-like phenomena exist "in others." (I honestly can't think of a way to describe the relationship between P and Q without talking about Q in terms of others. I think that might be intrinsic to the definition of Q.) What we will have explained with a full and robust theory of Q is why the impression of "others who have P-type experience" arises. (Again, I don't know how else to phrase that.) That wouldn't tell us why red appears as red, although it would tell us why others who are conscious (if any) would be under the impression that we experience red as red. Or said a little differently, it seems perfectly plausible to me th
1Bugmaster12y
Oops, actually, the latter definition is closer to what I had in mind. It seems like we need three letters:

* P: Your own subjective personal experience.
* Q: The personal subjective experience that you suspect other people are having, which may be similar to yours in some way; or, as you put it, the impression that "others have P-type experience". You have no way of accessing this experience directly, and no way of experiencing it yourself.
* Pq: "The part of P that gives the impression that we describe as "Others seem to be conscious."" Pq is all the evidence you have for Q's existence.

Since Pq is a part of P, as you said, I don't want to focus too much on it. I also want to emphasize that P is your own personal experience, not any abstract "subject's". It's the one that you can access directly. Moving on, you say:

1). I would agree with your statement if you removed the word "completely". Obviously, you know you are conscious, and you can experience P directly. However, you can also collect the same kind of data on yourself (or have someone, or some thing, do it for you) as you would on other people. For example, you could get your brain scanned, record your own voice and then play it back, install a sensor on your fridge that records your feeding habits, etc.; these are all real pieces of evidence that people are routinely collecting for practical purposes.

2). If you think that the above paragraph is true, then it would follow that you (probably) can collect some data on your own Q, as it would be experienced by someone else who is conscious (assuming, again, that you are not the only conscious being in the Universe, and that your own consciousness is not privileged in any cosmic way).

3). If you agree with that as well, then, assuming that we ever develop a good enough model of Q which would allow you to predict any person's behavior with some useful degree of certainty, such a model would then be able to predict your own behavior with some useful degree of certainty.
0Mercurial12y
I guess so! Er... By "your", do you mean to refer to me, personally? I'll assume that's what you meant unless you specify otherwise. Henceforth I am the subject! :-D But that's the crux! I know I'm conscious in a way that is so devastatingly self-evident that "evidence" to the contrary would render itself meaningless. But if some theory for P were developed that demonstrated that Q doesn't exist, I wouldn't view that theory as nonsensical. It'd be surprising, but not blatantly self-contradictory like a theory that says P doesn't exist. I believe in Q for highly fallible reasons, but I believe in P for completely different reasons that don't seem to be at all fallible to me. I deduce Q but I don't deduce P. (Although I wonder if we're just spinning our wheels in the muck produced from a fuzzy word. If we both agree that P is self-evident while Q is deduced from Pq, perhaps there's no disagreement...?) Agreed. Notice, though, that the only way I'm able to correlate this Q-like data with P is because I can see the results of, say, the brain scan and recognize that it pairs with a particular part of P. For instance, I can tell that a certain brain scan corresponds with when I'm mentally rehearsing a Mozart piece because I experienced the rehearsal when the brain scanning occurred. So P is still implicit in the data-collection and -interpretation process. Mostly agreed. If others experience, then others experience. :-) The main point at which I disagree is that P is privileged. There's no such thing as a P-less perspective. But if we're granting that others are actually conscious (i.e., that Q exists) and that we can switch subjects with a sort of P-transformation (i.e., we can grant that you have P and that within your P my consciousness is part of Q), then I think that might not be terribly important to your point. We can mimic strong objectivity by looking at those truths that remain invariant under such transformations. Hmm... "behavior" is being used in two d
0Bugmaster12y
Yep, that's right. I'm just electrons in a circuit as far as you're concerned ! :-) Sure, that makes sense, but I'm not trying to abolish P altogether. All I'm trying to do is establish that P and Q are the same thing (most likely), and thus the "Hard Problem of Consciousness" is a non-issue. Thus, I can agree with the last sentence in the quote above, but that probably isn't worth much as far as our discussion is concerned.

I'm not sure how these two sentences are connected. Obviously, a perfect brain scan shouldn't indicate that you're mentally rehearsing Mozart when you are not, in fact, mentally rehearsing Mozart. But such a brain scan will work on anyone, not just you, so I'm not sure what you're driving at. When I used the word "behavior", I actually had a much narrower definition in mind -- i.e., "something that we and our instruments can observe". So, brain scans would fit into this category, but also things like, "the subject answers 'blue' when we ask him what color this 450nm light is". I deliberately split up "what the test subject would say" from "what he will actually think and experience". But it seems like you agree with both points, maybe:

Pretty much. What I meant was that, since our theory of Q explains everything, we gain nothing (intellectually speaking) by postulating that P and Q are different. Doing so would be similar to saying, "sure, the theory of gravity fully explains why the Earth doesn't fall into the Sun, but there must also be invisible gnomes constantly pushing the Earth away to prevent that from happening". Sure, the gnomes could exist, but there are lots of things that could exist... If you agree with the first part, what are your reasons for disagreeing with the second ? To me, this sounds like saying, "sure, we can explain electricity with the same theory we use to explain magnetism, but that doesn't mean that we can just equate electricity and magnetism". Maybe we disagree because of this: Well, yeah, Occam's Razor isn't
4Mercurial12y
You know, something clicked last night as I was falling asleep, and I realized why you're right and where my confusion has been. But thanks for giving me something specific to work from! :-D I think my argument can be summarized like so:

* All data comes through P.
* Therefore, all data about P comes through P.
* All theories about P must be verified through data about P.
* This means P is required to explain P.
* Therefore, it doesn't seem like there can be an explanation about P.

That last step is nuts. Here's an analogy:

* All (visual) data is seen.
* Therefore, all (visual) data about how we see is seen.
* All theories of vision must be verified through data about vision. (Let's say we count only visual data. So we can use charts, but not the way an optic nerve feels to the touch.)
* This means vision is required to explain vision.
* Therefore, it doesn't seem like there can be an explanation of vision.

The glaring problem is that explaining vision doesn't render it retroactively useless for data-collection. Thanks for giving me time to wrestle with this dumbth. Wrongness acknowledged. :-)

What I was driving at is that there's no evidence that it corresponds to mentally rehearsing Mozart for anyone until I look at my own brain scan. All we can correlate the brain scans with is people's reports of what they were doing. For instance, if my brain scan said I was rehearsing Mozart but I wasn't, and yet I was inclined to report that I was, that would give me reason for concern.

The confusion here comes down to a point that I still think is true, but only because I think it's tautological: From my point of view, my point of view is special. But I'm not sure what it would mean for this to be false, so I'm not sure there's any additional information in this point - aside from maybe an emotional one (e.g., there's a kind of emotional shift that occurs when I make the empathic shift and realize what something feels like from another person's perspective
0Bugmaster12y
All right, so it seems like we mostly agree now -- cool ! Ok, I get it now, but I would still argue that we should assume we're awake, until we have some evidence to the contrary; thus, the "hard problem of dreaming" is a non-issue. It looks like you might agree with me, somewhat: In this situation, we assume that we're awake a priori, and we are then deliberately trying to induce dreaming (which should be lucid, as well). So, we need a test that tells us whether we've succeeded or not. Thus, we need to develop some evidence-collecting techniques that work even when we're asleep. This seems perfectly reasonable to me, but the setup is not analogous to your previous one -- since we start out with the a priori assumption that we're currently in the awake state; that we could transition to the dream state when we choose; and that there exists some evidence that will tell us which state we're in. By contrast, the "hard problem of dreaming" scenario assumes that we don't know which state we're in, and that there's no way to collect any relevant evidence at all.
0Mercurial12y
Yep! Rationality training: helping minds change since 2002. :-D You're coming at it from a philosophical angle, I think. I'm coming at it from a purely pragmatic one. Let's say you're dreaming right now. If you start with the assumption that you're awake and then look for evidence to the contrary, typically the dream will accommodate your assumption and let you conclude you're really awake. Even if your empirical tests conclusively show that you're dreaming, dreams have a way of screwing with your reasoning process so that early assumptions don't update on evidence. For instance, a typical dream test is jumping up in the air and trying to stay there a bit longer than physics would allow. The goal, usually, is flight. I commonly find that if I jump into the air and then hang there for just a little itty bitty bit longer than physics would allow, I think something like, "Oh, that was barely longer than possible. So I must not be quite dreaming." That makes absolutely no sense at all, but it's worth bearing in mind that you typically don't have your whole mind available to you when you're trying to become lucid. (You might once you are lucid, but that's not terribly useful, is it?) In this case, you have to be really, insanely careful not to jump to the conclusion that you're awake. If you think you're awake, you have to pause and ask yourself, "Well, is there any way I could be mistaken?" Otherwise your stupid dreaming self will just go along with the plot and ignore the floating pink elephants passing through your living room walls. This means that when you're working on lucid dreaming, it usually pays to recognize that you could be dreaming and can never actually prove conclusively that you're awake. But I agree with you in all cases where lucid dreaming isn't of interest. :-)
0Bugmaster12y
That's funny, I was about to say the same thing, only about yourself instead of me. But I think I see where you're coming from: So, your primary goal (in this specific case) is not to gain any new insights about epistemology or consciousness or whatever, but to develop a useful skill: lucid dreaming. In this case, yes, your assumptions make perfect sense, since you must correct for an incredibly strong built-in bias that only surfaces while you're dreaming. That makes sense.
0[anonymous]12y
As I discussed here - see also this comment for clarification - we should in theory be able to discover if other beings have qualia if we were to learn about their brains in such microscopic detail that we are performing approximately the same computations in our brains that their brains are running; we then "get their qualia" first-hand. As for arguing about qualia verbally, I hold qualia to be both entirely indefinable (implying that the concept is irreducible, if it exists) and something that the vast majority of humans apprehend directly and believe very strongly to exist. There is little to be gained by arguing about whether qualia exist, because of this problem - the best that can be achieved through argument is that both of you accept the consensus regarding the existence of this indefinable thing that nonetheless needs to be given a name.
0Bugmaster12y
Ok, I read your article as well as your comment, and found them very confusing. More on this in a minute. How is that different from saying, "I found qualia to be a meaningless concept" ? I may as well say, "I think that human consciousness can best be explained by asdfgh, where asdfgh is an undefinable concept". That's not much of an explanation. In addition, this makes it impossible to discuss qualia at all (with anyone other than yourself, that is), which once again hints at a kind of solipsism. This is weak evidence at best. The vast majority of humans apprehend all kinds of stuff directly (or so they believe), including gods, demons, honest politicians, etc. At least some of these things have a very low probability of existing, so how are qualia any different ? In addition, regardless of what the vast majority of people believe, I personally disagree with this "consensus regarding the existence of this indefinable thing", so you'll need to convince me some other way other than stating the consensus. Note that I agree with the statement, "humans appear to act as though they believe that they experience things, just as I do" -- a statement which we may reduce to something like, "humans experience things" (with the usual understanding that there's some non-zero probability of this being false). I just don't see why we need a special name for these experiences, and why we have to treat them any differently from anything else that humans do (or that rocks do, for that matter). Which brings me back to your article (and comment). In it, you describe qualia as being indefinable. You then proceed to discuss them at great length, which means that you must have some sort of a definition in mind, or else your article would be meaningless (or perhaps it would be meaningless to everyone other than yourself, which isn't much better). Your central argument appears to rest on the assumption that qualia are irreducible, but I still don't understand why you'd assume that in t
0lessdazed13y
How well established is it that they are equivalent? The second, qualia version seems much less mysterious to me.
0XiXiDu13y
ETA: I completely forgot that you wrote an article on that, sorry. There were two people in the comments whose comments seemed to suggest that they lack a "mind's eye", Garth and Blueberry. Note that you are using potentially confusing analogies and therefore terminology here. Some people, e.g. my dad, are unable to see anything with their "mind's eye". Indeed, those people don't even know what you mean by that. If you ask them whether they are able to imagine a beautiful sunset, they think that you are asking them whether they could describe or paint a sunset (a database query); they do not understand that you are asking them to simulate a beautiful sunset visually and experience it with their "mind's eye" as if dreaming. My dad can only experience visual images if they actually happen live, but he has visual dreams when asleep. That's how I figured this out in the first place, by asking if he is able to deliberately cause dream-like sensory experiences that do not correlate with the outside world (he can't, it is all "black"). I asked others and there are quite a few people who are not capable of "daydreaming" (my mom is). I don't know what else to call this but a lack of a certain type of consciousness.
1summerstay13y
I find David Chalmers's explanation of what is meant by "qualia" and "subjective experience" and "something it's like to be" the easiest to understand. For example, read the first chapter of The Conscious Mind. The Knowledge Argument above refers to the fictional story of Mary the color scientist. She was raised in a black and white environment and never saw color. But she read textbooks on color theory (printed in black and white, of course). The question is, when she finally experiences the color blue, how is that different from the previous knowledge she had about what the color blue would be like? That different extra aspect to the actual experience is what we refer to as qualia, and how such an experience can be caused by physical processes is (in Chalmers's terminology, which has now been widely adopted) the Hard Problem.
1Mercurial13y
That's a really clear description! Thanks for summarizing it. I suspect it's highly relevant that if someone were to actually grow up in a grayscale environment, they wouldn't be capable of experiencing blue. Even if the optic nerve had somehow retained the ability to transmit data from cones, the brain simply would not be wired for blue-processing. I'm pretty sure her brain would interpret a colored world the way a black-and-white television would. (This is my understanding of neuroscience, by the way, not my stab at philosophy.) I haven't taken the time to think carefully about the implications of this. It just seems suspicious to me that one of the clearest descriptions of qualia I've encountered involves a process that's neurologically implausible to enact.
1red7513y
Results of gene therapy for color blindness suggest otherwise. Maybe those monkeys and mice cannot experience colors, but they react as if they can. I really want to try this myself. Infrared-sensitive opsin in a retina, isn't it wonderful?
1lukeprog13y
Does this article help? The knowledge argument is a famous argument that tries to pump the intuition about the existence of qualia, which are the source of the 'hard problem' of consciousness.
0[anonymous]13y
Thanks for the link but it still doesn't make sense to me (I've tried to understand what this qualia thing is a few times before and I am still baffled about what it is and why everyone other than me thinks it's real).
2Jack13y
I can't find the source but I recall reading the following interesting point: different people might possess different degrees of qualia, perhaps ranging down to none, and perhaps some or all of the debate could be explained by this fact. So maybe it's just your brain. You've read the qualia wikipedia article I assume? It's a pretty obvious concept for most people - whether or not it is 'real'. Perhaps you recall wondering as a child if when other people saw something green they saw the same color you did: "Maybe what you see when you look at things called 'green' is what I see when I look at things called 'orange'. How would we know?" The aspect of color that cannot be communicated and generates this worry is the quale of 'greenness'.
4[anonymous]13y
deleted
3Jack13y
Yeah. Except you're not the physically impossible kind, since you're actually reporting your lack of qualia - and I'm not sure you actually lack qualia entirely; it's just very weak in you. +1 For Less Wrong neurodiversity. Would you be offended if I prodded you with questions about how you think? (Heck, I sort of think you should just start a discussion post called "I'm a zombie AMA".)
0[anonymous]13y
Yes, quite happy to answer anything. In fact I sort of preemptively started already! I don't think I'm knowledgeable enough to make an AMA worthwhile for the people asking, though.
0Jack13y
If I get to anything too private, feel free to tell me to frack off. Do you know your IQ? Have you been diagnosed with any condition listed in the DSM? Are there any cognitive tasks at which you find yourself notably worse than average (especially compared to those with similar IQs)? What about tasks at which you find yourself notably better than average? Can you go into detail about this: What games/experiments? Would you say you have a good sense of humor? Can you reliably read someone's emotions from non-verbal cues? Do you have empathy for others who are suffering? Does music evoke emotions in you? Ever been in love? Let's say you are in a situation which could lead to either excitement or anxiety. When you learn that you are anxious and not excited, does that information just come to you verbally? Do you read physical signs of your body? For most people, the way we know whether we are excited or anxious is that these emotions feel different; their qualia are different (or at least that is how we report learning about our emotional state... I'm not entirely sure that story is right).
2[anonymous]13y
-
0lessdazed13y
Is this a good zombie test? I have to consciously search my body for hints to tell those apart.
0Jack13y
No idea, never interviewed a zombie before. I'm not especially confident I have yet. It certainly might be overly-sensitive.
0lessdazed13y
OK I think it is an OK test but you might get false positives from that alone.
2XiXiDu13y
Well, "the philosophical zombie is a hypothetical person whose behavior is indistinguishable from an ordinary person, but who lacks conscious experience." That's nonsensical, but you could however lack conscious experience while acting differently, e.g. not being able to comprehend the concept of qualia.
0lessdazed13y
Excellent point.
1XiXiDu13y
By the way, I'd be interested in your opinion on my comment here: are you able to deliberately cause dream-like, simulated sensory experiences that do not correlate with the outside world? I am able to simulate sensory input, i.e. dream deliberately, enter my personal Matrix (Holodeck). I can see, hear, feel and smell without the presence of light, sound, tactile or olfactory sensory input. That is, I do not need to undergo certain conditions to consciously experience them. They do not have to happen live; I can imagine them, simulate them. I can replay previous and create new sensory experiences in my mind, i.e. perceive them with my mind's eye. I can pursue and experience activities inside my head without any environmental circumstances, i.e. all I need is my body. I can walk through a park, see and hear children playing, feel and smell the air, while being weightless in a totally dark and quiet zero-gravity environment. Can you do that?
0[anonymous]13y
deleted
1red7513y
Maybe it's better to start from obvious things. Color experience, for example. Can you tell which of the traffic lights is illuminated without using the position of the light and without asking yourself which color it is? Is there something in your perception of the different lights that allows you to tell that they are different?
0[anonymous]13y
The cones in the eye detect three different aspects of light (redness, greenness, blueness) and these are sent to the brain in three different fibers. By this mechanism we see there's nothing magic going on in telling the difference between two colors. I guess the rods (which detect variation in brightness) are more relevant to the question of which light is on though.
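As a crude illustration of that point (a minimal sketch with made-up channel names and threshold values, not a model of the retina or of anyone's actual code), a program with three separate input channels can tell the lights apart purely mechanically, with nothing resembling experience anywhere in the loop:

```python
# Hypothetical sketch: three separate input channels are enough for a purely
# mechanical system to discriminate traffic-light colors. The channel names
# and thresholds are invented for illustration only; the blue channel isn't
# needed to separate these three lights, but is kept to mirror the three cones.

def classify_light(red, green, blue):
    """Return which light appears lit, given three channel intensities in [0, 1]."""
    if red > 0.5 and green > 0.5:
        return "yellow"  # a yellow light drives the red- and green-sensitive channels together
    if red > 0.5:
        return "red"
    if green > 0.5:
        return "green"
    return "none"

print(classify_light(0.9, 0.1, 0.0))  # red
print(classify_light(0.8, 0.7, 0.1))  # yellow
print(classify_light(0.1, 0.9, 0.2))  # green
```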
1red7513y
I doubt that you think about rods and cones when you are deciding if it's safe to cross the road. The question is: is there something in your perception of an illuminated traffic light that allows you to say that it is red or green or yellow? Or maybe you just know that it is green or yellow, but you can't see any difference besides position and luminosity?
0[anonymous]13y
I don't understand what the question is getting at. You're right that I don't think about cones when I check which color a light is, but this is the mechanism by which it enters my brain: since different lights enter my brain in different ways it is no surprise I can differentiate between them.
1red7513y
I am getting there. There's a phenomenon called blindsight type 1. Try to imagine that you have "color blindsight", i.e. you can't consciously see any difference between colors, but you can guess above chance what color it is. In this condition you lack the qualia of colors.
0shokwave13y
Not to be dismissive of a chunk of philosophy, but your post is akin to "I've tried to understand elan vital, but I keep running into problems integrating it with my biology course".
1lessdazed13y
Materialists should be able to answer "why do people think they have qualia" without it being a mystery.
1Jack13y
Even the most strident eliminative materialists understand what is meant by qualia. You don't have to integrate a concept into your theory to understand what it means.
0scientism13y
It's hard to understand because it's confused.

I encounter many intelligent people (not usually LWers, though) who say that despite our recent scientific advances, human consciousness remains a mystery and currently intractable to science.

I would ask them to state their definition of consciousness, to "describe and model the principal features of consciousness", in order to discern whether they actually believe that science is inept or whether the true problem is terminological vagueness. Personally I don't know what is meant by consciousness.

Here is a starting point for those who wish to delve i... (read more)

0atucker13y
I liked the Ego Tunnel, and Metzinger has a longer more detailed (but worse written, prose-wise) book called Being No One: The Self-Model Theory of Subjectivity

Has anyone else on the site read/encountered Metzinger's work? I read the Ego Tunnel and am working through Being No One, and I'm fairly impressed.

He often refers to various mental disorders and abnormal phenomenal states in order to separate out individual parts of consciousness, and is one of the most hardcore materialists I've ever read.

There are in fact some plausible scientific hypotheses that try to isolate particular physical states associated with "qualia". Without giving references to those, obviously, as I'm sure you'll all agree, there is no reason to debate the truth of physicalism.

The mentioned approach is probably bogus, and seems to be a rip-off of Marvin Minsky's older A-B brain ideas in "The Society of Mind". I wish I were a "cognitive scientist"; it would be so much easier to publish!

However, needless to say any such hypothesis must be founded... (read more)

0[anonymous]13y
I was looking some things up after you mentioned this, and after reading a bit about it, qualia appears to be extremely similar to sensory memory. (http://en.wikipedia.org/wiki/Qualia) (http://en.wikipedia.org/wiki/Sensory_memory) These quotes about them from Wikipedia (with the names removed) seem to do a good job describing the similarity: 'The information represented in ### is the "raw data" which provides a snapshot of a person's overall sensory experience.' 'Another way of defining ### is as "raw feels." A raw feel is a perception in and of itself, considered entirely in isolation from any effect it might have on behavior and behavioral disposition.' If you think about this in P-zombie terms, and someone attempts to say "A P-zombie is a person who has sensory memory, but not qualia," I'm not sure what would even be the difference between that and a regular person. Either one can call on their sensory memory to say "I am experiencing redness right now, and now I am experiencing my experiences of redness," and it would seem to be correct if that is what is in their sensory memory. There doesn't appear to be anything left for qualia to explain, and it feels a lot like the question is dissolved at that point. Is this approximately correct, or is there something else that qualia attempts to explain that sensory memory doesn't that I'm not perceiving?
0examachine12y
Subjective experience isn't limited to sensory experience; a headache, or any feeling like happiness without any sensory reason, would also count. The idea is that you can trace most of those to electrical/biochemical states. Might be why some drugs can make you feel happy and how anesthetics work!
0[anonymous]13y
I don't know what phenomenal consciousness or subjective experience means. Could you please give a reference or explanation for these terms?
3Mercurial13y
That is the hard problem, actually. If we could operationalize those terms, we would be able to study what they refer to with a reductionist lens. Until then, we're kind of stuck using words to point at experience rather than at structural definitions.

In case you're honestly not sure what everyone is talking about, though: There's a difference between red as a certain frequency of light and red as experienced. Yes, we know there's a strong connection between the two, and we can describe in some fair detail how a certain frequency of light stimulates optic nerves and is processed in the brain and so on. But it's not at all clear how we get from those mechanical processes to the experience of red. We don't experience red as a frequency; we experience it as red! That latter bit, the redness of red, is what people refer to as the qualium of red. ("Qualium" is the singular form of "qualia".)

The reductionist thesis maintains that there must be a way to reduce the connection between physical mechanisms and qualia down to mechanisms. The hard problem of consciousness is that no one seems to be able to come up with even an in-principle plausible way of making that connection. In other words, everyone is confused but doesn't have a clear way to even start dispelling the confusion. People like Daniel Dennett have made efforts, but many people question whether their efforts even count as progress.

So in short: "phenomenal consciousness" refers to the experience of qualia, although we don't know what that means aside from pointing at the fact that everyone seems to experience qualia and that mechanisms affect but don't seem to be qualia. "Subjective experience" usually refers to the same thing, but is often used to emphasize the fact that the experience of qualia seems to depend on the individual; e.g., you don't experience my experiencing red the way I do and vice versa.
2arundelo13y
"Quale", according to Wiktionary, the Stanford Encyclopedia of Philosophy, and my 1993 Random House unabridged dictionary (which gives the pronunciations KWAH-lee, KWAH-lay, and KWAY-lee). Edit for completeness: For the plural, "qualia", the Random House gives the pronunciations KWAH-lee-uh and KWAY-lee-uh. (The second edition OED pronunces "quale" as KWAY-lee but does not include "qualia" at all.)
1Mercurial13y
Ah, I had been misinformed! I was informed it was the Latin neuter form, which uses "-um" for singular endings and "-a" for plural. Thanks for the correction!
2Mitchell_Porter13y
Do you understand the difference between being asleep and being awake?
0[anonymous]13y
It seems like a subtle question which I could be missing the point of, so I'll explain my answer instead of just saying "yes": When awake, someone is generally acting based on their sensory inputs and plan. When asleep they are in one of several different sleep stages; I don't know much about these different states, but I'll say in general that I think they are still (using the HOT terminology) creature-conscious of sensory inputs (that's how you can wake to an alarm clock) but they are not transitive-conscious (except in the cases when you incorporate these into your dreams). Let me also add that I've been re-reading the wiki and Stanford encyclopedia pages on all these terms and it makes just as much sense as last time I tried to understand what it's all about (none). I'm a bit worried about people getting angry at me for not "getting it" as fast as they did, but hopefully people on LW are more forgiving than what I'm used to.
2mwengler13y
Chimera writes: "I'm a bit worried about people getting angry at me for not 'getting it'." You are what? Worried? Worried is a conscious experience. A movie of you being worried does not show someone else being worried; it shows an unconscious image that looks like you being worried. An automaton built to duplicate your behavior when you are worried feels nothing; there is nothing (no consciousness) there to feel anything. But when you are doing that stuff, people know, and more importantly, you know how you feel and what it means to feel worried. Imagine a world filled with Disney animatronic robots all programmed to behave the way real people in our world behave. Unless you think all those singing ghosts in the Haunted Mansion at Disneyland are feeling happy and scared, you can know what is being discussed here by imagining the difference between what images of people feel (nothing) and what actual people feel. Good luck with this.
1Bugmaster13y
I would argue that if someone constructed an automaton that behaved exactly like I would in any given real-world situation -- including novel situations, which Disney automatons can't handle -- then that automaton would, for all intents and purposes, be as conscious as I am. In fact, this automaton would be a copy of me. Let's imagine that tonight, while you sleep, evil aliens replace everyone else in your home town (except for yourself, that is) with one of those perfect automatons. Would you be able to tell that this had occurred? If so, how would you determine this?
0mwengler12y
Perhaps I might not know the difference, but I am not the only observer here. Would the people replaced know the difference? Fooling you by replacing me is one thing. Fooling me by replacing me is an entirely more difficult thing to do.
0Bugmaster12y
Well, presumably, the original people who were replaced would indeed know the difference, as they watch helplessly from within the bubbling storage tanks where the evil aliens / wizards / whomever had put them prior to replacing them with the automatons. The more interesting question is, would the automatons believe that they were the originals? My claim is that, in order to emulate the originals perfectly with 100% accuracy -- which is what this thought experiment requires -- the automatons would have to believe that they were, in fact, original; and thus they would have to be conscious. You could probably say, "ah-hah, sure the automatons may believe that they are the originals, but they're wrong! The originals are back on the mothership inside the storage vats!" This doesn't sound like a very fruitful objection to me, however, since it doesn't help you prove that the automatons are not conscious -- merely that they aren't composed of the same atoms as some other conscious beings (the ones inside the vats). So what, everyone is made of different atoms, you and I included.
0[anonymous]13y
deleted
1Mitchell_Porter13y
You skated past the hard problem of consciousness right there. Why does "acting based on sensory inputs and a plan" correlate with "being awake"?
0[anonymous]13y
It's just the term "awake" is defined that way, or is that wrong?
1Mitchell_Porter13y
It depends on whether your definition of "sensory input" and "acting on a plan" already require the concept of being conscious. Functionalists have definitions of those concepts which are just about relations of causality (sensory input = something outside the nervous system affects something inside the nervous system) and isomorphism (plan = combinatorial structure in nervous system with limited isomorphism to possible future world-states). And the point of the original question is that when you know you're awake, it's not because you know that your nervous system currently contains a combinatorial structure possessing certain isomorphisms to the world, that stands in an appropriate causal relation to the actions of your body. In fact, that is something that you deduce from (1) knowing that you are awake (2) having a functionalist theory of consciousness. So, when you are awake (or "conscious"), how do you know that you are conscious?
0[anonymous]13y
When awake you are not necessarily transitively conscious of it - I think usually we are but there are times when we 'zone out' and only have first order thoughts.
0Mitchell_Porter13y
OK. But it seems (according to your answer) that when I am awake and knowing it, it's because I'm transitively conscious of something. Transitively conscious of what?
0[anonymous]13y
of being awake, as defined above: "I notice that I am taking audio-visual input from my environment and acting on it". (The quote should be 'noninferential, nondispositional and assertoric', but I am not completely sure it is of that nature; if not, my mistake.)
2Mitchell_Porter13y
i.e. you know you're awake when you have subjective experience of phenomenal consciousness. :-) Or something very close to this - that may not be the most nuanced, 100% correct way of stating it. Would you say that only a functionalist can know whether they are awake, because only a functionalist knows what consciousness is? I presume not. But that means that it is possible to name and identify what consciousness is, and to say that I am awake and that I know it, in terms which do not presuppose functionalism. In this we have both the justification for the jargon terms "subjective experience" and "phenomenal consciousness", and also the reason why the hard problem is a problem. If the existence of consciousness is not logically identical with the existence of a particular causal-functional system, then I can legitimately ask why the existence of that system leads to the existence of an accompanying conscious experience. And that "why" is the hard problem of consciousness.
0[anonymous]13y
Thanks for your comment but I don't understand it.

I was reading through some of these comments, and now I'm not sure if I'm normal. When one imagines images, is it the same as dreaming or seeing? I can imagine what my room looks like around me, but all I "see" is black.

Is there a scientific/mechanical model that would enable a machine to feel pain? Not react to pain as if it did feel pain, but actually feel pain in the same sense as a human does? The answer is no; there is nothing in science or philosophy that can come up with such a model even in theory, much less using current technology.

And that is only a small part of consciousness. Our abilities to understand and appreciate 'meaning', our vision, imagination, sense of free will....our general human experience of ourselves and our environment cannot be mathema... (read more)

6Plasmon12y
Do you know how to distinguish "actually feeling pain" from "acting as if" it feels pain? If so, do tell. If not, would you perhaps also claim that a machine which passes the Turing test is not "actually" conscious, but merely "acts as if" it is conscious? Anti-reductionists are always quick to point at "qualia", "subjective experience", "consciousness" (or the subjective experience of pain, in this case) as examples of Great Big Unexplained Mysteries which have not been/cannot be solved by science, but they can never quite explain what exactly the problem is, or what a solution would look like.
0ArisKatsaris12y
A solution would help dissolve our confusion about how the territory of our consciousness can be produced by the map that is our brain's computation. I feel I've made some small progress on elements of that front after connecting some separate ideas from other fields, like Tegmark IV, fractals and great attractors, and calculus. I hope to post some of these ideas later this month, or in February.
0CuSithBell12y
Well, I suppose you'd do it the same way you'd distinguish "actually has a cat in a box" from "pretending to have a cat in a box" (without checking the box). I do think there's something weird going on with consciousness - why there is something that thinks it has the experience of having thoughts and experiences is as yet unexplained, and is tricky to talk about given the inability to directly access the subject matter - but I imagine it's in principle explicable. And saying we need to find a "mysterious" way of understanding it... well, there are all sorts of reasons why that's not going to work.
0shminux12y
If there is no way to check the content of the box, ever, in any conceivable way, then there is no difference, period.
5CuSithBell12y
Sure. But that's not true of cats / boxes, nor is it necessarily true of consciousness (based on the notion that consciousness is in principle explicable / reducible). The parallels being that we can't check now, the person acts in such a way that the cat/consciousness is/isn't a parsimonious explanation of their behavior, it might be difficult to check, you can fake it (to some degree), you can be wrong about it... and perhaps the cat might be a delusion. Moreover, some people here claim to have values that encompass things that they cannot in principle interact with in any way (things external to their light cone, for example), so I'm not sure your assertion is unproblematic. If you're going to step on my box, it matters to me whether there's a cat in it, even if you can't check that, and it might in fact matter to you as well. But facts tend to have ripples, so it seems likely that there is, in principle at least, a way to check the catbox.
-2Ghazzali12y
The fact that the problem cannot be explained is because of the limitations of language/logic/reason... the tools that we rely on to explain mechanical phenomena. Things that require equal signs. The fact that this subject is not easily explainable is not a hit against our side; it is a hit against your side. It is the non-rational aspect of consciousness that makes it seemingly impossible to explain in the first place. The reaction of reductionists and some rationalists (I argue that it is quite rational to conclude that this is indeed a mystery as of the present time) that because we cannot explain what that sensation of 'pain' is then it may not exist to begin with is dubious at best.
8Plasmon12y
"You can't explain the precession of the perihelion of Mercury" is a hit against Newton's theory of gravity. "You can't explain "zoink", and I can't tell you what "zoink" is, nor what an explanation of "zoink" would look like" is not a hit against anything. Also, arguments are not soldiers, and talking about "hits" and "sides" is unwise. There have, in history, been many occasions where something was not understood. When temperature was not understood, it was still possible to explain to someone what this ill-understood "temperature" was. Specifically, it is simple to make sure that your notion of "colour" or "temperature" is similar to my notion of "colour" or "temperature" even if I don't understand what "colour" and "temperature" are. I predict that there has never been a concept that * was not understood at some point in time * was "not easily explicable" in the sense that "the subjective experience of pain" is not easily explicable * later turned out to be well-defined and to "cut reality at its joints" If you can come up with an example of such a concept, I will start taking arguments from vague not-easily-explicable concepts far more seriously. On the other hand, there are at least some concepts that * were not understood at some point in time * were "not easily explicable" in the sense that "the subjective experience of pain" is not easily explicable * turned out to be completely bogus namely, the concepts of "soul", "god", etc...
0Ghazzali12y
Sorry for the allegorical language if it offended you. There is a difference between not finding a solution for a problem, and not even understanding what a solution may look like even in the abstract form. It is also not a good sign when the problem gets to be more of a mystery the more science we discover. The concern here is that we have an irrational view that rationalism is a universal tool. The fact that we have unsolved scientific and intellectual problems is not a proof of that. The fact that there seem to be problems that in their very nature seem to be unsolvable by reason is.
4Plasmon12y
I am not offended. Certainly. And further on that scale, there is "understanding so little of the problem that you're not even sure there's a problem in the first place". Progress on the P vs NP problem has been largely limited to determining what the solution doesn't look like, and few if any people have any idea what it does look like, or if it (a solution) even exists (might be undecidable). So, this scale goes:

* Solved problems
* Unsolved problems where we have a pretty good idea what the solution looks like
* Unsolved problems where we have no idea what the solution looks like: subjective experience is not here
* Problems we suspect exist, but can't even define properly in the first place: subjective experience is here!

Consciousness and the subjective experience of pain have not gotten more mysterious the more science we discover. At worst, we understand exactly as much now as we did when we started, i.e. nothing (and neurologists would certainly argue we do understand more now). It is. Have a look at Solomonoff induction. It's not proof, but it is evidence. What makes you think that these problems are "in their very nature unsolvable by reason"? Is it because you think they are inherently mysterious?
-4Ghazzali12y
I will make a point about the progress of science in this subject and then use that to step towards a more general argument for the innate mystery of consciousness with regards to reason.

Ever since the time of the Enlightenment there has been a real movement in the West to view the world as purely mechanical/physical so that a conclusion of reason as a universal tool could be accepted. That meant the elimination from society of not just God but also the soul and other things. Ironically, it was a particular invention of science and reason that made rationalists realize the problem of eliminating all non-mechanical/physical realities from a human being: the computer. With the development of the computer it became painfully obvious that human beings were fundamentally different from any designed piece of technology. Although they could theoretically design and program a computer for all kinds of amazing functions, there is no rational model as to how to make that machine 'conscious'. It was through computers that mankind realized in the most clear and blunt way the mystery of consciousness.

So to reassert my point: from the development of computers throughout the past century into their advancement in this century, the more we progress the more we understand that consciousness does not seem to be a matter of just complexity and sophistication.

Secondly, our faculty of reason itself does not even work in the same way a computer works. A computer's mechanical structure "signals" a conclusion. The machine moves in a certain way, albeit at the tiniest levels, to signal that something is right or wrong. For us, it is understanding that makes us realize a right or wrong; it is a feeling. Even at the most fundamental level of using reason itself the mystery of consciousness is engaged and operating in a way we do not understand.
4Plasmon12y
Evidence-based citation needed (from a neurologist or computer scientist; nothing about how our own massively parallel architecture differs from the Von Neumann architecture). The more we understand of the workings of the brain, the more we can mimic it on a computer. ("Ha, but these are simple tasks! Not difficult tasks like consciousness." How convenient of you to have chosen a metric you can't even define to judge progress towards full understanding of the human brain.) And there was no such model before the development of computers either. Your unstated assumption seems to be that it is rational to expect a quick development of a "model of consciousness" (whatever that is) after the invention of the computer. If that were so, you might have a point, but, again: evidence needed. Evidence-based citation needed. Our brain runs on physics. Although there may be various as-yet-unknown algorithms running in our brain, there is no reason to assume anything non-computational is going on. Will you change your mind if/when whole brain emulation becomes feasible?
-4Ghazzali12y
Our brain is physical, no doubt, but as you can imagine I am making a claim that mind (consciousness, spirit, whatever you want to call it) is not the same as brain. There is a connection between the two, but my argument using rational judgment is that consciousness does not seem to be physical because there is no way to understand it rationally. Your point against me is what I use against you. You say I am mistaken because I cannot even define what consciousness is; I say that is precisely the point! The only way you can reply is to hold out for the view that consciousness may not even exist, so it may not be a problem in the first place. And that is a whole other issue, for if consciousness is only an illusion, that breaks down the entire human experience of reality.

Furthermore, there are other reasons why the idea of a purely physical human being without any mysterious non-physical reality is extremely problematic:

1. It would mean no free will. To deny free will is to deny rationality to begin with. How can a conclusion made by reason in turn negate reason?
2. It would deny any real morality. Fundamentally a human being would be the same as a piece of wood, except more complex.

It is the Western insistence that reason be a universal tool (and therefore reality be universally physical) that has led them to completely deny dualism. But if you recognize that reason itself is pointing towards its own limits, dualism is not that bad of a conclusion.
2CuSithBell12y
It is essentially certain that it is possible in principle to construct out of matter a thing which can feel pain, have an experience of self, etc., to the extent that these are meaningful concepts. The proof is very simple.
0Ghazzali12y
Up to this point in human history no rational or scientific model has been presented that would explain how matter could be put together to feel pain. Or feel anything for that matter. Whether it is possible or impossible to do is another conversation.
0CuSithBell12y
Sure, no one does or has ever really had a clue where consciousness comes from. What's your point? The way you're saying "no rational or scientific model" rather than "no model whatsoever" implies you think these are poor tools - do you have some alternative in mind?
0Ghazzali12y
What we know is that reason is extremely useful when applied to mechanical/material subjects. We should continue to use it in that way. We know that it has extreme difficulty in explaining and analyzing some key issues, including consciousness and all of its manifestations: pain/pleasure, emotions, imagination, and meaning in general, as well as others. Once again, this seems to be the case because consciousness itself is extremely difficult to put into mechanical/material terms. Therefore reason has a problem with it. If a tool is proficient in explaining some things but not other things, is it 'rational' to consider it a universal tool? In this way I am using reason itself to conclude that it is not a universal tool.

So your question is: what then should we use to understand consciousness, if not reason? Just as reason seems to do well in understanding things of a certain nature (mechanical/physical), we can look at consciousness and conclude from its mysteries what kind of tool is needed to give us insight into it. (Notice that I am still using reason throughout this process; it never really leaves our endeavors. We are just being honest in that we recognize something more is there that is beyond its limits.) Consciousness does not seem to be mechanical or physical in nature because we are not able to even model in theory an explanation for it. Therefore the tool to be used to understand it should have a much more mysterious/abstract nature.

Once we make that conclusion it is a whole other topic as to what that other 'tool' might be. Whatever it is, it will probably be more elusive and less universally apparent throughout the human population than reason is.