I'm starting Dennett's "Consciousness Explained".  Dennett says, in the introduction, that he believes he has solved the problem of consciousness.  Since several people have referred to his work here with approval, I'm going to give it a go.  I'm going to post chapter summaries as I read, for my own selfish benefit, so that you can point out when you disagree with my understanding of it.  "D" will stand for Dennett.

If you loathe the C-word, just stop now.  That's what the convenient break just below is for.  You are responsible for your own wasted time if you proceed.

Chpt. 1: Prelude: How are Hallucinations Possible?

D describes the brain in a vat, and asks how we can know we aren't brains in vats.  This dismays me, as it is one of those questions that distract people trying to talk about consciousness while having nothing to do with its difficult problems.

Dennett states, without presenting a single number, that the bandwidth needs for reproducing our sensory experience would be so great that it is impossible (his actual word); and that this proves that we are not brains in vats.  Sigh.

He then asks how hallucinations are possible: "How on earth can a single brain do what teams of scientists and computer animators would find to be almost impossible?"  Sigh again.  This is surprising to Dennett because he believes he has just established that the bandwidth needs for consciousness are too great for any computer to provide; yet the brain sometimes (during hallucinations) provides nearly that much bandwidth.  D has apparently forgotten that the brain, by definition, provides us with exactly the bandwidth of consciousness all the time.

D recounts Descartes' remarkably prescient discussion of the bellpull as an analogy for how the brain could send us phantom misinformation; but dismisses it, saying, "there is no way the brain as illusionist could store and manipulate enough false information to fool an inquiring mind."  Sigh.  Now not only consciousness, but also dreams, are impossible.  However, D then comes back to dreams, and is aware they exist and are hallucinations; so either he or I is misunderstanding this section.

On p. 12 he suggests something interesting: Perception is driven both bottom-up (from the senses) and top-down (from our expectations).  A hallucination could happen when the bottom-up channel is cut off.  D doesn't get into data compression at all, but I think a better way to phrase this is that, given arbitrary bottom-up data, the mind can decompress sensory input into the most likely interpretation given the data and given its knowledge about the world.  Internally, we should expect that high-bandwidth sensory data is summarized somewhere in a compressed form.  Compressed data necessarily looks more random than it did before compression.  This means that, somewhere inside the mind, we should expect it to be harder than naive introspection suggests to distinguish between true sensory data and random sensory noise.  D suggests an important role for an adjustable sensitivity threshold for accepting/rejecting suggested interpretations of sense data.
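
To make the compression point concrete, here is a quick sanity check (my own sketch, standard library only; the helper byte_entropy is mine): the byte-level Shannon entropy of a highly repetitive string jumps after zlib compression, i.e., the compressed bytes look much more noise-like.

```python
# Sketch: compressed data "looks more random" than structured data.
# Measures Shannon entropy in bits per byte (8.0 = uniformly random bytes).
import zlib
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

structured = b"red red red red green red red red red blue " * 200
compressed = zlib.compress(structured)

print(f"raw:        {byte_entropy(structured):.2f} bits/byte")  # ~2.5: few, skewed symbols
print(f"compressed: {byte_entropy(compressed):.2f} bits/byte")  # noticeably higher, nearer the 8-bit ceiling
```

The direction of the effect is all that matters here: the closer a compressor gets to the entropy limit, the less statistical structure survives in its output, so a downstream observer of the compressed form has a harder time telling signal from noise.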

D dismisses Freud's ideas about dreams - that they are stories about our current concerns, hidden under symbolism in order to sneak past our internal censors - by observing that we should not posit homunculi inside our brains who are smarter than we are.

[In summary, this chapter contained some bone-headed howlers, and some interesting things; but on the whole, it makes me doubt that D is going to address the problem of consciousness.  He seems, instead, on a trajectory to try to explain how a brain can produce intelligent action.  It sounds like he plans to talk about the architecture of human intelligence, although he does promise to address qualia in part III.

Repeatedly on LW, I've seen one person (frequently Mitchell Porter) raise the problem of qualia; and seen otherwise-intelligent people reply by saying science has got it covered, consciousness is a property of physical systems, nothing to worry about.  For some reason, a lot of very bright people cannot see that consciousness is a big, strange problem.  Not intelligence, not even assigning meaning to representations, but consciousness.  It is a different problem.  (A complete explanation of how intelligence and symbol-grounding take place in humans might concomitantly explain consciousness; it does not follow, as most people seem to think it does, that demonstrating a way to account for non-human intelligence and symbol-grounding therefore accounts for consciousness.)

Part of the problem is their theistic opponents, who hopelessly muddle intelligence, consciousness, and religion:  "A computer can never write a symphony.  Therefore consciousness is metaphysical; therefore I have a soul; therefore there is life after death."  I think this line of reasoning has been presented to us all so often that a lot of us have cached it, to the extent that it injects itself into our own reasoning.  People on LW who try to elucidate the problem of qualia inevitably get dismissed as quasi-theists, because, historically, all of the people saying things that sound similar were theists.

At this point, I suspect that Dennett has contributed to this confusion, by writing a book about intelligence and claiming not just that it's about consciousness, but that it has solved the problem.  I shall see.]


Dennett states, without presenting a single number, that the bandwidth needs for reproducing our sensory experience would be so great that it is impossible (his actual word); and that this proves that we are not brains in vats.

Maybe I was being too generous when I read this chapter, but I don't think that's what Dennett was saying. He was saying that in order for a brain-in-a-vat to work, the operator would have to anticipate every possible observation you could make, resulting in a combinatorial explosion that could not be handled by anything simpler than the universe itself.
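
To put an illustrative number on that combinatorial explosion (my arithmetic, not Dennett's, and the parameters are invented): even a wildly impoverished observer who faces only one binary choice per second buries a branch-pre-computing operator almost immediately.

```python
# Toy numbers: branches the vat operator must pre-compute if the subject
# makes one of 2 possible observations per second, for about 4.5 minutes.
branching = 2
moments = 266
print(f"branches: {branching ** moments:.3e}")  # ~1.2e80
# That is roughly the commonly cited count of atoms in the observable
# universe -- after a few minutes of the simplest possible choices.
```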

That ties in with his next point (that you mention) about hallucinations, and how they persist only until you make an observation that the hallucination-generator can't fake.

It wasn't about bandwidth (rate of information transfer) at all. But perhaps I should re-read it.

3thomblake14y
I agree with your reading of Dennett.
3Tyrrell_McAllister14y
I just took a look at the prelude. I think that your interpretation is right. However, Dennett does use the word "bandwidth", which might not have been the best choice. I can see why it would lead a reader to think that Dennett was talking about channel capacity.
0Peterdjones12y
Assuming the simulation is running in our universe. However, the Simulators could be fooling the Brains about the size of the universe. Maybe what the Brains think the universe is, is a tiny corner of theirs. Equally, they could be fooling the Brains about the capacities of computers, even the fundamentals of computer science. See, scepticism is a Universal Solvent [*]. Once you accept it, all bets are off. [*] D. C. Dennett.

I think Consciousness Explained makes huge strides in "breaking the back" of the problem of consciousness, but it's not some perfectly formed jewel of a book that warrants this sort of chapter-by-chapter treatment. The problems you point out here aren't central to the way it addresses things. I think you'll get a much more useful discussion if you finish the book and then post about the biggest problems you see with its overall treatment.

I sketched a brief Overview of Dennett's book a few years ago, if that's of any interest to people here...

It's worth stressing that he's really just explaining the "soft problem" of consciousness, i.e. its informational aspect rather than its qualitative aspect. But he does have lots of interesting stuff to say about the former. (And of course lots of folks here will agree with Dennett that the third personal data of "heterophenomenology" are all that need to be explained. I'm just flagging that he doesn't say anything that'll satisfy people who don't share this assumption.)

1Paul Crowley14y
"position" rather than "assumption", I think. He doesn't just assume it, he works hard to justify it.
0PhilGoetz14y
Thanks! Lots of interesting stuff there. It does sound like this book isn't going to be useful in helping me talk about qualia; but that it would have been useful to help me think about intelligent agent architectures back when it came out in 1991.
1RobinZ14y
I've linked this a few times here, but Dennett's specific essay on qualia is "Quining Qualia", and available online.

Suggestion: stop reading, and, before you continue, write (perhaps in a comment to Mitchell's post) what you take the problems of consciousness to be; what work you expect Dennett to achieve if he is to deserve the book's title.

Or perhaps that could be a top-level "poll" type post, asking LW readers to post their framing of the issues in consciousness; we would then have some immediate data on your hypothesis that "a lot of very bright people cannot see that consciousness is a big, strange problem".

You appear to have missed the whole po... (read more)

0Paul Crowley14y
There's one step in the book that I come back to over and over again, that I have so far never got a hard-problemer to directly address: the idea of heterophenomenology. If you follow this advice, then when you come to write your comment on what the problem of consciousness is, consider whether you have to directly and explicitly appeal to a shared experience of consciousness, or whether you can do it by referring to what we say about consciousness, which is observable from the outside.
3Morendil14y
My understanding of Dennett's heterophenomenology has benefited from comparing it with Pickering/Latour and the STS folks' approach, which rests on reconciling two positions that initially seem at odds with each other:

* we commit to taking seriously the first-person accounts of, respectively, "what it is like to be a conscious person" and "what it is like to advance scientific knowledge";
* we decline in both cases to take these accounts at face value; that is, we assert that our position as an outside observer is no less privileged than our "inside" interlocutor's; we seek to explain why people say what they say about how they come to have certain forms of knowledge, without assuming their reports are infallible.

When investigating inattentional blindness, this goes roughly as follows: we show a subject a short video of basketball players in the street after giving them brief instructions. Then we ask them afterwards, "What did you consciously see for the past few minutes?" They are likely to say that they were consciously observing a street scene during that time. But it turns out that we, the investigators, know something about the video which leads us to doubt the subject's report about what they were conscious of. (I don't want to spoil anything for those who haven't seen the video yet, but I assume many people know what I'm talking about. If you don't, go see the video.)

As far as I can tell, a large number of "problems of consciousness" fall into this category; people's self-reports of what it is like to be a conscious person conflict with what various clever experiments indicate about what it actually is like to be a conscious person. They also conflict with our intuitions obtained from physical theories. For instance we can poll people on whether an atom-for-atom copy of themselves would be "the same person", and notice that most people say "no way, because there can only be one of me". To explain consciousness is to explain why people feel…
0AndyWood14y
Heterophenomenology is neat, tidy, and wonderful for doing science about a whole bunch of questions about inner sensation. It's great as far as it goes. Some of us just don't think it goes to the finish line, and are deeply dissatisfied with the attitude that seems to suggest that it is our "scientific duty" to abandon the question of how the brain generates characteristic inner sensations, on the grounds that we can't directly access such things from the outside. I believe that future discovery and insight will show that view (assuming I am even ascribing it correctly in the first place) to be short-sighted.
0Paul Crowley14y
Heterophenomenology does tackle that question, just at one remove - it attempts to account for your reports of those inner sensations.
5AndyWood14y
Again, that is useful in its own right, but the indirection changes the question, and so it is not an answer. Accounting for reports of sensations is not conceptually problematic. It's easy to imagine making the same kinds of explanations for the utterances of a simpler, unconscious machine. Except that there are certain utterances I would not expect the machine to make. E.g., assuming it was not designed with trickery in mind, I would not expect the machine to insist that it had a tangible, first-person, inner experience. Explaining the utterances does not explain the actual mechanism that I'm talking about when I insist that I have that experience. I'm not interested in why I say it, I'm interested in what it is that the brain is doing to produce that experience. If it's a bidirectional feedback loop involving my body, itself, and its environment (or whatever), then I want to know that. And I want to know whether one can construct such a feedback loop and have it experience the same effect. Please note that I am not making the standard zombie argument. I'm suggesting that humans and animals must have an extra physical - not metaphysical, not extra-physical - component that produces our first-person experience. I want to know how that works, and I currently do not accept that that question is either meaningless, or unanswerable in principle.
2Paul Crowley14y
This is precisely the point! Why not? Why, once we've explained why you sincerely insist you have that experience, do you assume there's more to explain?
5AndyWood14y
For certain senses of the word "why" in that sentence, which do not "explain away" the experience, there might not be more to explain. From reading Dennett, I have not yet got the sense that he, at least, ever means to answer "why" non-trivially. Trivially, I already know why I insist - it's because I have subjective experience. I can sit here in silence all day and experience all kinds of non-verbal assurances that this is so - textures, tastes, colors, shapes, spatial relationships, sounds, etc. Whatever systems in my brain register these, and register the registering, interact with the systems that produce beliefs, speech, and so forth. What I'm looking for, and what I suspect a lot of people who posit a "hard problem" are really looking for, is more detail on how the registration works. Dennett's "multiple drafts" model might be a good start, for all I know, but it leaves me wanting more. Not wanting a so-called Cartesian Theater - just wanting more explanation of the sort that might be very vaguely analogous to how an electromagnetic speaker produces sound waves. Frankly, I find it very difficult even to think of a proper analogy. At any rate, I'm happy to wait until someone figures it out, but in the meantime I object to philosophies that imply there is nothing left to figure out.
0RobinZ14y
Which, to his credit, Dennett does not imply (at least, not in Consciousness Explained).
-3Richard_Kennaway14y
It does so in terms making no reference to those inner sensations. Heterophenomenology is a lot more than the idea that first-person reports of inner experience are something to be explained, rather than taken as direct reports of the truth. It -- Dennett -- requires that such reports be explained without reference to inner experience. Heterophenomenology is the view that we are all p-zombies. It avoids the argument that a distinction between conscious beings and p-zombies makes no sense, by denying that there are conscious beings. There is no inner experience to be explained. Zombie World is this world. Consciousness is not extra-physical, but non-existent. It is consciousness that is absurd, not p-zombies. You do not exist. I do not exist. There are no persons, no selves, no experiences. There are reports of these things, but nothing that they are reports about. In such reports nothing is true, all is a lie. Physics revealed the universe to be meaningless. Biology and palaeontology revealed our creation to be meaningless. Now, neuroscience reveals that we are meaningless. Such, at any rate, is my understanding of Dennett's book.
1Paul Crowley14y
This is the exact opposite of my understanding, which is that heterophenomenology itself sets out only what it is that is to be accounted for and is entirely neutral on what the account might be.
1pdf23ds14y
Sure. Doesn't follow. Heterophenomenology can be seen as simply a first, more tractable step on the way to solving the hard problem. Perhaps others would agree with your statement, but I don't believe Dennett would.
0Morendil14y
A flawed understanding, then. Dennett certainly does not deny the existence of selves, or of persons. What he does assert is that "self" is something of a different category from the primary elements of our current physics' ontology (particles, etc.). His analogy is to a "center of gravity" - a notional object, but "real" in the sense that what you take it to be definitely makes a difference in what you predict will happen.
0whpearson14y
The trouble comes in when we start putting a utility on pleasure and pain. For example, let's say you were given a programmatic description of a less-than-100%-faithful simulation of a human and asked to assess whether it would have (or would report it had) pain, without you running it. Your answer would determine whether it was used in a world simulation.
2Paul Crowley14y
Proposing a change in physics to make your utility function more intuitive seems like a serious mis-step to me.
2whpearson14y
I'm just identifying the problem. I have no preferred solution at this point. ETA: Altering physics is one possible solution. I'd wait on proposing a change to physics until we have a more concrete theory of intelligence and how human type systems are built. I think we can still push computers to be more like human-style systems. So I'm reserving judgement until we have those types of systems.
-1DanArmak14y
Whatever we say is explicable in terms of brain physics. It is enough to postulate a p-zombie-like world to explain what we say. If we didn't experience consciousness directly, the very idea (edit: that is, of p-zombies) would never have occurred to us. Therefore I don't see why anyone would want or need to discuss consciousness in terms of outside observations.
4Paul Crowley14y
The fact that the idea occurred to us is observable from the outside - that's pretty much the central insight behind heterophenomenology. An external observer could see, for example, this entire thread of discussion, and conclude that we've come up with an idea we call "consciousness" and some of us discuss it lots. And that's definitely an observation that any worthwhile theory has to account for; it's completely copper-bottomed objective truth. If you haven't already, have a look at the sequence on zombies, especially the first couple of articles.
0DanArmak14y
You may have misinterpreted my comment. I meant that if we didn't experience consciousness directly, the idea of p-zombies would not have occurred to us.
0Paul Crowley14y
I did misinterpret it, but it doesn't matter because the response is almost exactly the same. The fact that the idea of p-zombies occurred to us is also observable from the outside, and is therefore copper-bottomed evidence that heterophenomenology takes into account. If you can state the problem based on what we observe from the outside, it moves us from a rather slippery world of questions that are hard to pin down to a very straightforward hard-edged question about "does your theory predict what we observe?". And I'm not asking whether you think this is a necessary move - I'm asking whether it's sufficient - whether you're still able to state the problem relying only on what you can observe from the outside.
1DanArmak14y
The fundamental premise of consciousness (in its usual definitions) is, indeed, something that by definition cannot be observed from the outside. Yes, this involves a lot of problems like p-zombies. But once you prove (or assume) that you can handle the problem purely from the outside, then you've effectively solved (or destroyed) the problem, congratulations. Hard Problem Of Consciousness tl;dr: "It feels like there's something on the inside that cannot be observed from the outside..."
-1Paul Crowley14y
Not only cannot be observed from the outside, but has no observable consequences whatsoever? This whole thread isn't a consequence of consciousness? Could you confirm for me that you mean to bite that bullet? It's starting to look like I or someone should do a top level article explicitly on this subject, but in the meantime, you might be interested in Dennett's Who's On First?. EDIT: probably too late, but requesting downvote explanation - thanks!
1DanArmak14y
Just to make clear, this isn't my view. I'm explaining the views of other people who think "consciousness" is an extra-physical phenomenon. I started by pointing out the necessary consequences of that position, but it's not my own position. (I said this in other threads on the subject but not here, I see now.) And yes, people who postulate extra-physical consciousness have, AFAICS, to bite this bullet. If consciousness is at all extra-physical, then it is completely extra-physical, and is not the cause of any physical event. On the other hand, if consciousness were the name of some ordinary, physical pattern, then it wouldn't help explain the subjective experience that forms the "Hard Problem of Consciousness."

Repeatedly on LW, I've seen one person (frequently Mitchell Porter) raise the problem of qualia; and seen otherwise-intelligent people reply by saying science has got it covered, consciousness is a property of physical systems, nothing to worry about.

It might be the case that some of us are over-confident regarding the chances qualia and consciousness can be explained as properties of certain kinds of physical systems (though I would never take even odds against science). But it seems to me a much bigger problem that someone would take the fact that no... (read more)

The suggestion that the integration of new sense-data into a model is at least partly driven by the state of the model is further supported by images with multiple interpretations (classically the Necker cube or shadows of rotating objects).  Data consistent with multiple models is integrated into the currently held one.  Inattentional blindness is a similar phenomenon.

... consciousness is a big, strange problem. Not intelligence, not even assigning meaning to representations, but consciousness.

Why?

Mitchell Porter hasn't explained this either. What do yo... (read more)

0PhilGoetz14y
It would be a little bit analogous to a prime mover argument if we actually were the prime movers.

Re: "For some reason, a lot of very bright people cannot see that consciousness is a big, strange problem."

A dubious assertion, which you apparently don't bother backing up. If consciousness is just perception that can be reflected on, it does not seem like a very big or a very strange problem.

So: what exactly is the purported problem?

3PhilGoetz14y
The problem is how matter can have self-awareness. It's hard to describe in words, because all of the words to describe this (consciousness, feeling, awareness) have also been (ab)used to describe the non-mysterious processes that enable an organism to act in the same way as one that we believe has consciousness, feeling, awareness.

You can say you're a functionalist, and you believe that a system that accurately reproduces all the same observable behavior of consciousness necessarily will also reproduce consciousness. Supposing that were so, it still wouldn't explain consciousness. I think functionalism is the claim that consciousness is not epiphenomenal.

Suppose functionalism is false, and something that behaves like a conscious system is not necessarily conscious. This would mean that a conscious system possessed some extra quality, "consciousness", which is not a behavior and is not observable. Hence, epiphenomenal.

Alternately, people could mean by functionalism that anything that reproduces all the behavior of a conscious system that we are currently capable of observing (or at least theorizing about, having the necessary concepts in our physics) is necessarily conscious. But that would be silly; it would be equivalent to the assertion that today's physics is complete.
1MichaelVassar14y
It's very reasonable to claim that epiphenomenalism is not just false but incoherent.
-1PhilGoetz14y
Is that assuming that you don't believe in free will?
0anonym14y
Epiphenomenalism and functionalism are conceptually independent -- at least, there is no obvious relation between them such that one would imply the other. I've also never heard functionalism as the claim that "consciousness is not epiphenomenal", despite having heard at least 25 different authors use the term. A standard formulation, from the Stanford Encyclopedia of Philosophy, is that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.

Your usage of epiphenomenal is also too imprecise. It means not that some system has some extra quality, but that there are two systems operating in parallel or in something analogous to a parallel manner, such that there is a base phenomenon that causes the epiphenomenon but that is never in turn influenced by the epiphenomenon. A good example of an epiphenomenon would be Plato's Allegory of the Cave: the people on the walkway, and the light, are the causes (i.e., base phenomenon) of the parallel system of shadows on the wall (i.e., epiphenomenon) that the prisoners see and take as real agents, but the shadow system has no influence at all on the underlying real system of the people on the walkway (assume they are unaware and have never actually seen the shadows behind them).
1DanArmak14y
Consciousness is (purportedly) the property of being able to perceive anything (not just itself). The property of having subjective experience. Most people claim to have such themselves, and that opens the question of what this property actually is and what other things may have it. Which is indeed a "big and strange" question, if you define this property as being extra-physical - which is inherent in most discussions of subjective experience. To deny consciousness as a problem, you need to deny the existence of your own subjective experience. Not just your objectively existing self-reporting of it, but the actual subjective experience that you feel and that I can't even in principle check if you feel. On the other hand, this consciousness must be logically necessary (otherwise we get p-zombies) and cannot causally influence the objective universe (otherwise we get dualism and our physical theories are all wrong).
3timtyler14y
Re: "Consciousness is (purportedly) the property of being able to perceive anything (not just itself)." Sleepwalkers perceive things (they must to be able to walk and balance). However, they are not conscious. Also, there is "Unconscious Perception". ...and we already have a word for perception. It's "perception". So: consciousness is best not being defined that way.
2DanArmak14y
How do you know? Maybe they (and dreamers) are conscious, to a degree, they just don't form memories.
1gwern14y
And don't forget lucid dreamers can remember and carry out actions with their eyes.
0DanArmak14y
A good point. That certainly looks like a central component of consciousness (whatever that is) that's absent from sleepwalking.
-5timtyler14y
1timtyler14y
Re: "To deny consciousness as a problem, you need to deny the existence of your own subjective experience." That just sounds like nonsense to me :-(
1DanArmak14y
Keeping in mind that I'm explaining a view with which I don't fully agree (but I don't hold to an alternative view either, I just don't fully understand the matter) - I'll try to reformulate. We have subjective experience. It does not seem to be describe-able in ordinary physical terms, or to arise from theories of the physical world, because these theories don't have any place for "experience" or "feeling" as seen from the inside - only as seen from the outside. What the experience is about, the events and information we experience, is part of the physical world. Aboutness is fully explained as part of the physical world. What's not explained is why we feel at all. Why does an algorithm feel like something from the inside? Why does it have an "inside"? I have feelings, experiences, etc. The question being asked isn't even why I have them. It's more like, what are they? What ontological kind do they have?
-3timtyler14y
Re: "We have subjective experience. It does not seem to be describe-able in ordinary physical terms" ...but it must be - everything in the universe is. Re: "or to arise from theories of the physical world, because these theories don't have any place for "experience" or "feeling" as seen from the inside" So what? They don't have the notion of "fractal drainage patterns" or "screw dislocations" either. Complex systems have emergent properties, not obviously related to physical laws - but still ultimately the product of those laws. Feelings are patterns - and like all patterns, are made of information.
1DanArmak14y
Is that an observed fact, or a definition of "everything in the universe"? If a fact, a rule that has held so far, then (some people claim that) consciousness is an observation that contradicts this rule. If a definition, then perhaps consciousness can also be said to be "in" the universe, but that doesn't help us understand it...

Anyway, I don't think I have any more to contribute to this discussion. I fully understand your position. I think I also understand the position of at least some people who claim that consciousness is a real, but extra-physical, thing to be explained (like Mitchell_Porter?). So I've tried to explain the latter viewpoint. But I ended up going in circles, because this idea rests on everyone agreeing that their subjective experiences indicate that such an "extra-physical consciousness" exists, and the moment someone doesn't accept this premise - like you - the discussion is pretty much over.

I'm ambivalent myself: I can understand what the "pro-consciousness" people mean, and I might accept their claims if they could answer all the resulting questions, which they don't. So I see a possible unresolved problem. On the other hand, it's likely that if I hadn't encountered this idea of consciousness I would never have come up with it myself, all talk of "immediate subjective knowledge" notwithstanding. That's why I'm not sure there is a problem.
1timtyler14y
Re: "If a fact, a rule that has held so far, then (some people claim that) consciousness is an observation that contradicts this rule." Right - but those people have no convincing evidence. If there was some mysterious meta-physical do-dah out there, we should expect to see some evidence. Until there is evidence, the hypothesis is not favoured by Occam's razor. The advocates can look for evidence, and the sceptics can think they are crazy - but until some actual evidence is found, there's not much for people like me to discuss. The hypothesis is about as near to dead as it can get.
1DanArmak14y
As I said: the hypothesis relies entirely on everyone agreeing that they, too, sense this mysterious thing inside them (or identical with them, or whatever). Until new evidence or argument is brought forward, I'll continue treating it as a cultural mental artifact. But I do feel somewhat sympathetic towards attempts at creating such new arguments without using new evidence.
1timtyler14y
Why would a subjective experience cause people to think they know more about physics than physicists do? Subjective experiences are an especially poor quality form of evidence.
0DanArmak14y
Subjective experience is immediate. You can't ignore or deny its existence (although you may think there's nothing unexplained or mysterious about it). When people consider that physics doesn't explain their subjective experience (whether or not these people fully understand physics), they therefore feel they have no choice but to conclude that the physics, or the physical ontology, is incomplete.
1timtyler14y
In a computable universe, you can make agents experience literally anything. No amount of zen moments would add up to reasonable evidence. What would be more convincing is if brains demonstrably did something that violated the known laws of physics. Much like Penrose thought they did, IOW. Then we would have to poke around in search of what was going on. However, there seems to be no hint of that.
1pdf23ds14y
Sure you can, if you're an epiphenomenalist. (Am I right that you've been advocating that position, though you may not hold it?) A conscious being could sincerely deny experiencing consciousness. Such a being wouldn't be a normal human, though possibly a brain-damaged human. At any rate, they surely exist in mind-space. Likewise, an unconscious being could claim to experience consciousness (i.e., a p-zombie). It would seem that heterophenomenology, as ciphergoth has been advocating it, is incompatible with epiphenomenalism. I suspect that there might be some sort of personality disposition to be more or less willing to claim to experience experience, to feel the immediacy of consciousness. Something analogous to the conservative/liberal divide. If that's true, then making claims like the quoted one is just the mind projection fallacy.
1DanArmak14y
Yes, that's more or less what I've been advocating. (The funny thing is that I don't even have a clear position of my own...) Regarding consciousness without experience, in what sense is it consciousness then? I'd call it an unconscious but highly intelligent agent - perhaps the AIs we'll build will be such. A very good idea, and a possible explanation for many disagreements. It'd be just like the known cases of people disagreeing about whether thinking necessarily involved visual mental images, or whether human thinking necessarily involves "talking to oneself" using sound processing circuitry. The "experience experience" of those who do report it still has to be explained to their satisfaction. Those who don't experience it as vividly just tend to shrug it off as not important or not real or a cultural delusion of some sort.
-1PhilGoetz14y
Perhaps you are a zombie.
6timtyler14y
This is not the first time that the qualiaphiles have used that put-down on me.
0[anonymous]14y
It seems to me that zombie is tongue-in-cheek, while qualiaphile is a calculated rhetorical put-down.

10[anonymous]14y
I'm a foolish layman, but the problem of consciousness seems very easy to me. Probably because I'm a foolish layman.

Qualia are simply holes in our knowledge. The qualium of redness exists because your brain doesn't record the details of the light. If you were built to feel its frequency, or the chemical composition of food and smells, you'd have qualia for those. It's also possible to have qualia for things like "the network card driver crashing, SIAI damn I hate that".

I don't see why you need to think further than that. Basically, a qualium is what the algorithm feels like from the inside for a self-aware machine.

(It is my understanding that consciousness, as used here, is the state of having qualia. Correct me if I'm wrong.)

0Sideways14y
Your eyes do detect the frequency of light, your nose does detect the chemical composition of smells, and your tongue does detect the chemical composition of food. That's exactly what the senses of sight, smell, and taste do. Our brains then interpret the data from our eyes, noses, and tongues as color, scent, and flavor.

It's possible to 'decode', e.g., color into a number (the frequency of light), and vice versa; you can find charts on the internet that match frequency/wavelength numbers to color. Decoding taste and scent data into the molecules that produce them is more difficult, but people find ways to do it--that's how artificial flavorings are made.

There are lots of different ways to encode data, and some of them are more useful in some situations, but none of them are strictly privileged. A non-human brain could experience the 'color' of light as a number that just happens to correspond to its frequency in oscillations/second, but that wouldn't prevent it from having qualia, any more than encoding numbers into hexadecimal prevents you from doing addition.

So it's not the 'redness' of light that's a quale; 'red' is just a code word for 'wavelength 635-700 nanometers.' The qualia of redness are the associations, connections, emotional responses that your brain attaches to the plain sensory experience.
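
A toy illustration of that 'decoding' point (my own sketch; the band boundaries are rough conventions, not exact physical facts):

```python
# Map a visible wavelength in nanometers to a conventional English color word.
# Boundaries are approximate; real color perception is far messier.
def wavelength_to_name(nm):
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for lo, hi, name in bands:
        if lo <= nm < hi:
            return name
    return "outside the visible spectrum"

print(wavelength_to_name(650))  # red
print(wavelength_to_name(520))  # green
```

Nothing in the lookup direction is privileged: a system could just as well run it in reverse and report "650" where we report "red", which is exactly why the table itself carries no qualia.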
4mattnewport14y
The human experience of colour is not really about recognizing a specific wavelength of light. We've discussed this before here. Our rods and cones are sensitive to the wavelength of light but the qualia of colour are associated more with the invariant surface properties of objects than they are with invariant wavelengths of light.
0Sideways14y
True, but irrelevant to the subject at hand. No, the qualia of color have nothing to do with the observed object. This is the pons asinorum of qualia. The experience of color is a product of the invariant surface properties of objects; the qualia of color are a product of the relationship between that experience and other similar experiences. A human looking at an optical illusion might say, "That looks red, but it's really white," acknowledging that spectral color is objective, but psychophysical color is more malleable. But compare that sentence to "that sounds good, but it's really bad." Statements about color aren't entirely subjective--to some extent they're about fact, not opinion. Statements about qualia are about the subjective aspect of an experience: e.g., red is the color of rage; of love; the color that means 'stop.'

Outside view tells me Dennett didn't solve the problem of consciousness, because philosophers don't solve problems...

Any purported explanation of consciousness had better clearly identify the minimum stuff required for consciousness to exist. For example, if someone claims computer programs can be conscious, they should provide a 100-line conscious program. If it has to be some kind of quantum voodoo machine, ditto: how do we build the simplest possible one? I don't think that's setting the bar too high!

If Dennett's book contains an answer to this question, I'd like to hear it right away before wading into the philosophical gook. If it doesn't, case closed.

This "outside view abuse" is getting a little extreme. Next it will tell you that Barack Obama isn't President, because people don't become President.

3Joanna Morningstar14y
Being No One, Metzinger. Review and overview here. Precis here. It's heavy cognitive neurology, but it does attempt to find minimal sets of properties needed for subjectivity and consciousness. It also suggests that the fundamental problem in monist/dualist debates is that the processes of cognition are invisible to self-inspection.
0pjeby14y
I started in on the precis, but a serious problem with his first three constraints popped up for me right away: a thermostat implements "minimal consciousness" by those rules, as it has a global world-model that cannot be seen by the thermostat to be a world model.

I don't see this as a problem with the ideas presented, mind you; it's more of a problem in the statement of the constraints. I think that what he meant was to require that a conscious system have a subsystem which can selectively observe a limited subset of a nonconscious model of the world. (In which case a thermostat would fail, since it has only a single, non-reflective level of modeling.)

Much of the precis (or at least the 20% I got through before getting tired of wading through vague and ambiguous language full of mind-projections) seems to have similar problems. It's definitely not an implementation specification for consciousness, as far as I can tell, but at the same time I have found little fault with what the author appears to be pointing towards. The answers given seem vaguely helpful, but tend to raise new questions.
0gwern14y
That seems like a good minimal case. This has to be the closest there is to no consciousness at all; your 'selective' would seem to exclude many lower animals. It might be better to think of minimal as being unconscious - a dog has no choice but to react mentally to a whistle, say, but neither does the thermostat have a choice.
3pjeby14y
Actually, it does have a choice; dogs can be trained to ignore stimuli, and you can only be trained to do something that you can do anyway. Either that, or humans also have no choice but to "react mentally", and the distinction is meaningless. Either way, "choice" is less meaningful than "selection" - we can argue how much choice there is in the selection later. In fact, the mere fact of selectivity means there's always something not being "reacted to mentally" by the "observer" of the model. Whether this selectivity has anything to do with choice is another matter. I can direct where my attention goes, but I can also feel it "drawn" to things, so clearly, selectivity is a mixed bag with respect to choice.
0gwern14y
It seems we disagree on what 'reacting mentally' is - I'd say a dog so trained may be an organism too high up on the power/consciousness scale (surely something lower than a dog - lower than gerbils or rats even - is where we ought to be looking), and that even if it is not taking any physical actions, its mind is reacting (it knows about it) while humans truly can 'tune out' stimuli. But an example may help. What would you have to add to a thermostat to make it non-'minimal', do you think? Another gauge, like a humidity gauge, which has no electrical connection to the binary output circuit?
3pjeby14y
We seem to be talking past each other; AFAIK the ability to have selective attention to components of a perceptual model is present in all the vertebrates, and probably anything else worthy of being considered to have a brain at all.

No, in order to have selective attention you'd need something that could, say, choose which of six thermal input sensors to "pay attention to" (i.e., use to drive outputs) based on which sensor had more "interesting" data. I'm not sure what to add to give it a self-model - unless it was something like an efficiency score, or various statistics about how it's been paying attention, and allow the attention system to use that as part of its attention-selection and output.

Anyway, my point was that the language of the model in the Being No One precis is sufficiently vague to allow quite trivial mechanical systems to pass as "minimally conscious"... and then too hand-wavy to specify how to get past that point. I.e., I think that the self-model concept is too much of an intuitive projection, and not sufficiently reduced. In other words, I think it's provocative but thoroughly unsatisfying. (I also think you're doing a similar intuitive anthropomorphic projection on the notions of "reacting mentally" and "tune out", which would explain our difficulty in communicating.)
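
To make that concrete, here is a minimal sketch of the six-sensor idea (my construction, not a spec from the precis; "interesting" is modeled, purely illustratively, as deviation from a sensor's own running average):

```python
# Selective attention over six thermal sensors: only the sensor whose
# reading is most surprising (relative to its own history) drives the output.
class AttentiveThermostat:
    def __init__(self, n_sensors=6, setpoint=20.0):
        self.setpoint = setpoint
        self.averages = [setpoint] * n_sensors  # per-sensor running average

    def step(self, readings):
        # "Interest" = surprise relative to each sensor's own history.
        surprise = [abs(r - a) for r, a in zip(readings, self.averages)]
        focus = surprise.index(max(surprise))
        # Slowly update every sensor's running average.
        self.averages = [0.9 * a + 0.1 * r
                         for a, r in zip(self.averages, readings)]
        # Output is driven by the attended sensor alone.
        return focus, "heat on" if readings[focus] < self.setpoint else "heat off"

t = AttentiveThermostat()
print(t.step([20.0, 20.1, 19.9, 20.0, 12.0, 20.0]))  # -> (4, 'heat on')
```

Even this fails the self-model test, of course: it selects among sensors, but keeps no statistics about its own attending that it could then attend to.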
0Joanna Morningstar14y
The precis is, by its nature, shorter than it should be; the book gives more precise definitions and gives a defence of that set of constraints over others. I don't have the book on hand at the moment, as it's in the university library. The book itself is more concerned with the neurology; the precis is more a quick overview of claimed results for other philosophers.
-9cousin_it14y
0DanArmak14y
I think Dennett is more into showing that the naive view of consciousness is inconsistent and that "being conscious" is not a legit property of things.
0PhilGoetz14y
What does that mean?
0DanArmak14y
I meant that perhaps consciousness cannot be consistently and meaningfully defined as a property of things, so as to enable us to say: a man is conscious, a rock is not. What is consciousness, anyway? It comes to something when we need a whole book (Consciousness Explained) to tell us what a word means, instead of a simple definition. And even then we don't agree. I certainly sympathize with those who'd prefer to abolish the whole idea of consciousness, instead.
0timtyler14y
Why do you say "philosophers don't solve problems"? That seems rather harsh!
3cousin_it14y
I can't name offhand any important problem that philosophers posed and other philosophers later solved. From Zeno's paradox to Newcomb's problem, solutions always seem to come from other fields.
6Morendil14y
Noticing a problem seems an important contribution to solving it.
3Technologos14y
Agreed, and a lot of modern fields, including many of the natural sciences and social sciences, derive from philosophers' framework-establishing questions. The trick is that we then consider the fields therein derived as solving the original questions, rather than philosophy. Philosophy doesn't really solve questions in itself; instead, it allows others to solve them.
1gwern14y
--Wittgenstein
3timtyler14y
Take David Hume's correct refutation of the design argument, for example: http://en.wikipedia.org/wiki/David_Hume#The_design_argument This argument is still used today - though we know a bit more about the subject now.
0DanArmak14y
Refute it then.
5timtyler14y
http://www.philosophyetc.net/2008/02/examples-of-solved-philosophy.html ...has one guy's list. One might also point to the philosophy of science (Popper, Kuhn, Hull) to see philosophers making definite progress on the problems in their field.
0ideclarecrockerrules14y
Most of his other points rely on loose definitions, IMO ("rational", "justified", "selfish", "cat"), but this one seems plainly wrong to me, as he seems to attach the same meaning to the word "evidence" as LW does (although not that formal). I'm not saying philosophers do not contribute to problem-solving, far from it. It may be that he is wrong and this is not "at least as well-established as most scientific results" in philosophy. It may also be that a significant amount of philosophers disregard (or have no knowledge of) Bayesian inference.
0timtyler14y
http://www.philosophyetc.net/2005/09/raven-paradox-essay.html Fair enough, I think. I too would generally regard observations of black ravens as being weak evidence that all ravens are black.
0ideclarecrockerrules14y
Weak evidence, but evidence nonetheless. I read the essay again, and it appears that what the author means is that there exists a case where observing a black raven is not evidence that all ravens are black; the case he specified is one where the raven is picked from a population already known to consist of black ravens only. In some sense, he is correct. Then again, this is not a new observation. He does present a case where observing a red herring constitutes weak probabilistic evidence that all ravens are black. So, my disagreement comes from my misinterpretation of the word "may".
0Daniel_Burfoot14y
I would find this list more convincing if the author weren't himself a philosopher. I agree that the philosophy of science is a different category entirely. I would also suggest that the current sorry state of AI is due primarily to limitations in our current understanding of scientific philosophy (as opposed to limitations of our mathematical or neurological understanding).