Dennett states, without presenting a single number, that the bandwidth needs for reproducing our sensory experience would be so great that it is impossible (his actual word); and that this proves that we are not brains in vats.
Maybe I was being too generous when I read this chapter, but I don't think that's what Dennett was saying. He was saying that in order for a brain-in-a-vat to work, the operator would have to anticipate every possible observation you could make, resulting in a combinatorial explosion that could not be handled by anything simpler than the universe itself.
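A toy back-of-the-envelope sketch of that combinatorial-explosion point (my numbers, invented for illustration, not Dennett's): even with modest per-moment branching, the tree of observation sequences the vat-operator would have to pre-script quickly dwarfs the number of atoms in the observable universe.

```python
# Hypothetical figures, chosen only to show the shape of the argument.
choices_per_moment = 100       # distinct glances/actions available each moment
moments = 60                   # moments in a single minute of experience

# Every sequence of choices is a branch the operator must be ready to fake.
scenarios = choices_per_moment ** moments   # 100^60 = 10^120 branches

atoms_in_universe = 10 ** 80   # common rough order-of-magnitude estimate

print(scenarios > atoms_in_universe)  # prints True
```

The exact figures don't matter; any non-trivial branching factor raised to any realistic number of moments produces the same conclusion, which is why the argument turns on combinatorics rather than on raw bandwidth.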
That ties in with his next point (that you mention) about hallucinations, and how they persist only until you make an observation that the hallucination-generator can't fake.
It wasn't about bandwidth (rate of information transfer) at all. But perhaps I should re-read it.
I think Consciousness Explained makes huge strides in "breaking the back" of the problem of consciousness, but it's not some perfectly formed jewel of a book that warrants this sort of chapter-by-chapter treatment. The problems you point out here aren't central to the way it addresses things. I think you'll get a much more useful discussion if you finish the book and then post about the biggest problems you see with its overall treatment.
I sketched a brief Overview of Dennett's book a few years ago, if that's of any interest to people here...
It's worth stressing that he's really just explaining the "soft problem" of consciousness, i.e. its informational aspect rather than its qualitative aspect. But he does have lots of interesting stuff to say about the former. (And of course lots of folks here will agree with Dennett that the third-person data of "heterophenomenology" are all that need to be explained. I'm just flagging that he doesn't say anything that'll satisfy people who don't share this assumption.)
Suggestion: stop reading, and, before you continue, write (perhaps in a comment to Mitchell's post) what you take the problems of consciousness to be; what work you expect Dennett to achieve if he is to deserve the book's title.
Or perhaps that could be a top-level "poll" type post, asking LW readers to post their framing of the issues in consciousness; we would then have some immediate data on your hypothesis that "a lot of very bright people cannot see that consciousness is a big, strange problem".
You appear to have missed the whole po...
Repeatedly on LW, I've seen one person (frequently Mitchell Porter) raise the problem of qualia; and seen otherwise-intelligent people reply by saying science has got it covered, consciousness is a property of physical systems, nothing to worry about.
It might be the case that some of us are over-confident regarding the chances qualia and consciousness can be explained as properties of certain kinds of physical systems (though I would never take even odds against science). But it seems to me a much bigger problem that someone would take the fact that no...
The suggestion that the integration of new sense-data into a model is at least partly driven by the state of the model is further supported by images with multiple interpretations (classically the Necker cube or shadows of rotating objects). Data consistent with multiple models is integrated into the currently held one. Inattentional blindness is a similar phenomenon.
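That "data consistent with multiple models gets integrated into the currently held one" can be sketched as a simple hysteresis rule (a made-up toy model, not anything from Dennett): the current interpretation is kept unless a rival explains the data decisively better.

```python
def interpret(current_model, likelihoods, switch_threshold=2.0):
    """Keep the current interpretation unless a rival model explains
    the data at least `switch_threshold` times better (hysteresis)."""
    best = max(likelihoods, key=likelihoods.get)
    if best != current_model and likelihoods[best] >= switch_threshold * likelihoods[current_model]:
        return best
    return current_model

# Necker-cube-style ambiguity: both readings fit about equally well,
# so the currently held reading wins.
print(interpret("cube-from-above",
                {"cube-from-above": 0.5, "cube-from-below": 0.55}))
# prints "cube-from-above"
```

Only when the evidence for the rival reading is overwhelming does the interpretation flip, which matches the phenomenology of bistable images: they stay stable for a while, then snap.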
... consciousness is a big, strange problem. Not intelligence, not even assigning meaning to representations, but consciousness.
Why?
Mitchell Porter hasn't explained this either. What do yo...
Re: "For some reason, a lot of very bright people cannot see that consciousness is a big, strange problem."
A dubious assertion, which you apparently don't bother backing up. If consciousness is just perception that can be reflected on, it does not seem like a very big or a very strange problem.
So: what exactly is the purported problem?
I'm a foolish layman, but the problem of consciousness seems very easy to me. Probably because I'm a foolish layman.
Qualia are simply holes in our knowledge. The qualium of redness exists because your brain doesn't record the details of the light. If you were built to feel its frequency, or the chemical composition of food and smells, you'd have qualia for those. It's also possible to have qualia for things like "the network card driver crashing, SIAI damn I hate that".
Basically, a qualium is what the algorithm feels like from the inside for a self-aware machine.
(It is my understanding that consciousness, as used here, is the state of having qualia. Correct me if I'm wrong.)
Outside view tells me Dennett didn't solve the problem of consciousness, because philosophers don't solve problems...
Any purported explanation of consciousness had better clearly identify the minimum stuff required for consciousness to exist. For example, if someone claims computer programs can be conscious, they should provide a 100-line conscious program. If it has to be some kind of quantum voodoo machine, ditto: how do we build the simplest possible one? I don't think that's setting the bar too high!
If Dennett's book contains an answer to this question, I'd like to hear it right away before wading into the philosophical gook. If it doesn't, case closed.
This "outside view abuse" is getting a little extreme. Next it will tell you that Barack Obama isn't President, because people don't become President.
I'm starting Dennett's "Consciousness Explained". Dennett says, in the introduction, that he believes he has solved the problem of consciousness. Since several people have referred to his work here with approval, I'm going to give it a go. I'm going to post chapter summaries as I read, for my own selfish benefit, so that you can point out when you disagree with my understanding of it. "D" will stand for Dennett.
If you loathe the C-word, just stop now. That's what the convenient break just below is for. You are responsible for your own wasted time if you proceed.
Chpt. 1: Prelude: How are Hallucinations Possible?
D describes the brain in a vat, and asks how we can know we aren't brains in vats. This dismays me, as it is one of those questions that distract people trying to talk about consciousness and that have nothing to do with the difficult problems of consciousness.
Dennett states, without presenting a single number, that the bandwidth needs for reproducing our sensory experience would be so great that it is impossible (his actual word); and that this proves that we are not brains in vats. Sigh.
He then asks how hallucinations are possible: "How on earth can a single brain do what teams of scientists and computer animators would find to be almost impossible?" Sigh again. This is surprising to Dennett because he believes he has just established that the bandwidth needs for consciousness are too great for any computer to provide; yet the brain sometimes (during hallucinations) provides nearly that much bandwidth. D has apparently forgotten that the brain, by definition, provides exactly that consciousness-level bandwidth of information to us all the time.
D recounts Descartes' remarkably prescient discussion of the bellpull as an analogy for how the brain could send us phantom misinformation; but dismisses it, saying, "there is no way the brain as illusionist could store and manipulate enough false information to fool an inquiring mind." Sigh. Now not only consciousness, but also dreams, are impossible. However, D then comes back to dreams, and is aware they exist and are hallucinations; so one of us is misunderstanding this section.
On p. 12 he suggests something interesting: Perception is driven both bottom-up (from the senses) and top-down (from our expectations). A hallucination could happen when the bottom-up channel is cut off. D doesn't get into data compression at all, but I think a better way to phrase this is that, given arbitrary bottom-up data, the mind can decompress sensory input into the most likely interpretation given the data and given its knowledge about the world. Internally, we should expect that high-bandwidth sensory data is summarized somewhere in a compressed form. Compressed data necessarily looks more random than prior to compression. This means that, somewhere inside the mind, we should expect it to be harder than naive introspection suggests to distinguish between true sensory data and random sensory noise. D suggests an important role for an adjustable sensitivity threshold for accepting/rejecting suggested interpretations of sense data.
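The claim that compressed data looks more random than its source (and so is harder to tell apart from noise) is easy to demonstrate (a minimal sketch of my own, using byte-level Shannon entropy as a crude stand-in for "looks random"):

```python
import math
import os
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 would be uniform random)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Highly regular "sense data": low entropy, very compressible.
structured = b"red red red green red red blue red " * 200

compressed = zlib.compress(structured, level=9)
noise = os.urandom(len(compressed))  # genuine random bytes for comparison

print(f"structured: {byte_entropy(structured):.2f} bits/byte")
print(f"compressed: {byte_entropy(compressed):.2f} bits/byte")
print(f"noise:      {byte_entropy(noise):.2f} bits/byte")
```

The compressed stream's entropy is far closer to that of the random bytes than to that of the original, which is the point: if the mind stores sense data in compressed form, then somewhere inside, true sensory data and sensory noise look statistically alike, and a decompressor fed noise will still emit a plausible-looking interpretation.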
D dismisses Freud's ideas about dreams - that they are stories about our current concerns, hidden under symbolism in order to sneak past our internal censors - by observing that we should not posit homunculi inside our brains who are smarter than we are.
[In summary, this chapter contained some bone-headed howlers, and some interesting things; but on the whole, it makes me doubt that D is going to address the problem of consciousness. He seems, instead, on a trajectory to try to explain how a brain can produce intelligent action. It sounds like he plans to talk about the architecture of human intelligence, although he does promise to address qualia in part III.
Repeatedly on LW, I've seen one person (frequently Mitchell Porter) raise the problem of qualia; and seen otherwise-intelligent people reply by saying science has got it covered, consciousness is a property of physical systems, nothing to worry about. For some reason, a lot of very bright people cannot see that consciousness is a big, strange problem. Not intelligence, not even assigning meaning to representations, but consciousness. It is a different problem. (A complete explanation of how intelligence and symbol-grounding take place in humans might concomitantly explain consciousness; it does not follow, as most people seem to think it does, that demonstrating a way to account for non-human intelligence and symbol-grounding therefore accounts for consciousness.)
Part of the problem is their theistic opponents, who hopelessly muddle intelligence, consciousness, and religion: "A computer can never write a symphony. Therefore consciousness is metaphysical; therefore I have a soul; therefore there is life after death." I think this line of reasoning has been presented to us all so often that a lot of us have cached it, to the extent that it injects itself into our own reasoning. People on LW who try to elucidate the problem of qualia inevitably get dismissed as quasi-theists, because, historically, all of the people saying things that sound similar were theists.
At this point, I suspect that Dennett has contributed to this confusion, by writing a book about intelligence and claiming not just that it's about consciousness, but that it has solved the problem. I shall see.]