B_For_Bandana comments on Rationality Quotes June 2013 - Less Wrong
From the remarkable opening chapter of Consciousness Explained:
--Daniel Dennett
I'm someone who still finds subjective experience mysterious, and I'd like to fix that. Does that book provide a good, gut-level, question-dissolving explanation?
I've had that conversation with a few people over the years, and I conclude that it does for some people and not others. The ones for whom it doesn't generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It's not entirely clear to me what question they think he answers instead.)
That said, it's a pretty fun read. If the subject interests you, I'd recommend sitting down and writing out as clearly as you can what it is you find mysterious about subjective experience, and then reading the book and seeing if it answers, or at least addresses, that question.
He seems to answer the question of why humans feel and report that they are conscious; why, in fact, they are conscious. But I don't know how to translate that into an explanation of why I am conscious.
The problem that many people (including myself) feel to be mysterious is qualia. I know indisputably that I have qualia, or subjective experience. But I have no idea why that is, or what that means, or even what it would really mean for things to be otherwise (other than a total lack of experience, as in death).
A perfect and complete explanation of the behavior of humans still doesn't seem to bridge the gap from "objective" to "subjective" experience.
I don't claim to understand the question. Understanding it would mean having some idea over what possible answers or explanations might be like, and how to judge if they are right or wrong. And I have no idea. But what Dennett writes doesn't seem to answer the question or dissolve it.
Here's how I got rid of my gut feeling that qualia are both real and ineffable.
First, phrasing the problem:
Even David Chalmers thinks there are some things about qualia that are effable. Some of the structural properties of experience - for example, why colour qualia can be represented in a 3-dimensional space (hue, saturation, and brightness) - might be explained by structural properties of light and the brain, and might be susceptible to third-party investigation.
What he would call ineffable is the intrinsic properties of experience. With regard to colour-space, think of spectrum inversion. When we look at a firetruck, the quale I see is the one you would call "green" if you could access it, but since I learned my colour words by looking at firetrucks, I still call it "red".
If you think this is coherent, you believe in ineffable qualia: even though our colour-spaces are structurally identical, the "atoms" of experience additionally have intrinsic natures (I'll call these eg. RED and GREEN) which are non-causal and cannot be objectively discovered.
You can show that ineffable qualia (experiential intrinsic natures, independent of experiential structure) aren't real by showing that spectrum inversion (changing the intrinsic natures, keeping the structure) is incoherent.
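A toy way to see what "changing the intrinsic natures, keeping the structure" means (my illustration, not anything from Chalmers or Dennett): model colours as points in RGB space, and spectrum inversion as swapping the red and green channels. The swap relabels every point, yet preserves all pairwise distances, so every structural (relational) fact about the colour-space survives:

```python
# Toy sketch: red/green spectrum inversion as a channel swap in RGB space.
# The swap changes which point each colour maps to, but is an isometry:
# all pairwise distances (the "structure") are preserved.
import math

def invert(c):
    """Swap the red and green channels: (r, g, b) -> (g, r, b)."""
    r, g, b = c
    return (g, r, b)

def dist(c1, c2):
    """Euclidean distance between two colours."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

firetruck = (1.0, 0.0, 0.0)   # red
grass     = (0.0, 1.0, 0.0)   # green
sky       = (0.0, 0.0, 1.0)   # blue

colours = [firetruck, grass, sky]
for c1 in colours:
    for c2 in colours:
        # Structural facts are invariant under the inversion.
        assert math.isclose(dist(c1, c2), dist(invert(c1), invert(c2)))
```

The inversion hypothesis is exactly the claim that two minds could differ by such a relabelling: every relational fact identical, only the "intrinsic natures" of the points swapped. The argument below tries to show that, once colour experience is connected to the rest of cognition, no such distance-preserving relabelling is available.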
An attempt at a solution:
Take another experiential "spectrum": pleasure vs. displeasure. Spectrum inversion is harder, I'd say impossible, to take seriously in this case. If someone seeks out P, tells everyone P is wonderful, laughs and smiles when P happens, and even herself believes (by means of mental representations or whatever) that P is pleasant, then it makes no sense to me to imagine P really "ultimately" being UNPLEASANT for her.
Anyway, if pleasure-displeasure can't be noncausally inverted, then neither can colour-qualia. The three colour-space dimensions aren't really all you need to represent colour experience. Colour experience doesn't, and can't, ever occur isolated from other cognition.
For example: seeing a lot of red puts monkeys on edge. So imagine putting a spectrum-inverted monkey in a (to us) red room, and another in a (to us) green room.
If the monkey in the green (to it, RED') room gets antsy, or the monkey in the red (to it, GREEN') room doesn't, then that means the spectrum-inversion was causal and ineffable qualia don't exist.
But if the monkey in the green room doesn't get antsy, or the monkey in the red room does, then it hasn't been a full spectrum inversion. RED' without antsiness is not the same quale as RED with antsiness. If all the other experiential spectra remain uninverted, it might even look surprisingly like GREEN. But to make the inversion successfully, you'd have to flip all the other experiential spectra that connect with colour, including antsiness vs. serenity, and through that, pleasure vs. displeasure.
This isn't knockdown, but it convinced me.
I'm not sure pleasure/pain is that useful, because 1) they have such an intuitive link to reaction/function 2) they might be meta-qualities: a similar sensation of pain can be strongly unpleasant, entirely tolerable or even enjoyable depending on other factors.
What you've done with colours is take what feels like a somewhat arbitrary/ineffable quale and declare it inextricably associated with one that has direct behavioural correlates. Your talk of what's required to 'make the inversion successfully' is misleading: what if the monkey has GREEN and antsiness rather than RED and antsiness?
It seems intuitive to assume 'red' and 'green' remain the same in normal conditions: but I'm left totally lost as to what 'red' would look like to a creature that could see a far wider or narrower spectrum than the one we can see. Or, for that matter, to someone with limited colour-blindness. There seems to me to be the Nagel 'what is it like to be a bat' problem, and I've never understood how that dissolves.
It's been a long time since I read Dennett, but I was in the camp of 'not answering the question, while being fascinating around the edges and giving people who think qualia are straightforward pause for thought'. No-one's ever been able to clearly explain how his arguments work to me, to the point that I suggest that either I or they are fundamentally missing something.
If the hard problem of consciousness has really been solved I'd really like to know!
Consider the following dialog:
A: "Why do containers contain their contents?"
B: "Well, because they are made out of impermeable materials arranged in such a fashion that there is no path between their contents and the rest of the universe."
A: "Yes, of course, I know that, but why does that lead to containment?"
B: "I don't quite understand. Are you asking what properties of materials make them impermeable, or what properties of shapes preclude paths between inside and outside? That can get a little technical, but basically it works like this --"
A: "No, no, I understand that stuff. I've been studying containment for years; I understand the simple problem of containment quite well. I'm asking about the hard problem of containment: how does containment arise from those merely mechanical things?"
B: "Huh? Those 'merely mechanical things' are just what containment is. If there's no path X can take from inside Y to outside Y, X is contained by Y. What is left to explain?"
A: "That's an admirable formulation of the hard problem of containment, but it doesn't solve it."
How would you reply to A?
But I don't think conscious experience (qualia, if you like) has been explained. I think we have some pretty good explanations of how people act, but I don't see how they pierce through to consciousness as experienced, and to linked questions such as 'what is it like to be a bat?' or 'how do I know my green isn't your red?'
It would help if you could sum up the merely mechanical things that are 'just what consciousness is' in Dennett's (or your!) sense. I've never been clear on what confident materialists are saying on this: I'm sometimes left with the impression that they're denying that we have subjective experience, sometimes that they're saying it's somehow an inherent quality of other things, sometimes that it's an incidental byproduct. All of these seem problematic to me.
I don't think it would, actually.
The merely mechanical things that are 'just what consciousness is' in Dennett's sense are the "soft problem of consciousness" in Chalmers' sense; I don't expect any amount of summarizing or detailing the former to help anyone feel like the "hard problem of consciousness" has been addressed, any more than I expect any amount of explanation of materials science or topology to help A feel like the "hard problem of containment" has been addressed.
But, since you asked: I'm not denying that we have subjective experiences (nor do I believe Dennett is), and I am saying that those experiences are a consequence of our neurobiology (as I believe Dennett does). If you're looking for more details of things like how certain patterns of photons trigger increased activation levels of certain neural structures, there are better people to ask than me, but I don't think that's what you're looking for.
As for whether they are an inherent quality or an incidental byproduct of that neurobiology, I'm not sure I even understand the question. Is being a container an inherent quality of being composed of certain materials and having certain shape, or an incidental byproduct? How would I tell?
And: how would you reply to A?
I may not remember Chalmers' soft problem well enough for the reference to help!
If experiences are a consequence of our neurobiology, fine. Presumably a consequence that itself has consequences: experiences can be used in causal explanations? But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat. And that we could measure how well people distinguish wavelengths of light, etc., without knowing what the colours look like to them.
It seems subjective experience is just being ignored: we could identify that an AI could carry out all sorts of tasks that we associate with consciousness, but I have no idea when we'd say 'it now has conscious experiences'. Or whether we'd talk about degrees of conscious experience, or whatever. This is obviously ethically quite important, if not that directly pertinent to me, and it bothers me that I can't respond to it.
With a container, you describe various qualities and that leaves the question 'can it contain things': do things stay in it when put there. You're adding a sort of purpose-based functional classification to a physical object. When we ask 'is something conscious', we're not asking about a function that it can perform. On a similar note, I don't think we're trying to reify something (as with the case where we have a sense of objects having ongoing identity, which we then treat as a fundamental thing and end up asking if a ship is the same after you replace every component of it one by one). We're not chasing some over-abstracted ideal of consciousness, we're trying to explain an experienced reality.
So to answer A, I'd say 'there is no fundamental property of 'containment'. It's just a word we use to describe one thing surrounded by another in circumstances X and Y. You're over-idealising a useful functional concept'. The same is not true of consciousness, because it's not (just) a function.
It might help if you could identify what, in light of a Dennett-type approach, we can identify as conscious or not. I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks...
That's such a broad statement, it could cover some forms of dualism.
That's funny, David again and the other David arguing about the hard versus the "soft" problem of consciousness. Have you two lost your original?
I think A and B are sticking different terminology on a similar thing. A laments that the "real" problem hasn't been solved, B points out that it has to the extent that it can be solved. Yet in a way they tread common ground:
A believes there are aspects of the problem of con(tainment|sciousness) that didn't get explained away by a "mechanistic" model.
B believes that a (probably reductionist) model suffices, "this configuration of matter/energy can be called 'conscious'" is not fundamentally different from "this configuration of matter/energy can be called 'a particle'". If you're content with such an explanation for the latter, why not the former? ...
However, with many Bs I find that even accepting a matter-of-fact workable definition of "these states correspond to consciousness" is used as a stop sign more so than as a starting point.
Just as A insists that further questions exist, so should B, and many of those questions would be quite similar, to the point of practically dissolving the initial difference.
Off the top of my head: If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent, or does everything have it in some raw, unprocessed form? Is it just that the qualia we experience are modulated and processed by virtue of the relevant matter (brain) being in a state which can organize memories, reflect on its experiences, etc.?
Anthropic considerations apply: Even if anything had a "value" for "subjective experience", we would know only about our own, and probably only ascribe that property to similar 'things' (other humans or highly developed mammals). But is it just because those can reflect upon that property? Are waterfalls conscious, even if not sentient? "What an algorithm feels like on the inside" - any natural phenomenon is executing algorithms just the same as our neurons and glial cells do. Is it because we can ascribe correspondences between structure in our brain and external structures, i.e. models? We can find the same models within a waterfall, simply by finding another mapping function.
So is it the difference between us and a waterfall that enables the capacity for qualia, something to do with communication, memory, planning? It's not clear why qualia should depend on "only things that can communicate can experience qualia", for example. That sounds more like an anthropic concern: of course we can understand another human relating its qualia experience better than we could a waterfall relating its own -- if it did experience anything. Occam's Razor may prefer "everything can experience" to "only very special configurations of matter can experience", keeping in mind that the internal structure of a waterfall is just as complex as a human brain.
It seems to me that A is better in tune with the many questions that remain, while B has more of an engineer mindset, a la "I can work with that, what more do I want?". "Here be dragons" is what follows even the most dissolv-y explanation of qualia, and trying to stay out of those murky waters isn't a reason to deny their existence.
I can no longer remember if there was actually an active David when I joined, or if I just picked the name on a lark. I frequently introduce myself in real life as "Dave -- no, not that Dave, the other one."
I always assumed that the name was originally to distinguish you from David Gerard.
Sure, I agree that there may be systems that have subjective experience but do not manifest that subjective experience in any way we recognize or understand.
Or, there may not.
In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don't see any value to asking the question. If it makes you feel better if I don't deny their existence, well, OK, I don't deny their existence, but I really can't see why anyone should care one way or the other.
In any case, I don't agree that the Bs studying conscious experience fail to explore further questions. Quite the contrary, they've made some pretty impressive progress in the last five or six decades towards understanding just how the neurobiological substrate of conscious systems actually works. They simply don't explore the particular questions you're talking about here.
And it's not clear to me that the As exploring those questions are accomplishing anything.
So, A asks "If containment is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?"
How would you reply to A?
My response is something like "We know that certain configurations of physical objects give rise to containment. Sure, it's not impossible that "unprocessed containment" exists in other systems, and we just haven't ever noticed it, but why are you even asking that question?"
There's nothing left to explain about containment. There's something left to explain about consc.
Would you expect that reply to convince A?
Or would you just accept that A might go on believing that there's something important and ineffable left to explain about containment, and there's not much you can do about it?
Or something else?
It is for A to state what the remaining problem actually is. And qualiphiles can do that:
D: "I can explain how conscious entities respond to their environments, process information and behave. What more is there?"
C: "How it all looks from the inside -- the qualia."
If you were a container, you would understand the wonderful feeling of containment, the insatiable longing to contain, the sweet anticipation of the content being loaded, the ultimate reason for containing and other incomparable wonderful and tortuous qualia no non-container can enjoy. Not being one, all you can understand is the mechanics of containment, a pale shadow of the rich and true containing experience.
OK, maybe I'm getting a bit NSFW here...
I realize that non-materialistic "intrinsic qualities" of qualia, which we perceive but which aren't causes of our behavior, are incoherent. What I don't fully understand is why I have any qualia at all. Please see my sibling comment.
Tentatively:
If it's accepted that GREEN' and RED are structurally identical, and that in virtue of this they are phenomenologically identical, why think that phenomenology involves anything, beyond structure, which needs explaining?
I think this is the gist of Dennett's dissolution attempts. Once you've explained why your brain is in a seeing-red brain-state, why this causes a believing-that-there-is-red mental representation, on to a meta-reflection-about-believing-there-is-red functional process, etc., why think there's anything else?
Phenomenology doesn't involve anything beyond structure. But my experience seems to.
(nods) Yes, that's consistent with what I've heard others say.
Like you, I don't understand the question and have no idea of what an answer to it might look like, which is why I say I'm not entirely clear what question you/they claim is being answered. Perhaps it would be more correct to say I'm not clear how it differs from the question you/they want answered.
Mostly I suspect that the belief that there is a second question to be answered that hasn't been is a strong, pervasive, sincere, compelling confusion, akin to where does the bread go?. But I can't prove it.
Relatedly: I remember, many years ago, attending a seminar where a philosophy student protested to Dennett that he didn't feel like the sort of process Dennett described. Dennett replied "How can you tell? Maybe this is exactly what the sort of process I'm describing feels like!"
I recognize that the traditional reply to this is "No! The sort of process Dennett describes doesn't feel like anything at all! It has no qualia, it has no subjective experience!"
To which my response is mostly "Why should I believe that?" An acceptable alternative seems to be that subjective experience ("qualia", if you like) is simply a property of certain kinds of computation, just as the ability to predict the future location of a falling object ("prescience", if you like) is a property of certain kinds of computation.
To which one is of course free to reply "but how could prescience -- er, I mean qualia -- possibly be an aspect of computation??? It just doesn't make any sense!!!" And I shrug.
Sure, if I say in English "prescience is an aspect of computation," that sounds like a really weird thing to say, because "prescience" and "computation" are highly charged words with opposite framings. But if I throw out the English words and think about computing the state of the world at some future time, it doesn't seem mysterious at all, and such computations have become so standard a part of our lives we no longer give it much thought.
When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.
Thanks for your reply and engagement.
I agree. We already know what we feel like. Once we know empirically what kind of process we are, we can indeed conclude that "that's what that kind of process feels like".
What I don't understand is why being some kind of process feels like anything at all. Why it seems to myself that I have qualia in the first place.
I do understand why it makes sense for an evolved human to have such beliefs. I don't know if there is a further question beyond that. As I said, I don't know what an answer would even look like.
Perhaps I should just accept this and move on. Maybe it's just the case that "being mystified about qualia" is what the kind of process that humans are is supposed to feel like! As an analogy, humans have religious feelings with apparently dedicated neurological underpinnings. Some humans feel the numinous strongly, and they ask for an answer to the Mystery of God, which to them appears as obvious as any qualia.
However, an answer that would be more satisfactory (if possible) would be an exploration and an explanation of mind-space and its accompanying qualia. Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away.
Does being like some other kind of process "feel like" anything? Like what? Would it be meaningful for me to experience something else without becoming something else? Are the qualia of a cat separate from being a cat? Or would I have to have a cat-mind and forget all about being human and verbal and DanArmak to experience the qualia of a cat, at which point I'd be no different than any existing cat, and which I wouldn't remember on becoming human again?
I agree. To clarify, I believe all of these propositions:
Shouldn't you anticipate being either clone with 100% probability, since both clones will make that claim and neither can be considered wrong?
What I meant is that some time after the cloning, the clones' lives would become distinguishable. One of them would experience X, while the other would experience ~X. Then I would anticipate experiencing X with 50% probability.
If they live identical lives forever, then I can anticipate "being either clone" or as I would call it, "not being able to tell which clone I am".
My first instinctive response is "be wary of theories of personal identity where your future depends on a coin flip". You're essentially saying "one of the clones believes that it is your current 'I' experiencing 'X', and it has a 50% chance of being wrong". That seems off.
I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability. The problem is that the way anticipation works with probability depends implicitly on there only being one future self that things can happen to.
No, I'm not saying that.
I'm saying: first both clones believe "anticipate X with 50% probability". Then one clone experiences X, and the other ~X. After that they know what they experienced, so of course one updates to believe "I experienced X with ~1 probability" and the other "I experienced ~X with ~1 probability".
I think we need to unpack "experiencing" here.
I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.
If X takes nontrivial time, such that one can experience "X is going on now", then I anticipate ever experiencing that with 50% probability.
One thing you seem to know but keep forgetting is the distinction between your current state and recorded memories. Memories use extreme amounts of lossy and biased compression, and some of your confusion seems to come from looking at your current experience while explicitly thinking about this stuff, and then generalizing it as something continuous over time and something applicable to a wider range of mental states than it actually is.
Sure, that makes sense.
As far as I know, current understanding of neuroanatomy hasn't identified the particular circuits responsible for that experience, let alone the mechanism whereby the latter cause the former. (Of course, the same could be said for speaking English.)
But I can certainly see how having such an explanation handy might help if I were experiencing the kind of insistent sense of mysteriousness you describe (for subjective experience or for speaking English).
Hmm, to your knowledge, has the science of neuroanatomy ever discovered any circuits responsible for any experience?
In the sense of the experience not happening if that circuit doesn't work, yes.
In the sense of being able to give a soup-to-nuts story of how events in the world result in a subjective experience that has that specific character, no.
I guess I mean: has the science of neuroanatomy discovered any circuits whatsoever?
Quick clarifying question: How small does something need to be for you to consider it a "circuit"?
It's more a matter of discreteness than smallness: I would say I need to be able to identify the loop.
I think it does. It really is a virtuoso work of philosophy, and Dennett helpfully front-loaded it by putting his most astonishing argument in the first chapter. Anecdotally, I was always suspicious of arguments against qualia until I read what Dennett had to say on the subject. He brings in plenty of examples from philosophy, from psychological and scientific experiments, and even from literature to make things nice and concrete, and he really seems to understand the exact ways in which his position is counter-intuitive and makes sure to address the average person's intuitive objections in a fair and understanding way.
I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.