B_For_Bandana comments on Rationality Quotes June 2013 - Less Wrong

Post author: Thomas 03 June 2013 03:08AM


Comment author: B_For_Bandana 03 June 2013 01:36:03AM 4 points [-]

I'm someone who still finds subjective experience mysterious, and I'd like to fix that. Does that book provide a good, gut-level, question-dissolving explanation?

Comment author: TheOtherDave 03 June 2013 02:41:24AM 8 points [-]

I've had that conversation with a few people over the years, and I conclude that it does for some people and not others. The ones for whom it doesn't generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It's not entirely clear to me what question they think he answers instead.)

That said, it's a pretty fun read. If the subject interests you, I'd recommend sitting down and writing out as clearly as you can what it is you find mysterious about subjective experience, and then reading the book and seeing if it answers, or at least addresses, that question.

Comment author: DanArmak 08 June 2013 07:11:41PM *  2 points [-]

The ones for whom it doesn't generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It's not entirely clear to me what question they think he answers instead.)

He seems to answer the question of why humans feel and report that they are conscious; why, in fact, they are conscious. But I don't know how to translate that into an explanation of why I am conscious.

The problem that many people (including myself) feel to be mysterious is qualia. I know indisputably that I have qualia, or subjective experience. But I have no idea why that is, or what that means, or even what it would really mean for things to be otherwise (other than a total lack of experience, as in death).

A perfect and complete explanation of the behavior of humans still doesn't seem to bridge the gap from "objective" to "subjective" experience.

I don't claim to understand the question. Understanding it would mean having some idea of what possible answers or explanations might be like, and how to judge whether they are right or wrong. And I have no idea. But what Dennett writes doesn't seem to answer the question or dissolve it.

Comment author: bojangles 08 June 2013 08:32:17PM *  3 points [-]

Here's how I got rid of my gut feeling that qualia are both real and ineffable.

First, phrasing the problem:

Even David Chalmers thinks there are some things about qualia that are effable. Some of the structural properties of experience - for example, why colour qualia can be represented in a 3-dimensional space (hue, saturation, and brightness) - might be explained by structural properties of light and the brain, and might be susceptible to third-party investigation.
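
The structural claim here can be made concrete with a minimal sketch using Python's standard `colorsys` module (the colour values and names are just illustrative): every colour a monitor can show reduces without loss to exactly three structural coordinates, hue, saturation, and value (brightness).

```python
import colorsys

# Any RGB colour can be re-expressed as exactly three structural
# coordinates: hue, saturation, and value (brightness).
firetruck_red = (1.0, 0.0, 0.0)
h, s, v = colorsys.rgb_to_hsv(*firetruck_red)

# The mapping is invertible: nothing beyond the three coordinates
# is needed to recover the colour. That is the "effable" structure.
assert colorsys.hsv_to_rgb(h, s, v) == firetruck_red
```

Nothing in those three numbers, of course, says what red is *like* — which is exactly the gap the rest of the comment is about.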

What he would call ineffable are the intrinsic properties of experience. With regard to colour-space, think of spectrum inversion. When we look at a firetruck, the quale I see is the one you would call "green" if you could access it, but since I learned my colour words by looking at firetrucks, I still call it "red".

If you think this is coherent, you believe in ineffable qualia: even though our colour-spaces are structurally identical, the "atoms" of experience additionally have intrinsic natures (I'll call these, e.g., RED and GREEN) which are non-causal and cannot be objectively discovered.

You can show that ineffable qualia (experiential intrinsic natures, independent of experiential structure) aren't real by showing that spectrum inversion (changing the intrinsic natures, keeping the structure) is incoherent.

An attempt at a solution:

Take another experiential "spectrum": pleasure vs. displeasure. Spectrum inversion is harder, I'd say impossible, to take seriously in this case. If someone seeks out P, tells everyone P is wonderful, laughs and smiles when P happens, and even herself believes (by means of mental representations or whatever) that P is pleasant, then it makes no sense to me to imagine P really "ultimately" being UNPLEASANT for her.

Anyway, if pleasure-displeasure can't be noncausally inverted, then neither can colour-qualia. The three colour-space dimensions aren't really all you need to represent colour experience. Colour experience doesn't, and can't, ever occur isolated from other cognition.

For example: seeing a lot of red puts monkeys on edge. So imagine putting a spectrum-inverted monkey in a (to us) red room, and another in a (to us) green room.

If the monkey in the green (to it, RED') room gets antsy, or the monkey in the red (to it, GREEN') room doesn't, then that means the spectrum-inversion was causal and ineffable qualia don't exist.

But if the monkey in the green room doesn't get antsy, or the monkey in the red room does, then it hasn't been a full spectrum inversion. RED' without antsiness is not the same quale as RED with antsiness. If all the other experiential spectra remain uninverted, it might even look surprisingly like GREEN. But to make the inversion successfully, you'd have to flip all the other experiential spectra that connect with colour, including antsiness vs. serenity, and through that, pleasure vs. displeasure.
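
The two-monkey test above can be sketched as a toy model (all names hypothetical; "antsiness" stands in for any behavioural consequence causally wired to colour perception):

```python
# Toy model of the spectrum-inversion test: a monkey whose colour
# channel is flipped, with or without the connected spectra flipped too.

def monkey_is_antsy(room_colour, inverted, full_inversion):
    """Return the observable reaction of a (possibly inverted) monkey."""
    # Step 1: colour channel. An inverted monkey's percept of a green
    # room is RED', and vice versa.
    percept = room_colour
    if inverted:
        percept = {"red": "green", "green": "red"}[room_colour]
    # Step 2: antsiness wiring. A *full* inversion also flips every
    # spectrum connected to colour, so outward behaviour stays locked
    # to the stimulus; a partial inversion leaves behaviour tracking
    # the (now flipped) percept.
    if full_inversion:
        return room_colour == "red"   # behaviour tracks the stimulus
    return percept == "red"           # behaviour tracks the percept

# A partial inversion is causal, hence detectable: the monkey in the
# green room gets antsy.
assert monkey_is_antsy("green", inverted=True, full_inversion=False)

# A full inversion is behaviourally invisible -- but then nothing
# non-structural was flipped, which is the point of the argument.
assert not monkey_is_antsy("green", inverted=True, full_inversion=True)
```

This is only a schematic of the argument, not a claim about monkey neurology: the interesting branch is that the "undetectable" case is exactly the case where the inversion touched structure everywhere, leaving no ineffable remainder.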

This isn't knockdown, but it convinced me.

Comment author: DavidAgain 19 June 2013 01:18:57PM 3 points [-]

I'm not sure pleasure/pain is that useful, because 1) they have such an intuitive link to reaction/function 2) they might be meta-qualities: a similar sensation of pain can be strongly unpleasant, entirely tolerable or even enjoyable depending on other factors.

What you've done with colours is take what feels like a somewhat arbitrary/ineffable quale and declare it inextricably associated with one that has direct behavioural terms involved. Your talk of what's required to 'make the inversion successfully' is misleading: what if the monkey has GREEN and antsiness rather than RED and antsiness?

It seems intuitive to assume 'red' and 'green' remain the same in normal conditions: but I'm left totally lost as to what 'red' would look like to a creature that could see a far wider or narrower spectrum than the one we can see. Or for that matter to someone with limited colour-blindness. There seems to me to be the Nagel 'what is it like to be a bat' problem, and I've never understood how that dissolves.

It's been a long time since I read Dennett, but I was in the camp of 'not answering the question, while being fascinating around the edges and giving people who think qualia are straightforward pause for thought'. No-one's ever been able to clearly explain how his arguments work to me, to the point that I suggest that either I or they are fundamentally missing something.

If the hard problem of consciousness has really been solved I'd really like to know!

Comment author: TheOtherDave 19 June 2013 01:35:37PM 6 points [-]

Consider the following dialog:
A: "Why do containers contain their contents?"
B: "Well, because they are made out of impermeable materials arranged in such a fashion that there is no path between their contents and the rest of the universe."
A: "Yes, of course, I know that, but why does that lead to containment?"
B: "I don't quite understand. Are you asking what properties of materials make them impermeable, or what properties of shapes preclude paths between inside and outside? That can get a little technical, but basically it works like this --"
A: "No, no, I understand that stuff. I've been studying containment for years; I understand the simple problem of containment quite well. I'm asking about the hard problem of containment: how does containment arise from those merely mechanical things?"
B: "Huh? Those 'merely mechanical things' are just what containment is. If there's no path X can take from inside Y to outside Y, X is contained by Y. What is left to explain?"
A: "That's an admirable formulation of the hard problem of containment, but it doesn't solve it."

How would you reply to A?

Comment author: DavidAgain 19 June 2013 01:54:04PM 1 point [-]

But I don't think conscious experience (qualia if you like) have been explained. I think we have some pretty good explanations of how people act, but I don't see how it pierces through to consciousness as experienced, and linked questions such as 'what is it like to be a bat?' or 'how do I know my green isn't your red'

It would help if you could sum up the merely mechanical things that are 'just what consciousness is' in Dennett's (or your!) sense. I've never been clear on what confident materialists are saying on this: I'm sometimes left with the impression that they're denying that we have subjective experience, sometimes that they're saying it's somehow an inherent quality of other things, sometimes that it's an incidental byproduct. All of these seem to be problematic to me.

Comment author: TheOtherDave 19 June 2013 03:05:46PM -1 points [-]

It would help if you could sum up the merely mechanical things that are 'just what consciousness is' in Dennett's (or your!) sense.

I don't think it would, actually.

The merely mechanical things that are 'just what consciousness is' in Dennett's sense are the "soft problem of consciousness" in Chalmers' sense; I don't expect any amount of summarizing or detailing the former to help anyone feel like the "hard problem of consciousness" has been addressed, any more than I expect any amount of explanation of materials science or topology to help A feel like the "hard problem of containment" has been addressed.

But, since you asked: I'm not denying that we have subjective experiences (nor do I believe Dennett is), and I am saying that those experiences are a consequence of our neurobiology (as I believe Dennett does). If you're looking for more details of things like how certain patterns of photons trigger increased activation levels of certain neural structures, there are better people to ask than me, but I don't think that's what you're looking for.

As for whether they are an inherent quality or an incidental byproduct of that neurobiology, I'm not sure I even understand the question. Is being a container an inherent quality of being composed of certain materials and having certain shape, or an incidental byproduct? How would I tell?

And: how would you reply to A?

Comment author: DavidAgain 19 June 2013 03:51:04PM 1 point [-]

I may not remember Chalmers' soft problem well enough for that reference to help, either!

If experiences are a consequence of our neurobiology, fine. Presumably a consequence that itself has consequences: experiences can be used in causal explanations? But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat. And that we could distinguish how well people distinguish wavelengths of light etc. without knowing what the colour looks like to them.

It seems subjective experience is just being ignored: we could identify that an AI could carry out all sorts of tasks that we associate with consciousness, but I have no idea when we'd say 'it now has conscious experiences'. Or whether we'd talk about degrees of conscious experience, or whatever. This is obviously ethically quite important, if not that directly pertinent to me, and it bothers me that I can't respond to it.

With a container, you describe various qualities and that leaves the question 'can it contain things': do things stay in it when put there. You're adding a sort of purpose-based functional classification to a physical object. When we ask 'is something conscious', we're not asking about a function that it can perform. On a similar note, I don't think we're trying to reify something (as with the case where we have a sense of objects having ongoing identity, which we then treat as a fundamental thing and end up asking if a ship is the same after you replace every component of it one by one). We're not chasing some over-abstracted ideal of consciousness, we're trying to explain an experienced reality.

So to answer A, I'd say 'there is no fundamental property of 'containment'. It's just a word we use to describe one thing surrounded by another in circumstances X and Y. You're over-idealising a useful functional concept'. The same is not true of consciousness, because it's not (just) a function.

It might help if you could identify what, in light of a Dennett-type approach, we can identify as conscious or not. I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks...

Comment author: TheOtherDave 19 June 2013 05:32:07PM -1 points [-]

I'm splitting up my response to this into several pieces because it got long. Some other stuff:

what, in light of a Dennett-type approach, we can identify as conscious or not.

The process isn't anything special, but OK, since you ask.

Let's assert for simplicity that "I" has a relatively straightforward and consistent referent, just to get us off the ground. Given that, I conclude that I am at least sometimes capable of subjective experience, because I've observed myself subjectively experiencing.

I further observe that my subjective experiences reliably and differentially predict certain behaviors. I do certain things when I experience pain, for example, and different things when I experience pleasure. When I observe other entities (E2) performing those behaviors, that's evidence that they, too, experience pain and pleasure. Similar reasoning applies to other kinds of subjective experience.

I look for commonalities among E2 and I generalize across those commonalities. I notice certain biological structures are common to E2 and that when I manipulate those structures, I reliably and differentially get changes in the above-referenced behavior. Later, I observe additional entities (E3) that have similar structures; that's evidence that E3 also demonstrates subjective experience, even though E3 doesn't behave the way I do.

Later, I build an artificial structure (E4) and I observe that there are certain properties (P1) of E2 which, when I reproduce them in E4 without reproducing other properties (P2), reproduce the behavior of E2. I conclude that P1 is an important part of that behavior, and P2 is not.

I continue this process of observation and inference and continue to draw conclusions based on it. And at some point someone asks "is X conscious?" for various Xes:

I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks...

If I interpret "conscious" as meaning having subjective experience, then for each X I observe it carefully and look for the kinds of attributes I've attributed to subjective experience... behaviors, anatomical structures, formal structures, etc... and compare it to my accumulated knowledge to make a decision.

Isn't that how you answer such questions as well?

If not, then I'll ask you the same question: what, in light of whatever non-Dennett-type approach you prefer, can we identify as conscious or not?

Comment author: TheOtherDave 19 June 2013 05:31:56PM -1 points [-]

I'm splitting up my response to this into several pieces because it got long. Some other stuff:

Presumably a consequence that itself has consequences: experiences can be used in causal explanations?

I expect so, sure. For example, I report having experiences; one explanation of that (though hardly the only possible one) starts with my actually having experiences and progresses forward in a causal fashion.

But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat.

Sure, there are many causal explanations of many phenomena, including but not limited to how bats use echolocation, that don't posit subjective experience as part of their causal chain. For example, humans do all kinds of things without the subjective experience of doing them.

And that we could distinguish how well people distinguish wavelengths of light etc. without knowing what the colour looks like to them.

Certainly.

It seems subjective experience is just being ignored

In the examples you give, yes, it is being ignored. So? Lots of things are being ignored in those examples... mass, electrical conductivity, street address, level of fluency in Russian, etc. If these things aren't necessary to explain the examples, there's nothing wrong with ignoring these things.

On the other hand, if we look at an example for which experience ought to be part of the causal chain (for example, as I note above, reporting having those experiences), subjective experience is not ignored. X happens, as a consequence of X a subjective experience Y arises, as a consequence of Y a report Z arises, and so forth. (Of course, for some reports we do have explanations that don't presume Y... e.g., confabulation, automatic writing, etc. But that needn't be true for all reports. Indeed, it would be surprising if it were.)

"But we don't know what Xes give rise to the Y of subjective experience, so we don't fully understand subjective experience!" Well, yes, that's true. We don't fully understand fluency in Russian, either. But we don't go around as a consequence positing some mysterious essence of Russian fluency that resists neurobiological explanation... though two centuries ago, we might have done so. Nor should we. Neither should we posit some mysterious essence of subjective experience.

"But subjective experience is different! I can imagine what a mechanical explanation of Russian fluency would be like, but I can't imagine what a mechanical explanation of subjective experience would be like." Sure, I understand that. Two centuries ago, the notion of a mechanical explanation of Russian fluency would raise similar incredulity... how could a machine speak Russian? I'm not sure how I could go about answering such incredulity convincingly, but I don't thereby conclude that machines can't speak Russian. My incredulity may be resistant to my reason, but it doesn't therefore compel or override my reason.

Comment author: TheOtherDave 19 June 2013 05:31:41PM -1 points [-]

I'm splitting up my response to this into several pieces because it got long.

The key bit, IMHO:

So to answer A, I'd say 'there is no fundamental property of 'containment'. It's just a word we use to describe one thing surrounded by another in circumstances X and Y.

And I would agree with you.

and that leaves the question 'can it contain things': [..] The same is not true of consciousness, because it's not (just) a function.

"No," replies A, "you miss the point completely. I don't ask whether a container can contain things; clearly it can, I observe it doing so. I ask how it contains things. What is the explanation for its demonstrated ability to contain things? Containership is not just a function," A insists, "though I understand you want to treat it as one. No, containership is a fundamental essence. You can't simply ignore the hard question of "is X a container?" in favor of thinking about simpler, merely functional questions like "can X contain Y?". And, while we're at it," A continues, "what makes you think that an artificial container, such as we build all the time, is actually containing anything rather than merely emulating containership? Sure, perhaps we can't tell the difference, but that doesn't mean there isn't a difference."

I take it you don't find A's argument convincing, and neither do I, but it's not clear to me what either of us could say to A that A would find at all compelling.

Comment author: Juno_Watt 19 June 2013 06:02:37PM 0 points [-]

and I am saying that those experiences are a consequence of our neurobiology

That's such a broad statement, it could cover some forms of dualism.

Comment author: TheOtherDave 19 June 2013 06:37:12PM -1 points [-]

Agreed.

Comment author: Kawoomba 19 June 2013 02:21:03PM 0 points [-]

That's funny, David again and the other David arguing about the hard versus the "soft" problem of consciousness. Have you two lost your original?

I think A and B are sticking different terminology on a similar thing. A laments that the "real" problem hasn't been solved, B points out that it has to the extent that it can be solved. Yet in a way they share common ground:

A believes there are aspects of the problem of con(tainment|sciousness) that didn't get explained away by a "mechanistic" model.

B believes that a (probably reductionist) model suffices, "this configuration of matter/energy can be called 'conscious'" is not fundamentally different from "this configuration of matter/energy can be called 'a particle'". If you're content with such an explanation for the latter, why not the former? ...

However, with many Bs I find that even accepting a matter-of-fact workable definition of "these states correspond to consciousness" is used as a stop sign more so than as a starting point.

Just as A insists that further questions exist, so should B, and many of those questions would be quite similar, to the point of practically dissolving the initial difference.

Off the top of my head: If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form? Is it just that the qualia we experience are modulated and processed by virtue of the relevant matter (brain) being in a state which can organize memories, reflect on its experiences etc.?

Anthropic considerations apply: Even if anything had a "value" for "subjective experience", we would know only about our own, and probably only ascribe that property to similar 'things' (other humans or highly developed mammals). But is it just because those can reflect upon that property? Are waterfalls conscious, even if not sentient? "What an algorithm feels like on the inside" - any natural phenomenon is executing algorithms just the same as our neurons and glial cells do. Is it because we can ascribe correspondences between structure in our brain and external structures, i.e. models? We can find the same models within a waterfall, simply by finding another mapping function.

So is it the difference between us and a waterfall that enables the capacity for qualia, something to do with communication, memory, planning? It's not clear why qualia should depend on "only things that can communicate can experience qualia", for example. That sounds more like an anthropic concern: Of course we can understand another human relate its qualia experience better than a waterfall could -- if it did experience it. Occam's Razor may prefer "everything can experience" to "only very special configurations of matter can experience", keeping in mind that the internal structure of a waterfall is just as complex as a human brain.

It seems to me that A is better in tune with the many questions that remain, while B has more of an engineer mindset, a la "I can work with that, what more do I want?". "Here be dragons" is what follows even the most dissolv-y explanation of qualia, and trying to stay out of those murky waters isn't a reason to deny their existence.

Comment author: TheOtherDave 19 June 2013 03:38:47PM 0 points [-]

Have you two lost your original?

I can no longer remember if there was actually an active David when I joined, or if I just picked the name on a lark. I frequently introduce myself in real life as "Dave -- no, not that Dave, the other one."

Comment author: Desrtopa 19 June 2013 03:46:29PM 0 points [-]

I always assumed that the name was originally to distinguish you from David Gerard.

Comment author: TheOtherDave 19 June 2013 03:24:05PM -1 points [-]

Sure, I agree that there may be systems that have subjective experience but do not manifest that subjective experience in any way we recognize or understand.
Or, there may not.

In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don't see any value to asking the question. If it makes you feel better if I don't deny their existence, well, OK, I don't deny their existence, but I really can't see why anyone should care one way or the other.

In any case, I don't agree that the B's studying conscious experience fail to explore further questions. Quite the contrary, they've made some pretty impressive progress in the last five or six decades towards understanding just how the neurobiological substrate of conscious systems actually works. They simply don't explore the particular questions you're talking about here.

And it's not clear to me that the A's exploring those questions are accomplishing anything.

If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?

So, A asks "If containment is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?"
How would you reply to A?

My response is something like "We know that certain configurations of physical objects give rise to containment. Sure, it's not impossible that "unprocessed containment" exists in other systems, and we just haven't ever noticed it, but why are you even asking that question?"

Comment author: Juno_Watt 19 June 2013 05:59:00PM 0 points [-]

There's nothing left to explain about containment. There's something left to explain about consc.

Comment author: TheOtherDave 19 June 2013 06:39:17PM 0 points [-]

Would you expect that reply to convince A?
Or would you just accept that A might go on believing that there's something important and ineffable left to explain about containment, and there's not much you can do about it?
Or something else?

Comment author: Juno_Watt 24 June 2013 06:53:33PM -1 points [-]

Would you expect that reply to convince A?

It is for A to state what the remaining problem actually is. And qualiphiles can do that.

D: I can explain how conscious entities respond to their environments, process information and behave. What more is there?
C: How it all looks from the inside -- the qualia.

Comment author: shminux 24 June 2013 08:46:10PM *  -2 points [-]

If you were a container, you would understand the wonderful feeling of containment, the insatiable longing to contain, the sweet anticipation of the content being loaded, the ultimate reason for containing, and other incomparably wonderful and tortuous qualia no non-container can enjoy. Not being one, all you can understand is the mechanics of containment, a pale shadow of the rich and true containing experience.

OK, maybe I'm getting a bit NSFW here...

Comment author: DanArmak 08 June 2013 08:51:44PM 2 points [-]

I realize that non-materialistic "intrinsic qualities" of qualia, which we perceive but which aren't causes of our behavior, are incoherent. What I don't fully understand is why have I any qualia at all. Please see my sibling comment.

Comment author: bojangles 08 June 2013 09:32:26PM *  -1 points [-]

Tentatively:

If it's accepted that GREEN' and RED are structurally identical, and that in virtue of this they are phenomenologically identical, why think that phenomenology involves anything, beyond structure, which needs explaining?

I think this is the gist of Dennett's dissolution attempts. Once you've explained why your brain is in a seeing-red brain-state, why this causes a believing-that-there-is-red mental representation, then a meta-reflection-about-believing-there-is-red functional process, and so on, why think there's anything else?

Comment author: DanArmak 09 June 2013 09:05:13AM 0 points [-]

Phenomenology doesn't involve anything beyond structure. But my experience seems to.

Comment author: TheOtherDave 08 June 2013 07:37:50PM *  1 point [-]

(nods) Yes, that's consistent with what I've heard others say.

Like you, I don't understand the question and have no idea of what an answer to it might look like, which is why I say I'm not entirely clear what question you/they claim is being answered. Perhaps it would be more correct to say I'm not clear how it differs from the question you/they want answered.

Mostly I suspect that the belief that there is a second question to be answered that hasn't been is a strong, pervasive, sincere, compelling confusion, akin to "where does the bread go?". But I can't prove it.

Relatedly: I remember, many years ago, attending a seminar where a philosophy student protested to Dennett that he didn't feel like the sort of process Dennett described. Dennett replied "How can you tell? Maybe this is exactly what the sort of process I'm describing feels like!"

I recognize that the traditional reply to this is "No! The sort of process Dennett describes doesn't feel like anything at all! It has no qualia, it has no subjective experience!"

To which my response is mostly "Why should I believe that?" An acceptable alternative seems to be that subjective experience ("qualia", if you like) is simply a property of certain kinds of computation, just as the ability to predict the future location of a falling object ("prescience", if you like) is a property of certain kinds of computation.

To which one is of course free to reply "but how could prescience -- er, I mean qualia -- possibly be an aspect of computation??? It just doesn't make any sense!!!" And I shrug.

Sure, if I say in English "prescience is an aspect of computation," that sounds like a really weird thing to say, because "prescience" and "computation" are highly charged words with opposite framings. But if I throw out the English words and think about computing the state of the world at some future time, it doesn't seem mysterious at all, and such computations have become so standard a part of our lives that we no longer give them much thought.
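
The "prescience" example above really is just ordinary arithmetic. A minimal sketch, assuming ideal free fall with no air resistance (the function name and numbers are illustrative):

```python
# "Prescience" as plain computation: predicting the future position
# of a dropped object from kinematics, with no air resistance.
G = 9.8  # m/s^2, standard gravity

def height_after(t, initial_height, initial_velocity=0.0):
    """Height (m) of a dropped or thrown-down object t seconds from now."""
    return initial_height - initial_velocity * t - 0.5 * G * t * t

# The computation "knows" where the object will be before it gets
# there: dropped from 100 m, after 2 s it will be at about 80.4 m.
future_height = height_after(2.0, initial_height=100.0)
```

Nobody calls this mysterious; the suggestion is that "qualia as an aspect of computation" may one day read the same way.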

When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.

Comment author: DanArmak 08 June 2013 08:42:41PM *  1 point [-]

Thanks for your reply and engagement.

How can you tell? Maybe this is exactly what the sort of process I'm describing feels like!

I agree. We already know what we feel like. Once we know empirically what kind of process we are, we can indeed conclude that "that's what that kind of process feels like".

What I don't understand is why being some kind of process feels like anything at all. Why it seems to myself that I have qualia in the first place.

I do understand why it makes sense for an evolved human to have such beliefs. I don't know if there is a further question beyond that. As I said, I don't know what an answer would even look like.

Perhaps I should just accept this and move on. Maybe it's just the case that "being mystified about qualia" is what the kind of process that humans are is supposed to feel like! As an analogy, humans have religious feelings with apparently dedicated neurological underpinnings. Some humans feel the numinous strongly, and they ask for an answer to the Mystery of God, which to them appears as obvious as any qualia.

However, an answer that would be more satisfactory (if possible) would be an exploration and an explanation of mind-space and its accompanying qualia. Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away.

Does being like some other kind of process "feel like" anything? Like what? Would it be meaningful for me to experience something else without becoming something else? Are the qualia of a cat separate from being a cat? Or would I have to have a cat-mind and forget all about being human and verbal and DanArmak to experience the qualia of a cat, at which point I'd be no different from any existing cat, and which I wouldn't remember on becoming human again?

When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.

I agree. To clarify, I believe all of these propositions:

  • Full materialism
  • Humans are physical systems that have self-awareness ("consciousness") and talk about it
  • That isn't a separate fact that could be otherwise (p-zombies); it's highly entangled with how human brains operate
  • Other beings, completely different physically, would still behave the same if they instantiated the same computation (this is pretty much tautological)
  • If the computation that is myself is instantiated differently (as in an upload or em), it would still be conscious and report subjective experience (if it didn't, it would be a very poor emulation!)
  • If I am precisely cloned, I should anticipate either clone's experience with 50% probability; but after finding out which clone I am, I would not expect to suddenly "switch" to experiencing being the other clone. I also would not expect to somehow experience being both clones, or anything else. (I'm less sure about this because it's never happened yet. And I don't understand quantum mechanics, so I can't properly appreciate the arguments that say we're already being split all the time anyway. Nevertheless, I see no sensible alternative, so I still accept this.)
Comment author: FeepingCreature 16 June 2013 03:09:55PM 1 point [-]

If I am precisely cloned, I should anticipate either clone's experience with 50% probability

Shouldn't you anticipate being either clone with 100% probability, since both clones will make that claim and neither can be considered wrong?

Comment author: DanArmak 16 June 2013 06:13:26PM 0 points [-]

What I meant is that some time after the cloning, the clones' lives would become distinguishable. One of them would experience X, while the other would experience ~X. Then I would anticipate experiencing X with 50% probability.

If they live identical lives forever, then I can anticipate "being either clone" or, as I would call it, "not being able to tell which clone I am".

Comment author: FeepingCreature 16 June 2013 09:22:59PM *  -1 points [-]

My first instinctive response is "be wary of theories of personal identity where your future depends on a coin flip". You're essentially saying "one of the clones believes that it is your current 'I' experiencing 'X', and it has a 50% chance of being wrong". That seems off.

I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability. The problem is that the way anticipation works with probability depends implicitly on there only being one future self that things can happen to.

Comment author: DanArmak 17 June 2013 09:59:56AM 1 point [-]

You're essentially saying "one of the clones believes that it is your current 'I' experiencing 'X', and it has a 50% chance of being wrong".

No, I'm not saying that.

I'm saying: first both clones believe "anticipate X with 50% probability". Then one clone experiences X, and the other ~X. After that they know what they experienced, so of course one updates to believe "I experienced X with ~1 probability" and the other "I experienced ~X with ~1 probability".

I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability.

I think we need to unpack "experiencing" here.

I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.

If X takes nontrivial time, such that one can experience "X is going on now", then I anticipate ever experiencing that with 50% probability.
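This anticipation claim can be phrased as a toy simulation. The following sketch is purely illustrative and assumes the symmetry DanArmak relies on: before the clones' lives diverge, nothing distinguishes "which continuation I am", so it is modeled as a fair pick between the X-continuation and the ~X-continuation. The function name and the trial count are invented for the example.

```python
import random

def anticipated_x_frequency(trials: int = 100_000) -> float:
    """Toy model of the cloning scenario.

    Each trial: two continuations exist after cloning, one that goes on
    to experience X and one that experiences ~X. By symmetry, 'which
    continuation turns out to be the one I find myself as' is modeled
    as a fair coin. Returns the fraction of trials in which 'my'
    continuation remembers experiencing X.
    """
    remembers_x = sum(random.random() < 0.5 for _ in range(trials))
    return remembers_x / trials
```

Run over many trials, the frequency converges to about 0.5, which is just the formal restatement of "anticipate X with 50% probability" before finding out which clone one is; the model deliberately says nothing about FeepingCreature's objection that both continuations exist with certainty.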

Comment author: FeepingCreature 17 June 2013 01:54:24PM *  2 points [-]

I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.

What I meant is that some time after the cloning, the clones' lives would become distinguishable. One of them would experience X, while the other would experience ~X.

But that means there is always (100%) a future state of you that has experienced X, and always (100%) a separate future state that has experienced ~X. I think there's some similarity here to the problem of probability in a many-worlds universe, except in this case both versions can still interact. I'm not sure how that affects things myself.

Comment author: Armok_GoB 10 June 2013 11:51:48PM 1 point [-]

One thing you seem to know but keep forgetting is the distinction between your current state and recorded memories. Memories use extreme amounts of lossy and biased compression, and some of your confusion seems to come from looking at your current experience while explicitly thinking about this stuff, and then generalizing it as something continuous over time and applicable to a wider range of mental states than it actually is.

Comment author: TheOtherDave 08 June 2013 10:33:38PM 0 points [-]

Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away

Sure, that makes sense.

As far as I know, current understanding of neuroanatomy hasn't identified the particular circuits responsible for that experience, let alone the mechanism whereby the latter cause the former. (Of course, the same could be said for speaking English.)

But I can certainly see how having such an explanation handy might help if I were experiencing the kind of insistent sense of mysteriousness you describe (for subjective experience or for speaking English).

Comment author: [deleted] 08 June 2013 10:38:56PM 1 point [-]

As far as I know, current understanding of neuroanatomy hasn't identified the particular circuits responsible for that experience

Hmm, to your knowledge, has the science of neuroanatomy ever discovered any circuits responsible for any experience?

Comment author: TheOtherDave 08 June 2013 11:46:22PM 1 point [-]

In the sense of the experience not happening if that circuit doesn't work, yes.
In the sense of being able to give a soup-to-nuts story of how events in the world result in a subjective experience that has that specific character, no.

Comment author: [deleted] 09 June 2013 01:35:46AM 1 point [-]

I guess I mean: has the science of neuroanatomy discovered any circuits whatsoever?

Comment author: TheOtherDave 09 June 2013 03:57:50AM 0 points [-]

I am having trouble knowing how to answer your question, because I'm not sure what you're asking.
We have identified neural structures that are implicated in various specific things that brains do.
Does that answer your question?

Comment author: ialdabaoth 08 June 2013 10:49:52PM 1 point [-]

Quick clarifying question: How small does something need to be for you to consider it a "circuit"?

Comment author: [deleted] 09 June 2013 12:49:57AM 0 points [-]

It's more a matter of discreteness than smallness: I would say I need to be able to identify the loop.

Comment author: ialdabaoth 09 June 2013 01:04:39AM 1 point [-]

Second clarifying question, then: Can you describe what 'identifying the loop' would look like?

Comment author: tingram 03 June 2013 01:42:43AM 1 point [-]

I think it does. It really is a virtuoso work of philosophy, and Dennett helpfully front-loaded it by putting his most astonishing argument in the first chapter. Anecdotally, I was always suspicious of arguments against qualia until I read what Dennett had to say on the subject. He brings in plenty of examples from philosophy, from psychological and scientific experiments, and even from literature to make things nice and concrete. He really seems to understand the exact ways in which his position is counter-intuitive, and he makes sure to address the average person's intuitive objections in a fair and understanding way.

Comment author: nigerweiss 06 June 2013 09:39:32AM 1 point [-]

I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.