RobbBB comments on Reality is weirdly normal - Less Wrong

Post author: RobbBB 25 August 2013 07:29PM


Comment author: Ghatanathoah 27 August 2013 07:24:30AM 0 points

Ex hypothesi, Mary knows all the relevant third-person specifiable color facts. Our inability to simulate her well doesn't change that fact.

It does if our inability to simulate her well messes with our intuitions. If, as I conjectured, we tend to translate "omniscient person" as "scholar with lots of book-learning," then our intuitions will reflect that, and will hence be wrong.

Consider the Marianna variant... But she still lacks the relevant items of knowledge about what other people experience.

Is Marianna omniscient about light and neuroscience like Mary? If she is, she'd be able to figure out which color is which fairly easily.

If it's merely a matter of qualia being complicated, then shouldn't all other complicated systems yield relevantly identical Hard Problem intuitions?

It's not just a matter of qualia being complicated; it's a matter of the human brain being bad at communicating certain things, of which qualia are only one among many. And this isn't just an issue of processing power and the complexity of what is being processed; it's an issue of software problems. There are certain problems we have trouble processing regardless of how much power we have, because of our mind's internal architecture. Wei Dai puts it well when he says:

...a quale is like a handle to a kernel object in programming. Subconscious brain corresponds to the OS kernel, and conscious brain corresponds to user-space. When you see red, you get a handle to a "redness" object, which you can perform certain queries and operations on, such as "does this make me feel hot or cold", or "how similar is this color to this other color" but you can't directly access the underlying data structure. Nor can the conscious brain cause the redness object to be serialized into a description that can be deserialized in another brain to recreate the object. Nor can Mary instantiate a redness object in her brain by studying neuroscience.
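The handle analogy in the quote above can be sketched in code. This is only an illustrative toy, and the names here (`Kernel`, `create_color`, `feels_warm`, `similarity`) are hypothetical, not from any real OS API: user-space code receives an opaque token, can run the permitted queries on it, but has no way to read out or serialize the underlying data.

```python
# Toy model of the quale-as-kernel-handle analogy: the "kernel"
# (subconscious) holds the real data; "user space" (consciousness)
# only ever sees an opaque integer handle plus a narrow query API.

class Kernel:
    """Owns the hidden internal state; user code never touches it directly."""

    def __init__(self):
        self._objects = {}  # handle -> hidden internal state
        self._next_handle = 0

    def create_color(self, wavelength_nm):
        handle = self._next_handle
        self._next_handle += 1
        self._objects[handle] = {"wavelength": wavelength_nm}
        return handle  # user space gets only this token

    # The only queries user space is allowed to make:
    def feels_warm(self, handle):
        return self._objects[handle]["wavelength"] > 570

    def similarity(self, h1, h2):
        a = self._objects[h1]["wavelength"]
        b = self._objects[h2]["wavelength"]
        return 1.0 / (1.0 + abs(a - b))

kernel = Kernel()
red = kernel.create_color(650)
orange = kernel.create_color(600)
blue = kernel.create_color(470)

print(kernel.feels_warm(red))                                          # True
print(kernel.similarity(red, orange) > kernel.similarity(red, blue))   # True
# There is deliberately no kernel.serialize(handle): the handle cannot
# be exported to another process/brain, mirroring the ineffability claim.
```

The design choice doing the work is that `_objects` is private state of the kernel: every operation the conscious "user" can perform is mediated by the query API, which is exactly the situation the quote describes.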

Furthermore, there are in fact other things besides qualia that humans have a lot of difficulty communicating. For instance, it's common knowledge that people with a few days of hands-on job experience are often much better at the job than people who have spent months reading about it.

My intuition is that making Mary superhuman doesn't change that experiencing red seems to narrow down the possibilities for her.

I disagree. If Mary were superhuman, she could study which functions of the brain cause us to experience "qualia," and then study the memories those processes generated. She could then generate such memories in her own brain, giving her knowledge of what qualia feel like without ever having experienced them. She would see red and not be surprised at all.

If qualia were not a physical part of the brain, duplicating the memories of someone who had experienced them would not have this effect. However, I think it very likely that doing so would have this effect.

Can you explain why this intuition persists for me, when (as far as I can tell) it doesn't for any other complex system?

Because, as I said before, our emotions are "black boxes" that humans are very bad at understanding and explaining. Their Kolmogorov complexity is extraordinarily high, but we feel like they are simple because of our familiarity with them.

Maybe, but in that case the challenge is to explain, at least schematically, what superhuman power Mary obtains that lets her solve the Hard Problem.

I think the ability to study and modify her own source code and memory, as well as the source code and memory of others, is probably all she'd need, but I could be wrong.

Comment author: RobbBB 27 August 2013 09:28:15AM 0 points

"My intuition is that making Mary superhuman doesn't change that experiencing red seems to narrow down the possibilities for her."

"I disagree."

You... disagree? Do you mean your own intuition is different, or do you mean you have some special insight into my psychology that tells you that I'm misunderstanding or misrepresenting my own intuitions?

I'm reporting on psychological data about what my intuitions are indicating to me. I'm not a dualist, so I'm not (yet) making any assertions about what Mary would actually do or say or know. I'm explaining what output my simulator is giving me when I run the thought experiment.

If Mary was a superhuman she could study what functions of the brain cause us to experience "qualia," and then study the memories these processes generated. She could then generate such memories in her own brain, giving her the knowledge of what qualia feel like without ever experiencing them.

You're assuming that all superhumans intelligent enough to understand the biophysics of color vision will also necessarily have a module that allows them to self-modify so that they have whatever first-person subjective experience they wish. There's no reason to assume that. As long as a Mary without this capacity (but with the third-person biophysics-comprehending capacity) is possible, the argument goes through. The fact that a Mary who can spontaneously generate her own experience of redness is also possible doesn't make any progress toward refuting or dissolving the Mary hunch.

It sounds to me like you've been reading too much Dennett. Dennett is not a careful or patient dissector of the Hard Problem. The entire RoboMary paper, for instance, is a non sequitur in relation to the arguments it's meant to refute. It's fun and interesting, but it's talking about a different subject matter.

If qualia were not a physical part of the brain, duplicating the memories of someone who had experienced them would not have this effect.

That's not true at all. Most forms of dualism allow Mary to generate the relevant mental states by manipulating the physical states they are causally tied to.

"Can you explain why this intuition persists for me, when (as far as I can tell) it doesn't for any other complex system?"

"Because, as I said before, our emotions are 'black boxes' that humans are very bad at understanding and explaining."

This doesn't look to me like an explanation yet, even an outline of one. In fact, it looks like an appeal to the Black Box black box: 'Black box' is being used as a special word meant to pick out some uniquely important and effective category of Unknown Thingie. But just saying 'we don't understand emotions yet' doesn't tell me anything about why emotions appear irreducible to me, while other things I don't understand do seem reducible to me.

Their Kolmogorov complexity is extraordinarily high, but we feel like they are simple because of our familiarity with them.

I don't feel that mental states are simple! Yet the Mary hunch persists. You seem to be hopping back and forth between the explanations 'qualia seem irreducible because we don't know enough about them yet' and 'qualia seem irreducible because we don't realize how complicated they are'. But neither of these explanations makes me any less confused, and they're both incredibly vague. I think this is a legitimate place to insist that we say not "complexity".

I think the ability to study and modify her own source code and memory, as well as the source code and memory of others, is probably all she'd need,

Why, specifically, would any of those four abilities help? Are all four needed? Are some more important than others? Why, for instance, wouldn't just studying my own source code and memory (without being able to do radical surgery on it) suffice for knowing the phenomenal character of redness, or the phenomenal character of a bat's echolocation...?

Comment author: Ghatanathoah 28 August 2013 12:52:04AM 0 points

You... disagree? Do you mean your own intuition is different, or do you mean you have some special insight into my psychology that tells you that I'm misunderstanding or misrepresenting my own intuitions?

I mean my intuition is different.

I don't feel that mental states are simple! Yet the Mary hunch persists. You seem to be hopping back and forth between the explanations 'qualia seem irreducible because we don't know enough about them yet' and 'qualia seem irreducible because we don't realize how complicated they are'.

Alright, I'll try to stop hopping and nail down what I'm saying:

  1. I think the most likely reason that qualia seem irreducible is because of some kind of software problem in the brain that makes it extremely difficult, if not impossible, for us to translate the sort of "experiential knowledge" found in the unconscious "black box" parts of the brain into the sort of verbal, propositional knowledge that we can communicate to other people by language. The high complexity of our minds probably compounds the difficulty even further.

  2. I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain "experiential knowledge" just by reading the verbal statements.

  3. In addition to making qualia seem irreducible, this phenomenon explains other things, such as the fact that many activities are easier to learn to do by experience.

I've never actually read any Dennett, except for short summaries of some of his criticisms written by other people. One person who has influenced me a lot is Thomas Sowell, who frequently argues that the most important knowledge is implicit and extremely difficult, if not impossible, to articulate in verbal form. He makes this argument in the context of economics, but when I started reading about the ineffability of qualia I immediately thought, "This probably has a similar explanation."

Comment author: Juno_Watt 28 August 2013 01:06:12AM 0 points

I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain "experiential knowledge" just by reading the verbal statements.

Mary isn't a normal human. The point of the story is to explore the limits of explanation. That being the case, Mary is granted unlimited intelligence, so that whatever limits she encounters are limits of explanation, and not her own limits.

I think the most likely reason that qualia seem irreducible is because of some kind of software problem in the brain that makes it extremely difficult, if not impossible, for us to translate the sort of "experiential knowledge" found in the unconscious "black box" parts of the brain into the sort of verbal, propositional knowledge that we can communicate to other people by language. The high complexity of our minds probably compounds the difficulty even further.

Whatever is stopping Mary from understanding qualia, if you grant that she does not, is not their difficulty relative to her abilities, as explained above. We might not be able to understand our qualia because we are too stupid, but Mary does not have that problem.

Comment author: nshepperd 28 August 2013 03:26:54AM 4 points

If you're asserting that Mary does not have the software problem that makes it impossible to derive "experiential knowledge" from verbal data, then the answer to the puzzle is "Yes, Mary does know what red looks like, and won't be at all surprised. BTW, the reason our intuition tells us the opposite is that our normal simulate-other-humans procedures aren't capable of imagining that kind of architecture."

Otherwise, simply postulating that she has unlimited intelligence is a bit of a red herring. All that means is that she has a lot of verbal processing power; it doesn't mean all the bugs in her mental architecture are fixed. To follow the kernel object analogy: I can run a program on any speed of CPU, and it will never be able to get a handle to a kernel redness object if it doesn't have access to the OS API. The "intelligence" of the program isn't a factor (this is how we're able to run high-speed JavaScript in browsers without every JS program being a severe security risk).
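The capability-not-compute point can be sketched as well. This is a toy illustration, and `run_sandboxed`, `fast_but_blind`, and `SECRET` are hypothetical names: a program confined to a restricted namespace can burn any number of cycles, but it never reaches data it was not handed a name for.

```python
# Toy sandbox: access is determined by which names are passed in,
# not by how much computation the program performs.

SECRET = {"redness": "the hidden structure"}

def run_sandboxed(source, api):
    # The program sees ONLY the names in `api`; builtins are stripped.
    env = {"__builtins__": {}, **api}
    exec(source, env)
    return env.get("result")

fast_but_blind = """
result = 0
for _ in range(10**6):   # lots of 'intelligence' (raw compute)...
    result += 1
# ...but `SECRET` is simply not a name in this world.
"""

print(run_sandboxed(fast_but_blind, api={"range": range}))  # 1000000

try:
    run_sandboxed("result = SECRET", api={})
except NameError:
    print("no handle to SECRET, however long the program runs")
```

Speeding up the interpreter changes nothing here; only adding `SECRET` to the `api` dict would, which is the analogue of granting Mary a new capability rather than more intelligence.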

Comment author: Ghatanathoah 29 August 2013 04:22:59AM 1 point

Mary isn't a normal human.

If this is the case then, as I said before, my intuition that she would not understand qualia disappears.

Comment author: Juno_Watt 31 August 2013 08:45:40AM -1 points

my intuition that [Mary] would not understand qualia disappears.

For any value of abnormal? She is only quantitatively superior: she does not have brain-rewiring abilities.