
Capla comments on Can science come to understand consciousness? A problem of philosophical zombies (Yes, I know, P-zombies again.) - Less Wrong Discussion

2 Post author: Capla 17 November 2014 05:06PM




Comment author: Capla 20 November 2014 10:18:39PM 0 points

How would we define "conscious" in order to ask the question?

Comment author: lmm 20 November 2014 11:46:57PM 1 point

The same way we define it when asking other people?

Comment author: Capla 21 November 2014 01:54:13AM 0 points

Which is what, specifically?

I think we do it indexically. I use a word in context, and since you have a parallel experience in the same context, I never have to make explicit exactly what I mean (at least in terms of intension); you have the same experience and so can infer the label. Ask an automaton "do you have emotions?" and it may observe human use of the word "emotion," conclude that an emotion is an automatic behavioral response to conditions, and to changes in conditions, that affect one's utility function, and declare that yes, it does have emotions. Yet, of course, this completely misses what we meant by emotion, which is a subjective quality of experience.

Can you make a being come to understand the concept of subjectivity if it doesn't itself embody a subjective perspective?

Alternatively, if you asked me “What is red?” I could point to a stop sign, then to someone wearing a red shirt, and a traffic light that happens to be red, and blood from where I accidentally cut myself, and a red business card, and then I could call up a color wheel on my computer and move the cursor to the red area. This would probably be sufficient, though if you know what the word “No” means, the truly strict would insist that I point to the sky and say “No.”

This only communicates if the person you are trying to explain "red" to can perceive color.

The problem is that my subjective experience of red is always accompanied by a particular range of wavelengths of light. Yet when I say the word "red," I don't mean the photons at that frequency; I mean the subjective experience that those photons cause. But since the one always accompanies the other, someone naive of color might think I mean the mathematical features of the waves reflected from the objects to which I'm pointing.

Comment author: lmm 21 November 2014 09:07:27AM 0 points

If you can't express the question then you can't be confident other people understand you either. Remember that some people just don't have e.g. visual imagination, and don't realise there's anything unusual about them.

Now I'm wondering whether I'm conscious, in your sense. I mean, I feel emotion, but it seems to adequately correspond to your "automaton" version. I experience what I assume is consciousness, but it seems to me that that's just how a sufficiently advanced self-monitoring system would feel from the inside.

Comment author: Capla 21 November 2014 05:45:37PM 1 point

Yes. I'm wondering if these disputes simply resolve to our having different subjective experiences of what it means to be alive. In fact, maybe the mistake is assuming that p-zombies don't exist. Maybe some humans are p-zombies!

However,

that's just how a sufficiently advanced self-monitoring system would feel from the inside.

seems like almost a contradiction in terms. Can a self-monitoring system become sufficiently advanced without feeling anything (just as my computer computes but, I suppose, doesn't feel)?

Comment author: lmm 21 November 2014 08:00:22PM -1 points

Can a self-monitoring system become sufficiently advanced without feeling anything (just as my computer computes but, I suppose, doesn't feel)?

I think not. But I think that makes it entirely unsurprising, obvious even, that a more advanced computer would feel.

Comment author: Capla 21 November 2014 08:22:04PM 1 point

If so, I want to know why.

Comment author: lmm 23 November 2014 10:52:54PM -1 points

Because it seems like the most plausible explanation for the fact that I feel, to the extent that I do. (It also explains the otherwise quite confusing result that, for many kinds of actions, our decision-making processes activate after we've acted, even though we feel like our decision determined the action.)

Comment author: Capla 24 November 2014 01:23:12AM 1 point

I don't know what that second thing has to do with consciousness.