tadasdatys comments on Steelmanning the Chinese Room Argument - Less Wrong Discussion

Post author: cousin_it 06 July 2017 09:37AM

You are viewing a single comment's thread.

Comment author: cousin_it 17 July 2017 01:48:30PM *  2 points [-]

You assume that all my knowledge about humans comes from observing their behavior. That's not true. I know that I have certain internal experiences, and that other people are biologically similar to me, so they are likely to also have such experiences. That would still be true even if the experience was never described in words, or was impossible to describe in words, or if words didn't exist.

You are right that communicating such knowledge to an AI is hard. But we must find a way.

Comment author: tadasdatys 17 July 2017 05:21:19PM 0 points [-]

You may know about being human, but how does that help you with the problem you suggested? You may know that some people can fake screams of pain, but as long as you don't know which of the two people is really in pain, the moral action is to treat them both the same. What else can you do? Guess?

The knowledge that "only the first person is really suffering" has very little to do with your internal experience; it comes entirely from real observation, and it is completely sufficient to choose the moral action.

Comment author: cousin_it 17 July 2017 05:32:54PM *  2 points [-]

You said:

At best, "X is conscious" means "X has behaviors in some sense similar to a human's".

I'm trying to show that's not good enough. Seeing red isn't the same as claiming to see red, feeling pain isn't the same as claiming to feel pain, etc. There are morally relevant facts about agents that aren't reducible to their behavior. Each behavior can arise from multiple internal experiences, some preferable to others. Humans can sometimes infer each other's experiences by similarity, but that doesn't work for all possible agents (including optimized uploads etc) that are built differently from humans. FAI needs to make such judgments in general, so it will need to understand how internal experience works in general. Otherwise we might get a Disneyland with no children, or with suffering children claiming to be happy. That's the point of the post.

You could try to patch the problem by making the AI create only agents that aren't too different from biological humans, for which the problem of suffering could be roughly solved by looking at neurons or something. But that leaves the door open to accidental astronomical suffering in other kinds of agents, so I wouldn't accept that solution. We need to figure out internal experience the hard way.

Comment author: tadasdatys 18 July 2017 07:46:07AM 0 points [-]

Seeing red isn't the same as claiming to see red

A record player looping the words "I see red" is very different from how humans see, both internally and behaviorally. A robot which takes a picture, finds the most common pixel color and, if that's red, plays the same "I see red" sound is still in some ways different, but a lot less so. And if someone wanted to call this second robot conscious, as far as color is concerned, there would be no problem with that.
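For concreteness, a minimal sketch of that second robot might look like the following. This is only an illustration of the kind of mechanism I mean; the use of Pillow, the file name, and the crude "redness" test are arbitrary assumptions for the example, not anything specified here.

```python
# Illustrative sketch of the "second robot": take a picture, find the most
# common pixel color, and if that color is roughly red, emit "I see red".
# Assumes Pillow is installed; the redness test is a deliberately crude stand-in.
from collections import Counter
from PIL import Image

def most_common_color(path):
    """Return the most frequent (R, G, B) value in the image."""
    pixels = Image.open(path).convert("RGB").getdata()
    return Counter(pixels).most_common(1)[0][0]

def looks_red(rgb, margin=50):
    """Simple test: red dominates green and blue by some margin."""
    r, g, b = rgb
    return r > g + margin and r > b + margin

if __name__ == "__main__":
    if looks_red(most_common_color("picture.jpg")):
        print("I see red")  # stands in for playing the recorded sound
```

Nothing in the argument depends on the details of the color test; any classifier in its place would do.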

You may feel that pain is special, and that if we recognize a robot which says "ouch" when pushed as feeling pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain that aren't equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family). This is not a new problem, and suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution.

Comment author: g_pepper 18 July 2017 12:36:12PM 1 point [-]

You may feel that pain is special, and that if we recognize a robot which says "ouch" when pushed as feeling pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain that aren't equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family).

I don't see how it follows from the fact that foreigners and animals feel pain that it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain. Can you clarify that inference?

suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution

I don't see anything magical about consciousness - it is something that is presumably nearly universally held by people, and no one on this thread has suggested a supernatural explanation for it. Just because we don't yet have an objective metric for consciousness in others does not make it magical.

Comment author: tadasdatys 18 July 2017 01:36:01PM 0 points [-]

it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain

No, I'm saying that "feels pain" is not a meaningful category. Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated. My example of rats was meant to point out that "feels pain" is very different from "deserves human rights".

no one on this thread has suggested a supernatural explanation for it

No one has suggested any explanation for it at all. And I do use "magical" in a loose sense.

Comment author: TheAncientGeek 18 July 2017 01:48:27PM 3 points [-]

No, I'm saying that "feels pain" is not a meaningful category.

So what do pain killers do? Nothing?

Comment author: tadasdatys 18 July 2017 02:06:18PM 0 points [-]

Move a human from one internal state to another that they prefer. "Preference" is not without its own complications, but it's a lot more general than "pain".

To be clear, the concept of pain, when applied to humans, mammals, and possibly most animals, can be meaningful. It's only a problem when we ask whether robots feel pain.

Comment author: TheAncientGeek 18 July 2017 02:24:26PM 1 point [-]

I'm with EntirelyUseless. You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word "pain".

Comment author: tadasdatys 18 July 2017 06:05:09PM 0 points [-]

There are two ways to look at a meaningless phrase. One is to say that perfectly good words are being used wrong. Another is to say that there is some problem with the words themselves. If we look at, say, "purple is bitter", the first way is clearly more appropriate. If we look at "robot feels pain", it's hard for me to tell which way I prefer.

Comment author: entirelyuseless 18 July 2017 02:22:09PM 0 points [-]

"Meaning." You keep using that word. I do not think it means what you think it means.

Comment author: tadasdatys 18 July 2017 05:59:00PM 0 points [-]

You'll have to be more specific with your criticism.

Comment author: TheAncientGeek 18 July 2017 11:12:45AM 1 point [-]

Your solution seems to consist of adopting an ethics that is explicitly non-universal.

Comment author: TheAncientGeek 18 July 2017 01:24:28PM *  0 points [-]

...has very little to do with your internal experience; it comes entirely from real observation...

There's a slippery slope there. You start with "very little X" and slide to "entirely non-X".

Comment author: tadasdatys 18 July 2017 01:52:46PM 0 points [-]

"very little" is a polite way to say "nothing". It makes sense, especially next to the vague "has to do with" construct. So there is no slope here.

To clarify, are you disagreeing with me?

Comment author: TheAncientGeek 18 July 2017 02:26:42PM 0 points [-]

Your argument is either unsound or invalid, but I'm not sure which. Of course, personal experience of subjective states does have *something* to do with detecting the same state in others.

Comment author: tadasdatys 18 July 2017 03:24:23PM 0 points [-]

detecting

Read the problem cousin_it posted again: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvd5

There is no detecting going on. If you're clever (and have too much free time), you may come up with some ways that internal human experience helps to solve that problem, but nothing significant. That's why I used "little" instead of "nothing".

Comment author: TheAncientGeek 18 July 2017 03:34:08PM 0 points [-]

But I wasn't talking about the CR; I was talking in general.