
cousin_it comments on Steelmanning the Chinese Room Argument - Less Wrong Discussion

Post author: cousin_it 06 July 2017 09:37AM




Comment author: cousin_it 17 July 2017 05:32:54PM *  2 points [-]

You said:

At best, "X is conscious" means "X has behaviors in some sense similar to a human's".

I'm trying to show that's not good enough. Seeing red isn't the same as claiming to see red, feeling pain isn't the same as claiming to feel pain, etc. There are morally relevant facts about agents that aren't reducible to their behavior. Each behavior can arise from multiple internal experiences, some preferable to others. Humans can sometimes infer each other's experiences by similarity, but that doesn't work for all possible agents (including optimized uploads etc) that are built differently from humans. FAI needs to make such judgments in general, so it will need to understand how internal experience works in general. Otherwise we might get a Disneyland with no children, or with suffering children claiming to be happy. That's the point of the post.

You could try to patch the problem by making the AI create only agents that aren't too different from biological humans, for which the problem of suffering could be roughly solved by looking at neurons or something. But that leaves the door open to accidental astronomical suffering in other kinds of agents, so I wouldn't accept that solution. We need to figure out internal experience the hard way.

Comment author: tadasdatys 18 July 2017 07:46:07AM 0 points [-]

Seeing red isn't the same as claiming to see red

A record player looping the words "I see red" is very different from how humans see, both internally and behaviorally. A robot which takes a picture, finds the most common pixel color, and if that's red, plays the same "I see red" sound, is still in some ways different, but a lot less so. And if someone wanted to call this second robot conscious, as far as color is concerned, there would be no problem with that.
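(A minimal sketch of the second robot described above, assuming Python with the Pillow imaging library; the colour-threshold test and the file name "photo.jpg" are hypothetical details not in the original comment.)

```python
from collections import Counter

from PIL import Image  # Pillow, assumed available


def most_common_color(image_path):
    """Return the most frequent (R, G, B) value in the image."""
    img = Image.open(image_path).convert("RGB")
    return Counter(img.getdata()).most_common(1)[0][0]


def is_red(color, threshold=128):
    """Crude test: the red channel is bright and dominates the others."""
    r, g, b = color
    return r > threshold and r > g and r > b


if __name__ == "__main__":
    if is_red(most_common_color("photo.jpg")):
        print("I see red")  # stands in for playing the recorded sound
```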

You may feel that pain is special, and that if we recognize a robot which says "ouch" when pushed as feeling pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain that aren't equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family). This is not a new problem, and suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution.

Comment author: g_pepper 18 July 2017 12:36:12PM 1 point [-]

You may feel that pain is special, and that if we recognize a robot which says "ouch" when pushed as feeling pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain that aren't equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family).

I don't see how it follows from the fact that foreigners and animals feel pain that it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain. Can you clarify that inference?

suggesting that some agents have a magical invisible property that makes their experiences important, is not a good solution

I don't see anything magical about consciousness - it is something that is presumably held nearly universally by people, and no one on this thread has suggested a supernatural explanation for it. Just because we don't yet have an objective metric for consciousness in others does not make it magical.

Comment author: tadasdatys 18 July 2017 01:36:01PM 0 points [-]

it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain

No, I'm saying that "feels pain" is not a meaningful category. Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated. My example of rats was meant to point out that "feels pain" is very different from "deserves human rights".

no one on this thread has suggested a supernatural explanation for it

No one has suggested any explanation for it at all. And I do use "magical" in a loose sense.

Comment author: TheAncientGeek 18 July 2017 01:48:27PM 3 points [-]

No, I'm saying that "feels pain" is not a meaningful category.

So what do pain killers do? Nothing?

Comment author: tadasdatys 18 July 2017 02:06:18PM 0 points [-]

They move a human from one internal state to another that the human prefers. "Preference" is not without its own complications, but it's a lot more general than "pain".

To be clear, the concept of pain, when applied to humans, mammals, and possibly most animals, can be meaningful. It's only a problem when we ask whether robots feel pain.

Comment author: TheAncientGeek 18 July 2017 02:24:26PM 1 point [-]

I'm with EntirelyUseless. You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word "pain".

Comment author: tadasdatys 18 July 2017 06:05:09PM 0 points [-]

There are two ways to look at a meaningless phrase. One is to say that perfectly good words are being used wrong. Another is to say that there is some problem with the words themselves. If we look at, say, "purple is bitter", the first way is clearly more appropriate. If we look at "robot feels pain", it's hard for me to tell which way I prefer.

Comment author: TheAncientGeek 18 July 2017 06:36:18PM 0 points [-]

I don't think you have established any problem of meaning, so the question of which kind of problem it is doesn't arise.

Comment author: tadasdatys 18 July 2017 07:30:27PM 0 points [-]

Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated.

Here is my claim that "robot feels pain" is a meaningless statement. More generally, a question is meaningless if an answer to it transfers no information about the real world. I can answer "is purple bitter" either way, and that would tell you nothing about the color purple. Likewise, I could answer "does this robot feel pain" and that would tell you nothing about the robot or what you should do with it. At best, a "yes" would mean that the robot can detect pressure or damage, and then say "ouch" or run away. But that's clearly not the kind of pain we're talking about.

Comment author: entirelyuseless 18 July 2017 02:22:09PM 0 points [-]

"Meaning." You keep using that word. I do not think it means what you think it means.

Comment author: tadasdatys 18 July 2017 05:59:00PM 0 points [-]

You'll have to be more specific with your criticism.

Comment author: entirelyuseless 19 July 2017 01:04:26AM 1 point [-]

"Meaning" refers to the fact that words are about things, and they are about whatever people want to talk about. You seem to be using the word rather differently, e.g. perhaps to refer to how you would test whether something is true, since you said that the word "pain" is meaningless applied to a robot since we have no way to test whether it feels pain. Or you have the idea that words are meaningless if they do not imply something in the "real world," by which you understand an objective description. But since people talk about whatever they want to talk about, words can also signify subjective perceptions, and they do.

Comment author: tadasdatys 19 July 2017 05:55:24AM 0 points [-]

For starters, do we agree that the phrase "purple is bitter" is meaningless? Or at least that some grammatically correct strings of words can have no meaning?

Comment author: TheAncientGeek 18 July 2017 11:12:45AM 1 point [-]

Your solution seems to consist of adopting an ethics that is explicitly non-universal.