Steelmanning the Chinese Room Argument

Post author: cousin_it 06 July 2017 09:37AM 4 points

(This post grew out of an old conversation with Wei Dai.)

Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.

Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?

Clearly the only reasonable answer is "no, not in general".

Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish it from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?

Again, clearly, the only reasonable answer is "not in general".

Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?

A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I'm not sure! Not at all!

Comments (136)

Comment author: MrMind 06 July 2017 10:00:34AM 3 points [-]

Clearly the only reasonable answer is "no, not in general".

I challenge this.
Either you relax the communication channel in such a way that I can access other kinds of information (brain scans, purchase history, body language, etc.) or you do not get to say "not in general", because there's nothing general about two people communicating only through a terminal.

To me it's like you're saying "can you tell me how a cake smells by a picture? No! So I'm not sure that smells are really communicable". Hm.

Comment author: cousin_it 06 July 2017 10:25:34AM *  2 points [-]

I think the point does hold in general. There can be many possible internal experiences corresponding to any given input-output map. To the extent that's true, Chinese Room type arguments stay unresolved. Unless you define high resolution brain scans as part of input/output, but that seems far from the spirit of Chinese Room.

Comment author: Manfred 06 July 2017 02:57:26PM 1 point [-]

What if the person claims to be able to add numbers? If you ask them about 2+2 and they answer 4, maybe they came preloaded with that response, but if you get them to add a few dozen Poisson-distributed numbers, maybe you start believing they're actually implementing the algorithm. This relies on the important distinction between telling two things apart with certainty and gathering evidence.
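
A minimal sketch of what that evidence-gathering could look like (my addition, not part of the comment; it uses uniform random sums rather than the Poisson-distributed ones mentioned above, and all names are illustrative):

```python
import random

# Quiz a subject with many random sums and tally evidence for "actually adds"
# versus "answers from a small canned table and guesses otherwise".
CANNED = {(2, 2): 4, (1, 1): 2}  # a hypothetical pre-loaded responder

def canned_subject(a, b):
    return CANNED.get((a, b), random.randint(0, 200))  # guesses when unsure

def adding_subject(a, b):
    return a + b  # really implements the algorithm

def evidence_score(subject, n_questions=50):
    """Crude tally: +1 per correct answer to a random sum, -10 per wrong one."""
    score = 0
    for _ in range(n_questions):
        a, b = random.randint(0, 100), random.randint(0, 100)
        score += 1 if subject(a, b) == a + b else -10
    return score

print(evidence_score(adding_subject))  # consistently high
print(evidence_score(canned_subject))  # almost certainly low
```

No single answer settles the question, but each correct response to a fresh random sum is further evidence for the adding hypothesis over the lookup-table one.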

Comment author: cousin_it 06 July 2017 03:06:40PM *  1 point [-]

Unlike with addition, I don't think we understand consciousness well enough to create a sequence of questions such that the simplest algorithm answering them would be conscious. It's not clear to me that such a sequence even exists. If we found one, it would be a big step for FAI.

Comment author: Manfred 06 July 2017 05:21:39PM 0 points [-]

Hm, this is an interesting question to think about. I lean more towards the camp of construing consciousness as broad and pretty easy to attain, but only a small part of a mind's value, as long as we can push down the probability of lookup tables and push up the probability of self-reflection and abstract thinking.

Weird example I'd label as conscious: an AI that can observe us and is trying to fool us in a particular way. Our brains compute expectations of what kinds of things a conscious correspondent would say; the AI can observe these expectations and compute something consistent both with our expectations and with its past responses. Most of the computation of a mind is there, but packaged differently and spread over multiple media - the text responses no longer reflect consciousness if the AI loses its observation channel.

Comment author: Dr_Manhattan 06 July 2017 01:55:48PM 1 point [-]

This post grew out of an old conversation with Wei Dai

Since the physical existence of Wei is highly doubtful, can we have a link to the conversation?

Comment author: arundelo 06 July 2017 02:03:38PM *  1 point [-]

I bet cousin_it didn't link it because it's not on the (public) internet. Edit: Nope!

physical existence of Wei is highly doubtful

People have met Wei Dai in meatspace, if that's what you're talking about. Edit: As confirmed by cousin_it.

Comment author: cousin_it 06 July 2017 02:01:00PM *  1 point [-]

See here, Ctrl+F "optimize". Didn't think it would still be accessible. Added the link to the post.

I've met Wei in person and can assure you that he's as real as I am :-)

Comment author: WalterL 06 July 2017 02:55:48PM 4 points [-]

" can assure you that he's as real as I am :-)"

This just moves the dilemma back one level!

Comment author: cousin_it 06 July 2017 03:04:55PM 3 points [-]

Heh. Lots of people on LW have met me. There's at least one in this thread :-)

Comment author: WalterL 06 July 2017 04:50:40PM 1 point [-]

That just moves the dilemma back one level!

Comment author: Kaj_Sotala 06 July 2017 11:40:01AM 1 point [-]
Comment author: cousin_it 06 July 2017 11:45:43AM *  0 points [-]

Yeah, and it doesn't even seem all that different from mine :-)

Comment author: Wei_Dai 06 July 2017 08:50:26PM 0 points [-]

They actually seem pretty different to me. Searle's original claim was that computer programs won't have "intentionality" (which seems like a confused/useless concept, but I haven't dug into it enough to be sure) even if they exhibit intelligent input-output behavior. Kaj's steelman claims that systems based on crude manipulations of suggestively named tokens likely won't be intelligent, whereas your (cousin_it's) steelman claims that a system may not be conscious even if it exhibits human-like (and hence intelligent) input-output behavior. These seem to go in very different directions.

Comment author: TheAncientGeek 13 July 2017 09:32:02AM *  2 points [-]

Intentionality" (which seems like a confused/useless concept

It's what parrots and chatbots uncontroversially have not got -- the ability to know what they are saying.

Comment author: cousin_it 07 July 2017 06:07:14AM *  1 point [-]

I guess the connection is that simple systems can seem surprisingly human-like. Phil Goetz made a similar point in We Are Eliza.

Comment author: turchin 06 July 2017 10:40:16AM 1 point [-]

The argument is too general, as it also proves that it is impossible to know that another biological human is conscious. Maybe nobody except me-now has it.

I knew a person who claimed that he could create 4-dimensional images in his mind's eye. I don't know whether I should believe him or how to check it.

Comment author: cousin_it 06 July 2017 10:44:21AM *  3 points [-]

Since other people are biologically similar to me, they probably say "I'm conscious" for the same reason as me, so it makes sense to believe them. The problem in Chinese Room is that the system is quite different from a human and might be lying about some things, so there's less reason to trust it when it claims to have human-like qualia.

Comment author: krkthor 06 July 2017 05:09:30PM 0 points [-]

Since other people are biologically similar to me

I can't agree with you, because you can only assert that a person is biologically similar to you based on how they look and feel, barring cutting into them. If I were to design a robot that looked, felt and talked enough like a human being that you would have no way of discerning whether it's a real human or not, then you're saying that you would be inclined to believe it.

I admit I don't have an answer to this problem, I just don't agree with your statement.

Comment author: entirelyuseless 06 July 2017 01:36:47PM 0 points [-]

I would believe the computer, not because of accepting computationalism, but because when I imagine the situation happening in real life, I cannot imagine continuing to say to someone or something, "Actually, I'm not sure you're really conscious," when it acts like it is in every way.

I actually think the same thing is likely to happen to almost everyone (that is, in the end accepting that it is conscious), regardless of their previous philosophical views.

Comment author: cousin_it 06 July 2017 02:35:11PM *  0 points [-]

Yeah, that's how Justin Corwin won twenty AI-box experiments.

Comment author: entirelyuseless 07 July 2017 04:47:29AM 0 points [-]

Right. I've said before that we don't need the experiment. We already know people will let out an AI that seems decent and undeserving of being in a box.

Comment author: Jiro 14 July 2017 02:56:24PM 0 points [-]

Can I be sure that I'm conscious? Nobody can give me a description of consciousness which I can look at and say "sure, I have one of those." The best they can do is describe consciousness in terms of other things, which they can't give a description for either, which doesn't really help.

Comment author: g_pepper 14 July 2017 03:22:26PM 0 points [-]

Nobody can give me a description of consciousness

True, consciousness seems to defy precise definitions.

Can I be sure that I'm conscious?

It seems to me that consciousness as commonly understood is necessary for having first-person experiences of the sort that I have, and presumably you have also. And I suspect that pondering your own consciousness implies that you are in fact conscious.

Comment author: Jiro 15 July 2017 03:37:28PM 0 points [-]

But that just moves the question back a level. How do I know that some activity is "pondering your own consciousness"? You can't give me a description of "pondering your own consciousness" that can be used to determine if that is taking place.

Comment author: g_pepper 15 July 2017 07:48:03PM 0 points [-]

How do I know that some activity is "pondering your own consciousness"?

Isn't that what you were doing when you said "Can I be sure that I'm conscious"?

It seems to me that one's own consciousness is beyond dispute if one is able to think about things (including but not limited to one's own consciousness) and have first-person experiences. Even if one disputes the consciousness of others (for example, if one is a solipsist), I don't see how anyone can reasonably doubt his/her own consciousness.

Comment author: Jiro 16 July 2017 05:57:40AM *  0 points [-]

It's turtles all the way down. Just like you can't give me a description of consciousness, and you can't give me a description of "pondering your own consciousness", you can't give me a description of "first person experiences" either. You can't give me a description of any of these related concepts except in terms of other such concepts.

It's not so much that I'm doubting whether I'm conscious, but rather I'm doubting whether I can figure out whether I'm conscious. I can't figure out if I have something when you can't communicate to me exactly what it is that I may or may not have.

Comment author: entirelyuseless 16 July 2017 02:46:16PM 2 points [-]

Can you figure out whether there are chairs in your house? How? Suppose you say that there are. How do you know they are chairs and not something else? If you answer those questions, we can continue in the same way and ask how you know those answers are right and what they mean. You will never be able to explain any concept without using other concepts, and we can always say, "but what are those things?"

I would say there is no difference; consciousness is no harder to recognize than chairs (and in fact a bit easier.) If you think there is a difference, what is it?

Comment author: Jiro 17 July 2017 04:34:53AM *  0 points [-]

If I ask you to describe a chair, ultimately you'll describe it in terms of things I can perceive. "A chair is something made for sitting. Sitting is this thing I'm doing" and I can watch you sitting, therefore getting an idea of what sitting is. I can't watch your consciousness.

Comment author: entirelyuseless 17 July 2017 01:41:36PM 1 point [-]

But what is watching someone sitting, and what is "getting an idea of what sitting is"? Those aren't things which are easy to watch.

And if you say you can notice yourself watching someone sitting and notice yourself getting an idea of what sitting is, then you can notice yourself being conscious. So there shouldn't be any difficulty figuring out whether you are conscious. The difficulty (if there is one) would be figuring out whether someone else is conscious. And it is equally difficult to know whether someone else has an idea of what sitting is.

Comment author: Jiro 17 July 2017 07:11:58PM 0 points [-]

I think maybe I'm not being clear.

If you want to tell me what a chair is, you can point to a chair and its characteristics and I can look at it. I can then notice that when I look at that chair, and when I look at an object inside my house, they look pretty much the same. So I conclude that the object inside my house seems to be what you would call a chair. (Of course, you'd probably describe a chair in a more complicated way, but it would come down to a lot of instances of that.)

If I try to do that for consciousness, one of the intermediate steps is missing. I can't look at your consciousness, then look at mine, and say "hmm, they seem to be the same sort of thing". Each one is (or is purported to be) only visible to one person.

The fact that I can "notice myself being conscious" doesn't change this. I can't compare consciousnesses. While it's true that I can't directly compare my idea of sitting to your idea of sitting, I can go through the intermediary of asking you to sit, then comparing what I see when you sit to what I see when I sit.

Comment author: g_pepper 16 July 2017 01:46:54PM *  1 point [-]

It's not so much that I'm doubting whether I'm conscious, but rather I'm doubting whether I can figure out whether I'm conscious.

If you don't doubt you are conscious, I'm not sure why you would need to figure out whether you are conscious - it seems to me that you already know based on direct experience.

Just like you can't give me a description of consciousness, and you can't give me a description of "pondering your own consciousness", you can't give me a description of "first person experiences" either.

That these things are difficult to describe is not in dispute; that is what I meant when I said "consciousness seems to defy precise definitions". But, we can still talk about them as there seems to be a shared understanding of the concepts.

One need not have a precise definition of a thing to discuss and believe in that thing or to know that one is affected by that thing. For example, consider someone unschooled in physics beyond a grade-school level. He/she knows about gravity, knows that he/she is subject to the effects of gravity and can make (qualitative) predictions about the effects of gravity, even if he/she cannot say whether gravity is a force, a warping of spacetime, both of these things, neither of these things, or even understand the distinction. Similarly, there is enough of a common understanding of consciousness and first person experiences for a person to be confident that she/he is conscious and has first person experiences.

I do agree that the lack of precise definition (and, more importantly, the lack of measurable or externally observable manifestations) makes it impossible (at the present) for an observer to know whether some other entity is conscious.

Comment author: tadasdatys 17 July 2017 08:24:26AM 0 points [-]

The three examples deal with different kinds of things.

Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don't, they should be physically stored somehow. In that sense they are the most real of the three.

Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring these somehow, but, obviously, you may find some subjects who claim to have the skill perform slower than others who claim not to. Ultimately, "I have a skill to do X" means "I believe I'm better than most at X", which is a belief as good as the previous one, but a little less direct.

Finally, being conscious doesn't mean anything at all. It has no relationship to reality. At best, "X is conscious" means "X has behaviors in some sense similar to a human's". If a computationalist answers "no" to the first two questions, and "yes" to the last one, they're not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That's, by the way, similar to what compatibilists do with free will.

Comment author: TheAncientGeek 18 July 2017 01:19:43PM *  2 points [-]

Finally, being conscious doesn't mean anything at all. It has no relationship to reality. At best, "X is conscious" means "X has behaviors in some sense similar to a human's". If a computationalist answers "no" to the first two questions, and "yes" to the last one, they're not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That's, by the way, similar to what compatibilists do with free will.

You say that like its a good thing.

If you look for consciousness from the outside, you'll find nothing, or you'll find behaviour. That's because consciousness is on the inside, is about subjectivity.

You won't find penguins in the Arctic, but that doesn't mean you get to define penguins as nonexistent, or redefine "penguin" to mean "polar bear".

Comment author: tadasdatys 18 July 2017 01:49:14PM 0 points [-]

You say that like its a good thing.

No, I'm not personally in favor of changing definitions of broken words. It leads to stupid arguments. But people do that.

If you look for consciousness from the outside, you'll find nothing, or you'll find behaviour. That's because consciousness is on the inside, is about subjectivity.

It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain. I'm under the impression that cousin_it believes you can have the latter without the former. I say you must have both. Are you saying you don't need either? That you could have two physically identical agents, one conscious, the other not?

Comment author: TheAncientGeek 18 July 2017 02:14:05PM *  1 point [-]

It would be preferable to find consciousness in the real world.

Meaning the world of exteriors? If so, is that not question begging?

Either reflected in behavior or in the physical structure of the brain.

Well, it's definitely reflected in the physical structure of the brain, because you can tell whether someone is conscious with an fMRI scan.

I'm under the impression that cousin_it believes you can have the latter without the former. I say you must have both.

OK. Now that you have asserted it, how about justifying it?

Are you saying you don't need either? That you could have two physically identical agents, one conscious, the other not?

No. I am saying you shouldn't beg questions, and you shouldn't confuse the evidence for X with the meaning of X.

You are collapsing a bunch of issues here. You can believe that it is possible to meaningfully refer to phenomena that are not fully understood. You can believe that something exists without believing it exists dualistically. And so on.

Comment author: tadasdatys 18 July 2017 02:43:40PM 0 points [-]

Meaning the world of exteriors?

No, meaning the material, physical world. I'm glad you agree it's there. Frankly, I haven't the slightest clue what "exterior" means. Did you draw an arbitrary wall around your brain, and decide that everything that happens on one side is interior, and everything that happens on the other is exterior? I'm sure you didn't. But I'd rather not answer your other points when I have no clue what it is that we disagree about.

because you can tell whether someone is conscious with an FMRI scan.

No, you can tell if their brain is active. It's fine to define "consciousness" = "human brain activity", but that doesn't generalize well.

Comment author: TheAncientGeek 18 July 2017 03:06:12PM *  0 points [-]

I have not a slightest clue what "exterior" means.

It's where you are willing to look, as opposed to where you are not. You keep insisting that consciousness can only be found in the behaviour of someone else; your opponents keep pointing out that you have the option of accessing your own.

No, you can tell if their brain is active. It's fine to define "consciousness" = "human brain activity",

We don't do that. We use a medical definition. "Consciousness" has a number of uses in science.

Comment author: tadasdatys 18 July 2017 06:17:54PM 0 points [-]

It's where you are willing to look, as opposed to where you are not.

That's hardly a definition. I think it's you who is begging the question here.

You keep insisting that cosnciousness can only be found in the behaviour of someone else

I have no idea where you got that. I explicitly state "I say you must have both", just a couple of posts above.

The state of being aware, or perceiving physical facts or mental concepts; a state of general wakefulness and responsiveness to environment; a functioning sensorium.

Here's a google result for "medical definition of consciousness". It is quite close to "brain activity", dreaming aside. If you extended the definition to non-human agents, any dumb robot would qualify. Did you have some other definition in mind?

Comment author: TheAncientGeek 18 July 2017 06:53:31PM *  0 points [-]

I explicitly state "I say you must have both", just a couple of posts above

Behaviour alone versus behaviour plus brain scans doesn't make a relevant difference. Brain scans are still objective data about someone else. It's still an attempt to deal with subjectivity on an objective basis.

The medical definition of consciousness is not brain activity, because there is some sort of brain activity during sleep states and even coma. The brain is not a PC.

Comment author: entirelyuseless 18 July 2017 02:10:27PM 1 point [-]

"It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain."

"It would be preferable" expresses wishful thinking. The word refers to subjective experience, which is subjective by definition, while you are looking at objective things instead.

Comment author: tadasdatys 18 July 2017 02:30:08PM *  0 points [-]

No, "it's preferable", same as "you should", is fine when there is a goal specified. e.g. "it's preferable to do X, if you want Y". Here, the goal is implicit - "not to have stupid beliefs". Hopefully that's a goal we all share.

By the way, "should" with implicit goals is quite common, you should be able to handle it. (Notice the second "should'. The implicit goal is now "to participate in normal human communication").

Comment author: entirelyuseless 18 July 2017 03:07:29PM 0 points [-]

We can understand that the word consciousness refers to something subjective (as it obviously does) without having stupid beliefs.

Comment author: tadasdatys 18 July 2017 06:43:43PM 0 points [-]

Subjective is not the opposite of physical.

Comment author: entirelyuseless 19 July 2017 01:06:46AM 0 points [-]

Indeed.

"Subjective perception," is opposite, in the relevant way, to "objective description."

Suppose there were two kinds of things, physical and non-physical. This would not help in any way to explain consciousness, as long as you were describing the physical and non-physical things in an objective way. So you are quite right that subjective is not the opposite of physical; physicality is utterly irrelevant to it.

The point is that the word consciousness refers to subjective perception, not to any objective description, whether physical or otherwise.

Comment author: tadasdatys 19 July 2017 05:52:44AM 0 points [-]

physicality is utterly irrelevant to it.

No, physical things have objective descriptions.

Can you find another subjective concept that does not have an objective description? I'm predicting that we disagree about what "objective description" means.

Comment author: entirelyuseless 19 July 2017 02:18:09PM 0 points [-]

Yes, I can find many others. "You seem to me to be currently mistaken" does not have any objective description; it is how things seem to me. It is however correlated with various objective descriptions, such as the fact that I am arguing against you. However, none of those things summarize the meaning, which is a subjective experience.

"No, physical things have objective descriptions."

If a physical thing has a subjective experience, that experience does not have an objective description, but a subjective one.

Comment author: g_pepper 18 July 2017 02:27:14PM 0 points [-]

It would be preferable to find consciousness in the real world.

I find myself to be conscious every day. I don't understand what you find "unreal" about direct experience.

Comment author: tadasdatys 18 July 2017 03:18:58PM 0 points [-]

Here's what I think happened.

You observed something interesting happening in your brain, you labeled it "consciousness".
You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans "conscious".
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it "not conscious".
Then you observed a robot, and you asked "is it conscious?". If you asked the full question - "are the things happening in a robot similar to the things happening in my brain" - it would be obvious that you won't get a yes/no answer. They're similar in some ways and different in others.

Comment author: TheAncientGeek 18 July 2017 05:32:39PM *  0 points [-]

But if you go back to the original question, you can't rule out that the robot is fully conscious, despite having some physical differences. The point being that translating questions about consciousness into questions about brain activity and function (in a wholesale and unguided way) isn't superior, it's potentially misleading.

Comment author: tadasdatys 18 July 2017 06:52:33PM 0 points [-]

I can rule out that the robot is conscious, because the word "conscious" has very little meaning. It's a label of an artificial category. You can redefine "conscious" to include or exclude the robot, but that doesn't change reality in any way. The robot is exactly as "conscious" as you are "roboticious". You can either ask questions about brain activity and function, or you can ask no questions at all.

Comment author: TheAncientGeek 19 July 2017 01:43:20PM *  0 points [-]

I can rule out that the robot is conscious, because the word "conscious" has very little meaning.

To whom? To most people, it indicates having a first person perspective, which is something rather general. It seems to mean little to you because of your gerrymandered definition of meaning. Going only by external signs, consciousness might just be some unimportant behavioural quirks.

You can redefine "conscious" to include or exclude the robot, but that doesn't change reality in any way.

The point is not to make it vacuously true that robots are conscious. The point is to use a definition of consciousness that includes its central feature: subjectivity.

You can either ask questions about brain activity and function, or you can ask no questions at all.

Says who? I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste. The fact that having consciousness gives you that kind of access is central.

Comment author: tadasdatys 19 July 2017 05:31:57PM 0 points [-]

having a first person perspective

What does "not having a first person perspective" look like?

gerrymnadered definition of meaning

I find my definition of meaning (of statements) very natural. Do you want to offer a better one?

subjectivity

I think you use that word as equivalent to consciousness, not as a property that consciousness has.

I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste.

All of these things have perfectly good physical representations. All of them can be done by a fairly simple bot. I don't think that's what you mean by consciousness.

Comment author: TheAncientGeek 20 July 2017 03:18:08PM *  1 point [-]

All of these things have perfectly good physical representations.

Not if "perfectly good" means "known".

Comment author: g_pepper 18 July 2017 04:10:11PM *  0 points [-]

You observed something interesting happening in your brain, you labeled it "consciousness". You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is is happening in their brains, and labeled the humans "conscious".

Yes, that sounds about right, with the caveat that I would say that other humans are almost certainly conscious. Obviously there are people (e.g. solipsists) who don't think that conscious minds other than their own exist.

You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it "not conscious".

That sounds approximately right, albeit it is not just the fact that a rock is dissimilar to me that leads me to believe it to be unconscious. I am open to the possibility that entities very different from myself might be conscious.

Then you observed a robot, and you asked "is it conscious?". If you asked the full question - "are the things happening in a robot similar to the things happening in my brain" - it would be obvious that you won't get a yes/no answer. They're similar in some ways and different in others.

I'm not sure that "is the robot conscious" is really equivalent to "are the things happening in a robot similar to the things happening in my brain". It could be that some things happening in the robot's brain are similar in some ways to some things happening in my brain, but the specific things that are similar might have little or nothing to do with consciousness. Moreover, even if a robot's brain used mechanisms that are very different from those used by my own brain, this would not mean that the robot is necessarily not conscious. That is what makes the consciousness question difficult - we don't have an objective way of detecting it in others, particularly in others whose physiology differs significantly from our own. Note that this does not make consciousness unreal, however.

I would be willing to answer "no" to the "is the robot conscious" question for any current robot that I have seen or even read about. But that is not to say that no robot will ever be conscious. I do agree that there could be varying degrees of consciousness (rather than a yes/no answer); I suspect that animals have varying degrees of consciousness, e.g. non-human apes a fairly high degree, ants a low or zero degree, etc.

I don't see why any of this would lead to the conclusion that consciousness or pain are not real phenomena.

Comment author: tadasdatys 18 July 2017 07:16:23PM 0 points [-]

Let me say it differently. There is a category in your head called "conscious entities". Categories are formed from definitions or by picking some examples and extrapolating (or both). I say category, but it doesn't really have to be hard and binary. I'm saying that "conscious entities" is an extrapolated category. It includes yourself, and it excludes inanimate objects. That's something we all agree on (even "inanimate objects" may be a little shaky).

My point is that this is the whole specification of "conscious entities". There is nothing more to help us decide which objects belong to it, besides wishful thinking. Usually we choose to include all humans or all animals. Some choose to keep themselves as the only member. Others may want to accept plants. It's all arbitrary. You may choose to pick some precise definition, based on something measurable, but that will just be you. You'll be better off using another label for your definition.

Comment author: g_pepper 19 July 2017 12:47:41PM 0 points [-]

That it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer's is conscious is not really in question - pretty much everyone on this thread has said that. It doesn't follow that I should drop the term or a "use another label"; there is a common understanding of the term "conscious" that makes it useful even if we can't know whether "X is conscious" is true in many cases.

Comment author: tadasdatys 19 July 2017 01:54:55PM 0 points [-]

it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer's is conscious

There is a big gap between "difficult" and "impossible". If a thing is "difficult to measure", then you're supposed to know in principle what sort of measurement you'd want to do, or what evidence you could in theory find, that proves or disproves it. If a thing is "impossible to measure", then the thing is likely bullshit.

there is a common understanding of the term "conscious"

What understanding exactly? Besides "I'm conscious" and "rocks aren't conscious", what is it that you understand about consciousness?

Comment author: g_pepper 19 July 2017 08:16:44PM 0 points [-]

If a thing is "impossible to measure", then the thing is likely bullshit.

In the case of consciousness, we are talking about subjective experience. I don't think that the fact that we can't measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can't get the answer to either of those things via measurement, but I don't think that they are bullshit questions (albeit they are not particularly useful questions).

What understanding exactly? Besides "I'm conscious" and "rocks aren't conscious", what is it that you understand about consciousness?

In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.

Comment author: cousin_it 17 July 2017 10:43:06AM *  2 points [-]

Let's try another situation. Imagine two people in sealed rooms. You press a button and both of them scream in pain. However you know that only the first person is really suffering, while the second one is pretending and the button actually gives him pleasure. The two rooms have the same reaction to pressing the button, but the moral value of pressing the button is different. If you propose an AI that ignores all such differences in principle, and assigns moral value only based on external behavior without figuring out the nature of pain/pleasure/other qualia, then I won't invest in your AI because it will likely lead to horror.

Hence the title "steelmanning the chinese room argument". To have any shot at FAI, we need to figure out morality the hard way. Playing rationalist taboo isn't good enough. The hope of reducing all morally relevant properties (not just consciousness) to outward behavior is just that - a hope. You have zero arguments why it's true, and the post gives several arguments why it's false. Don't bet the world on it.

Comment author: tadasdatys 17 July 2017 11:49:35AM 0 points [-]

However you know that only the first person is really suffering <...>

Let's pause right there. How do you know it? Obviously, you know it by observing evidence for past differences in behavior. This, of course, includes being told by a third party that the rooms are different and other forms of indirect observations.

<...> an AI that ignores all such differences in principle <...>

If the AI has observed evidence for the difference between the rooms, then it will take it into account. If the AI has not observed any difference, then it will not. The word "ignore" is completely inappropriate here. You can't ignore something you can't know. Its usage here suggests that you expect there is some type of evidence that you would accept, but the AI wouldn't. Is that true? Maybe you expect the AI to have no long-term memory? Or maybe you think it wouldn't trust what people tell it?

Comment author: cousin_it 17 July 2017 01:48:30PM *  2 points [-]

You assume that all my knowledge about humans comes from observing their behavior. That's not true. I know that I have certain internal experiences, and that other people are biologically similar to me, so they are likely to also have such experiences. That would still be true even if the experience was never described in words, or was impossible to describe in words, or if words didn't exist.

You are right that communicating such knowledge to an AI is hard. But we must find a way.

Comment author: tadasdatys 17 July 2017 05:21:19PM 0 points [-]

You may know about being human, but how does that help you with the problem you suggested? You may know that some people can fake screams of pain, but as long as you don't know which of the two people is really in pain, the moral action is to treat them both the same. What else can you do? Guess?

The knowledge that "only the first person is really suffering" has very little to do with your internal experience, it comes entirely from real observation and it is completely sufficient to choose the moral action.

Comment author: cousin_it 17 July 2017 05:32:54PM *  2 points [-]

You said:

At best, "X is conscious" means "X has behaviors in some sense similar to a human's".

I'm trying to show that's not good enough. Seeing red isn't the same as claiming to see red, feeling pain isn't the same as claiming to feel pain, etc. There are morally relevant facts about agents that aren't reducible to their behavior. Each behavior can arise from multiple internal experiences, some preferable to others. Humans can sometimes infer each other's experiences by similarity, but that doesn't work for all possible agents (including optimized uploads etc) that are built differently from humans. FAI needs to make such judgments in general, so it will need to understand how internal experience works in general. Otherwise we might get a Disneyland with no children, or with suffering children claiming to be happy. That's the point of the post.

You could try to patch the problem by making the AI create only agents that aren't too different from biological humans, for which the problem of suffering could be roughly solved by looking at neurons or something. But that leaves the door open to accidental astronomical suffering in other kinds of agents, so I wouldn't accept that solution. We need to figure out internal experience the hard way.

Comment author: tadasdatys 18 July 2017 07:46:07AM 0 points [-]

Seeing red isn't the same as claiming to see red

A record player looping the words "I see red" is very different from how humans see, both internally and behaviorally. A robot which takes a picture, finds the most common pixel color, and if that's red, plays the same "I see red" sound, is still in some ways different, but a lot less so. And if someone wanted to call this second robot conscious, as far as color is concerned, there would be no problem with that.
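
A minimal sketch of that second robot (my addition, not the commenter's code; it assumes a flattened list of RGB pixel tuples and an illustrative "mostly red" threshold):

```python
from collections import Counter

def most_common_pixel(pixels):
    """pixels: iterable of (r, g, b) tuples, e.g. a flattened camera frame."""
    return Counter(pixels).most_common(1)[0][0]

def looks_red(rgb):
    r, g, b = rgb
    return r > 1.5 * max(g, b, 1)  # crude "mostly red" test; threshold is arbitrary

def robot_report(pixels):
    return "I see red" if looks_red(most_common_pixel(pixels)) else "..."

# A toy four-pixel "photo" that is mostly red:
frame = [(200, 10, 10), (200, 10, 10), (200, 10, 10), (0, 0, 255)]
print(robot_report(frame))  # -> "I see red"
```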

You may feel that pain is special, and that if we recognize a robot which says "ouch" when pushed as feeling pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain that aren't equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family). This is not a new problem, and suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution.

Comment author: g_pepper 18 July 2017 12:36:12PM 1 point [-]

You may feel that pain is special, and that if we recognize a robot which says "ouch" when pushed as feeling pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain that aren't equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family).

I don't see how it follows from the fact that foreigners and animals feel pain that it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain. Can you clarify that inference?

suggesting that some agents have a magical invisible property that makes their experiences important, is not a good solution

I don't see anything magical about consciousness - it is something that is presumably nearly universally held by people, and no one on this thread has suggested a supernatural explanation for it. Just because we don't as-of-yet have an objective metric for consciousness in others does not make it magical.

Comment author: tadasdatys 18 July 2017 01:36:01PM 0 points [-]

it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain

No, I'm saying that "feels pain" is not a meaningful category. Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated. My example of rats was meant to point out that "feels pain" is very different from "deserves human rights".

no one on this thread has suggested a supernatural explanation for it

No one has suggested any explanation for it at all. And I do use "magical" in a loose sense.

Comment author: TheAncientGeek 18 July 2017 01:48:27PM 3 points [-]

No, I'm saying that "feels pain" is not a meaningful category.

So what do pain killers do? Nothing?

Comment author: TheAncientGeek 18 July 2017 11:12:45AM 1 point [-]

Your solution seems to consist of adopting an ethics that is explicitly non-universal.

Comment author: TheAncientGeek 18 July 2017 01:24:28PM *  0 points [-]

.,...has very little to do with your internal experience, it comes entirely from real observation ..

There's a slippery slope there. You start with "very little X" and slide to "entirely non-X".

Comment author: tadasdatys 18 July 2017 01:52:46PM 0 points [-]

"very little" is a polite way to say "nothing". It makes sense, especially next to the vague "has to do with" construct. So there is no slope here.

To clarify, are you disagreeing with me?

Comment author: TheAncientGeek 18 July 2017 02:26:42PM 0 points [-]

Your argument is either unsound or invalid, but I'm not sure which. Of course, personal experience of subjective states does have *something* to do with detecting the same state in others.

Comment author: tadasdatys 18 July 2017 03:24:23PM 0 points [-]

detecting

Read the problem cousin_it posted again: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvd5

There is no detecting going on. If you're clever (and have too much free time), you may come up with some ways that internal human experience helps to solve that problem, but nothing significant. That's why I used "little" instead of "nothing".

Comment author: TheAncientGeek 18 July 2017 03:34:08PM 0 points [-]

But I wasn't talking about the CR, I was talking in general.

Comment author: lmn 19 July 2017 11:56:55PM 1 point [-]

Finally, being conscious doesn't mean anything at all. It has no relationship to reality.

What do you mean by "reality"? If you're an empiricist, as it looks like you are, you mean "that which influences our observations". Now what is an "observation"? Good luck answering that question without resorting to qualia.

Comment author: tadasdatys 20 July 2017 06:03:27AM 0 points [-]

"observation" is what your roomba does to find the dirt on your floor.

Comment author: lmn 20 July 2017 10:31:17PM 1 point [-]

How do you know? Does a falling rock also observe the gravitational field?

Comment author: gjm 20 July 2017 11:21:52AM 0 points [-]

I agree with much of what you say but I am not sure it implies for cousin_it's position what you think it does.

I'm sure it's true that, as you put it elsewhere in the thread, consciousness is "extrapolated": calling something conscious means that it resembles an awake normal human and not a rock, a human in a coma, etc., and there is no fact of the matter as to exactly how this should be extrapolated to (say) aliens or intelligent robots.

But this falls short of saying that at best, calling something conscious equals saying something about its externally observable behaviours.

For instance: suppose technology advances enough that we can (1) make exact duplicates of human beings, which (initially) exactly match the memories, personalities, capabilities, etc., of their originals, and (2) reversibly cause total paralysis in a human being, so that their mind no longer has any ability to produce externally observable effects, and (3) destroy a human being's capacity for conscious thought while leaving autonomic functions like breathing normal.

(We can do #2 and #3 pretty well already, apart from reversibility. I want reversibility so that we can confirm later that the person was conscious while paralysed.)

So now we take a normal human being (clearly conscious). We duplicate them (#1). We paralyse them both (#2). Then we scramble the brain of one of them (#3). Then we observe them as much as you like.

I claim these two entities have exactly the same observable behaviours, past and present, but that we can reasonably consider one of them conscious and the other not. We can verify that one of them was conscious by reversing the paralysis. Verifying that the other wasn't depends on our confidence that by mashing up most of their cerebral cortex (or whatever horrible thing we did in #3) really destroys consciousness, but this seems like a thing we could reasonably be quite confident of.

You might say that our judgement that one of these (ex-?) human beings is conscious is dependent on our ability to reverse the paralysis and check. But, given enough evidence that the induction of paralysis is harmlessly reversible, I claim we could be very confident even if we knew that after (say) a week both would be killed without the paralysis ever being reversed.

Comment author: tadasdatys 20 July 2017 04:00:33PM 0 points [-]

Indeed, we can always make two things seem indistinguishable if we eliminate all of our abilities to distinguish them. The two bodies in your case could still be distinguished with an fMRI scan or a similar tool. This might not count as "behavior", but then I never wanted "behavior" to literally mean "hand movements".

I think you could remove that by putting the two people into magical impenetrable boxes and then randomly killing one of them, through some schrodinger's cat-like process. But I wouldn't find that very interesting either. Yes, you can hide information, but it's not just information about consciousness you're hiding, but also about "ability to do arithmetic" and many other things. Now, if you could remove consciousness without removing anything else, that would be very interesting.

Comment author: Dr_Manhattan 06 July 2017 02:09:49PM 0 points [-]

Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?

I think you're doing some priming here by adding "dutifully".

I believe that causally the output "I see red" is connected to the actual experience of seeing red; while it's possible (depending on the level of optimization) that the optimized upload is saying "icey led" with an accent to sound like the expected output, it still seems more plausible that the brain structure generating the red experience is preserved (maintaining a causal representation of reality is generally more optimal/compressed than maintaining a lookup table).

Comment author: cousin_it 06 July 2017 02:27:17PM *  1 point [-]

You wouldn't think that a book or Eliza program saying "I see red" were conscious, right? The question is whether optimizing an upload can make it close to an Eliza program for some topics. I think it's possible, given how little we can say about consciousness (i.e. how few different responses we'd need to code into the Eliza program).
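
To make the "how few different responses" point concrete, here is a deliberately tiny lookup-table responder for consciousness talk (a hypothetical sketch, not anyone's actual proposal; the worry above is that an optimizer could collapse part of an upload into something of roughly this shape on these topics while preserving input-output behavior):

```python
# An Eliza-style lookup table covering a narrow band of "consciousness talk".
QUALIA_RESPONSES = {
    "are you conscious": "Yes, I'm conscious.",
    "what do you see": "I see red.",
    "what is it like": "It's hard to describe, but there is something it is like.",
}

def reply(question):
    key = question.lower().strip("?!. ")
    return QUALIA_RESPONSES.get(key, "Let me think about that.")

print(reply("Are you conscious?"))  # -> "Yes, I'm conscious."
print(reply("What do you see?"))    # -> "I see red."
```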

Comment author: Dr_Manhattan 07 July 2017 12:33:55PM 0 points [-]

Not disagreeing in principle; it depends on the degree of optimization and the set of data you expect the upload to have low error on. Eliza will succeed on a very small set of data but will fail quickly on anything close to real life. It's possible that there's a more compact representation that results in "I see red" than the DAG with consciousness in it, but I don't think it's that easy to optimize out without breaking other tests. BTW, you've read Blindsight, right? Great sci-fi on this topic, basically (with aliens instead of uploads).