
tadasdatys comments on Steelmanning the Chinese Room Argument - Less Wrong Discussion

Post author: cousin_it 06 July 2017 09:37AM

You are viewing a single comment's thread.

Comment author: tadasdatys 17 July 2017 08:24:26AM 0 points [-]

The three examples deal with different kinds of things.

Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don't, they should be physically stored somehow. In that sense they are the most real of the three.

Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring these somehow, but, obviously, you may find some subject who claims to have the skill perform slower than another who claims not to. Ultimately, "I have a skill to do X" means "I believe I'm better than most at X", and while it is a belief as good as the previous one, it's a little less direct.

Finally, being conscious doesn't mean anything at all. It has no relationship to reality. At best, "X is conscious" means "X has behaviors in some sense similar to a human's". If a computationalist answers "no" to the first two questions, and "yes" to the last one, they're not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That's, by the way, similar to what compatibilists do with free will.

Comment author: TheAncientGeek 18 July 2017 01:19:43PM *  2 points [-]

Finally, being conscious doesn't mean anything at all. It has no relationship to reality. At best, "X is conscious" means "X has behaviors in some sense similar to a human's". If a computationalist answers "no" to the first two questions, and "yes" to the last one, they're not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That's, by the way, similar to what compatibilists do with free will.

You say that like it's a good thing.

If you look for consciousness from the outside, you'll find nothing, or you'll find behaviour. That's because consciousness is on the inside; it is about subjectivity.

You won't find penguins in the Arctic, but that doesn't mean you get to define penguins as nonexistent, or redefine "penguin" to mean "polar bear".

Comment author: tadasdatys 18 July 2017 01:49:14PM 0 points [-]

You say that like it's a good thing.

No, I'm not personally in favor of changing definitions of broken words. It leads to stupid arguments. But people do that.

If you look for consciousness from the outside, you'll find nothing, or you'll find behaviour. That's because consciousness is on the inside; it is about subjectivity.

It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain. I'm under the impression that cousin_it believes you can have the latter without the former. I say you must have both. Are you saying you don't need either? That you could have two physically identical agents, one conscious, the other not?

Comment author: TheAncientGeek 18 July 2017 02:14:05PM *  1 point [-]

It would be preferable to find consciousness in the real world.

Meaning the world of exteriors? If so, is that not question begging?

Either reflected in behavior or in the physical structure of the brain.

Well, it's definitely reflected in the physical structure of the brain, because you can tell whether someone is conscious with an fMRI scan.

I'm under the impression that cousin_it believes you can have the latter without the former. I say you must have both.

OK. Now that you have asserted it, how about justifying it?

Are you saying you don't need either? That you could have two physically identical agents, one conscious, the other not?

No. I am saying you shouldn't beg questions, and you shouldn't confuse the evidence for X with the meaning of X.

You are collapsing a bunch of issues here. You can believe that it is possible to meaningfully refer to phenomena that are not fully understood. You can believe that something exists without believing it exists dualistically. And so on.

Comment author: tadasdatys 18 July 2017 02:43:40PM 0 points [-]

Meaning the world of exteriors?

No, meaning the material, physical world. I'm glad you agree it's there. Frankly, I have not the slightest clue what "exterior" means. Did you draw an arbitrary wall around your brain, and decide that everything that happens on one side is interior, and everything that happens on the other is exterior? I'm sure you didn't. But I'd rather not answer your other points when I have no clue about what it is that we disagree about.

because you can tell whether someone is conscious with an fMRI scan.

No, you can tell if their brain is active. It's fine to define "consciousness" = "human brain activity", but that doesn't generalize well.

Comment author: TheAncientGeek 18 July 2017 03:06:12PM *  0 points [-]

I have not the slightest clue what "exterior" means.

It's where you are willing to look, as opposed to where you are not. You keep insisting that consciousness can only be found in the behaviour of someone else; your opponents keep pointing out that you have the option of accessing your own.

No, you can tell if their brain is active. It's fine to define "consciousness" = "human brain activity",

We don't do that. We use a medical definition. "Consciousness" has a number of uses in science.

Comment author: tadasdatys 18 July 2017 06:17:54PM 0 points [-]

It's where you are willing to look, as opposed to where you are not.

That's hardly a definition. I think it's you who is begging the question here.

You keep insisting that consciousness can only be found in the behaviour of someone else

I have no idea where you got that. I explicitly state "I say you must have both", just a couple of posts above.

The state of being aware, or perceiving physical facts or mental concepts; a state of general wakefulness and responsiveness to environment; a functioning sensorium.

Here's a Google result for "medical definition of consciousness". It is quite close to "brain activity", dreaming aside. If you extended the definition to non-human agents, any dumb robot would qualify. Did you have some other definition in mind?

Comment author: TheAncientGeek 18 July 2017 06:53:31PM *  0 points [-]

I explicitly state "I say you must have both", just a couple of posts above

Behaviour alone versus behaviour plus brain scans doesn't make a relevant difference. Brain scans are still objective data about someone else. It's still an attempt to deal with subjectivity on an objective basis.

The medical definition of consciousness is not brain activity, because there is some degree of brain activity during sleep states and even coma. The brain is not a PC.

Comment author: entirelyuseless 18 July 2017 02:10:27PM 1 point [-]

"It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain."

"It would be preferable" expresses wishful thinking. The word refers to subjective experience, which is subjective by definition, while you are looking at objective things instead.

Comment author: tadasdatys 18 July 2017 02:30:08PM *  0 points [-]

No, "it's preferable", same as "you should", is fine when there is a goal specified. e.g. "it's preferable to do X, if you want Y". Here, the goal is implicit - "not to have stupid beliefs". Hopefully that's a goal we all share.

By the way, "should" with implicit goals is quite common; you should be able to handle it. (Notice the second "should". The implicit goal is now "to participate in normal human communication".)

Comment author: entirelyuseless 18 July 2017 03:07:29PM 0 points [-]

We can understand that the word consciousness refers to something subjective (as it obviously does) without having stupid beliefs.

Comment author: tadasdatys 18 July 2017 06:43:43PM 0 points [-]

Subjective is not the opposite of physical.

Comment author: entirelyuseless 19 July 2017 01:06:46AM 0 points [-]

Indeed.

"Subjective perception," is opposite, in the relevant way, to "objective description."

Suppose there were two kinds of things, physical and non-physical. This would not help in any way to explain consciousness, as long as you were describing the physical and non-physical things in an objective way. So you are quite right that subjective is not the opposite of physical; physicality is utterly irrelevant to it.

The point is that the word consciousness refers to subjective perception, not to any objective description, whether physical or otherwise.

Comment author: tadasdatys 19 July 2017 05:52:44AM 0 points [-]

physicality is utterly irrelevant to it.

No, physical things have objective descriptions.

Can you find another subjective concept that does not have an objective description? I'm predicting that we disagree about what "objective description" means.

Comment author: entirelyuseless 19 July 2017 02:18:09PM 0 points [-]

Yes, I can find many others. "You seem to me to be currently mistaken" does not have any objective description; it is how things seem to me. It is, however, correlated with various objective descriptions, such as the fact that I am arguing against you. However, none of those things summarizes the meaning, which is a subjective experience.

"No, physical things have objective descriptions."

If a physical thing has a subjective experience, that experience does not have an objective description, but a subjective one.

Comment author: g_pepper 18 July 2017 02:27:14PM 0 points [-]

It would be preferable to find consciousness in the real world.

I find myself to be conscious every day. I don't understand what you find "unreal" about direct experience.

Comment author: tadasdatys 18 July 2017 03:18:58PM 0 points [-]

Here's what I think happened.

You observed something interesting happening in your brain, and you labeled it "consciousness".
You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans "conscious".
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it "not conscious".
Then you observed a robot, and you asked "is it conscious?". If you asked the full question - "are the things happening in a robot similar to the things happening in my brain" - it would be obvious that you won't get a yes/no answer. They're similar in some ways and different in others.

Comment author: TheAncientGeek 18 July 2017 05:32:39PM *  0 points [-]

But if you go back to the original question, you can't rule out that the robot is fully conscious, despite having some physical differences. The point being that translating questions about consciousness into questions about brain activity and function (in a wholesale and unguided way) isn't superior; it's potentially misleading.

Comment author: tadasdatys 18 July 2017 06:52:33PM 0 points [-]

I can rule out that the robot is conscious, because the word "conscious" has very little meaning. It's a label of an artificial category. You can redefine "conscious" to include or exclude the robot, but that doesn't change reality in any way. The robot is exactly as "conscious" as you are "roboticious". You can either ask questions about brain activity and function, or you can ask no questions at all.

Comment author: TheAncientGeek 19 July 2017 01:43:20PM *  0 points [-]

I can rule out that the robot is conscious, because the word "conscious" has very little meaning.

To whom? To most people, it indicates having a first person perspective, which is something rather general. It seems to mean little to you because of your gerrymandered definition of meaning. Going only by external signs, consciousness might just be some unimportant behavioural quirks.

You can redefine "conscious" to include or exclude the robot, but that doesn't change reality in any way.

The point is not to make it vacuously true that robots are conscious. The point is to use a definition of consciousness that includes its central feature: subjectivity.

You can either ask questions about brain activity and function, or you can ask no questions at all.

Says who? I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste. The fact that having consciousness gives you that kind of access is central.

Comment author: tadasdatys 19 July 2017 05:31:57PM 0 points [-]

having a first person perspective

What does "not having a first person perspective" look like?

gerrymandered definition of meaning

I find my definition of meaning (of statements) very natural. Do you want to offer a better one?

subjectivity

I think you use that word as equivalent to consciousness, not as a property that consciousness has.

I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste.

All of these things have perfectly good physical representations. All of them can be done by a fairly simple bot. I don't think that's what you mean by consciousness.

Comment author: TheAncientGeek 20 July 2017 03:18:08PM *  1 point [-]

All of these things have perfectly good physical representations.

Not if "perfectly good" means "known".

Comment author: g_pepper 18 July 2017 04:10:11PM *  0 points [-]

You observed something interesting happening in your brain, and you labeled it "consciousness". You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans "conscious".

Yes, that sounds about right, with the caveat that I would say that other humans are almost certainly conscious. Obviously there are people (e.g. solipsists) who don't think that conscious minds other than their own exist.

You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it "not conscious".

That sounds approximately right, albeit it is not just the fact that a rock is dissimilar to me that leads me to believe it to be unconscious. I am open to the possibility that entities very different from myself might be conscious.

Then you observed a robot, and you asked "is it conscious?". If you asked the full question - "are the things happening in a robot similar to the things happening in my brain" - it would be obvious that you won't get a yes/no answer. They're similar in some ways and different in others.

I'm not sure that "is the robot conscious" is really equivalent to "are the things happening in a robot similar to the things happening in my brain". It could be that some things happening in the robot's brain are similar in some ways to some things happening in my brain, but the specific things that are similar might have little or nothing to do with consciousness. Moreover, even if a robot's brain used mechanisms that are very different from those used by my own brain, this would not mean that the robot is necessarily not conscious. That is what makes the consciousness question difficult - we don't have an objective way of detecting it in others, particularly in others whose physiology differs significantly from our own. Note that this does not make consciousness unreal, however.

I would be willing to answer "no" to the "is the robot conscious" question for any current robot that I have seen or even read about. But that is not to say that no robot will ever be conscious. I do agree that there could be varying degrees of consciousness (rather than a yes/no answer), e.g. I suspect that animals have varying degrees of consciousness, e.g. non-human apes a fairly high degree, ants a low or zero degree, etc.

I don't see why any of this would lead to the conclusion that consciousness or pain are not real phenomena.

Comment author: tadasdatys 18 July 2017 07:16:23PM 0 points [-]

Let me say it differently. There is a category in your head called "conscious entities". Categories are formed from definitions or by picking some examples and extrapolating (or both). I say category, but it doesn't really have to be hard and binary. I'm saying that "conscious entities" is an extrapolated category. It includes yourself, and it excludes inanimate objects. That's something we all agree on (though even "inanimate objects" may be a little shaky).

My point is that this is the whole specification of "conscious entities". There is nothing more to help us decide which objects belong to it, besides wishful thinking. Usually we choose to include all humans or all animals. Some choose to keep themselves as the only member. Others may want to accept plants. It's all arbitrary. You may choose to pick some precise definition, based on something measurable, but that will just be you. You'll be better off using another label for your definition.

Comment author: g_pepper 19 July 2017 12:47:41PM 0 points [-]

That it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer's is conscious is not really in question - pretty much everyone on this thread has said that. It doesn't follow that I should drop the term or "use another label"; there is a common understanding of the term "conscious" that makes it useful even if we can't know whether "X is conscious" is true in many cases.

Comment author: tadasdatys 19 July 2017 01:54:55PM 0 points [-]

it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer's is conscious

There is a big gap between "difficult" and "impossible". If a thing is "difficult to measure", then you're supposed to know in principle what sort of measurement you'd want to do, or what evidence you could in theory find that proves or disproves it. If a thing is "impossible to measure", then the thing is likely bullshit.

there is a common understanding of the term "conscious"

What understanding exactly? Besides "I'm conscious" and "rocks aren't conscious", what is it that you understand about consciousness?

Comment author: g_pepper 19 July 2017 08:16:44PM 0 points [-]

If a thing is "impossible to measure", then the thing is likely bullshit.

In the case of consciousness, we are talking about subjective experience. I don't think that the fact that we can't measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can't get the answer to either of those things via measurement, but I don't think that they are bullshit questions (albeit they are not particularly useful questions).

What understanding exactly? Besides "I'm conscious" and "rocks aren't conscious", what is it that you understand about consciousness?

In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.

Comment author: cousin_it 17 July 2017 10:43:06AM *  2 points [-]

Let's try another situation. Imagine two people in sealed rooms. You press a button and both of them scream in pain. However you know that only the first person is really suffering, while the second one is pretending and the button actually gives him pleasure. The two rooms have the same reaction to pressing the button, but the moral value of pressing the button is different. If you propose an AI that ignores all such differences in principle, and assigns moral value only based on external behavior without figuring out the nature of pain/pleasure/other qualia, then I won't invest in your AI because it will likely lead to horror.

Hence the title "Steelmanning the Chinese Room Argument". To have any shot at FAI, we need to figure out morality the hard way. Playing rationalist taboo isn't good enough. The hope of reducing all morally relevant properties (not just consciousness) to outward behavior is just that - a hope. You have zero arguments why it's true, and the post gives several arguments why it's false. Don't bet the world on it.

Comment author: tadasdatys 17 July 2017 11:49:35AM 0 points [-]

However you know that only the first person is really suffering <...>

Let's pause right there. How do you know it? Obviously, you know it by observing evidence for past differences in behavior. This, of course, includes being told by a third party that the rooms are different and other forms of indirect observations.

<...> an AI that ignores all such differences in principle <...>

If the AI has observed evidence for the difference between the rooms, then it will take it into account. If the AI has not observed any difference, then it will not. The word "ignore" is completely inappropriate here. You can't ignore something you can't know. Its usage here suggests that you expect there is some type of evidence that you would accept, but the AI wouldn't. Is that true? Maybe you expect the AI to have no long-term memory? Or maybe you think it wouldn't trust what people tell it?

Comment author: cousin_it 17 July 2017 01:48:30PM *  2 points [-]

You assume that all my knowledge about humans comes from observing their behavior. That's not true. I know that I have certain internal experiences, and that other people are biologically similar to me, so they are likely to also have such experiences. That would still be true even if the experience was never described in words, or was impossible to describe in words, or if words didn't exist.

You are right that communicating such knowledge to an AI is hard. But we must find a way.

Comment author: tadasdatys 17 July 2017 05:21:19PM 0 points [-]

You may know about being human, but how does that help you with the problem you suggested? You may know that some people can fake screams of pain, but as long as you don't know which of the two people is really in pain, the moral action is to treat them both the same. What else can you do? Guess?

The knowledge that "only the first person is really suffering" has very little to do with your internal experience, it comes entirely from real observation and it is completely sufficient to choose the moral action.

Comment author: cousin_it 17 July 2017 05:32:54PM *  2 points [-]

You said:

At best, "X is conscious" means "X has behaviors in some sense similar to a human's".

I'm trying to show that's not good enough. Seeing red isn't the same as claiming to see red, feeling pain isn't the same as claiming to feel pain, etc. There are morally relevant facts about agents that aren't reducible to their behavior. Each behavior can arise from multiple internal experiences, some preferable to others. Humans can sometimes infer each other's experiences by similarity, but that doesn't work for all possible agents (including optimized uploads etc) that are built differently from humans. FAI needs to make such judgments in general, so it will need to understand how internal experience works in general. Otherwise we might get a Disneyland with no children, or with suffering children claiming to be happy. That's the point of the post.

You could try to patch the problem by making the AI create only agents that aren't too different from biological humans, for which the problem of suffering could be roughly solved by looking at neurons or something. But that leaves the door open to accidental astronomical suffering in other kinds of agents, so I wouldn't accept that solution. We need to figure out internal experience the hard way.

Comment author: tadasdatys 18 July 2017 07:46:07AM 0 points [-]

Seeing red isn't the same as claiming to see red

A record player looping the words "I see red" is very different from how humans see, both internally and behaviorally. A robot which takes a picture, finds the most common pixel color, and if that's red, plays the same "I see red" sound, is still in some ways different, but a lot less so. And if someone wanted to call this second robot conscious, as far as color is concerned, there would be no problem with that.

You may feel that pain is special, and that if we recognized a robot which says "ouch" when pushed as feeling pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain that aren't equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family). This is not a new problem, and suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution.

Comment author: g_pepper 18 July 2017 12:36:12PM 1 point [-]

You may feel that pain is special, and that if we recognized a robot which says "ouch" when pushed as feeling pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain that aren't equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family).

I don't see how it follows from the fact that foreigners and animals feel pain that it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain. Can you clarify that inference?

suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution

I don't see anything magical about consciousness - it is something that is presumably nearly universally held by people, and no one on this thread has suggested a supernatural explanation for it. Just because we don't as of yet have an objective metric for consciousness in others does not make it magical.

Comment author: tadasdatys 18 July 2017 01:36:01PM 0 points [-]

it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain

No, I'm saying that "feels pain" is not a meaningful category. Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated. My example of rats was meant to point out that "feels pain" is very different from "deserves human rights".

no one on this thread has suggested a supernatural explanation for it

No one has suggested any explanation for it at all. And I do use "magical" in a loose sense.

Comment author: TheAncientGeek 18 July 2017 01:48:27PM 3 points [-]

No, I'm saying that "feels pain" is not a meaningful category.

So what do pain killers do? Nothing?

Comment author: TheAncientGeek 18 July 2017 11:12:45AM 1 point [-]

Your solution seems to consist of adopting an ethics that is explicitly non-universal.

Comment author: TheAncientGeek 18 July 2017 01:24:28PM *  0 points [-]

...has very little to do with your internal experience, it comes entirely from real observation...

There's a slippery slope there. You start with "very little X" and slide to "entirely non-X".

Comment author: tadasdatys 18 July 2017 01:52:46PM 0 points [-]

"very little" is a polite way to say "nothing". It makes sense, especially next to the vague "has to do with" construct. So there is no slope here.

To clarify, are you disagreeing with me?

Comment author: TheAncientGeek 18 July 2017 02:26:42PM 0 points [-]

Your argument is either unsound or invalid, but I'm not sure which. Of course, personal experience of subjective states does have *something* to do with detecting the same state in others.

Comment author: tadasdatys 18 July 2017 03:24:23PM 0 points [-]

detecting

Read the problem cousin_it posted again: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvd5

There is no detecting going on. If you're clever (and have too much free time), you may come up with some ways that internal human experience helps to solve that problem, but nothing significant. That's why I used "little" instead of "nothing".

Comment author: TheAncientGeek 18 July 2017 03:34:08PM 0 points [-]

But I wasn't talking about the CR, I was talking in general.

Comment author: lmn 19 July 2017 11:56:55PM 1 point [-]

Finally, being conscious doesn't mean anything at all. It has no relationship to reality.

What do you mean by "reality"? If you're an empiricist, as it looks like you are, you mean "that which influences our observations". Now what is an "observation"? Good luck answering that question without resorting to qualia.

Comment author: tadasdatys 20 July 2017 06:03:27AM 0 points [-]

"observation" is what your roomba does to find the dirt on your floor.

Comment author: lmn 20 July 2017 10:31:17PM 1 point [-]

How do you know? Does a falling rock also observe the gravitational field?

Comment author: tadasdatys 21 July 2017 10:26:29AM 0 points [-]

How do you know?

The same way I know what a chair is.

Does a falling rock also observe the gravitational field?

I'd have to say no here, but if you asked about plants observing light or even ice observing heat, I'd say "sure, why not". There are various differences between what ice does, what a roomba does, and what I do; however, they are mostly quantitative, and using one word for them all should be fine.

Comment author: lmn 23 July 2017 06:16:25PM 0 points [-]

I'd have to say no here, but if you asked about plants observing light or even ice observing heat, I'd say "sure, why not". There are various differences between what ice does, what a roomba does, and what I do; however, they are mostly quantitative, and using one word for them all should be fine.

What are you basing this distinction on? More importantly, how is whatever you're basing this distinction on relevant to grounding the concept of empirical reality?

Using Eliezer's formulation of "making beliefs pay rents in anticipated experiences" may make the relevant point clearer here. Specifically, what's an "experience"?

Comment author: gjm 20 July 2017 11:21:52AM 0 points [-]

I agree with much of what you say but I am not sure it implies for cousin_it's position what you think it does.

I'm sure it's true that, as you put it elsewhere in the thread, consciousness is "extrapolated": calling something conscious means that it resembles an awake normal human and not a rock, a human in a coma, etc., and there is no fact of the matter as to exactly how this should be extrapolated to (say) aliens or intelligent robots.

But this falls short of saying that at best, calling something conscious equals saying something about its externally observable behaviours.

For instance: suppose technology advances enough that we can (1) make exact duplicates of human beings, which (initially) exactly match the memories, personalities, capabilities, etc., of their originals, and (2) reversibly cause total paralysis in a human being, so that their mind no longer has any ability to produce externally observable effects, and (3) destroy a human being's capacity for conscious thought while leaving autonomic functions like breathing normal.

(We can do #2 and #3 pretty well already, apart from reversibility. I want reversibility so that we can confirm later that the person was conscious while paralysed.)

So now we take a normal human being (clearly conscious). We duplicate them (#1). We paralyse them both (#2). Then we scramble the brain of one of them (#3). Then we observe them as much as you like.

I claim these two entities have exactly the same observable behaviours, past and present, but that we can reasonably consider one of them conscious and the other not. We can verify that one of them was conscious by reversing the paralysis. Verifying that the other wasn't depends on our confidence that mashing up most of their cerebral cortex (or whatever horrible thing we did in #3) really destroys consciousness, but this seems like a thing we could reasonably be quite confident of.

You might say that our judgement that one of these (ex-?) human beings is conscious is dependent on our ability to reverse the paralysis and check. But, given enough evidence that the induction of paralysis is harmlessly reversible, I claim we could be very confident even if we knew that after (say) a week both would be killed without the paralysis ever being reversed.

Comment author: tadasdatys 20 July 2017 04:00:33PM 0 points [-]

Indeed, we can always make two things seem indistinguishable if we eliminate all of our abilities to distinguish them. The two bodies in your case could still be distinguished with an fMRI scan, or a similar tool. This might not count as "behavior", but then I never wanted "behavior" to literally mean "hand movements".

I think you could remove that by putting the two people into magical impenetrable boxes and then randomly killing one of them through some Schrödinger's-cat-like process. But I wouldn't find that very interesting either. Yes, you can hide information, but it's not just information about consciousness you're hiding, but also about "ability to do arithmetic" and many other things. Now, if you could remove consciousness without removing anything else, that would be very interesting.

Comment author: gjm 21 July 2017 12:35:46PM 0 points [-]

OK, so what did you mean by "behaviour" if it includes things you can only discover with an fMRI scan? (Possible "extreme" case: you simply mean that consciousness is something that happens in the physical world and supervenes on arrangements of atoms and fields and whatnot; I don't think many here would disagree with that.)

If the criteria for consciousness include things you can't observe "normally" but need fMRI scans and the like for (for the avoidance of doubt, I agree that they do), then you no longer have any excuse for answering "yes" to that last question.

My point wasn't about hiding information; it was that much of the relevant information is already hidden, which you seemed to be denying when you said consciousness is just a matter of "behaviours". It now seems like you weren't intending to deny that at all; but in that case I no longer understand how what you're saying is relevant to the OP.

Comment author: tadasdatys 22 July 2017 09:17:37AM 0 points [-]

what did you mean by "behaviour"

The word "behavior" doesn't really feature much in the ongoing discussions I have. My first post was an answer to the OP, not meant as a stand-alone truth. But obviously, if "consciousness" means anything, it's a thing that happens in the brain - I'd say it's the thing that makes complex and human-like behaviors possible.

If the criteria for consciousness include things you can't observe "normally" <...>

Normally is the key word here. There is nothing normal about your scenario. I need an fMRI scan for it, because there is nothing else that I can observe. Compared to that, the human in a box communicating through speech is very normal and quite sufficient. Unless the human is mute or malicious. Then I might need more complex tools.

much of the relevant information is already hidden

It's obscured, sure. But truly hiding information is hard. Speech isn't that narrow a window, by the way. Now, if I had to communicate with the agent in the box by sending one bit of information back and forth, that would be more of a problem.