
John_Mlynarski comments on Nonperson Predicates - Less Wrong

29 Post author: Eliezer_Yudkowsky 27 December 2008 01:47AM


Comment author: John_Mlynarski 27 April 2017 01:36:50AM *  1 point [-]

"Is a human mind the simplest possible mind that can be sentient?" Of course not. Plenty of creatures with simpler minds are plainly sentient. If a tiger suddenly leaps out at you, you don't operate on the assumption that the tiger lacks awareness; you assume that the tiger is aware of you. Nor do you think "This tiger may behave as if it has subjective experiences, but that doesn't mean that it actually possesses internal mental states meaningfully analogous to wwhhaaaa CRUNCH CRUNCH GULP." To borrow from one of your own earlier arguments.

If you are instead sitting comfortably in front of a keyboard and monitor with no tiger in front of you, it's easy to come up with lots of specious arguments that tigers aren't really conscious, but so what? It's also easy to come up with lots of specious arguments that other humans aren't really conscious. Using such arguments as a basis for actual ethical decision-making strikes me as a bad idea, to put it mildly. What you've written here seems disturbingly similar to a solipsist considering the possibility that he could, conceivably, produce an imaginary entity sophisticated enough to qualify as having a mind of its own. Technically, it's sort of making progress, but....

When I first read your early writing, the one thing that threw me was an assertion that "Animals are the moral equivalent of rocks." At least, I hope that I'm not falsely attributing that to you; I can't track down the source, so I apologize if I'm making a mistake. But my recollection is of its standing out from your otherwise highly persuasive arguments as blatant unsupported personal prejudice. No evidence was given in favor of this idea, and it was followed by a parenthetical that clearly indicated that it was just wishful thinking; it really only made sense in light of a different assertion that spotting glaring holes in other people's arguments isn't really indicative of any sort of exceptional competence except when dealing with politically and morally neutral subject matter.

Your post and comments here seem to conflate, under the label of "personhood," having moral worth and having a mind somehow closely approximating that of an adult human being. Equating these seems phenomenally morally dubious for any number of reasons; it's hard to see how it doesn't go directly against bedrock fairness, for example.

Comment author: arundelo 27 April 2017 11:20:38PM 0 points [-]

Eliezer probably means "sapient":

"Sentience is commonly used in science fiction and fantasy as synonymous with sapience, although the words aren't synonyms."

(Or maybe by "is sentient", he means to say, "is a person in the moral sense".)

Comment author: TheAncientGeek 28 April 2017 06:36:41AM 0 points [-]

Well, sentient means feeling and sapient means knowing, and that's about all there is to it... Neither term is technically precise, although they are often bandied around as though they were.

Comment author: John_Mlynarski 12 May 2017 01:53:41AM *  0 points [-]

But saying that e.g. rats are not sentient in the context of concern about the treatment of sentient beings is like saying that Negroes are not men in the context of the Declaration of Independence. Not only are the purely semantic aspects dubious, but excluding entities from a moral category on semantic grounds seems like a severe mistake regardless.

Words like "sentience" and especially "consciousness" are often used to refer to the soul without sounding dogmatic about it. You can tell this from the ways people use them: "Would a perfect duplicate of you have the same consciousness?", "Are chimps conscious?", etc. You can even use such terminology in such ways if you're a materialist who denies the existence of souls. You'd sound crazy talking about souls like they're real things if you say that there are no such things as souls, wouldn't you? Besides, souls are supernatural. Consciousness, on the other hand, is an emergent phenomenon, which sounds much more scientific.

Is there good reason to think that there is some sort of psychic élan vital? It strikes me as probably being about as real as phlogiston or luminiferous aether; i.e. you can describe phenomena in terms of the concept, and it doesn't necessarily prevent you from doing so basically correctly, but you can do better without it.

And, of course, in the no-nonsense senses of the terms, rats are sentient, conscious, aware, or however else you want to put it. Not all of the time, of course. They can also be asleep or dead or other things, as can humans, but rats are often sentient. And it's not hard to tell that plenty of non-humans also experience mental phenomena, which is why it's common knowledge that they do.

I can't recall ever seeing an argument that mistreating a mind that lacks self-awareness, or metacognition, or whatever specific mental faculty is arbitrarily singled out, is kinder or more just or in any normal sense more moral than mistreating a mind that has it. And you can treat any position as a self-justifying axiom, so doing so doesn't amount to an argument for the position's truth in anything but a purely relativist sense.

It is both weird and alarming to see Eliezer arguing against blindly assuming that a mind is too simple to be "sentient" while also pretty clearly taking the position that anything much simpler than our own minds isn't sentient. He rather plainly isn't following his own advice, and the fact that this could happen without his realizing it is very worrying. He has admitted that this is something he's confused about, and he's aware that others are more inclusive, but that doesn't seem to have prompted him to rethink his position all that much, which suggests that Eliezer is confused about this in a way that may be hard to correct.

Looking for a nonperson predicate is kind of like seeking an answer to the question "Who is it okay to do evil things to?" I would like to suggest that the correct answer is "No one," and that if you're trying to avoid being evil, asking the question in the first place is a sign that you made a big mistake somewhere.

If having the right not to have something done to you just means that it's morally wrong to do that thing to you, then everything has rights. Making a rock suffer against its will would be, if anything, particularly evil, as it would require you to go out of your way to give the rock a will and the capacity to suffer. Obviously, avoiding violating anything's rights requires an ability to recognize what something wills, what will cause it to suffer, and so on. Those are useful distinctions. But it seems like Eliezer is talking about something different.

Has he written anything more recently on this subject?

Comment author: Jiro 30 April 2017 07:20:24PM *  1 point [-]

Nor do you think "This tiger may behave as if it has subjective experiences, but that doesn't mean that it actually possesses internal mental states meaningfully analogous to wwhhaaaa CRUNCH CRUNCH GULP."

That's only true trivially. If I don't have time to think anything about the tiger's awareness at all, I don't have time to think of it negatively.

Also, I play video games all the time where I say things like "it wants to attack the more powerful character first, maybe I can trick it by luring it away using that character". By your reasoning, I must believe that video game characters have awareness. I don't go around saying "it may behave as if it wants to go after the most powerful character, but that doesn't mean that it actually possesses subjective experiences, and I want it to react in the way it would have reacted to being tricked if it had been an entity with subjective experiences".

Comment author: John_Mlynarski 11 May 2017 10:50:06PM *  0 points [-]

It seems that you anticipate as if you believe in something that you don't believe you believe.

It's in that anticipatory, non-declarative sense that one believes in the awareness of tigers as well as video game characters, regardless of one's declarative beliefs, and even if one has no time for declarative beliefs.

Comment author: Jiro 12 May 2017 08:32:23AM *  1 point [-]

You first implied that tigers are conscious (because people react to them as if they were conscious).

I pointed out that people react that way to video game characters.

You then said that tigers are conscious in the same way as video game characters, that is, they're not conscious in the ordinary sense, that is, you admitted you were wrong.

Comment author: John_Mlynarski 15 May 2017 03:13:16AM 0 points [-]

I said no such thing.

There is a way in which people believe video game characters, tigers, and human beings to be conscious. That doesn't preclude believing in another way that any of them is conscious.

Tigers are obviously conscious in the no-nonsense sense. I don't think anything is conscious in the philosobabble sense, i.e. I don't believe in souls, even if they're not called souls; see my reply to arundelo. I'm not sure which sense you consider to be the "ordinary" one; "conscious" isn't exactly an everyday word, in my experience.

Video game characters may also be obviously conscious, but in that case there's probably better reason to believe that what is obvious is not correct. Tigers are far more similar to human beings than they are to video game characters.

But I do think that we shouldn't casually dismiss consciousnesses that we're aware of. We shouldn't assume that everything that we're aware of is real, but we should consider the possibility. Why are you so convinced that video game characters don't have subjective experiences? If it's just that it's easy to understand how they work, then we might be just as "non-conscious" to a sufficiently advanced mind as such simple programs are to us; that seems like a dubious standard.

Comment author: Jiro 17 May 2017 12:07:12PM *  0 points [-]

Why are you so convinced that video game characters don't have subjective experiences?

The default for 99.99% of people is to not believe that video game characters are conscious. It's so common a belief that I am justified in assuming it unless you specifically tell me that you don't share it. You haven't told me that.

Comment author: John_Mlynarski 12 June 2017 01:12:03PM *  0 points [-]

Firstly, it seems more accurate to say that the standard default belief is that video game characters possess awareness. That the vast majority rationalize their default belief as false doesn't change that.

Secondly, that's argumentum ad populum, which is evidence (common beliefs do seem to be usually true), but not very strong evidence. I asked why you're as confident in your belief as you are. Are you as convinced of this belief as you are of most beliefs held by 99.99% of people? If you're more (or less) convinced, why is that?

Thirdly, you seem to be describing a reason for believing that I share your belief that video game characters aren't sentient, which is different from a reason for thinking that your belief is correct. I was asking why you think you're right, not why you assumed that I agree with you.

Comment author: Jiro 15 June 2017 10:40:35PM *  0 points [-]

Having confidence in the belief is irrelevant. Assuming that you agree with it is relevant, because

1) Arguments should be based on premises that the other guy accepts. You probably accept the premise that video game characters aren't conscious.

2) It is easy to filibuster an argument by questioning things that you don't actually disagree with. Because the belief that video game characters aren't conscious is so widespread, this is probably such a filibuster. I wish to avoid those.

Comment author: John_Mlynarski 16 June 2017 03:37:30AM 0 points [-]

Eliezer suggested that, in order to avoid acting unethically, we should refrain from casually dismissing the possibility that other entities are sentient. I responded that I think that's a very good idea and we should actually implement it. Implementing that idea means questioning assumptions that entities aren't sentient. One tool for questioning assumptions is asking "What do you think you know, and why do you think you know it?" Or, in less binary terms, why do you assign things the probabilities that you do?

Now do you see the relevance of asking you why you believe what you do as strongly as you do, however strongly that is?

I'm not trying to "win the debate", whatever that would entail.

Tell you what though, let me offer you a trade: If you answer my question, then I will do my best to answer a question of yours in return. Sound fair?

Comment author: Jiro 16 June 2017 08:52:50AM *  0 points [-]

Or, in less binary terms, why do you assign things the probabilities that you do?

I'm assuming that you assign it a high probability.

I personally am assigning it a high probability only for the sake of argument.

Since I am doing it for the sake of argument, I don't have, and need not have, any reason for doing so (other than its usefulness in argument).