Eliezer suggested that, in order to avoid acting unethically, we should refrain from casually dismissing the possibility that other entities are sentient. I responded that I think that's a very good idea and we should actually implement it. Implementing that idea means questioning assumptions that entities aren't sentient. One tool for questioning assumptions is asking "What do you think you know, and why do you think you know it?" Or, in less binary terms, why do you assign things the probabilities that you do?

Now do you see the relevance of asking you why you believe what you do as strongly as you do, however strongly that is?

I'm not trying to "win the debate", whatever that would entail.

Tell you what though, let me offer you a trade: If you answer my question, then I will do my best to answer a question of yours in return. Sound fair?

Firstly, it seems more accurate to say that the standard default belief is that video game characters possess awareness. That the vast majority then rationalize that default belief away as false doesn't change it.

Secondly, that's argumentum ad populum, which is evidence (common beliefs do seem to be usually true) but not very strong evidence. I asked why you're as confident in your belief as you are. Are you as convinced of this belief as you are of most beliefs held by 99.99% of people? If you're more (or less) convinced, why is that?

Thirdly, you seem to be describing a reason for believing that I share your belief that video game characters aren't sentient, which is different from a reason for thinking that your belief is correct. I was asking why you think you're right, not why you assumed that I agree with you.

I said no such thing.

There is a way in which people believe video game characters, tigers, and human beings to be conscious. That doesn't preclude believing, in another way, that any of them is conscious.

Tigers are obviously conscious in the no-nonsense sense. I don't think anything is conscious in the philosobabble sense, i.e. I don't believe in souls, even if they're not called souls; see my reply to arundelo. I'm not sure which sense you consider to be the "ordinary" one; "conscious" isn't exactly an everyday word, in my experience.

Video game characters may also be obviously conscious, but in that case there's probably better reason to believe that what seems obvious is not correct. Tigers are far more similar to human beings than they are to video game characters.

But I do think that we shouldn't casually dismiss consciousnesses that we're aware of. We shouldn't assume that everything that we're aware of is real, but we should consider the possibility. Why are you so convinced that video game characters don't have subjective experiences? If it's just that it's easy to understand how they work, then we might be just as "non-conscious" to a sufficiently advanced mind as such simple programs are to us; that seems like a dubious standard.

But saying that e.g. rats are not sentient in the context of concern about the treatment of sentient beings is like saying that Negroes are not men in the context of the Declaration of Independence. Not only are the purely semantic aspects dubious, but excluding entities from a moral category on semantic grounds seems like a severe mistake regardless.

Words like "sentience" and especially "consciousness" are often used to refer to the soul without sounding dogmatic about it. You can tell this from the ways people use them: "Would a perfect duplicate of you have the same consciousness?", "Are chimps conscious?", etc. You can even use such terminology in such ways if you're a materialist who denies the existence of souls. You'd sound crazy talking about souls like they're real things if you say that there are no such things as souls, wouldn't you? Besides, souls are supernatural. Consciousness, on the other hand, is an emergent phenomenon, which sounds much more scientific.

Is there good reason to think that there is some sort of psychic élan vital? It strikes me as probably being about as real as phlogiston or luminiferous aether; i.e. you can describe phenomena in terms of the concept, and it doesn't necessarily prevent you from doing so basically correctly, but you can do better without it.

And, of course, in the no-nonsense senses of the terms, rats are sentient, conscious, aware, or however else you want to put it. Not all of the time, of course. They can also be asleep or dead or other things, as can humans, but rats are often sentient. And it's not hard to tell that plenty of non-humans also experience mental phenomena, which is why it's common knowledge that they do.

I can't recall ever seeing an argument that mistreating minds without self-awareness or metacognition or whatever specific mental faculty is arbitrarily singled out is kinder, or more just, or in any normal sense more moral than mistreating a mind with it. And you can treat any position as a self-justifying axiom, so doing so doesn't work out to an argument for the position's truth in anything but a purely relativist sense.

It is both weird and alarming to see Eliezer arguing against blindly assuming that a mind is too simple to be "sentient" while also pretty clearly taking the position that anything much simpler than our own minds isn't. He rather plainly isn't following his own advice, and the fact that this could happen without his realizing it is very worrying. He has admitted that this is something he's confused about, and he is aware that others are more inclusive, but that doesn't seem to have prompted him to rethink his position all that much, which suggests that Eliezer is confused about this in a way that may be hard to correct.

Looking for a nonperson predicate amounts to seeking an answer to the question "Who is it okay to do evil things to?" I would like to suggest that the correct answer is "No one", and that if you're trying to avoid being evil, asking the question in the first place is a sign that you made a big mistake somewhere.

If having the right not to have something done to you just means that it's morally wrong to do that thing to you, then everything has rights. Making a rock suffer against its will would be, if anything, particularly evil, as it would require you to go out of your way to give the rock a will and the capacity to suffer. Obviously, avoiding violating anything's rights requires an ability to recognize what something wills, what will cause it to suffer, and so on. Those are useful distinctions. But it seems like Eliezer is talking about something different.

Has he written anything more recently on this subject?

It seems that you anticipate as if you believe in something that you don't believe you believe.

It's in that anticipatory, non-declarative sense that one believes in the awareness of tigers as well as video game characters, regardless of one's declarative beliefs, and even if one has no time for declarative beliefs.

"Is a human mind the simplest possible mind that can be sentient?" Of course not. Plenty of creatures with simpler minds are plainly sentient. If a tiger suddenly leaps out at you, you don't operate on the assumption that the tiger lacks awareness; you assume that the tiger is aware of you. Nor do you think "This tiger may behave as if it has subjective experiences, but that doesn't mean that it actually possesses internal mental states meaningfully analogous to wwhhaaaa CRUNCH CRUNCH GULP." To borrow from one of your own earlier arguments.

If you are instead sitting comfortably in front of a keyboard and monitor with no tiger in front of you, it's easy to come up with lots of specious arguments that tigers aren't really conscious, but so what? It's also easy to come up with lots of specious arguments that other humans aren't really conscious. Using such arguments as a basis for actual ethical decision-making strikes me as a bad idea, to put it mildly. What you've written here seems disturbingly similar to a solipsist considering the possibility that he could, conceivably, produce an imaginary entity sophisticated enough to qualify as having a mind of its own. Technically, it's sort of making progress, but....

When I first read your early writing, the one thing that threw me was an assertion that "Animals are the moral equivalent of rocks." At least, I hope that I'm not falsely attributing that to you; I can't track down the source, so I apologize if I'm making a mistake. But my recollection is of its standing out from your otherwise highly persuasive arguments as blatant, unsupported personal prejudice. No evidence was given in favor of this idea, and it was followed by a parenthetical that clearly indicated it was just wishful thinking; it really only made any sense in light of a different assertion, that spotting glaring holes in other people's arguments isn't really indicative of any sort of exceptional competence except when dealing with politically and morally neutral subject matter.

Your post and comments here seem to conflate, under the label of "personhood," having moral worth and having a mind somehow closely approximating that of an adult human being. Equating these seems phenomenally morally dubious for any number of reasons; it's hard to see how it doesn't go directly against bedrock fairness, for example.