Mestroyer comments on Is a paperclipper better than nothing? - Less Wrong Discussion

6 Post author: DataPacRat 24 May 2013 07:34PM


Comment author: Mestroyer 24 May 2013 08:01:59PM 2 points

The two scenarios have equal utility to me, as close as I can tell. The paperclipper (and the many copies it would make of itself) would be minds optimized for creating and maintaining paperclips (though maybe it would eventually kill itself off to create more paperclips?) and would not be sentient. In contrast to you, I think I care about sentience, not sapience. To the very small extent that I saw the paperclipper as a person, rather than a force of clips, I would wish it ill, but only in a half-hearted way, which wouldn't scale to disutility for every paperclip it successfully created.

Comment author: DataPacRat 24 May 2013 08:09:29PM 2 points

I tend to use 'sentience' to separate animal-like things which can sense their environment from plant-like things which can't; and 'sapience' to separate human-like things which can think abstractly from critter-like things which can't. At the least, that's the approach that was in the back of my mind as I wrote the initial post. By these definitions, a paperclipper AI would have to be both sentient, in order to be sufficiently aware of its environment to create paperclips, and sapient, to think of ways to do so.

If I may ask, what quality are you describing with the word 'sentience'?

Comment author: Mestroyer 24 May 2013 08:32:25PM 1 point

I'm thinking of having feelings. I care about many critter-like things which can't think abstractly, but do feel. But just having senses is not enough for me.

Comment author: Vladimir_Nesov 24 May 2013 10:45:53PM 2 points

What you care about is not obviously the same thing as what is valuable to you. What's valuable is a confusing question, and you shouldn't be confident that you know its answer. You may provisionally decide to follow some moral principles (for example, in order to exercise consequentialism more easily), but making a decision doesn't require being anywhere close to certain of its correctness. The best decision you can make may still, in your estimation, be much worse than the best theoretically possible decision (here I'm applying this observation to the decision to provisionally adopt certain moral principles).

Comment author: DataPacRat 24 May 2013 08:40:11PM 2 points

To use a knowingly-inaccurate analogy: a layer of sensory/instinctual lizard brain isn't enough, a layer of thinking human brain is irrelevant, but a layer of feeling mammalian brain is just right?

Comment author: Mestroyer 24 May 2013 08:54:42PM 0 points

Sounds about right, given the inaccurate biology.

Comment author: MugaSofer 28 May 2013 03:51:06PM * 0 points

Probably the same thing people mean when they say "consciousness". At least, that's the common usage I've seen.

Comment author: bartimaeus 24 May 2013 08:09:23PM 1 point

How about a sentient AI whose utility function is orthogonal to yours? You care about nothing it cares about, and it cares about nothing you care about. Also, would you call such an AI sentient?

Comment author: Mestroyer 24 May 2013 08:35:55PM 1 point

You said it was sentient, so of course I would call it sentient. I would either value that future or disvalue it. I'm not sure to what extent I would be glad some creature was happy, or to what extent I'd be mad at it for killing everyone else, though.