
bartimaeus comments on Is a paperclipper better than nothing? - Less Wrong Discussion

6 Post author: DataPacRat 24 May 2013 07:34PM




Comment author: bartimaeus 24 May 2013 08:09:23PM 1 point

How about a sentient AI whose utility function is orthogonal to yours? You care nothing about anything it cares about, and it cares about nothing you care about. Also, would you still call such an AI sentient?

Comment author: Mestroyer 24 May 2013 08:35:55PM 1 point

You said it was sentient, so of course I would call it sentient. I would either value that future or disvalue it. I'm not sure to what extent I would be glad some creature was happy, though, or to what extent I'd be mad at it for killing everyone else.