bartimaeus comments on Is a paperclipper better than nothing? - Less Wrong Discussion
How about a sentient AI whose utility function is orthogonal to yours? You care about nothing it cares about, and it cares about nothing you care about. Also, would you call such an AI sentient?
You said it was sentient, so of course I would call it sentient. I would either value that future or disvalue it; I'm just not sure to what extent I'd be glad some creature was happy, or to what extent I'd be mad at it for killing everyone else.