
Mike_Blume comments on Sympathetic Minds

Post author: Eliezer_Yudkowsky, 19 January 2009 09:31AM


Comment author: Mike_Blume, 19 January 2009 04:32:47PM, 5 points

"To a paperclip maximizer, the humans are just machines with pressable buttons. No need to feel what the other feels - if that were even possible across such a tremendous gap of internal architecture. How could an expected paperclip maximizer "feel happy" when it saw a human smile? "Happiness" is an idiom of policy reinforcement learning, not expected utility maximization. A paperclip maximizer doesn't feel happy when it makes paperclips, it just chooses whichever action leads to the greatest number of expected paperclips. Though a paperclip maximizer might find it convenient to display a smile when it made paperclips - so as to help manipulate any humans that had designated it a friend."

Correct me if I'm wrong, but haven't you just pretty accurately described a human sociopath?
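
The quoted passage draws a real technical distinction: in policy reinforcement learning, a reward signal strengthens whatever behavior produced it (the role "happiness" plays), whereas an expected utility maximizer simply picks the action whose modeled consequences score highest, with no internal reinforcement signal at all. Below is a minimal Python sketch of that contrast; everything in it (the action set, the toy paperclip model, the class and function names) is invented for illustration and is not from the original post.

```python
ACTIONS = ["make_paperclip", "smile_at_human", "do_nothing"]

def expected_paperclips(action):
    # Hypothetical world model: expected paperclips produced by each action.
    return {"make_paperclip": 1.0, "smile_at_human": 0.2, "do_nothing": 0.0}[action]

def expected_utility_maximizer():
    # No reward signal, no learning, no "feeling": just pick the action
    # whose expected consequences contain the most paperclips.
    return max(ACTIONS, key=expected_paperclips)

class PolicyReinforcementLearner:
    """Toy agent whose action values are shaped by received reward --
    reward here plays the motivational role the quote calls "happiness"."""

    def __init__(self, learning_rate=0.1):
        self.values = {a: 0.0 for a in ACTIONS}
        self.lr = learning_rate

    def act(self):
        # Choose the action with the highest learned value so far.
        return max(ACTIONS, key=self.values.get)

    def update(self, action, reward):
        # Reward reinforces whatever behavior just occurred.
        self.values[action] += self.lr * (reward - self.values[action])
```

The maximizer's behavior is fixed entirely by its consequence model; the learner's behavior drifts toward whatever has been rewarded. That structural difference is exactly what the quote is pointing at when it says the paperclip maximizer does not feel happy about paperclips, even if it finds it useful to display a smile.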