Pentashagon comments on By Which It May Be Judged - Less Wrong

35 Post author: Eliezer_Yudkowsky 10 December 2012 04:26AM


Comments (934)


Comment author: Pentashagon 17 December 2012 06:47:42PM 3 points

If we reprogrammed you to count paperclips instead, it wouldn't feel like different things having the same kind of motivation behind it. It wouldn't feel like doing-what's-right for a different guess about what's right. It would feel like doing-what-leads-to-paperclips.

What if we also changed the subject into a sentient paperclip? Any "standard" paperclip maximizer has to deal with the annoying fact that it is tying up useful matter in a non-paperclip form, matter it really wants to turn into paperclips. Humans don't usually struggle with the desire to replace the self with something completely different, but for the maximizer the self is exactly that kind of inefficiency. An AI designed primarily to benefit humanity (friendly or not) is going to notice the same inefficiency in its own existence. Its motivation will feel less moral from the inside than ours does. I'm not sure what to do about this, or if it matters.

Comment author: MugaSofer 17 December 2012 07:06:51PM 0 points

Well, if the AI is a person, it should be fine. If it's some sort of nonsentient optimizer, then yep.