ArisKatsaris comments on Perfectly Friendly AI - Less Wrong

7 Post author: Desrtopa 24 January 2011 07:03PM


Comments (39)

Comment author: ArisKatsaris 25 January 2011 02:33:40AM 0 points

E.g. you could satisfy both values by helping build a (non-sentient) simulation through which they can satisfy their desire to kill you without actually killing you.

But really I think the problem is that when we refer to individual actions as if they're terminal values, it's difficult to compromise; true terminal values, however, tend to be more personal than that.