ArisKatsaris comments on Perfectly Friendly AI - Less Wrong
E.g. you could satisfy both values by helping build a (non-sentient) simulation through which they can satisfy their desire to kill you without actually killing you.
But really I think the problem is that when we treat individual actions as if they were terminal values, compromise becomes difficult -- true terminal values, however, tend to be more personal than that.