Slider comments on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (39)
What people permit is more inclusive and vague than what they want, and it does not, in the same sense, even try to further a person's goals. There is also the problem that people could accept a fate they don't want. Whether that is the human being self-unfriendly or the AI being unfriendly is a matter of debate, but either way it is a form of unfriendliness.