
Slider comments on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda

Post author: RobbBB, 26 November 2014 11:02AM




Comment author: Slider, 27 November 2014 03:37:58AM, 1 point

What people permit is more inclusive and vaguer than what they want, and it doesn't, in the same sense, aim to further a person's goals. There is also the problem that people could accept a fate they don't want. Whether that counts as the human being self-unfriendly or the AI being unfriendly is a matter of debate, but either way it's a form of unfriendliness.