
AlexMennen comments on Some alternatives to “Friendly AI” - Less Wrong Discussion

Post author: lukeprog | 15 June 2014 07:53PM | 19 points




Comment author: AlexMennen | 16 June 2014 07:32:10PM | 3 points

Actually, I think people often will think that when they hear the term. "Safety research" implies a focus on preventing a system from causing bad outcomes while achieving its goal, not on getting the system to achieve its goal in the first place. So "AGI safety" sounds like research on how to prevent a not-necessarily-friendly AGI from becoming powerful enough to be dangerous, especially to someone who does not see an intelligence explosion as the automatic outcome of a sufficiently intelligent AI.