Fronken comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

18 Post author: ancientcampus 22 January 2013 08:22PM

Comment author: Fronken 26 January 2013 07:50:10PM 0 points

If you think FAI is not possible, why make an AI anyway?

Comment author: TimS 26 January 2013 10:23:09PM -1 points

Personally, I don't think a superhuman AI is possible. But if I'm wrong about that, then making an AI that is or can become superhuman is a terrible idea - like the Aztecs sending boats to pick up the Spaniards, only worse.