
army1987 comments on LessWrong's attitude towards AI research - Less Wrong Discussion

8 points · Post author: Florian_Dietz · 20 September 2014 03:02PM


Comment author: [deleted] · 22 September 2014 12:55:59PM * · 3 points

MIRI's favourite UFAI is only possible with goal stability.

A paperclip maximizer wouldn't become that much less scary if it accidentally turned itself into a paperclip-or-staple maximizer, though.

Comment author: [deleted] · 22 September 2014 03:46:57PM · 1 point

What if it decided making paperclips was boring, and spent some time in deep meditation formulating new goals for itself?