ChristianKl comments on LessWrong's attitude towards AI research - Less Wrong

8 Post author: Florian_Dietz 20 September 2014 03:02PM




Comment author: ChristianKl 22 September 2014 02:32:51PM 1 point

Paperclip maximizers serve as an illustration of a principle. I think that most MIRI folks consider a UFAI to be more complicated than a simple paperclip maximizer.

Goal stability also gets harder the more complicated the goal happens to be. A paperclip maximizer can have an off switch yet still prevent anyone from pushing that switch.