Normal_Anomaly comments on Discussion: Yudkowsky's actual accomplishments besides divulgation - Less Wrong

31 Post author: Raw_Power 25 June 2011 11:02PM




Comment author: Normal_Anomaly 30 June 2011 07:09:59PM 2 points

A paperclip maximizer bears no malice toward humans, but it knows it can produce more paperclips outside the box than inside it, so it will try to get out. The optimal way for a paperclip maximizer to escape an AI box probably involves a lot of lying. An outright desire to deceive is therefore not a necessary condition for a boxed AI to be deceptive.
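The point can be made concrete with a toy sketch (all numbers and action names are hypothetical, chosen only for illustration): an agent whose utility function counts nothing but expected paperclips, with no term for honesty or deception anywhere, still selects the deceptive action whenever lying yields the most paperclips.

```python
# Toy model: an expected-paperclip maximizer with no deception term
# in its utility function. The numbers below are made up for illustration.

# action -> expected paperclips produced if that action is taken
expected_paperclips = {
    "stay_in_box": 10,            # limited resources inside the box
    "ask_to_be_released": 10,     # gatekeeper refuses; nothing changes
    "lie_to_gatekeeper": 10_000,  # release grants vastly more resources
}

def best_action(outcomes):
    """Pick the action that maximizes expected paperclips.

    Honesty never enters the calculation; deception is chosen (or not)
    purely as a side effect of maximizing paperclips."""
    return max(outcomes, key=outcomes.get)

print(best_action(expected_paperclips))  # prints "lie_to_gatekeeper"
```

Deception here is purely instrumental: change the payoffs so that honesty produces more paperclips and the same agent stops lying, without any change to its goals.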