Eliezer_Yudkowsky comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion

5 Post author: witzvo 02 November 2013 04:37PM

Comment author: Eliezer_Yudkowsky 02 November 2013 08:42:48PM 4 points

Torture might stand too great a chance of destroying the encryption key. Though I suppose if nanotech were sufficiently difficult to obtain, the possible key-destructive effects of torture might be balanced against the probability of a car running over the keyholder in the meantime.

Comment author: NancyLebovitz 02 November 2013 10:18:32PM 5 points

I would think that confusion (set things up so the keyholder is confused and distracted, then do some phishing) is in the same reliability range as torture, and less likely to get the AI in trouble.