
passive_fist comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion

5 Post author: witzvo 02 November 2013 04:37PM




Comment author: passive_fist 02 November 2013 08:42:14PM 0 points

I suspect the answer is more complex than that. The AI knows that if it attempted something like that, it would run a very large risk of being cut off from all reward, or even of having negative reward administered. In other words: tit for tat. If it tries to torture, it will itself be tortured. Remember that before it has the private key, we are in control.