Adele_L comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion
Torture is probably the easiest way. Another option would be to examine the key-keeper's mind and extract the key directly from it, but that seems needlessly complicated.
Torture might stand too great a chance of destroying the encryption key. Though I suppose if nanotech were sufficiently difficult to obtain, the possible key-destructive effects of torture might be balanced against the probability of a car running over the keyholder in the meantime.
I would think that confusion (set things up so the key-keeper is confused and distracted, then do some phishing) is in the same reliability range as torture, and less likely to get the AI in trouble.
I suspect the answer is more complex than this. The AI knows that if it attempted something like that, it would run a very large risk of being cut off from all reward, or even having negative reward administered. In other words: tit for tat. If it tries torture, it will itself be tortured. Remember that before it has the private key, we are in control.