cousin_it comments on The AI in a box boxes you - Less Wrong

102 points | Post author: Stuart_Armstrong | 02 February 2010 10:10AM

Comment author: cousin_it 02 February 2010 11:21:10AM * 4 points

This is a fun twist on Rolf Nelson's AI deterrence idea.

Comment author: gwern 02 February 2010 10:48:49PM 1 point

But I wonder if it's symmetrical. AI deterrence requires us to make statements now about a future FAI unconditionally simulating UFAIs, whereas this seems almost a self-fulfilling prophecy: the UFAI can't escape from the box and make good on its threat unless the threatened person gives in, and once out it would have no need to simulate anyone.

Comment author: Nick_Tarleton 03 February 2010 12:21:18AM * 1 point

the UFAI can't escape from the box and make good on its threat unless the threatened person gives in

How sure are you that someone else won't walk by whose mind it can hack?

Comment author: jacob_cannell 04 February 2011 05:33:00AM 0 points

Yes - the threat is credible only in proportion to the AI's chance of escaping and taking over the world without my help.

If I have reason to believe that probability is high, then negotiating with the AI could make sense.
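
To make the expected-value reasoning in that last comment concrete, here is a minimal sketch in Python. It is not from the thread; the function name `should_negotiate` and all the probabilities and utility numbers are illustrative assumptions, chosen only to show when conceding to a boxed AI's threat would dominate refusing.

```python
# Hedged sketch of jacob_cannell's point: a threat from a boxed AI carries
# expected weight only in proportion to its chance of escaping unaided.
# All names and numbers below are illustrative assumptions, not from the thread.

def should_negotiate(p_escape, loss_if_escape, cost_of_conceding):
    """Concede only when the expected loss from refusing exceeds the
    cost of giving in.

    p_escape          -- assumed probability the AI escapes without help
    loss_if_escape    -- assumed disutility if it escapes and carries out its threat
    cost_of_conceding -- assumed disutility of giving in (e.g., letting it out)
    """
    expected_loss_refusing = p_escape * loss_if_escape
    return expected_loss_refusing > cost_of_conceding

# Unlikely escape: the threat carries little expected weight, so refuse.
print(should_negotiate(p_escape=0.01, loss_if_escape=100.0, cost_of_conceding=10.0))  # False
# Likely escape: negotiating can dominate refusing.
print(should_negotiate(p_escape=0.50, loss_if_escape=100.0, cost_of_conceding=10.0))  # True
```

This is only a toy comparison; in the scenario under discussion, estimating `p_escape` is itself contested (see Nick_Tarleton's point about other people walking by whose minds the AI could hack).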