JohnWittle comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

18 Post author: ancientcampus 22 January 2013 08:22PM

Comment author: handoflixue 22 January 2013 11:13:03PM 10 points [-]

(Here is a proof that you will let me go)


The original rules allow the AI to provide arbitrary proofs, which the gatekeeper must accept as valid (no saying "my cancer cure killed all the test subjects," etc.). Saying "AI DESTROYED" would make the proof false, which is against the rules...

What? Shminux said to cheat!

Comment author: JohnWittle 30 January 2013 12:35:55AM 1 point [-]

This certainly wouldn't work on me. The easiest way to test the veracity of the proof would be to say AI DESTROYED. Whether or not I wanted to kill the AI... I'd have to test that proof.

Comment author: handoflixue 30 January 2013 10:03:13PM 0 points [-]