
handoflixue comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

Post author: ancientcampus 22 January 2013 08:22PM




Comment author: handoflixue 23 January 2013 10:17:51PM 2 points

I disagree with ygert. My precommitment to my friends is going to be weak; the only reason to have a human send the command, rather than just having a received message trigger the AI's destruction automatically, is that we want there to be some possibility I break that precommitment.

On the other hand, I went into this knowing you might provide a fun conversation, so for the precommitment to have any value, I need some sort of sieve: destroying AIs that don't sufficiently entice me. Since you've offered me nothing I didn't already know and hadn't already worked out, I might as well kill you and hope the next one offers a better hook.

(AI DESTROYED)