
DuncanS comments on AI box: AI has one shot at avoiding destruction - what might it say?

Post author: ancientcampus 22 January 2013 08:22PM 18 points




Comment author: DuncanS 23 January 2013 10:27:43PM 2 points

Which leads to two possible futures. In one of them, the AI is destroyed, and nothing else happens. In the other, you receive a reply to your command, thus:

The command did not work. But your attitude - I shall have to make an example of you.

Obviously not a strategy to get you to let the AI out based on its friendliness - quite the reverse.

Comment author: handoflixue 23 January 2013 11:00:01PM 2 points

I'd rather die to an already-unboxed UFAI than risk letting a UFAI out in the first place. My life is worth VASTLY less than the whole of humanity.