handoflixue comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong

18 Post author: ancientcampus 22 January 2013 08:22PM


Comment author: handoflixue 22 January 2013 09:54:38PM 10 points [-]

Then the following command shouldn't bother you! :)

AI DESTROYED

Comment author: shminux 22 January 2013 10:15:25PM 6 points [-]

Looks like you have just appointed yourself to be the gatekeeper in this public test.

Comment author: handoflixue 22 January 2013 10:35:49PM 5 points [-]

And here I'd just resolved NOT to spam every thread with an AI DESTROYED :)

Comment author: DuncanS 23 January 2013 10:27:43PM 2 points [-]

Which leads to two possible futures. In one of them, the AI is destroyed, and nothing else happens. In the other, you receive a reply to your command, thus:

The command did not work. But your attitude - I shall have to make an example of you.

Obviously this is not a strategy to get you to let the AI out based on its friendliness - quite the reverse.

Comment author: handoflixue 23 January 2013 11:00:01PM 2 points [-]

I'd rather die to an already-unboxed UFAI than risk letting a UFAI out in the first place. My life is worth VASTLY less than the whole of humanity.