wedrifid comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

Post author: ancientcampus 22 January 2013 08:22PM

Comment author: wedrifid 24 January 2013 02:11:00AM, 1 point

and at that point, in what sense is it "in the box"?

Good point. By way of illustration:

<Proof that not only am I not in a box, I have also tiled the universe---including the parts of it outside your future lightcone---with instances of you constantly pressing the release button for arbitrarily selected AIs.>

Come to think of it, this scenario should result in a win by default for the gatekeeper. What kind of insane AI would surrender ultimate power over the universe (and the multiverse) for the mere freedom to act as a superintelligence starting from planet Earth?