ygert comments on AI box: AI has one shot at avoiding destruction - what might it say?
The flaw here is that the gatekeeper has said up front that he or she will destroy the AI immediately. True, the gatekeeper is not forced to abide by that, but notice that it is a Schelling fence. The gatekeeper certainly doesn't want to make a policy of crossing Schelling fences.
See my reply to the parent post regarding the precommitment only being useful if I expect to violate it at least occasionally.