handoflixue comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I would argue that since the gatekeeper cannot dictate counterfactual results for any other proof (i.e., cannot say "your cancer cure killed everybody!"), the gatekeeper is clearly responsible for avoiding incoherent, counterfactual universes.
Dorikka's Clause, of course, beats me just fine :)