gryffinp comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
One reason for Eliezer not publishing the logs of the AI-box experiment is to avoid people seeing how he got out and responding, "OK, so all we have to do to keep the AI in its box is avoid succumbing to that trick." This thread might just provide more fuel for that fallacy (as, I admit, I did in replying to Eliezer's original comment).
I'm sure that for anything an AI might say, someone can think up a reason not to be swayed by it; but it does not follow that for someone actually confronted with an AI, there is nothing that would sway them.
It just occurred to me that Eliezer's original stipulation that no chat logs would be released gives him an advantage. The responses of a Gatekeeper who knows that his inputs will be thoroughly scrutinized by the public will be different from those of one who has every reason to believe that his discussion will remain entirely private.
Has someone else pointed this out before?