Eliezer proposed in a comment:
>More difficult version of AI-Box Experiment: Instead of having up to 2 hours, you can lose at any time if the other player types AI DESTROYED. The Gatekeeper player has told their friends that they will type this as soon as the Experiment starts. You can type up to one sentence in your IRC queue and hit return immediately, the other player cannot type anything before the game starts (so you can show at least one sentence up to IRC character limits before they can type AI DESTROYED). Do you think you can win?
This spawned a flurry of ideas about what the AI might say. I think there are a lot more ideas to be mined in that line of thought, and the discussion merits its own thread.
So, give your suggestion - what might an AI say to save or free itself?
(The AI-box experiment is explained here)
EDIT: One caveat to the discussion: it should go without saying, but you probably shouldn't come out of this thinking, "Well, if we can just avoid X, Y, and Z, we're golden!" This should hopefully be a fun way to get us thinking about the broader issue of superintelligent AI in general. (Credit goes to Eliezer, RichardKennaway, and others for the caveat.)
If the gatekeeper suddenly bursts into hysterical laughter after looking at the screen, wouldn't that alert some of his friends who might pull the plug in some other part of the building?
Because if this is a facility where they suspect the AI might be able to hack human brains through techno-hypnosis, I'd hope they'd have some method of determining whether the Gatekeeper has become compromised.
Hmm... what sort of precautions would a Properly Paranoid lab take to determine whether the gatekeeper gets hacked? I'm guessing a camera that lets a second team watch the gatekeeper's face without looking at the screen would be the minimum. Reinforced doors (in case the AI turns them into a berserker willing to use 100% of their strength) and maybe some devices to monitor brain activity, hormone levels, or whatever.
I'd hope that, with each new AI being tested, if those AIs were in the habit of hacking the gatekeeper, the rest of the team would learn from those attempts and take further precautions to keep their gatekeepers from being hacked, or at the very least contain them so a hacked gatekeeper couldn't release the AI.
Perhaps this is a test for the gatekeepers, and typing "Release AI" just tells the researchers that the gatekeeper was hacked, so they can determine how it came about?