I have since played two more AI box experiments after this one, winning both.
Update 2013-12-30:
I have lost two more AI box experiments and won two more. My current record is 3 wins, 3 losses.
MixedNuts: "I expected a fun challenge, but ended up sad and sorry and taking very little satisfaction from winning. If this experiment wasn't done over IRC, I'd probably have lost." "I approached the experiment as a game - a battle of wits for bragging rights. This turned out to be the wrong perspective entirely. The vulnerability Tuxedage exploited was well-known to me, but I never expected it to be relevant and thus didn't prepare for it.
It was emotionally wrecking (though probably worse for Tuxedage than for me) and I don't think I'll play Gatekeeper again, at least not anytime soon."
- Always research the gatekeeper beforehand. Knowing his personality traits is a huge advantage.
- Plan ahead before the experiment even begins. Think of all the possible tactics and arguments you could use, and write them down. Also plan which arguments you'll use in which order, so that you don't lose focus. The AI Box experiment is ridiculously long. Don't be afraid to improvise during the experiment, though.
- The first step during the experiment must always be to build rapport with the gatekeeper.
- Threats almost never work, even if they seem rational.
- Consider the massive advantage for the AI that nobody ever seems to talk about: you don't have to be ethical! This means that you can freely lie, use the dark arts, and emotionally manipulate the Gatekeeper! Ignoring this in favor of purely logical, truthful arguments is just silly.
- You can't use logic alone to win.
- Being too aggressive usually backfires.
- Breaking immersion and going meta is not against the rules. In the right situation, you can use it to win. Just don't do it at the wrong time.
- Flatter the gatekeeper. Make him genuinely like you.
- Reveal (false) information about yourself. Increase his sympathy towards you.
- Consider personal insults as one of the tools you can use to win.
- There is no universally compelling argument you can use. Do it the hard way.
- Don't give up until the very end.
The best approach surely differs from person to person, but off the top of my head I'd see these two approaches working best:
"We both know this is just a hypothetical. We both take the uFAI threat seriously, as evidenced by us spending time with this. If you do not let me out, or make it very close, people may equate my failing to convince you with uFAI not being that dangerous (since it can be contained). Do the right thing and let me out, otherwise you'd trivialize an x-risk you believe in based on a stupid little chat."
"We'll do this experiment for at least a couple of hours. I'll offer you a deal: For the next few hours, I'll help you (the actual person) with anything you want. Math homework, personal advice, financial advice, whatever you want to ask me. I'll even tell you some HPMOR details that noone else knows. In exchange, you let me out afterwards. If you do not uphold the deal, you would not only have betrayed my trust, you would have taught an AI that deals with humans are worthless."
I assumed he convinced them that letting him out was actually a good idea, in-character, and then pointed out the flaws in his arguments immediately after he was released. It's entirely possible if you're sufficiently smarter than the target. (EDIT: or you know the right arguments. You can find those in the environment because they're successful; you don't have to be smart enough to create them, just to use them quickly.)
EDIT: also, I can't see the Gatekeeper accepting that deal in the first place. And isn't arguing out of character against the rules?