I am familiar with the AI Box experiment. My short answer: then don't have a communications channel at all, in the same way that whoever might be running our simulation doesn't currently have a communications channel with us.
The AI need only find itself in a series of universes with progressively more difficult challenges (much like EURISKO, actually). We can construct problems that have no bearing on our physics or our evolutionary history. (I'm not saying this is trivial; there would need to be a security review process.)
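To make "problems with no bearing on our physics" concrete, here is a toy sketch, entirely my own invention and not a serious proposal: a generator of abstract parity puzzles whose difficulty ramps up with level, and whose only feedback channel is a score.

```python
import random

def make_challenge(level, seed=None):
    """Generate an abstract constraint-satisfaction puzzle.

    Pure mathematics: find a 0/1 assignment satisfying randomly
    generated parity constraints. Nothing about physics, biology,
    or human history leaks through the problem distribution.
    """
    rng = random.Random(seed)
    n_vars = 4 + 2 * level            # more variables at higher levels
    n_constraints = 3 * level + 2     # and more constraints
    constraints = []
    for _ in range(n_constraints):
        vars_in = rng.sample(range(n_vars), k=min(3, n_vars))
        parity = rng.randint(0, 1)
        constraints.append((vars_in, parity))
    return n_vars, constraints

def score(assignment, constraints):
    """Count satisfied constraints: the only feedback the agent gets."""
    return sum(sum(assignment[v] for v in vs) % 2 == p
               for vs, p in constraints)
```

A boxed solver would see only `(n_vars, constraints)` and its score, with progression gated on solving each level; nothing in that interface hints at the existence of an outside.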
If a pure software intelligence explosion is feasible, then we should be able to get it to create and prove a CEV before it knows anything about us, or even that it's possible to communicate with us.
And just because humans aren't secure systems doesn't mean we can't make secure systems.
I think my other reply applies here too, if you read "communications channel" as all the information that might be inferred from the universe the AI finds itself in. Either the AI is not smart enough to be a worry even without any sandboxing, or it is smart enough that you should not be relying on the sandbox to protect you.
Your point about our own simulation (if it is one) lacking a simple communications channel actually works against you: in our universe the simulation hypothesis has been proposed, despite the fact that we have only human intelligence to work with.
In the early 1980s, Douglas Lenat wrote EURISKO, a program Eliezer called "[maybe] the most sophisticated self-improving AI ever built". The program reportedly had some high-profile successes in various domains, like becoming world champion at the Traveller TCS wargame and designing good integrated circuits.
Despite requests, Lenat never released the source code. You can download an introductory paper, "Why AM and EURISKO appear to work" [PDF], but honestly, reading it leaves a programmer still mystified about the internal workings of the AI: for example, what does the main loop look like? Lenat supposedly answered such questions in a more detailed publication: "EURISKO: A program that learns new heuristics and domain concepts", Artificial Intelligence 21 (1983): 61-98. I couldn't find that paper available for download anywhere, and being in Russia, I found it quite tricky to get a paper version. Maybe you Americans will have better luck with your local library? And to the best of my knowledge, no one has ever succeeded in confirming (or even seriously tried to confirm) Lenat's EURISKO results.
Today in 2009 this state of affairs looks laughable. A 30-year-old pivotal breakthrough in a large and important field... that never even got reproduced. What if it was a gigantic case of Clever Hans? How do you know? You're supposed to be a scientist, little one.
So my proposal to the LessWrong community: let's reimplement EURISKO!
We have some competent programmers here, don't we? We have open-source tools and languages that weren't around in 1980. We can build an open-source implementation available for all to play with. In my book this counts as solid progress in the AI field.
Hell, I'd do it on my own if I had the goddamn paper.
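For the curious, here's my guess at the shape of the thing, pieced together from Lenat's published descriptions of AM and EURISKO: an agenda of prioritized tasks, with heuristics that propose new tasks and gain or lose "worth" based on their track record. Every name and structure below is my own invention; the real main loop may look quite different.

```python
import heapq
import itertools

class Heuristic:
    """A rule that, applied to a task, may propose new tasks.

    In the real EURISKO, heuristics were frame-structured data the
    system could inspect and mutate (so heuristics could improve
    heuristics); here they are plain callables with a worth score.
    """
    def __init__(self, name, apply_fn, worth=500):
        self.name = name
        self.apply_fn = apply_fn   # task -> iterable of (priority, new_task)
        self.worth = worth

def run_agenda(seed_tasks, heuristics, steps=1000):
    """Agenda-based discovery loop: my reconstruction, not Lenat's code."""
    counter = itertools.count()    # tie-breaker so heapq never compares tasks
    agenda = [(-500, next(counter), t) for t in seed_tasks]
    heapq.heapify(agenda)
    worked_on = []
    for _ in range(steps):
        if not agenda:
            break
        _, _, task = heapq.heappop(agenda)
        worked_on.append(task)
        for h in heuristics:
            proposals = list(h.apply_fn(task))
            for priority, new_task in proposals:
                # weight each proposal by the proposing heuristic's worth
                heapq.heappush(
                    agenda,
                    (-(priority * h.worth // 1000), next(counter), new_task))
            # crude credit assignment: productive heuristics gain worth
            h.worth = (min(1000, h.worth + len(proposals)) if proposals
                       else max(1, h.worth - 1))
    return worked_on
```

The part that made EURISKO special, by all accounts, is that the heuristics themselves were concepts on the agenda, so the loop could discover and modify its own heuristics; that is exactly the part the introductory paper doesn't pin down, and exactly why we need the detailed one.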
Update: RichardKennaway has put Lenat's detailed papers up online; see the comments.