So, I'm not especially convinced that EY's/MIRI's position holds water, but on five minutes' thought I see two problems with your objection.
1) The idea that more information is always better, even when that information is being cherry-picked by an inimical agent, seems to contradict my experience. I've certainly found myself in situations in which it's easier to solve a problem by myself than it is to solve it in conjunction with someone who is doing their best to keep me from solving the problem.
2) The idea that verifying the security of a completed system created by an insecure mechanism (whether by inspecting the source code, testing the running executable, or both) is just as easy and reliable as establishing a secure mechanism to create that system in the first place is inconsistent with my experience of security audits.
Point 1 is conceded (see edit), for humans at least and possibly for all boundedly rational agents. For point 2, I presume you'd have full access to the unmodified source code, even while the AI was running simultaneously.
Point 1 for rational agents is an interesting question, though... I think it's true that additional cherry-picked information could be used to harm rational agents in general, since they'll predictably act according to the most parsimonious model that fits the evidence. It would be a case of bad epistemic luck, sort of like how Santa Claus is a parsimonious hypothesis if ...
AI Box Experiment Update #3
Tuxedage (AI) vs Alexei (GK) - Gatekeeper Victory
Tuxedage (AI) vs Anonymous (GK) - Gatekeeper Victory
I have lost a second game of AI Box, this time against a gatekeeper who wished to remain anonymous.
This puts my AI Box Experiment record at 3 wins and 3 losses.