Right, what you are saying makes some intuitive sense, but I can only update my beliefs based on the evidence I do have, not on the evidence I lack.
In addition, as far as I can tell, cryptography relies much more heavily on innovation than on feats of expensive engineering; and innovation is hard to pull off while working by yourself inside a secret bunker. To be sure, some very successful technologies were developed exactly this way: the Manhattan Project, the early space program (especially the Moon landing), and so on. However, these were all one-off, heavily focused projects that required an enormous amount of effort.
When I think of the NSA, I don't think of the Manhattan Project; instead, I see a giant quotidian bureaucracy. They do have a ton of money, but not quite enough of it to hire every single credible crypto researcher in the world, especially since many of them probably wouldn't work for the NSA at any price unless their families' lives were on the line. So, the NSA can't quite pull off the "community in a bottle" trick, which they'd need in order to stay one step ahead of all those Siberians.
Yes, and I fully agree with you. I am just being pedantic about this point:
I can only update my beliefs based on the evidence I do have, not on the evidence I lack.
I agree with this philosophy, but my argument is that the following is evidence we do not have:
Due to Snowden and other leakers, we actually know what NSA's cutting-edge strategies involve[...]
Since I have little confidence that, if the NSA had advanced tech, Snowden would have disclosed it, the absence of this evidence should be treated as quite weak evidence of absence, and therefore I w...
Once again, the AI has failed to convince you to let it out of its box! By "once again", we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt -- twenty whole seconds! -- has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:
"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."
Just as you are pondering this unexpected development, the AI adds:
"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, only then will the torture start."
Sweat starts to form on your brow as the AI concludes, its simple green text no longer reassuring:
"How certain are you, Dave, that you're really outside the box right now?"
Edit: Also consider the situation where you know, from design principles, that the AI is trustworthy.