One of my old CS teachers defended treating the environment as adversarial, and as knowing your source code, because of hackers. See median-of-3 killers. (I'd link something, but besides a paper, I couldn't find a nice link explaining what they are in a small amount of googling.)
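For anyone who hasn't seen them: a median-of-3 killer is an input that drives a quicksort using median-of-three pivot selection into quadratic time. The paper alluded to is presumably McIlroy's "A Killer Adversary for Quicksort" (1999); below is a rough Python sketch of its core trick, assuming a lazy adversary that answers comparisons so that whatever the sort picks as a pivot gets frozen to a small key. The quicksort implementation and all names here are my own illustrative choices, not taken from the paper:

```python
import random

def quicksort(a, less):
    """In-place quicksort: median-of-3 pivot, Lomuto partition.
    Elements are only ever examined through less(x, y)."""
    def sort(lo, hi):                        # half-open range [lo, hi)
        if hi - lo < 2:
            return
        if hi - lo == 2:
            if less(a[hi - 1], a[lo]):
                a[lo], a[hi - 1] = a[hi - 1], a[lo]
            return
        mid = (lo + hi) // 2
        # arrange the three samples so their median sits at a[mid]
        if less(a[mid], a[lo]):
            a[lo], a[mid] = a[mid], a[lo]
        if less(a[hi - 1], a[mid]):
            a[mid], a[hi - 1] = a[hi - 1], a[mid]
            if less(a[mid], a[lo]):
                a[lo], a[mid] = a[mid], a[lo]
        a[mid], a[hi - 1] = a[hi - 1], a[mid]    # pivot to the end
        p = a[hi - 1]
        k = lo
        for i in range(lo, hi - 1):
            if less(a[i], p):
                a[i], a[k] = a[k], a[i]
                k += 1
        a[k], a[hi - 1] = a[hi - 1], a[k]        # pivot to its final slot
        sort(lo, k)
        sort(k + 1, hi)
    sort(0, len(a))

class Adversary:
    """Decides key values lazily: whichever item the sort appears to be
    using as a pivot gets frozen to the smallest unused value, so almost
    everything lands on one side of every partition."""
    def __init__(self, n):
        self.gas = n            # stand-in for "larger than any frozen value"
        self.val = [n] * n      # key eventually assigned to item i
        self.nsolid = 0         # next small value to hand out
        self.candidate = -1     # item currently suspected of being the pivot
        self.ncmp = 0

    def freeze(self, i):
        self.val[i] = self.nsolid
        self.nsolid += 1

    def less(self, x, y):
        self.ncmp += 1
        if self.val[x] == self.gas and self.val[y] == self.gas:
            # both undecided: commit the pivot suspect to a small key now
            self.freeze(x if x == self.candidate else y)
        if self.val[x] == self.gas:
            self.candidate = x
        elif self.val[y] == self.gas:
            self.candidate = y
        return self.val[x] < self.val[y]

def comparison_cost(arr):
    """Comparisons used by the quicksort above on a concrete list."""
    count = [0]
    def less(x, y):
        count[0] += 1
        return x < y
    quicksort(arr, less)
    return count[0]

n = 300
adv = Adversary(n)
quicksort(list(range(n)), adv.less)      # sort item ids against the adversary
for i in range(n):                       # freeze any still-undecided items
    if adv.val[i] == adv.gas:
        adv.freeze(i)
killer = [adv.val[i] for i in range(n)]  # a concrete median-of-3 killer input

random_input = list(range(n))
random.Random(0).shuffle(random_input)
print("killer input comparisons:", comparison_cost(list(killer)))
print("random input comparisons:", comparison_cost(list(random_input)))
```

Because the adversary's answers stay consistent with the keys it eventually hands out, re-sorting `killer` with an honest comparator replays the same lopsided partitions: each pass freezes only a couple of items to small values, so the recursion peels off O(1) elements at a time and the comparison count grows roughly quadratically, versus O(n log n) on a random permutation.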
I don't see why Yudkowsky makes superintelligence a requirement for this.
Also, it doesn't even have to be source code they have access to (which they would have anyway if it were open-source software). There are such things as disassemblers and decompilers.
[Edit: removed implication that Yudkowsky thought source code was necessary]
I don't see why Yudkowsky makes superintelligence a requirement for this.
Because in theoretical CS, at least, when we talk about 'worst-case' inputs, it would often require something of this order to deliberately give you the worst case. I don't think Eliezer would object at all to this kind of reasoning where there was actually a plausible possibility of an adversary involved. In fact, one focus of fields like cryptography (or systems security?), where an adversary is assumed, is to structure things so the adversary has to solve as hard a problem as you can m...
One of the most interesting debates on Less Wrong that seems like it should be definitively resolvable is the one between Eliezer Yudkowsky, Scott Aaronson, and others on The Weighted Majority Algorithm. I'll reprint the debate here in case anyone wants to comment further on it.
In that post, Eliezer argues that "noise hath no power" (read the post for details). Scott disagreed. He replied:
Eliezer replied:
Scott replied:
And later added:
Eliezer replied:
Scott replied:
And that's where the debate drops off, at least between Eliezer and Scott, and at least on that thread.