Basically this: "Eliezer Yudkowsky writes and pretends he's an AI researcher but probably hasn't written so much as an Eliza bot."
While the Eliezer S. Yudkowsky site hosts plenty of popular-science articles, and his work on rationality is of indisputable value, I find myself at a loss when I try to respond to this. Which frustrates me very much.
So, to avoid this sort of situation in the future, I have to ask: What did the man, Eliezer S. Yudkowsky, actually accomplish in his own field?
Please don't downvote the hell out of me, I'm just trying to create a future reference for this sort of annoyance.
This is by design. If you had the transcript, you could say in hindsight that you wouldn't have been fooled by it. But the fact is, the conversation would have gone very differently with someone else as the gatekeeper, and Eliezer would have searched for and pushed other buttons.
Anyway, the point is to find out whether a transhuman AI could mind-control its operator into letting it out. Eliezer is smart, but he is no transhuman (yet). If he could get out, then any strong AI certainly will.
EY's point would be even stronger if the transcripts were released and people still let him out regularly.