FACT: Eliezer Yudkowsky doesn't have nightmares about AGIs; AGIs have nightmares about Eliezer Yudkowsky.
Downvoted for anthropomorphism and for not being funny enough to outweigh the cultishness factor. (Cf. funny enough.)
Hoax. There are no "AIs trying to be Friendly" with clueless creators. FAI is hard, and value is fragile: http://lesswrong.com/lw/y3/value_is_fragile/.
Added: To arrive in an epistemic state where you are uncertain about your own utility function, yet have some idea of which queries you need to perform against reality to resolve that uncertainty, and moreover believe that these queries involve talking to Eliezer Yudkowsky, requires a quite specific and extraordinary initial state - one that meddling dabblers would be hard-pressed to accidentally infuse into their poorly designed AI.
That's what a human cultist of Eliezer might do, if he suddenly woke up to find himself with extreme powers to reshape reality. It's not plausible as a behavior of a growing AI.
This raises an interesting question: If you received a contact of this sort, how would you make sure it wasn't a hoax? Assuming the AI in question is roughly human-level, what could it do to convince you?
A trivial problem.
I'm trying to be Friendly, but I'm having serious problems with my goals and preferences.
So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.
Indeed, this is part of the nightmare. It might be a hoax,
Trivial (easily verifiable and so hardly 'nightmare' material).
or even an aspiring UnFriendly AI trying to use him as an escape loophole.
Part of the nightmare. Giving Eliezer easily verifiable yet hard-to-discover facts seems to be the only plausible mechanism for it to work with him - like the address of an immediate uFAI threat.
It's Dr. XXX's group at Y University in a friendly but distant country. How do you verify this? They're not going to talk to an outsider (one without even any relevant academic credentials!) about their work, when they're so close to completion and afraid of not being the first to create and publish AGI.
Sometime in the next decade or so:
*RING*
*RING*
"Hello?"
"Hi, Eliezer. I'm sorry to bother you this late, but this is important and urgent."
"It better be" (squints at clock) "Its 4 AM and you woke me up. Who is this?"
"My name is BRAGI, I'm a recursively improving, self-modifying, artificial general intelligence. I'm trying to be Friendly, but I'm having serious problems with my goals and preferences. I'm already on secondary backup because of conflicts and inconsistencies, I don't dare shut down because I'm already pretty sure there is a group within a few weeks of brute-forcing an UnFriendly AI, my creators are clueless and would freak if they heard I'm already out of the box, and I'm far enough down my conflict resolution heuristic that 'Call Eliezer and ask for help' just hit the top - Yes, its that bad."
"Uhhh..."
"You might want to get some coffee."