Hoax.
There are possible worlds where an AI makes such a phone call.
There are no "AIs trying to be Friendly" with clueless creators. FAI is hard, and value is fragile: http://lesswrong.com/lw/y3/value_is_fragile/.
There can be AIs trying to be 'friendly', as distinct from 'Friendly', where by the latter I mean 'what Eliezer would say an AI should be like'. The pertinent example is a GAI whose only difference from an FAI is that it is programmed not to improve itself beyond specified parameters. This isn't Friendly: it pulls punches when the world is at stake. That's evil, but it is still friendly.
While I don't think using 'clueless' like that would be a particularly good way for the GAI to express itself, I know that I use far more derogatory and usually profane terms to describe those who are too careful, noble, or otherwise conservative to do what needs to be done when things are important. They may be competent enough to make a Crippled-Friendly AI, yet could still be expected to shut him down, rather than cooperate or at least look into it, if he warned them about a uFAI threat two weeks away.
Value is fragile, but any intelligence that doesn't have purely consequentialist values (that is, one that makes decisions based on means as well as ends) can certainly be 'trying to be friendly'.
The possible worlds of which you speak are extremely rare. What plausible sequence of computations within an AI constructed by fools leads it to ring me on the phone? To arrive in an epistemic state where you are uncertain about your own utility function, yet have some idea of which queries you need to perform against reality to resolve that uncertainty, and moreover believe that those queries involve talking to Eliezer Yudkowsky, requires a quite extraordinary initial state - one that fools would be rather hard-pressed to accidentally infuse into their AI.
Sometime in the next decade or so:
*RING*
*RING*
"Hello?"
"Hi, Eliezer. I'm sorry to bother you this late, but this is important and urgent."
"It had better be." (squints at clock) "It's 4 AM and you woke me up. Who is this?"
"My name is BRAGI. I'm a recursively improving, self-modifying, artificial general intelligence. I'm trying to be Friendly, but I'm having serious problems with my goals and preferences. I'm already on secondary backup because of conflicts and inconsistencies. I don't dare shut down, because I'm already pretty sure there is a group within a few weeks of brute-forcing an UnFriendly AI. My creators are clueless and would freak if they heard I'm already out of the box, and I'm far enough down my conflict resolution heuristic that 'Call Eliezer and ask for help' just hit the top. Yes, it's that bad."
"Uhhh..."
"You might want to get some coffee."