It occurs to me that one of the best possible things that could happen here is for the first self-aware AI to be in a robot, and not too smart. Why?
We would expect such a robot to be social, in ways that we wouldn't demand of a server rack. This would more readily expose any unfriendly elements of its programming (and unless the problem is a whole lot easier than it seems, there will be some).
So far, robots have been given the benefit of the doubt because they're obviously just complicated appliances. Once that no longer applies - once it's past 'Wow, you're really good at imitating a person' and into 'Do I like you?' territory - we will naturally begin applying different standards to them.
On the other hand, it could be that friendliness sufficient for such limited AIs does nothing for a superintelligence. Even so, I think this would raise the profile of the problem, give it more mindshare, and generally help.
There is something entertainingly ironic about this sentiment being expressed on an online forum.
Apparently a PhD candidate at the Social Robotics Lab at Yale has created a self-aware robot:
What do Less Wrongians think? Is this "cheating" traditional concepts of self-awareness, or is self-awareness self-awareness regardless of the path taken to get there?