The only solution that I can see is morally abhorrent, and I'm trying to open a discussion looking for a better one.
It's already been linked to a couple times under this post, but: have you read http://lesswrong.com/lw/v1/ethical_injunctions/ and the posts it links to?
In any case, non-abhorrent solutions include "work on FAI" and "talk to AGI researchers, some of whom will listen" (especially if you don't open with how we're all going to die unless they repent, natural as that first thought is).
It's probably easier to build an uncaring AI than a friendly one. So if we assume that someone, somewhere, is trying to build an AGI without first solving friendliness, that person will probably finish before anyone who is trying to build a friendly AI.
[redacted]
[redacted]
further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. To clarify: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?