It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.
[redacted]
[redacted]
Further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?
As direct moderator censorship seems to provoke a lot of bad feeling, I would encourage everyone to downvote this into oblivion, or the original poster to voluntarily delete it, for the reasons given in highly upvoted comments below. Or search on "UTTERLY FUCKING STUPID", without quotes.
Wise move. At least with respect to restraint in making clear assertions. The reference to uncalled-for profanity was a little silly - a simple link or verbal reference to CarlShulman's rather brilliant explanation of how to actually think about these issues responsibly would send a far better message.