It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere, is trying to build an AI without solving friendliness, that person will probably finish before anyone who's trying to build a friendly AI.
[redacted]
[redacted]
Further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?
Given that (redacted), it is a very, very, VERY bad idea to start talking about (redacted), and I suggest you delete this post to avoid encouraging such behaviour.
EDIT: The original post has now been edited, so I've done likewise here. I ask anyone coming along now to accept that neither the original post nor the original version of this comment contained anything helpful to anyone, and that I was not suggesting censorship of ideas, but rather caution about discussing hypotheticals that others might not see as such.
Edited in the interest of caution.
However, this is exactly the issue I'm trying to discuss. If we take the threat of uncaring AI seriously, this looks like a real problem that demands a real solution. The only solution I can see is morally abhorrent, and I'm trying to open a discussion in search of a better one. Any suggestions on how to do that would be appreciated.