It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere, is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.
[redacted]
[redacted]
further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?
I'm rejecting what is clearly an irrational quote from Eliezer, independently of the local context. I believe I have rejected it previously and likely will again whenever anyone chooses to quote it. Eliezer should know better than to make general statements that quite clearly do not hold.
Most general statements fail to hold in some context. In particular, if you're advocating an implausible or subtly incorrect claim, it's easy to find a statement that holds most of the time but not for the claim in question, thereby lending the claim the connotational support of the reference class where the statement does hold.