It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere, is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.
[redacted]
[redacted]
Further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?
For discussion of the general response to hypothetical ticking-time-bomb cases, in which one knows with unrealistic certainty that a violation of an ethical injunction will pay off, when in reality such an apparent assessment is more likely to be the result of bias and a shortsighted, incomplete picture of the situation (e.g. the impact of being the kind of person who would do such a thing), see the linked post.
With respect to the idea of neo-Luddite wrongdoing, I'll quote a previous comment:
In any plausible epistemic situation, the criminal in question would be undertaking actions with an almost certain effect of worsening the prospects for humanity, in the name of an unlikely and limited gain. I.e., the act would have terrible expected consequences. The danger is not that rational consequentialists are going to go around bringing about terrible consequences (in between stealing kidneys from out-of-town patients, torturing accused criminals, and other misleading hypotheticals in which we are asked to consider an act with bad consequences under the implausible supposition that it has good consequences); it's the encouragement and direction such talk provides to mentally unstable people who don't think things through.
Absolutely. This is by far the most actually rational comment in this whole benighted thread (including mine), and I regret that I can only upvote it once.