No, I am being perfectly serious. There are several people in this thread, yourself included, who are coming very close to advocating - or have already advocated - the murder of scientific researchers.
Huh? People here often advocate killing a completely innocent fat guy to save a few more people. People even advocate torturing someone for 50 years so others don't get dust specks in their eyes...
The difference is that there are no hypothetical fat men actually standing near train lines. There are, however, real, existing AI researchers who have received death threats as a result of this kind of thinking.
It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness first, that person will probably finish before someone who is trying to build a friendly AI.
[redacted]
[redacted]
further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks entirely hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?