This is a site devoted to rationality, supposedly. How rational is it to make public statements that can be interpreted as saying that people one disagrees with deserve to be shot? It's hyperbole, and, worse, hyperbole that might be both incitement to violence and self-incriminating if one of those people does get shot. If it seems optimal to you that $randomAIresearcher, who wasn't anywhere near achieving hir goal anyway, gets shot, that the SIAI is shut down as a terrorist organisation, and that you get arrested for incitement to violence, then by all means keep making statements like the one above...
Comments of this form are almost always objectionable.
Are you trying to be ironic here? You criticize hyperbole while writing that?
It's probably easier to build an uncaring AI than a friendly one. So if we assume that someone, somewhere, is trying to build an AI without solving friendliness, that person will probably finish before someone who is trying to build a friendly one.
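The race dynamic here can be illustrated with a toy simulation. Everything below is hypothetical: the numbers are made up, and completion times are modeled as exponential draws purely for illustration, with the "easier" uncaring project given a shorter mean time.

```python
import random

def race(mean_uncaring=10.0, mean_friendly=15.0, trials=100_000, seed=0):
    """Estimate how often the easier (uncaring) project finishes first.

    Each project's completion time is a single exponential draw with the
    given mean; both means are arbitrary illustrative choices.
    """
    rng = random.Random(seed)
    wins = sum(
        rng.expovariate(1 / mean_uncaring) < rng.expovariate(1 / mean_friendly)
        for _ in range(trials)
    )
    return wins / trials

# Under this toy model the uncaring project wins with probability
# mean_friendly / (mean_uncaring + mean_friendly) = 15 / 25 = 0.6,
# and the simulated fraction should land close to that.
print(race())
```

Even a modest difficulty gap translates into a clear head start for the less careful project; the point is the qualitative asymmetry, not the specific 60% figure.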
[redacted]
[redacted]
further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?