A counterfactual situation whose consequent is a death threat may still be a death threat, depending on your jurisdiction.
The ease with which free-exercise (free-speech) protections would apply to this particular dialogue leaves me sufficiently confident that I have no legal concerns to worry about whatsoever. The entire nature of counterfactual dialogue is that you make clear you are not associating the topic discussed with any particular reality. That is, you are not actually advocating it.
And, frankly, if LW isn't prepared to discuss the "harder" questions of how to apply our morality in such murky waters, and is only going to restrict itself to the "low-hanging fruit" -- well... I'm fully justified in being disappointed in the community.
I expect better, you see, of a community that prides itself on "claiming" the term "rationalist".
Here's a poser that occurred to us over the summer, and one that we couldn't really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).
It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem - Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risk and not AI risk because he had a higher than average estimate of the threat of nuclear war.
It seems like it might be a good idea to know what the probability of each of these risks is. Is there a sensible way for these people to correct for the fact that the people studying these risks are those who had high estimates of them in the first place?
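One way to see the size of this selection effect is with a toy Monte Carlo simulation. This is purely illustrative and has nothing to do with the actual AI or nuclear numbers: assume each potential researcher forms a noisy private estimate of a risk (here on a log-odds scale, with a made-up noise level and selection threshold), and only the most alarmed choose to study it. The estimates we then hear from "experts" are a truncated sample that overstates the risk:

```python
import random
import statistics

random.seed(0)

# Hypothetical parameters: true log-odds of the risk, noise in each
# person's private estimate, and the alarm level above which someone
# decides to devote their career to studying the risk.
TRUE_LOG_ODDS = 0.0
NOISE_SD = 1.0
SELECTION_THRESHOLD = 1.0

# Everyone's private estimate: unbiased but noisy.
estimates = [random.gauss(TRUE_LOG_ODDS, NOISE_SD) for _ in range(100_000)]

# Only those sufficiently alarmed become experts in the field.
experts = [e for e in estimates if e > SELECTION_THRESHOLD]

print(statistics.mean(estimates))  # close to the true value, 0.0
print(statistics.mean(experts))    # well above it, roughly 1.5
```

In this toy model the bias is predictable: conditional on selection, the expert mean equals the mean of a truncated normal, so an outside observer who knew the noise level and the selection threshold could in principle subtract the bias back out. The hard part in the real case is that we don't know those quantities, which is exactly the question being asked.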