Most of us frown on irresponsible encouragement of criminal acts.
As well you should. Of course, this carries a number of interesting assumptions:
The assumption of irresponsibility.
The assumption of encouragement.
The assumption of the 'wrongness' of criminal acts.
Let me rephrase this: If you believed -- very strongly (say, with confidence over 90%) -- that a specific person was going to destroy the world, and you also knew that only you were willing to acknowledge the material evidence which led you to this conclusion...
Would you find it acceptable to sit still and let the world end merely because the act that would ensure the survival of the human race was criminal?
In that counterfactual, I would not. I would find it reprehensibly irresponsible, in fact.
Logos, you don't need to preach to us about utilitarian calculations. You have it the other way around: we don't condemn your words because we can't make those calculations; we condemn them because we can make them better than you can.
It was your posts that I condemned and downvoted as irresponsible; it was your posts' utility that I considered negative, not lone heroic actions that save the world from inventors of doom. You did none of the latter and some of the former, so it is the utility of the former that is being judged.
Also, if I ever found myself perceiving that "onl...
Here's a poser that occurred to us over the summer, and one that we couldn't come up with a satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).
It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem: Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risks and not AI risk because he had a higher-than-average estimate of the threat of nuclear war.
It seems like it might be a good idea to know what the probability of each of these risks is. Is there a sensible way for these people to correct for the fact that the people studying these risks are those who had high estimates of them in the first place?
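One way to make the selection effect concrete is a toy Monte Carlo model. The sketch below is only illustrative: every distribution, threshold, and number in it is a made-up assumption, not a real risk figure. It simulates many possible worlds in which would-be researchers form noisy personal estimates of a risk and only the "alarmed" ones (those whose estimate clears a threshold) go on to specialise in it, then compares what those self-selected specialists report with the true risk in the same simulated worlds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Monte Carlo sketch of the selection effect (all figures are illustrative assumptions):
# - the true risk has some prior distribution (here on a log-odds scale)
# - each would-be researcher forms a noisy personal estimate of it
# - only people whose estimate exceeds a threshold choose to work on the risk
# Question: how much do the specialists' estimates overstate the truth, on average?

n = 1_000_000
true_logodds = rng.normal(-6.0, 2.0, n)        # prior over the true risk (log-odds), assumed
noise = rng.normal(0.0, 1.5, n)                # idiosyncratic estimation error per person, assumed
estimate_logodds = true_logodds + noise        # each person's personal estimate

threshold = -4.0                               # only "alarmed" people enter the field, assumed
specialist = estimate_logodds > threshold

def to_prob(x):
    """Convert log-odds to probability."""
    return 1.0 / (1.0 + np.exp(-x))

print("mean risk reported by specialists:", to_prob(estimate_logodds[specialist]).mean())
print("mean true risk in those same worlds:", to_prob(true_logodds[specialist]).mean())
```

In this toy model the gap between the two printed numbers is the overstatement one would want to discount for, and it grows with either the noise in individual estimates or the height of the entry threshold; whether any such correction transfers to the real Eliezer/Hellman case depends entirely on how well those assumed distributions describe how people actually end up studying a given risk.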