Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?
He'll just say, dramatically, that if the research goes to the military, we all die. Then someone here trained on trolley problems can put two and two together on his own. Yudkowsky is then legally in the clear.
LessWrong attracts all sorts of people, including would-be Unabombers of varying levels of dedication.
Why do you think LW attracts would-be Unabombers?
The subject matter. What do you think it is doing to push those people away?
If there's a general policy against discussing violence on LW, and I can point to statements from the same timeframe of mine condemning such violence, it may help. It may not. Reporters are stupid. Your argument does not actually say why the anti-violence-discussion policy is a bad idea, and seems to be ad hominem tu quoque.
If there's a general policy against discussing violence on LW, and I can point to statements from the same timeframe of mine condemning such violence, it may help.
I'm not saying the anti-violence-discussion policy is a bad idea. It can somewhat cover your ass, but it does nothing about the larger issue: the causal chain from "guru says X will kill us all" to someone doing something stupid.
If combined with a "Please write him and ask him to shut down!", sure. I think it's understood in most civilized cultures that violence is not being advocated by default when other courses of action are being presented. If the action to be taken is mysteriously left unspecified, it'd be a judgment call depending on the other language used.
That's not how it works. It is generally understood that you are morally responsible for the consequences of your actions which you can predict. You understand that too when you're judging someone else (e.g. Roko; go re-read your statements in some online copy of the thread if you don't believe me).
If some lunatic reads your "And if Novamente should ever cross the finish line, we all die." or any other such statement (followed by you asking for money), and then, rather than donating, goes on a rampage and cites you, well, what do you expect to happen to your reputation and your income stream? Will the civilized cultures understand that you didn't advocate the violence by default? Will they care?
In reply to this: http://lesswrong.com/r/discussion/lw/g24/new_censorship_against_hypothetical_violence/84fx
We of course assume that you will continue to discuss violence against AI researchers on your own blog, since you care more about making us look bad and posturing your concern than about the fact that you, yourself, are the one who actually invented, introduced, talked about, and given publicity to the idea of violence against AI researchers.
Whoah. That's how Yudkowsky sees things like this post by a former researcher at SIAI:
http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html
complaining of the green ink that he receives.
Depends on exactly how it was written, I think. "The paradigmatic criticism of utilitarianism has always been that we shouldn't rob banks and donate the proceeds to charity" - sure, that's not actually going to conceptually promote the crime and thereby make it more probable, or make LW look bad. "There's this bank in Missouri that looks really easy to rob" - no.
How about "there's this guy, and his project is going to kill everyone if it's any good", combined with the project eventually beginning to look impressive?
The overall emotional tone. The lack of calls to direct action. The encouragement to think about the effects of one's actions, with thinking including that you take an honest look at opposing points of view.
Hmmm. What is the emotional tone of the Sequences?
I think they can give it a pass on this one if the lack of calls is adequately explained away as necessary for PR, or as direct action being counterproductive.
Everyone encourages that when the "opposing" point of view is their own.