Comment author: NancyLebovitz 25 December 2012 06:28:29AM 2 points [-]

The overall emotional tone. The lack of calls to direct action. The encouragement to think about the effects of one's actions, with thinking including that you take an honest look at opposing points of view.

Comment author: All_work_and_no_play 25 December 2012 07:41:16AM *  0 points [-]

The overall emotional tone.

Hmmm. What is the emotional tone of the sequences?

The lack of calls to direct action.

I think they can be given a pass on this one if the lack of calls is adequately explained as necessary for PR and/or as counter-productive to direct action.

The encouragement to think about the effects of one's actions, with thinking including that you take an honest look at opposing points of view.

Everyone encourages that when the "opposing" point of view is their own.

Comment author: shminux 23 December 2012 10:48:40PM 3 points [-]

Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?

Comment author: All_work_and_no_play 25 December 2012 05:51:58AM 1 point [-]

He'll just say dramatically that if the research goes to the military, we all die. Then someone here trained on trolley problems can put two and two together on their own. Yudkowsky is then legally in the clear.

Comment author: NancyLebovitz 24 December 2012 06:41:13PM 1 point [-]

LessWrong attracts all sorts of people, including would-be unabombers of varying levels of dedication.

Why do you think LW attracts would-be unabombers?

Comment author: All_work_and_no_play 25 December 2012 04:49:54AM 0 points [-]

The subject matter. What do you think it is doing to push those people away?

Comment author: Eliezer_Yudkowsky 24 December 2012 11:05:37PM 2 points [-]

If there's a general policy against discussing violence on LW, and I can point to statements from the same timeframe of mine condemning such violence, it may help. It may not. Reporters are stupid. Your argument does not actually say why the anti-violence-discussion policy is a bad idea, and seems to be ad hominem tu quoque.

Comment author: All_work_and_no_play 24 December 2012 11:10:05PM 1 point [-]

If there's a general policy against discussing violence on LW, and I can point to statements from the same timeframe of mine condemning such violence, it may help.

I'm not saying the anti-violence-discussion policy is a bad idea. It can somewhat cover your ass, but it does nothing about the larger issue: the "guru says X will kill us all" → "someone does something stupid" causal chain.

Comment author: Eliezer_Yudkowsky 24 December 2012 08:18:19PM 1 point [-]

If combined with a "Please write him and ask him to shut down!", sure. I think it's understood by default in most civilized cultures that violence is not being advocated when other courses of action are presented. If the action to be taken is mysteriously left unspecified, it'd be a judgment call depending on the other language used.

Comment author: All_work_and_no_play 24 December 2012 10:54:08PM *  -1 points [-]

That's not how it works. It is generally understood that you are morally responsible for the consequences of your actions that you can predict. You understand that too when you're judging someone else (e.g. Roko; go re-read your statements in some online copy of that thread if you don't believe me).

If some lunatic reads your "And if Novamente should ever cross the finish line, we all die." or any other such statement (followed by you asking for money), and then, rather than donating, goes on a rampage and cites you, well, what do you expect to happen to your reputation and your income stream? Will the civilized cultures understand that you didn't advocate the violence by default? Will they care?

Comment author: All_work_and_no_play 24 December 2012 12:39:38PM 2 points [-]

In reply to this: http://lesswrong.com/r/discussion/lw/g24/new_censorship_against_hypothetical_violence/84fx

We of course assume that you will continue to discuss violence against AI researchers on your own blog, since you care more about making us look bad and posturing your concern than about the fact that you, yourself, are the one who has actually invented, introduced, talked about, and given publicity to, the idea of violence against AI researchers.

Whoa. That's how Yudkowsky sees things like this post by a former researcher at SIAI:

http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

complaining of the green ink that he receives.

Comment author: Eliezer_Yudkowsky 23 December 2012 09:43:43PM 20 points [-]

Depends on exactly how it was written, I think. "The paradigmatic criticism of utilitarianism has always been that we shouldn't rob banks and donate the proceeds to charity" - sure, that's not actually going to conceptually promote the crime and thereby make it more probable, or make LW look bad. "There's this bank in Missouri that looks really easy to rob" - no.

Comment author: All_work_and_no_play 24 December 2012 12:13:33PM 3 points [-]

How about "there's this guy; his project is going to kill everyone if it's any good", combined with the project eventually beginning to look impressive?