New proposed censorship policy:
Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.
Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.
More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).
This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'
Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.
It has a net negative effect because people then go around saying (this post will be deleted after the policy is implemented), "Oh, look, LW is encouraging people to commit suicide and donate the money to them." That is what actually happens, and it is the only significant real-world consequence.
Now it's true that, in general, any particular post may have only a small effect in this direction, because, for example, idiots repeatedly make up crap about how SIAI's ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical, and so they themselves become the ones who conceptually promote violence. But it would be good to have a clear policy in place that we can point to and say, "An issue like this would not be discussable on LW, because we think that talking about violence against individuals can conceptually promote such violence, even in the form of hypotheticals, and that any such individuals would justly have a right to complain. We of course assume that you will continue to discuss violence against AI researchers on your own blog, since you care more about making us look bad and posturing your concern than about the fact that you, yourself, are the one who has actually invented, introduced, talked about, and given publicity to the idea of violence against AI researchers. But everyone else should be advised that any such 'hypothetical' would have been deleted from LW in accordance with our anti-discussing-hypothetical-violence-against-identifiable-actual-people policy."
Idiots make up crap about all kinds of things, not just violence or other illegal acts. Ideas outside societal norms often attract bad PR. If your primary goal here is to improve PR, you would have to censor posts by explicit PR criteria. The proposed criteria of discussion of violence or law-breaking is not optimized for this goal. So, what is it you really want?
Discussion of violence is something that (you claim) has no positive value, even ignoring PR. So it's easy to decide to censor it. But have you really considered what else to censor according to y...