New censorship: against hypothetical violence against identifiable people

Post author: Eliezer_Yudkowsky 23 December 2012 09:00PM

New proposed censorship policy:

Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.

Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.

More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).  

This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'

Yes, a post of this type was just recently made.  I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.

Comments (457)

Comment author: [deleted] 23 December 2012 11:51:07PM *  30 points [-]

Would my pro-piracy arguments be covered by this? What about my pro-coup d'état ones?

Comment author: [deleted] 23 December 2012 11:58:06PM 19 points [-]

Possibly. I hope not. I'm all for mod action, but not at the expense of political diversity.

Comment author: [deleted] 24 December 2012 10:17:37AM 4 points [-]

You mean copyright piracy or sea piracy?

Comment author: [deleted] 24 December 2012 10:20:09AM *  23 points [-]

Sea piracy obviously. What kind of a person do you think I am?!

Comment author: Qiaochu_Yuan 24 December 2012 11:12:24AM *  7 points [-]

As someone unfamiliar with your views, I can't tell whether this is sarcasm or not, especially because of the interrobang. Can you clarify? Is there anywhere on the internet where your views are concisely summarized? (Is it in any way associated with your real name?)

Comment author: [deleted] 24 December 2012 12:27:37PM *  11 points [-]

The levels can be hard to disambiguate so I sympathize. I'll write my opinions out unironically. You can find the full arguments in my comment history (I can dig links to that up too).

  • I'm assuming you are familiar with the arguments for efficient charity and optimal employment? If not I can provide citations & links. I don't think Sea Piracy as a means of funding efficient charity is obviously worse from a utilitarian perspective than a combo with many legal professions. It may or may not be justified; I'm leaning towards it being justified on the same utilitarian grounds as government taxation can be. If not, cheating on taxes to fund efficient charity is a pretty good idea. Some people's comparative advantage will lie in sea piracy.

  • Violating copyright on software or media products in the modern West is in general not a bad thing. But indiscriminately pirating everything may be bad.

In the grandfather comment I was aiming for ambiguity and humour.

Comment author: MBlume 24 December 2012 06:45:26PM 13 points [-]

I mean, assuming that sea piracy to fund efficient charity is good, media piracy to save money that you can give to efficient charity is just obviously good.

Comment author: [deleted] 24 December 2012 06:51:54PM *  11 points [-]

media piracy to save money that you can give to efficient charity

Is so incredibly obviously good that I'm mystified no one is promoting it. I think the main reason is that it is "illegal".

Comment author: FiftyTwo 24 December 2012 07:24:44PM 2 points [-]

We often separate endorsing things from believing they are good, as endorsing them implies you would like them to be prevalent, which leads to collective action issues. (E.g., I think it is ok to occasionally take more than your share of the cake if you're hungry, but I wouldn't encourage it, as then there wouldn't be any cake left.)

Comment author: Jabberslythe 24 December 2012 06:12:35AM 5 points [-]

I think piracy cases are pretty similar to marijuana cases (they are actually even less likely to be enforced), which he said won't be banned.

Comment author: Eugine_Nier 24 December 2012 07:54:07AM 8 points [-]

I don't think Konkvistador was talking about software piracy.

Comment author: Jabberslythe 24 December 2012 06:01:55PM 2 points [-]

Hahaha, whoops.

Comment author: [deleted] 24 December 2012 10:09:44AM *  27 points [-]

Fun Exercise

Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal

Consider what would have been covered by this 250, 100 and 50 years ago.

Bonus Consider what wouldn't have been covered by this 250, 100 and 50 years ago but would be today.

Comment author: Qiaochu_Yuan 24 December 2012 11:35:32AM *  12 points [-]

I see the point you're trying to make, but I don't think it constitutes a counterargument to the proposed policy. If you were an abolitionist back when slavery was commonly accepted, it would've been a dumb idea to, say, yell out your plans to free slaves in the Towne Square. If you were part of an organization that thought about interesting ideas, including the possibility that you should get together and free some slaves sometime, that organization would be justified in telling its members not to do something as dumb as yelling out plans to free slaves in the Towne Square. And if Ye Olde Eliezere Yudkowskie saw you yelling out your plans to free slaves in the Towne Square, he would be justified in clamping his hand over your mouth.

Comment author: [deleted] 24 December 2012 12:18:32PM *  13 points [-]

It wouldn't be dumb to argue for the moral acceptability of freeing slaves (even by force) however.

Comment author: Qiaochu_Yuan 24 December 2012 12:28:46PM *  7 points [-]

It wouldn't be dumb for an organization to decide that society at large might be willing to listen to them argue for the moral acceptability of freeing slaves, even by force. It would be dumb for an organization to allow its individual members to make this decision independently because that substantially increases the probability that someone gets the timing wrong.

Comment author: prase 24 December 2012 01:53:37PM 11 points [-]

Beware selective application of your standards. If the members can't be trusted with one type of independent decision, why can they be trusted with other sorts of decisions?

Comment author: Qiaochu_Yuan 24 December 2012 11:10:39PM *  3 points [-]

Because the decision to initiate a particular kind of public discussion entails everyone else in the organization taking on a certain level of risk, and an organization should be able to determine what kinds of communal risk it's willing to allow its individual members to force on everyone else. There are jurisdictions where criminal incitement is itself a crime.

Comment author: ChristianKl 25 December 2012 02:19:29AM *  7 points [-]

Bonus:

Consider what's likely to be covered 50 years in the future.

Comment author: Eugine_Nier 26 December 2012 03:12:19AM 5 points [-]

For something like that, consider the algorithm you use to answer it. Then consider why the output of said algorithm should at all correlate with future social trends.

Comment author: [deleted] 25 December 2012 10:17:54AM 2 points [-]

I considered adding that too. :)

Comment author: ChristianKl 25 December 2012 04:58:04PM 23 points [-]

I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise.

I'm not sure what's obvious for you. In an environment without censorship you don't endorse a post by not censoring it. If you start censoring, however, you do endorse a post by letting it stand.

Your legal and PR obligations for those posts that LessWrong hosts get bigger if you make editorial censorship decisions.

Comment author: Viliam_Bur 26 December 2012 12:00:38AM 2 points [-]

Is there any way out of this dilemma? For example, having a policy where the moderator flips a coin for each offending article or comment, and heads = delete, tails = keep.

:D

Comment author: David_Gerard 01 January 2013 10:01:00PM 3 points [-]

Your legal and PR obligations for those posts that LessWrong hosts get bigger if you make editorial censorship decisions.

AIUI this is legally true: CDA section 230, mere hosting versus moderation.

Comment author: CronoDAS 23 December 2012 11:15:21PM 20 points [-]

My post was indeed inappropriate. I have used the "Delete" function on it.

Comment author: DataPacRat 24 December 2012 12:35:23PM 18 points [-]

I currently find myself tempted to write a new post for Discussion, on the general topic of "From a Bayesian/rationalist/winningest perspective, if there is a more-than-minuscule threat of political violence in your area, how should you go about figuring out the best course of action? What criteria should you apply? How do you figure out which group(s), if any, to try to support? How do you determine what the risk of political violence actually is? When the law says rebellion is illegal, that preparing to rebel is illegal, that discussing rebellion even in theory is illegal, when should you obey the law, and when shouldn't you? Which lessons from HPMoR might apply? What reference books on war, game-theory, and history are good to have read beforehand? In the extreme case... where do you draw the line between choosing to pull a trigger, or not?".

If it was simply a bad idea to have such a post, then I'd expect to take a karma hit from the downvotes, and take it as a lesson learned. However, I also find myself unsure whether or not such a post would pass muster under the new deletionist criteria, and so I'm not sure whether or not I would be able to gather that idea - let alone whatever good ideas might result if such a thread was, in fact, something that interested other LessWrongers.

This whole thread-idea seems to fall squarely in the middle, between the approved 'hypothetical violence near trolleys' and the forbidden 'discussing violence against real groups'. Would anyone be interested in helping me put together a version of such a post to generate the most constructive discourse possible? Or, perhaps, would somebody like to clarify that no version of such a post would pass muster under the new policy?

Comment author: MixedNuts 25 December 2012 04:48:35PM 3 points [-]

Do you have answers to those questions? Just "Hey, this problem exists" has not historically been shown to be productive.

Comment author: DataPacRat 25 December 2012 07:40:51PM 7 points [-]

I have /a/ set of answers, based on what I've learned so far of economics, politics, human nature, and various bits of evidence. However, I peg my confidence-levels of at least some of those answers as being low enough that I could be easily persuaded to change my mind, especially by the well-argued points that tend to crop up around here.

Comment author: pleeppleep 23 December 2012 10:42:32PM 17 points [-]

Deleting comments for being perceived as dangerous might get in the way of conversation. I think that if we're worried about how the site looks to outsiders then it's probably only necessary to worry about actual posts. Nobody expects comments to be appropriate on the internet, so it probably doesn't hurt us that much.

Comment author: CronoDAS 23 December 2012 11:42:17PM 16 points [-]

The "interesting" thing about violence is that it's one of the few ways that a relatively small group of (politically) powerless people with no significant support can cause a big change in the world. However, the change rarely turns out the way the small group would hope; most attempts at political violence by individuals or small groups fail miserably at achieving the group's aims.

Comment author: BrassLion 24 December 2012 06:33:53AM 5 points [-]

Non-violent action has a reasonable track record, considering how rarely it's been used in an organized way by the oppressed. The track record is particularly good in the first world, where people care about appearances.

Comment author: BrassLion 24 December 2012 06:48:53AM *  14 points [-]

I think this is an overreaction to (deleted thing) happening, and the proposed policy goes too far. (Deleted thing) was neither a good idea nor good to talk about in this public forum, as it was straight-out advocating violence in an obvious and direct way, against specific, real people that aren't in some hated group. That's not okay and it's not good for the community, for the reasons you (EY) said. But the proposed standard is too loose and it's going to have a chilling effect on some fringe discussion that's probably going to be useful in teasing out some of the consequences of ethics (which is where this stuff comes up). Having this be a guideline rather than a hard rule seems good, but it still seems like we're scarring on the first cut, as it were.

I think we run the risk of adopting a censorship policy that makes it difficult to talk about or change the censorship policy, which is also a really terrible idea.

I agree with the general idea of protecting LW's reputation to outsiders. After all, if we're raising the sanity waterline (rather than researching FAI), we want outsiders to become insiders, which they won't do if they think we're crazy.

"No advocating violence against real world people, or opening a discussion on whether to commit violence on real world people" seems safe enough as a policy to adopt, and specific enough to not have much of a chilling effect on discussion. We ought to restrict what we talk about as little as possible, in the absence of actual problems, given that any posts we don't want here can be erased by a few keystrokes from an admin.

Comment author: MixedNuts 25 December 2012 07:12:18PM 12 points [-]

The freaky consequences are not of the policy, they're of the meta-policy. You know how communities die when they stop being fun? Occasional shitstorms are not fun, and fear of saying something that will cause a shitstorm is not fun. Benevolent dictators work well to keep communities fun; the justifications don't apply when the dictator is pursuing goals that aren't in the selfish interest of members and interested lurkers; making the institute the founder likes look bad only weakly impacts community fun.

Predictable consequences are bright iconoclasts leaving, and shitstorm frequency increasing. (That's kinda hard to settle: the former is imprecise and the latter can be rigged.)

Every time, people complain much less about the policy than about not being consulted. There are at least two metapolicies that avoid this:

  • Avoid kicking up shitstorms. In this particular instance, you could have told CronoDAS his post was stupid and suggested he delete it, and then said "Hey, everyone, let's stop talking about violence against specific people, it's stupid and makes us look bad" without putting your moderator hat on.

  • Produce a policy, possibly ridiculously stringent, that covers most things you don't like, which allows people to predict moderator behavior and doesn't change often. Ignore complaints when enforcing, and do what you wish with complaints on principle.

Comment author: shware 25 December 2012 03:33:14AM 12 points [-]

Taking this post in the way it was intended, i.e., 'are there any reasons why such a policy would make people more likely to attribute violent intent to LW', I can think of one:

The fact that this policy is seen as necessary could imply that LW has a particular problem with members advocating violence. Basically, I could envision the one as saying: 'LW members advocate violence so often that they had to institute a specific policy just to avoid looking bad to the outside world'

And, of course, statements like 'if a proposed conspiratorial crime were in fact good you shouldn't talk about it on the internet' make for good out-of-context excerpts.

Comment author: kodos96 24 December 2012 10:44:37PM *  11 points [-]

I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out

Eliezer, at this point I think it's fair to ask: has anything anyone has said so far caused you to update? If not, why not?

I realize some of my replies to you in this thread have been rather harsh, so perhaps I should take this opportunity to clarify: I consider myself a big fan of yours. I think you're a brilliant guy, and I agree with you on just about everything regarding FAI, x-risk, SIAI's mission.... I think you're probably mankind's best bet if we want to successfully navigate the singularity. But at the same time, I also think you can demonstrate some remarkably poor judgement from time to time... hey, we're all running on corrupted hardware after all. It's the combination of these two facts that really bothers me.

I don't know of any way to say this that isn't going to come off sounding horribly condescending, so I'm just going to say it, and hope you evaluate it in the context of the fact that I'm a big fan of your work, and in the grand scheme of things, we're on the same side.

I think what's going on here is that your feelings have gotten hurt by various people misattributing various positions to you that you don't actually hold. That's totally understandable. But I think you're confusing the extent to which your feelings have been hurt with the extent to which actual harm has been done to SIAI's mission, and are overreacting as a result. I'm not a psychologist - this is just armchair speculation.... I'm just telling you how it looks from the outside.

Again, we're all running on corrupted hardware, so it's entirely natural for even the best amongst us to make these kinds of mistakes... I don't expect you to be an emotionless Straw Vulcan (and indeed, I wouldn't trust you if you were)... but your apparent unwillingness to update in response to others' input when it comes to certain emotionally charged issues is very troubling to me.

So to answer your question "Are there any predictable consequences we didn't think of that you would like to point out"... well I've pointed out many already, but the most concise, and most important predictable consequence of this policy which I believe you're failing to take into account, is this: IT LOOKS HORRIBLE... like, really really bad. Way worse than the things it's intended to combat.

Comment author: Locke 23 December 2012 09:09:53PM 11 points [-]

Would this censor posts about robbing banks and then donating the proceeds to charity?

Comment author: Alicorn 23 December 2012 10:35:54PM *  11 points [-]
Comment author: Larks 23 December 2012 10:58:58PM 3 points [-]

Note to all: Alicorn is referring to something else. Robbing banks may be extreme but it is not altruism.

Comment author: wedrifid 24 December 2012 12:59:11AM 1 point [-]

Or Really Extreme Altruism?

This is an example of why I support this kind of censorship. Lesswrong just isn't capable of thinking about such things in a sane way anyhow.

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don't want to see the (now) Executive Director of CFAR doing either of those things. And most others are similarly mindkilled, meaning that I just don't expect any useful or sane discussion to occur on sensitive subjects like this.

(ie. I consider this censorship about as intrusive as forbidding peanuts to someone with a peanut allergy.)

Comment author: Eugine_Nier 24 December 2012 09:32:50AM 7 points [-]

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don't want to see the (now) Executive Director of CFAR doing either of those things.

Yes and if the CFAR Executive Director is either mindkilled or willing to lie for PR, I want to know about it.

Comment author: fubarobfusco 24 December 2012 09:26:07AM 11 points [-]

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying

This seems an excessively hostile and presumptuous way to state that you disagree with Anna's conclusion.

Comment author: wedrifid 24 December 2012 10:09:13AM *  2 points [-]

This seems an excessively hostile and presumptuous way to state that you disagree with Anna's conclusion.

No it isn't, the meaning of my words is clear and they quite simply do not mean what you say I am trying to say.

The disagreement with the claims of the linked comment is obviously implied as a premise somewhere in the background, but the reason I support this policy really is that these topics produce mindkilled responses and near-obligatory dishonesty. I don't want to see bullshit on lesswrong. The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship. Not complicated.

You may claim that it is rude or otherwise deprecated-by-fubarobfusco but if you say that my point is different to both what I intended and what the words could possibly mean then you're wrong.

Comment author: fubarobfusco 25 December 2012 02:06:22AM 6 points [-]

No it isn't, the meaning of my words is clear and they quite simply do not mean what you say I am trying to say.

Well, taking your words seriously, you are claiming to be a Legilimens. Since you are not, maybe you are not as clear as you think you are.

It sure looks from what you wrote that you drew an inference from "Anna does not agree with me" to "Anna is running broken or disreputable inference rules, or is lying out of self-interest" without considering alternate hypotheses.

Comment author: jsalvatier 24 December 2012 07:24:49PM *  4 points [-]

This also seems like an excessively hostile way of disagreeing! I think there's some illusion of transparency going on.

I think

Sorry, I think you've misunderstood me. I don't want to see bullshit on lesswrong. [Elaboration] The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship.

might have worked better.

Comment author: jbeshir 24 December 2012 01:47:57AM *  3 points [-]

I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren't mindkilled, so I think it is actually good that it achieves this much.

They seem fairly ancillary to LW as a place for improving instrumental or epistemic rationality, though. If you think testing the extreme cases of your models of your own decision-making is likely to result in practical improvements in your thinking, or just want to test yourself on difficult questions, these things seem like they might be a bit helpful, but I'm comfortable with them being censored as a side effect of a policy with useful effects.

Comment author: wedrifid 24 December 2012 01:58:56AM 2 points [-]

I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren't mindkilled, so I think it is actually good that it achieves this much.

Unfortunately the non mindkilled people would also have to be comfortable simply ignoring all the mindkilled people so that they can talk among themselves and build the conversation toward improved understanding. That isn't something I see often. More often the efforts of the sane people are squandered trying to beat back the tide of crazy.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:20:44AM -2 points [-]

This does indeed seem like something that's covered by the new policy. It's illegal. In the alternative where it's a bad idea, talking about it has net negative expected utility. If it were for some reason a good idea, it would still be incredibly stupid to talk about it on the &^%$ing Internet. I shall mark it for deletion if the policy passes.

Comment author: Tenoke 24 December 2012 02:24:53AM 11 points [-]

So you don't see value in discussions like these? Thought experiments that give some insights into morality? Is the (probably barely any) effect of those posts on the reputation of LW really greater than the benefit of the discussion?

Comment author: Eliezer_Yudkowsky 24 December 2012 02:34:03AM 0 points [-]

I think that post had a net negative effect on reality and that diminishing the number of people who read it is a net positive. No, the conversation isn't worth it.

Comment author: Tenoke 24 December 2012 02:40:01AM 5 points [-]

Oh come on, you are invoking your basilisk-related logic here? How does it have a negative effect? Please don't tell me it's because you think there will be more suicides in the world if the number of readers of the post is larger? And further, please don't tell me that if you did think that, you'd conclude this leads to a net negative effect for the world? But please do answer me.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:44:30AM 8 points [-]

It has a net negative effect because people then go around saying (this post will be deleted after policy implementation), "Oh, look, LW is encouraging people to commit suicide and donate the money to them." That is what actually happens. It is the only real significant consequence.

Now it's true that, in general, any particular post may have only a small effect in this direction, because, for example, idiots repeatedly make up crap about how SIAI's ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical, and so themselves become the ones who conceptually promote violence. But it would be nice to have a nice clear policy in place we can point to and say, "An issue like this would not be discussable on LW because we think that talking about violence against individuals can conceptually promote such violence, even in the form of hypotheticals, and that any such individuals would justly have a right to complain. We of course assume that you will continue to discuss violence against AI researchers on your own blog, since you care more about making us look bad and posturing your concern, than about the fact that you, yourself, are the one who has actually invented, introduced, talked about, and given publicity to, the idea of violence against AI researchers. But everyone else should be advised that any such 'hypothetical' would have been deleted from LW in accordance with our anti-discussing-hypothetical-violence-against-identifiable-actual-people policy."

Comment author: Eugine_Nier 24 December 2012 07:47:28AM 8 points [-]

"Oh, look, LW is encouraging people to commit suicide and donate the money to them."

Well, are you?

idiots repeatedly make up crap about how SIAI's ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical,

True, but you have said things that seem to imply it. Seriously, you can't go around saying "X" and "X->Y" and then object when people start attributing position "Y" to you.

Comment author: fubarobfusco 24 December 2012 07:19:01PM 9 points [-]

idiots repeatedly make up crap

Idiots make up crap. You probably can't change this. The more significant you are, the more crap idiots will make up about you. Idiots claim that Barack Obama is a Kenyan Muslim terrorist and that George Bush is mentally subnormal. Not because they have sufficient evidence of these propositions, but because gossip about Obama and Bush is thereby juicier than gossip about my neighbor Marty whom you've never heard of.

Idiots make up crap about projects, too. They say NASA faked the moon landing, vaccines cause autism, and that international food aid contains sterility drugs. It turns out that scurrilous rumors about NASA and the United Nations spread farther than scurrilous rumors about that funny-looking building in the town park which is totally a secret drug lab for the mayor.

But everyone else should be advised that any such 'hypothetical' would have been deleted from LW in accordance with our anti-discussing-hypothetical-violence-against-identifiable-actual-people policy."

How about treating the hypothetical as the stupidity it is? "Dude, beating up AI researchers wouldn't work and you're a jerk for posting it. There are a half dozen obvious reasons it wouldn't work, if you take five minutes to think about it ... and you're a jerk for posting it because it's stirring up shit for no good reason. Seriously, quit it. This is LW, not Conspiracy Hotline."

Comment author: Eugine_Nier 24 December 2012 08:08:16PM *  8 points [-]

There are a half dozen obvious reasons it wouldn't work, if you take five minutes to think about it

And yet, when attempting to list them, the only one anyone from SIAI can seem to think of is bad PR.

Comment author: DanArmak 26 December 2012 07:22:39PM *  3 points [-]

Idiots make up crap about all kinds of things, not just violence or other illegal acts. Ideas outside societal norms often attract bad PR. If your primary goal here is to improve PR, you would have to censor posts by explicit PR criteria. The proposed criteria of discussion of violence or law-breaking is not optimized for this goal. So, what is it you really want?

Discussion of violence is something that (you claim) has no positive value, even ignoring PR. So it's easy to decide to censor it. But have you really considered what else to censor according to your goals? Violence clearly came up due to the now-deleted post; it was an available example. But you shouldn't just react to it and ignore other things, if your goal is not to prevent discussion of violence or crime in itself.

Comment author: Tenoke 24 December 2012 11:01:34AM 3 points [-]

I thought I posted this comment last night, but it seems I didn't (and now I have to pay karma to post it). Aren't we just encouraging belief bias this way? (Which carries additional negative utility on top of the loss of positive utility from the discussion, and the loss of utility because people see us as a heavily-censored community and form another type of negative opinion of us.)

Comment author: saturn 24 December 2012 03:50:10AM 4 points [-]

In the alternative where it's a bad idea, talking about it has net negative expected utility.

What about the possibility that someone who thought it was a good idea would change their mind after talking about it?

Comment author: Eliezer_Yudkowsky 24 December 2012 04:07:06AM 3 points [-]

This seems an order of magnitude less likely than somebody who wouldn't naturally have thought of the dumb idea seeing the dumb idea.

Comment author: Decius 24 December 2012 05:00:22AM 6 points [-]

Therefore censor uncommon bad ideas generally?

Comment author: CronoDAS 24 December 2012 02:43:36AM *  7 points [-]

As far as I can tell, Really Extreme Altruism actually is legal.

Comment author: Eliezer_Yudkowsky 23 December 2012 09:43:43PM 20 points [-]

Depends on exactly how it was written, I think. "The paradigmatic criticism of utilitarianism has always been that we shouldn't rob banks and donate the proceeds to charity" - sure, that's not actually going to conceptually promote the crime and thereby make it more probable, or make LW look bad. "There's this bank in Missouri that looks really easy to rob" - no.

Comment author: [deleted] 24 December 2012 10:16:36AM *  15 points [-]

Uncharitable reading: As long as taking utilitarianism seriously doesn't lead to arguments to violate formalized 21st-century Western norms too much, it is ok to argue for taking utilitarianism seriously. You are however free to debunk how it supposedly leads to things considered unacceptable on the Berkeley campus in 2012, since it obviously can't.

Comment author: [deleted] 23 December 2012 11:46:26PM 12 points [-]

What about pro-robbing banks in general?

Comment author: Viliam_Bur 25 December 2012 11:25:43PM 3 points [-]

Best way would be to construct the comment in a way that makes it least likely to seem bad when quoted outside of LW. For example we could imagine an alternative universe with intelligent bunnies and carrot-banks. Would it be good if a bunny robbed the carrot-bank and donated the carrots to charity?

If someone copied this comment on a different forum, it would seem silly, but not threatening. It is more difficult to start a wave of moral panic because of carrots and bunnies.

Comment author: Decius 24 December 2012 05:15:44AM 3 points [-]

What about discussions which discuss flaws in security systems, generally? e.g. "Banks often have this specific flaw which can be mitigated in this cost-ineffective manner."?

Comment deleted 24 December 2012 12:13:33PM [-]
Comment author: asparisi 25 December 2012 06:21:06AM 10 points [-]

Yeesh. Step out for a couple days to work on your bodyhacking and there's a trench war going on when you get back...

In all seriousness, there seems to be a lot of shouting here. Intelligent shouting, mind you, but I am not sure how much of it is actually informative.

This looks like a pretty simple situation to run a cost/benefit on: will censoring of the sort proposed help, hurt, or have little appreciable effect on the community?

Benefits:

  • May help public image (sub-benefits: make LW more friendly to new persons, advance SIAI-related PR)
  • May reduce brain-eating discussions (if I advocate violence against group X, even as a hypothetical, and you are a member of said group, then you have a vested political interest whether or not my initial idea was good, which leads to worse discussion)
  • May preserve what is essentially a community norm now (as many have noted) in the face of future change
  • Will remove one particularly noxious and bad-PR-generating avenue for trolling (which won't remove trolling, of course; in fact, fighting trolls gives them attention, which they like: see Costs)

Costs:

  • May increase bad PR for censoring (rare in my experience, provided that the rules are sensibly enforced)
  • May lead to people not posting important ideas for fear of violating rules (corollary: may help lead to an environment where people post less)
  • May create "silly" attempts to get around the rule by gray-areaing it (where people say things like "I won't say which country, but it starts with United States and rhymes with Bymerica"), which is a headache
  • May increase trolling (trolls love it when there are rules to break, as these violations give them attention)
  • May increase odds of LW community members acting in violence

Those are all the ones I could come up with in a few minutes after reading many posts. I am not sure what weights or probabilities to assign: probabilities could be determined by looking at other communities and incidents of media exposure, possibly comparing community size to exposure and total harm done and comparing that to a sample of similarly-sized communities. Maybe with a focus on communities about the size LW is now to cut down on the paperwork. Weights are trickier, but should probably be assigned in terms of expected harm to the community and its goals and the types of harm that could be done.

Comment author: [deleted] 24 December 2012 12:27:38AM *  40 points [-]

I'm starting to feel strongly uncomfortable about this, but I'm unsure if that's reasonable. Here are some arguments ITT that concern me:

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them, and talking about them makes them more likely (by states).

Violence is a very slippery concept. Perhaps it is not the best one to base mod rules on. (more at end)

We're losing Graham cred by being unwilling to discuss things that make us look bad.

This one is really disturbing to me. I don't like all the self-conscious talk about how we are perceived outside. Maybe we need to fork LW to accomplish it, but I want to be able to discuss what's true and good without worrying about getting moderated. My post-rationality opinions have already diverged so far from the mainstream that I feel I can't talk about my interests in polite society. I don't want this here too.

If I see any mod action that could be destroyed by the truth, I will have to conclude that LW management is borked and needs to be forked. Until then I will put my trust in the authorities here.

Would my pro-piracy arguments be covered by this? What about my pro-coup d'etat ones?

Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?

The whole purpose of discussing such plans is to reduce uncertainty over their utility; you haven't proven that the utility gain of a plan turning out to be good must be less than the cost of discussing it in public.

Yeah seriously. What if violence is the right thing to do? (EDIT: Derp. Don't discuss it in public, (except for stuff like Konkvistador's piracy and reaction advocacy, which are supposed to be public))

My post was indeed inappropriate. I have used the "Delete" function on it.

This is important. If the poster in question agrees when it is pointed out that their post is stupid, go ahead and delete it. But if they disagree in some way that isn't simple defiance, please take a long look at why.

In general, two conclusions:

I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.

And, as usual, that which can be destroyed by the truth should be. If moderator actions start serving some force other than truth and good, LW, or at least the subset dedicated to truth and rationality, should be forked.

Comment author: handoflixue 24 December 2012 08:50:09PM 8 points [-]

If I see any mod action that could be destroyed by the truth, I will have to conclude that LW management is borked and needs to be forked. Until then I will put my trust in the authorities here.

I want to upvote these two sentences again and again.

Comment author: AlexMennen 24 December 2012 01:06:43AM 17 points [-]

I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.

It makes sense to have mod discretion, but it also makes sense to have a list of rules that the mods can point to so that people whose posts get censored are less likely to feel that they are being personally targeted.

Comment author: [deleted] 24 December 2012 01:23:36AM 10 points [-]

Yes. Explanatory rules are good. Letting the rules drive is not.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:17:54AM 16 points [-]

These are explanations, not rules, check.

Comment author: Luke_A_Somers 24 December 2012 05:10:10AM 2 points [-]

Hence "may at the admins' option be censored"

Comment author: Multiheaded 24 December 2012 07:15:17AM *  5 points [-]

I support censorship, but only if it is based on the unaccountable personal opinion of a human.

I think that there's the usual paradox of benevolent dictatorship here; you can only trust humans who clearly don't seek this position for selfish ends and aren't likely to present a rational/benevolent front just so you would give them political power.

In a liberal/democratic political atmosphere, self-proclaimed benevolent dictators are a rare and prized resource; you can pressure one to run a website, an organization, etc to the best of their ability. But if dictatorship were to be seen as the norm, and you couldn't easily fall back on democracy, rule by committee, anarchy, etc, and had to choose between a few dictators, then the standards of dictatorial control would surely plummet and it would be psychologically much more difficult to change the form of organization. So, IMO, isolated experiments with dictatorship are fine; overall preference for it is terribly dangerous.

(All of the above goes only for humans, of course; I have no qualms about FAI rule.)

P.S.: I googled for "benevolent dictator" + "paradox" and found an argument similar to mine.

Being governed by people instead of a system isn’t just dangerous, it suffers from a limited attention span, too. The Chinese oligarchy is, indeed, very effective. Beijing was cleaner for the Olympics and those pesky plastic bags are gone, but there is only so much bandwidth for the authorities to enforce regulation and address new concerns. Pollution is a serious problem in China that no one denies, but little is done so far. The people and the government are both troubled, but frankly, they have bigger fish to stir fry. Three hundred million people may be living middle class western lives, but that leaves another billion in a falling apart shack.

The Chinese have every reason to be proud of their beautiful country and amazing progress. There is much to enjoy and appreciate and, even if it pained me to admit it, their system works far better than I would like to give it credit. My worry for them is if it’s sustainable. Can those billion people rely on replacing great technocrats with new ones who also make the right decisions? Is it even possible for a system which depends on the vagaries of people to even effectively address all the concerns and needs of the people they govern and the society they guide?

Comment author: [deleted] 24 December 2012 07:22:35AM 2 points [-]

But if dictatorship were to be seen as the norm, and you couldn't easily fall back on democracy, rule by committee, anarchy, etc, and had to choose between a few dictators, then the standards of dictatorial control would surely plummet and it would be psychologically much more difficult to change the form of organization.

Interesting. Do you think there are dictator-selection procedures that don't have either set of failure modes (selecting for looks/promises to loot the commons/lack of leadership, selecting for power-hungry tyrants)?

Comment author: Multiheaded 24 December 2012 07:33:14AM *  2 points [-]

Do you think there are dictator-selection procedures that don't have either set of failure modes (selecting for looks/promises to loot the commons/lack of leadership, selecting for power-hungry tyrants)?

Only a single one: a great actually-benevolent-dictator, with a good insight into people and lots of rationality, personally selects his successor among several candidates, after lengthy consideration and hidden testing. But, of course, remove one of the above qualifiers, and it can blow up regardless of the first dictator's best intentions. See e.g. Marcus Aurelius and Commodus. So, on a meta level, no, there's likely no system that would work for humans.

(I think that "real" democracy is also too dangerous - see the 19th and early 20th century - so either some form of sophisticated rule by committee or a state of anarchy could be the safest option for baseline humanity.)

Comment author: Eliezer_Yudkowsky 24 December 2012 02:17:18AM 4 points [-]

Yeah seriously. What if violence is the right thing to do?

Then discussing it on the public Internet is the wrong thing to do. I can't compare it to anything but juvenile male locker-room boasting.

Comment author: [deleted] 24 December 2012 02:11:58PM 20 points [-]

What if you aren't sure if violence is the right thing to do? You obviously should want as many eyeballs as possible to debug your thinking on that, no?

Comment author: Plasmon 24 December 2012 03:21:50PM 2 points [-]

If you actually believe that violence might be the right thing to do, then you assign non-negligible probability to the following:

  • the discussion will convince you that violence is indeed the right thing to do
  • you now have a moral imperative to do violence, and you will act on it or convince others to act on it
  • you will want the discussion to never have occurred in the first place, because the authorities can use it to track you down and suppress your justified violence

If you want to discuss a coup or something, do it in a less easily traceable fashion (not on a public forum; use encryption).

Comment author: [deleted] 24 December 2012 04:30:18PM *  7 points [-]

The thing is, discussing the desirability of violence and carrying out violence are not necessarily done by the same person. Indeed, historically they usually aren't. This does not remove moral culpability, but it does provide some legal protection.

Comment author: [deleted] 24 December 2012 04:38:23PM 9 points [-]

You do realize this argument generalizes to discussing many things beyond violence, right? So if this is your true rejection, I hope you've spent some time decompartmentalizing on this.

Comment author: [deleted] 25 December 2012 12:22:10AM 5 points [-]

I don't see how to decompartmentalize that, so I'm interested in what you are referring to.

Comment author: DataPacRat 24 December 2012 02:45:23AM 12 points [-]

A friend and I once put together a short comic trying to analyze democracy from an unusual perspective, including presenting the idea that an underlying threat of violent popular uprising should the system be corrupted helps keep it running well. This was closely related to a shorter comic presenting some ideas on rationality. The project led to some interesting discussions with interesting people, which helped me figure out some ideas I hadn't previously considered, and I consider it to have been worth the effort; but I'm unsure whether or not it would fall afoul of the new policy.

How 'identifiable' do the targets of proposed violence have to be for the proposed policy to apply, and how 'hypothetical' would they have to be for it not to? Some clarification there would be appreciated.

Comment author: AdeleneDawner 24 December 2012 02:31:58PM 2 points [-]

Actually, I can think of at least one type of situation where this isn't true, though it seems unwise to explain it in public and in any case it's still not something you'd want associated with LW, or in fact happening at all in most cases.

Comment author: Kawoomba 24 December 2012 08:26:35AM 2 points [-]

Then discussing it on the public Internet is the wrong thing to do.

Also, implying that violence is best discussed in private, versus not being discussed at all. It's like saying in public "But let's talk about our illegal activities in a more private venue." There should be no perception of LW being associated with such, period (.)

Comment author: CronoDAS 25 December 2012 11:57:43PM 9 points [-]

Maybe something like "Moderators, at their discretion, may remove comments that can be construed as advocating illegal activity" would work for a formal policy - it reads like relatively inoffensive boilerplate and would be something to point to if a post like mine needs to go, but is vague enough that it doesn't scream "CENSORSHIP!!!" to people who feel strongly about it. The "at their discretion" is key; it doesn't create a category of posts that moderators are required to delete, so it can't be used by non-moderators as a weapon to stifle otherwise productive discussion. (If you don't trust the discretion of the moderators, that's not a problem that can be easily solved with a few written policies.)

Comment author: Incorrect 24 December 2012 04:40:09AM *  8 points [-]

Would your post on eating babies count, or is it too nonspecific?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=1

(I completely agree with the policy, I'm just curious)

Comment author: quiet 24 December 2012 05:12:40PM *  4 points [-]

We should exempt any imagery fitting of a Slayer album cover, lest we upset the gods of metal with our weakness.

Comment author: CronoDAS 24 December 2012 04:20:57AM 8 points [-]

I don't know if we actually need a specific policy on this. We didn't in the case of my post...

Comment author: DanArmak 26 December 2012 07:37:12PM 3 points [-]

I agree. We should trust in the community more where the guarantee of moderation (by establishing a policy) is not needed.

Your post was quickly downvoted, and you deleted it yourself. This is an example of a good outcome that demonstrates we didn't need moderation.

Comment author: Suryc11 24 December 2012 07:04:39AM *  22 points [-]

I'm disappointed by EY's response so far in this thread, particularly here. The content of the post above in itself did not significantly dismay me, but upon reading what appeared to be a serious lack of any rigorous updating on the part of EY to (what I and many LWers seem to have thought were) valid concerns, my motivation to donate to the SI has substantially decreased.

I had originally planned to donate around $100 (starving college student) to the SI by the start of the new year, but this is now in question. (This is not an attempt at some sort of blackmail, just a frank response by someone who reads LW precisely to sift through material largely unencumbered by mainstream non-epistemic factors.) This is not to say that I will not donate at all, just that the warm fuzzies I would have received on donating are now compromised, and that I will have to purchase warm fuzzies elsewhere--instead of utilons and fuzzies all at once through the SI.

Comment author: drethelin 24 December 2012 07:19:14AM 15 points [-]

This is similar to how I feel. I was perfectly happy with his response to the incident but became progressively less happy with his responses to the responses.

Comment author: orthonormal 26 December 2012 05:28:33AM 6 points [-]

There is a rare personality trait which allows a person to read and respond to hundreds of critical comments without compromising their perspicacity and composure. Luke, for instance, has demonstrated this trait; Eliezer hasn't (to the detriment of this discussion and some prior ones).

(I'd bet at 10-to-1 that Eliezer agrees with this assessment.)

Comment author: lukeprog 02 February 2013 12:31:10AM 3 points [-]

There is a rare personality trait which allows a person to read and respond to hundreds of critical comments without compromising their perspicacity and composure. Luke, for instance, has demonstrated this trait

Not sure I totally agree. My LW comments may show retained composure in most cases, but I can think of two instances in the past few months in which I became (mildly) emotional in SI meetings in ways that disrupted my judgment until after I had cooled down. Anna can confirm, as she happens to have been present for both meetings. (Eliezer could also confirm if he had better episodic memory.) The first instance was a board meeting at which we discussed different methods of tracking project expenses, the second was at a strategy meeting which Anna compared to a Markov chain.

Anyway, I'm aware of people who are better at this than I am, and building this skill is one of my primary self-improvement goals at this time.

Comment author: orthonormal 03 February 2013 04:02:15PM 2 points [-]

I appreciate you sharing this.

Keeping one's composure in person and keeping one's composure on the Internet are distinct aptitudes (and only somewhat correlated, as far as I can tell), and it still looks to me like you've done well at the latter.

Comment author: Wei_Dai 24 December 2012 09:05:35PM 7 points [-]

Eliezer, could you clarify whether this policy applies to discussions like "maybe action X, which some people think constitutes violence, isn't really violence"? And what about nuclear war strategies?

Comment author: twanvl 24 December 2012 12:40:43PM 7 points [-]

What if some violence helps reduce further violence? For example, corporal punishment could reduce crime (think of Singapore). Note that I am not saying that this is necessarily true, just that we should not a priori ban all discussion of topics like this.

Comment author: ewbrownv 26 December 2012 04:39:38PM 6 points [-]

Censorship is generally not a wise response to a single instance of any problem. Every increment of censorship you impose will wipe out an unexpectedly broad swath of discussion, make it easier to add more censorship later, and make it harder to resist accusations that you implicitly support any post you don't censor.

If you feel you have to Do Something, a more narrowly-tailored rule that still gets the job done would be something like: "Posts that directly advocate violating the laws of <jurisdiction in which Less Wrong staff live> in a manner likely to create criminal liability will be deleted."

Because, you know, it's just about impossible to talk about specific wars, terrorism, criminal law or even many forms of political activism without advocating real violence against identifiable groups of people.

Comment author: NancyLebovitz 24 December 2012 01:17:07AM 16 points [-]

Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad

I'm dubious about this because laws can change. I'm also sure I don't have a solid grasp of which laws can be enforced against middle-class people, but I do know that they aren't all like laws against kidnapping. For example, doctors can get into trouble for prescribing "too much" pain medication.

Comment author: [deleted] 25 December 2012 12:43:55AM *  5 points [-]

BTW, I know it's not terribly rare for anti-marijuana laws to be enforced against middle-class people where I am; so he should have either specified “against middle-class people in Northern California” (but how is someone from (say) rural Poland supposed to know?) or used a different example, such as copyright infringement for personal use (hoping that no country actually enforces that non-negligibly often).

EDIT: A better criterion that would include laws against kidnapping but not laws against marijuana or laws against copyright infringement (though by far not a perfect one) in the context of ‘suggesting breaking those publicly on the internet would look bad’ would be ‘laws that a supermajority of internet users aged between 18 and 35 and with IQ above 115 would likely find ridiculous’. (Though I might be excessively Generalizing From One Example when thinking about what other people would think of anti-marijuana laws or copyright laws.)

Comment author: kodos96 25 December 2012 01:08:00AM 5 points [-]

BTW, I know it's not terribly rare for anti-marijuana laws to be enforced against middle-class people where I am; so he should have either specified “against middle-class people in Northern California”

Also, even in California, and even for people of middle class, you'll get marijuana laws enforced against you if you manage to piss off the wrong cop/prosecutor.

Comment author: quintopia 24 December 2012 01:03:56AM 30 points [-]

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence). If he believed that there was positive utility in considering such questions before, then he must consider them to have some positive utility now, and determining whether the negative utility outweighs that is always a difficult question. This is why I will be opposed to any sort of zero tolerance policy: a policy in which the things to be censored are not well-defined is a definite impediment to balanced and rationally-considered discussion. It's clear to me that speaking about violence against a particular person or persons is far more likely to have negative consequences on balance, but discussion of the commission of crimes in general seems like something that should be weighed on a case-by-case basis.

In general, I prefer my moderators to have a fuzzy set of broad guidelines about what should be censored in which not deleting is the default position, and they actually have to decide that it is definitely bad before they take the delete action. The guidelines can be used to raise posts to the level of this consideration and influence their judgment on this decision, but they should never be able to say "the rules say this type of thing should be deleted!"

Comment author: Error 24 December 2012 03:05:52AM 7 points [-]

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence)

I'm not sure how this is relevant; there's a good bit of difference between discussion of breaking a law and discussion of changing it. That said, I think I'm reading this differently than most in the thread. I'm understanding it as aimed against hypotheticals that are really "hypotheticals".

In answer to the question that was actually asked in the post, here is a non-obvious consequence: My impression of the atheist/libertarian/geek personspace cluster that makes up much of LW's readership is that they're generally hostile to anything that smells like conflating "legal" with "okay"; and also to the idea that they should change their behavior to suit the rest of the world. You might find you're making LW less off-putting to the mainstream at the cost of making it less attractive to its core audience. (but you might consider it worth that cost)

As both a relatively new contributor and a member of said cluster, this policy makes me somewhat uncomfortable at first glance. Whether that generalizes to other potential new contributors, I cannot say. I present it as proof-of-concept only.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:15:33AM 0 points [-]

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence).

That's an... interesting way of putting it, where by "interesting" I mean "wrong". I could go off on how the idea is that there are particular modern-day people who actually exist and whom you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.

This is why I will be opposed to any sort of zero tolerance policy

The 'rules' do not 'mandate' that I delete anything. They hardly could. I'm just, before I start deleting things, giving people fair notice that this is what I'm considering doing, and offering them a chance to say anything I might have missed about why it's a terrible idea.

Comment author: wedrifid 24 December 2012 03:00:54AM 47 points [-]

That's an... interesting way of putting it, where by "interesting" I mean "wrong".

If you genuinely can't see how similar considerations apply to you personally publishing rape-world stories and the reasoning you explicitly gave in the post then I suggest you have a real weakness in evaluating the consequences of your own actions on perception.

I could go off on how the idea is that there's particular modern-day people who actually exist and that you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.

I approve of your Three Worlds Collide story (in fact, I love it). I also approve of your censorship proposal/plan. I also believe there is no need to self censor that story (particularly at the position you were when you published it). That said:

This kind of display of evident obliviousness and arrogant dismissal, rather than engagement or (preferably) just outright ignoring it, may well do more to make LessWrong look bad than half a dozen half-baked speculative posts by CronoDAS. There are times to say "but you know, screw it" and "where by interesting I mean wrong", but those times don't include when concern is raised about your legalized-rape-and-it's-great story in the context of your own "censor hypothetical violence 'cause it sounds bad" post.

Comment author: handoflixue 24 December 2012 11:43:15PM 5 points [-]

It seems like this represents, not simply a new rule, but a change in the FOCUS of the community. Specifically, it used to be entirely about generating good ideas, and you are now adding a NEW priority which is "generating acceptable PR".

Quite possibly there is an illusion of transparency here, because there hasn't really BEEN (to my knowledge) any discussion about this change in purpose and priorities. It seems reasonable to be worried that this new priority will get in the way of, or even supersede the old priority, especially given the phrasing of this.

At a minimum, it's a slippery slope - if we make one concession to PR, it's reasonable to assume others will be made as well. I don't know if that's the case - if I'm in error on that point, feel free to mention it.

Comment author: Qiaochu_Yuan 24 December 2012 11:56:53PM *  3 points [-]

When you go on a first date with someone, would you tell them "hey, I've got this great idea about how I should [insert violence here] in order to [insert goal here]. What do you think?" Of course not, because whether or not this is a good idea, you are not getting a second date.

PR isn't inherently Dark Arts. It's about providing evidence to another party about yourself or your organization in a way which is conducive to further provision of evidence. If you start all your dates by talking about your worst traits first, you aren't giving your date incentives to stick around and learn about your best traits. If LW becomes known for harboring discussions of terrorism or whatever, you aren't giving outsiders incentives to stick around and learn about all the other interesting things happening on LW, or work for SIAI, etc.

Comment author: DanArmak 26 December 2012 07:57:25PM 2 points [-]

If you start all your dates by talking about your worst traits first

This begs the question by assuming the proposed violence is a bad trait.

Comment author: Qiaochu_Yuan 26 December 2012 10:51:05PM 2 points [-]

All I'm assuming is that a typical date will assume that people who talk about violence on the first date are crazy and/or violent themselves. This is an argument about first impressions, not an argument about goodness or badness.

Comment author: handoflixue 25 December 2012 12:55:03AM 2 points [-]

You'd be amazed how many second dates I get...

That said, I don't think PR is Dark Arts, I just think it's an UNSPOKEN change in community norms, and... from a PR standpoint, this post is a blatantly stupid way of revealing that change.

Huh. Either the original post is bad because PR is bad, or this post is bad because it demonstrates bad PR. Lose/lose :)

Comment author: Pentashagon 24 December 2012 06:01:49AM 12 points [-]

Do wars count? I find it strange, to say the least, that humans have strong feelings about singling out an individual for violence but give relatively little thought to dropping bombs on hundreds or thousands of nameless, faceless humans.

Context matters, and trying to describe an ethical situation in enough detail to arrive at a meaningful answer may indirectly identify the participants. Should there at least be an exception for notorious people or groups who happen to still be living instead of relegated to historical "bad guys" who are almost universally accepted to be worth killing? I can think of numerous examples, living and dead, who were or are the target of state-sponsored violence, some with fairly good reason.

Comment author: [deleted] 24 December 2012 05:44:23AM *  17 points [-]

violence against real people.

Abortion, euthanasia and suicide fit that description, some say. For them, and for those who disagree with them, this proposal may have unforeseen consequences. Edit: all three are illegal in parts of the world today.

Comment author: [deleted] 24 December 2012 04:11:56PM *  11 points [-]

Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored

The blasphemy laws of many countries fit this description - another possible unintended consequence.

Comment author: [deleted] 24 December 2012 10:07:52AM 11 points [-]

Would pro-suicide and general anti-natalist posts be covered by this?

Comment author: Viliam_Bur 26 December 2012 12:45:44AM 2 points [-]

Suggesting that specific people commit suicide, obviously yes. People in general... maybe no.

I am not going to explain why, but although death of all people technically includes the death of any specific person X.Y., saying "X.Y. should die" sounds worse than saying "all humans should die".

Comment author: Nominull 24 December 2012 03:29:35AM 11 points [-]

Censorship is particularly harmful to the project of rationality, because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true. You must do what you feel is right, of course, and I don't know what the post you're referring to was about, but I don't trust you to be responding to some actual problematic post instead of self-righteously overreacting. Which is a problem in and of itself.

Comment author: kodos96 24 December 2012 04:17:08AM 7 points [-]

You must do what you feel is right, of course

Passive-aggression level: Obi-Wan Kenobi

Comment author: gjm 24 December 2012 11:06:20AM 3 points [-]

I don't see that that's passive-aggressive when it's accompanied by a clear and explicit statement that Nominull thinks Eliezer is wrong and why. What would be passive-aggressive is just saying "Well, I suppose you must do what you feel is right" and expecting Eliezer to work out that disapproval is being expressed and what sort.

Comment author: twanvl 24 December 2012 12:37:25PM 2 points [-]

because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true

In particular, this comment seems to suggest that EY considers public opinion to be more important than truth. Of course this is a really tough trade-off to make. Do you want to see the truth no matter what impact it has on the world? But I think this policy vastly overestimates the negative effect posts on abstract violence have. First of all, the people who read LW are hopefully rational enough not to run out and commit violence based on a blog post. Secondly, there is plenty of more concrete violence on the internet, and that doesn't seem to have too many bad direct consequences.

Comment author: Viliam_Bur 26 December 2012 12:36:03AM *  2 points [-]

the people who read LW are hopefully rational enough not to run out and commit violence based on a blog post

Anyone can read LW. There is no IQ test, rationality test, or a mandatory de-biasing session before reading the articles and discussions.

I am not concerned about someone reading LW and committing violence. I am concerned about someone committing violence and coincidentally having read LW a day before (for example just one article randomly found by Google), and police collecting a list of recently visited websites, and a journalist looking at that list and then looking at some articles on LW.

Shortly, we don't live on a planet full of rationalists. It is a fact of life that anything we do can be judged by any irrational person who notices. Sure, we can't make everyone happy. But we should avoid some things that can predictably lead to unnecessary trouble.

Comment author: Decius 24 December 2012 05:23:31AM 4 points [-]

Why the explicit class distinction?

It would be prohibited to discuss how to speed and avoid being cited for it. (I thought that this was already policy, and I believe it to be a good policy.)

It would not be prohibited to discuss how to be a vagrant and avoid being cited for it. (Middle class people temporarily without residences typically aren't treated as poorly as the underclass.)

Should the proper distinction be 'serious' crimes, or perhaps 'crimes of infamy'?

Comment author: ChristianKl 25 December 2012 02:16:06AM *  15 points [-]

One of the most challenging moderation decisions I had to make at another forum was whether someone who argues the position "Homosexuality is a crime. In my country it's punishable with death. I like the laws of my country" should have his right of free speech. I think the author of the post was living in Uganda.

The basic question is, should someone who's been raised in Uganda feel free to share his moral views? Even if those views are offensive to Western ears and people might die based on those views?

If you want to have a open discussion about morality I think it's very valuable to have people who aren't raised in Western society participating openly in the discussion. I don't think LessWrong is supposed to be a place where someone from Uganda should be prevented from arguing the moral views in which he believes.

When it comes to politics, communists frequently argue for the necessity of a revolution. A revolution is an illegal act that includes violence against real people. Moldbug argues frequently for the necessity of a coup d'état.

This policy allows for censoring both the political philosophy of communism and the political philosophy of Moldbuggianism. Even though I disagree with both political philosophies, I think they should stay within the realm of discourse on LessWrong.

A community which has the goal of finding the correct moral system shouldn't ban ideas because they conflict with the basic Western moral consensus.

TDT suggests that one should push the fat man. It's a thought exercise and it's easy to say "I would push the fat man". In a discussion about pushing fat men in front of trolleys I think it's valid to switch the discussion from trolley cars to real-world examples.

Discussion of torture is similar. If you say "Policemen should torture kidnappers to get the location where the kidnapper hid the victim" you are advocating a crime against real people.

Corporal punishment is illegal violence.

Given the examples I listed in this post, in which cases would you choose to censor? Do you think that you could articulate a public criterion for which cases you censor and which you will allow?

Comment author: [deleted] 25 December 2012 06:00:51PM 3 points [-]

TDT suggests that one should push the fat man.

Does it? CDT most certainly does, but...

Comment author: [deleted] 24 December 2012 10:10:23AM *  10 points [-]

(i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).

This seems to be a fully general argument against Devil's Advocacy. Was it meant as such?

Comment author: drethelin 23 December 2012 09:08:40PM 19 points [-]

Got it. Posts discussing our plans for crimes will herewith be kept to the secret boards only.

Comment author: timtyler 24 December 2012 02:52:42AM 6 points [-]

I believe the traditional structure is a clandestine cell system.

Comment author: David_Gerard 23 December 2012 10:04:15PM 5 points [-]

And the mailing lists, apparently.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:24:07AM 5 points [-]

The Surgeon General recommends that you not discuss criminal activities, with respect to laws actually enforced, on any mailing list containing more than 5 people.

Comment author: [deleted] 24 December 2012 04:36:10PM 4 points [-]

Why 5?

Comment author: Waffle_Iron 25 December 2012 01:29:10AM 2 points [-]

Have you ever tried to get a group of more than 5 people to keep a secret?

Comment author: [deleted] 25 December 2012 10:39:09AM *  3 points [-]

Have you ever tried to get a group of 4 people to keep a secret?

I'm just wondering where the particular number comes from. Three people can keep a secret if two are dead and all that...

Comment author: Kawoomba 23 December 2012 09:13:16PM 2 points [-]

Back in line with you!

Comment author: [deleted] 23 December 2012 09:56:51PM *  16 points [-]

Yes, a post of this type was just recently made.

Well then.

I've heard that firemen respond to everything not because they actually have to, but because it keeps the drill sharp, so to speak. The same idea may apply to mod action... (in other words, MOAR "POINTLESS" CENSORSHIP)

More seriously, does this policy apply to things like gwern's hypothetical bombing of Intel?

Comment author: Eugine_Nier 24 December 2012 08:40:40AM *  12 points [-]

I don't necessarily object to this policy but find it troubling that you can't give a better reason for not discussing violence being a good idea than PR.

Frankly, I find it even more troubling that your standard reasons for why violence is not in fact a good idea seem to be "it's bad PR" and "even if it is we shouldn't say so in public".

As I quote here:

if your main goal is to show that your heart is in the right place, then your heart is not in the right place.

Edit: added link to an example of SIAI people unable to give a better reason against doing violence than PR.

Comment author: jimrandomh 24 December 2012 06:25:17PM 5 points [-]

I don't necessarily object to this policy but find it troubling that you can't give a better reason for not discussing violence being a good idea than PR.

I would find this troubling if it were true, but the better reason is right there in the post: "Talking about such violence makes that violence more probable".

Comment author: quiet 24 December 2012 04:55:29PM 7 points [-]

I appreciate the honesty of it. No one here is going to enact any of these thought experiments in real life. The likely worst outcome is putting off potential SI donors. It must be hard enough to secure funding for a fanfic-writing apocalypse cult; prepending "violent" to that description isn't going to loosen up many wallets.

Comment author: SoftFlare 24 December 2012 12:09:48PM 8 points [-]

Beware Evaporative Cooling of Group Beliefs.

I am for the policy, although heavy-heartedly. I feel that one of the pillars of Rationality is that there should be no Stop Signs and this policy might produce some. On the other hand, I think PR is important, and that we must be aware of evaporative cooling that might happen if it is not applied.

On a neutral note - We aren't enemies here. We all have very similar utility functions, with slightly different weights on certain terminal values (PR) - which is understandable as some of us have more or less to lose from LW's PR.

To convince Eliezer - you must show him a model of the world given the policy that causes ill effects he finds worse than the positive effects of enacting the policy. If you just tell him "Your policy is flawed due to ambiguity in description" or "You have, in the past, said things that are not consistent with this policy" - I place low probability on him significantly changing his mind. You should take this as a sign that you are Straw-manning Eliezer, when you should be Steel-manning him.

Also, how about some creative solutions? A special post tag that must be applied to posts that condone hypothetical violence, which causes them to only be seen by registered users - and displays a disclaimer above the post warning against the nature of the post? That should mitigate 99% of the PR effect. Or, your better, more creative idea. Go.

Comment author: kodos96 24 December 2012 04:49:01PM *  2 points [-]

On a neutral note - We aren't enemies here. We all have very similar utility functions, with slightly different weights on certain terminal values (PR) - which is understandable as some of us have more or less to lose from LW's PR.

I disagree that this is the entire source of the dispute. I think that even when constrained to optimizing only for good PR, this is an instrumentally ineffective method of achieving that. Censorship is worse for PR than the problem in question, especially given that the problem in question is thus far nonexistent.

To convince Eliezer - you must show him a model of the world given the policy that causes ill effects he finds worse than the positive effects of enacting the policy.

This is trivially easy to do, since the positive effects of enacting the policy are zero, given that the one and only time this has ever been a problem, the problem resolved itself without censorship, via self-policing.

Well... the showing him the model part is trivially easy anyway. Convincing him... apparently not so much.

Comment author: jimrandomh 23 December 2012 11:27:14PM 8 points [-]

Posts advocating or "asking about" violence against identifiable real people or groups should be deleted at the admins' discretion:

Agree Disagree

Posts advocating or "asking about" violation of laws that are actually enforced against middle-class people, other than the above, should be deleted at the admins' discretion:

Agree Disagree

Submitting...

Comment author: gjm 24 December 2012 02:08:22AM 8 points [-]

This poll, like EY's original question, conflates two things that don't obviously belong together. (1) Advocating certain kinds of act. (2) "Asking about" the same kind of act.

I appreciate that in some cases "asking about" might just be lightly-disguised advocacy, or apparent advocacy might just be a particularly vivid way of asking a question. I'm guessing that the quotes around "asking about" are intended to indicate something like the first of these. But what, exactly?

Comment author: jbeshir 24 December 2012 02:51:36AM 3 points [-]

I think in this context, "asking about" might include raising for neutral discussion without drawing moral judgements.

The connection I see between them is that if someone starts neutral discussion about a possible action, actions which would reasonably be classified as advocacy have to be permitted if the discussion is going to progress smoothly. We can't discuss whether some action is good or bad without letting people put forward arguments that it is good.

Comment author: gjm 24 December 2012 03:15:34AM 3 points [-]

There's certainly a connection. I'm not convinced the connection is so intimate that if censoring one is a good idea then so is censoring the other.

Comment author: jimrandomh 23 December 2012 11:35:02PM 3 points [-]

This is not a poll, but

...but it'd be nice to have a poll to point at later, to show consensus, and I'd be surprised if people disagreed.

Comment author: Eugine_Nier 24 December 2012 02:03:16AM 11 points [-]

I find that threatening hypothetical violence against my interlocutor can be a useful rhetorical device for getting them to think about ethical problems in near mode.

Comment author: FiftyTwo 24 December 2012 02:46:41AM 36 points [-]

I'm going to hit you with a stick unless you can give me an example of where that has been effective.

Comment author: Pentashagon 24 December 2012 05:36:42AM 7 points [-]

THREE examples.

Comment author: kodos96 24 December 2012 04:14:57AM 4 points [-]

For all the whining I do about how LWers lack a sense of humor.... I absolutely love it when I'm proven wrong.

Comment author: Qiaochu_Yuan 24 December 2012 11:19:35AM 7 points [-]

Do you really feel like LWers lack a sense of humor? LWers have posted some of the funniest things I've ever read. Their sense-of-humor distribution has heavy tails, at least.

Comment author: [deleted] 24 December 2012 05:30:00AM 14 points [-]

Just because I think responses to this post might not have been representative:

I think this is a good policy.

Comment author: Kaj_Sotala 24 December 2012 06:17:06AM *  4 points [-]

I also agree with this policy, and feel that many of the raised or implied criticisms of it are mostly motivated from an emotional reaction against censorship. The points do have some merit, but their significance is vastly overstated. (Yes, explicit censorship of some topics does shift the Schelling fence somewhat, but suggesting that violence is such a slippery topic that next we'll be banning discussion about gun control and taxes? That's just being silly.)

Comment author: kodos96 24 December 2012 06:36:28AM 8 points [-]

You may think it's silly, others do not. Even if Eliezer has no intention of interpreting "violence" that way, how do we know that? Ambiguity about what is and is not allowed results in chilling far more speech than may have been originally intended by the policy author.

Also, the policy is not limited to only violence, but to anything illegal (and commonly enforced on middle-class people). What the hell does that even mean? Illegal according to whom? Under what jurisdiction? What about conflicts between state/federal/constitutional law? I mean, don't get me wrong, I think I have a pretty good idea what Eliezer meant by that, but I could well be wrong, and other people will likely have different ideas of what he meant. Again, ambiguity is what ends up chilling speech, far more broadly than the original policy author may have actually intended.

And I will again reiterate what I consider to be the most slam-dunk argument against this policy: in the incident that provoked this policy change, the author of the offending post voluntarily removed it, after discussion convinced him it was a bad idea. Self-policing worked! So what exactly is the necessity for any new policy at all?

Comment author: Kaj_Sotala 24 December 2012 07:43:26AM 2 points [-]

I agree that your points about ambiguity have some merit, but I don't think there's much of a risk of free speech being chilled more than was intended, because there will be people who test these limits. Some of their posts will be deleted, some of them will not. And then people can see directly roughly where the intended line goes. The chilling effect of censorship would be a more worrying factor if the punishment for transgressing was harsher: but so far Eliezer has only indicated that at worst, he will have the offending post deleted. That's mild enough that plenty of people will have the courage to test the limits, as they tested the limits in the basilisk case.

As for self-policing, well, it worked once. But we've already had trolls in the past, and the userbase of this site is notoriously contrarian, so you can't expect it to always work - if we could just rely on self-policing, we wouldn't need moderators in the first place.

Comment author: fubarobfusco 24 December 2012 09:48:17AM 7 points [-]

Two thoughts:

One: When my partner worked as the system administrator of a small college, her boss (the head of IT, a fatherly older man) came to her with a bit of an ethical situation.

It seems that the Dean of Admissions had asked him about taking down a student's personal web page hosted on the college's web server. Why? The web page contained pictures of the student and her girlfriend engaged in public displays of affection, some not particularly clothed. The Dean of Admissions was concerned that this would give the college a bad reputation.

Naturally the head of IT completely rejected the request out of hand, but was interested in discussing the implications. One that came up was that taking down a student web page about a lesbian relationship would be worse reputation than hosting it could bring. Another was that the IT staff did not feel like being censors over student expression, and certainly did not feel like being so on behalf of the Admissions office.

It's not clear to me that this case is especially analogous. It may be rather irrelevant, all in all.

Two: There is the notion that politics is about violence, not about agreement. That is to say, it is not about what we do when everyone agrees and goes along; but rather what we do when someone refuses to go along; when there is contention over shared resources because not everyone agrees what to do with them; when someone is excluded; when someone gets to impose on someone else (or not); and so on. Violence is often at least somewhere in the background of such discussions, in judicial systems, diplomacy, and so on. As Chairman Mao put it (at least, as quoted by Bob Wilson), political power grows out of the barrel of a gun. And a party with no ability to disrupt the status quo is one that nobody has to listen to.

As such, a position of nonviolence goes along with a position of non-politics. Avoiding threatening people — taken seriously enough — may require disengaging from a lot of political and legal-system stuff. For instance, proposing to make certain research illegal or restricted by law entails proposing a threat of violence against people doing that research.

Comment author: kodos96 24 December 2012 05:14:20AM 13 points [-]

Aside from the fact that "it might make us look bad" is a horrible argument in general, have you not considered the consequence that censorship makes us look bad? And consider the following comment below:

Got it. Posts discussing our plans for crimes will herewith be kept to the secret boards only.

It was obviously intended as a joke, but is that clear to outsiders? Does forcing certain kinds of discussions into side-channels, which will inevitably leak, make us look good?

Consideration of these kinds of meta-consequences is what separates naive decision theories from sophisticated decision theories. Have you considered that it might hurt your credibility as a decision theorist to demonstrate such a lack of application of sophisticated decision theory in setting policies on your own website?

And now, what I consider to be the single most damning argument against this policy: in the very incident that provoked this rule change, the author of the post in question, after discussion, voluntarily withdrew the post, without this policy being in effect! So self-policing has demonstrated itself, so far, to be 100% effective at dealing with this situation. So where exactly is the necessity for such a policy change?

Comment author: MixedNuts 23 December 2012 10:30:17PM 9 points [-]

Your generalization is averaging over clairvoyance. The whole purpose of discussing such plans is to reduce uncertainty over their utility; you haven't proven that the utility gain of a plan turning out to be good must be less than the cost of discussing it in public.

Does the policy apply to violence against oneself? (I'm guessing not, since it's not illegal.) Talking about it is usually believed to reduce risk.

There's a scarcity effect whereby people believe pro-violence arguments to be stronger, since if they weren't convincing they wouldn't be censored. Not sure how strong it is, likely depends on whether people drop the topic or say things like "I'm not allowed to give more detail, wink wink nudge nudge".

It's a common policy so there don't seem to be any slippery slope problems.

We're losing Graham cred by being unwilling to discuss things that make us look bad. Probably a good thing, we're getting more mainstream.

Comment author: Mestroyer 24 December 2012 02:08:50AM 7 points [-]

I'll restate a third option here that I made in the censored thread (woohoo, I have read a thread Eliezer Yudkowsky doesn't want people to read, and that you, dear reader of this comment, probably can't!) Make an option while creating a post to have it be only viewable by people with certain karma or above, or so that after a week or so, it disappears from people without that karma. This is based on an idea 4chan uses, where it deletes all threads after they become inactive, to encourage people to discuss freely.

This would keep these threads from showing up when people Googled LessWrong. It could also let us discuss phyggishness without making LessWrong look bad on Google.

Comment author: drethelin 24 December 2012 09:16:55AM 12 points [-]

Yes, and if we all put on black robes and masks to hide our identities when we talk about sinister secrets, no one will be suspicious of us at all!

Comment author: NancyLebovitz 24 December 2012 03:19:50AM *  9 points [-]

You can't reliably make things on the internet go away.

Comment author: Mestroyer 24 December 2012 03:24:42AM 3 points [-]

You can make them hard enough to access that they won't be stumbled upon by random people wondering what LessWrong is about, which is basically good enough for preserving LessWrong's reputation.

Comment author: NancyLebovitz 24 December 2012 05:07:03AM 3 points [-]

I was thinking about people posting screen shots.

Comment author: Qiaochu_Yuan 24 December 2012 10:45:45AM 4 points [-]

Agreed. It only takes one high-karma user posting a screenshot on reddit of LW's Secret Thread Where They Discuss Terrorism or whatever...

Comment author: Tenoke 24 December 2012 02:14:15AM 4 points [-]

Not a bad option indeed. It has merit if we are really that bothered about the general view of LW.

And for the record the post is still accessible albeit deleted.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:27:00AM 2 points [-]

LW has effectively zero resources to implement software changes.

Comment author: kodos96 24 December 2012 04:24:41AM *  5 points [-]

If this were your real rejection, you would be asking for volunteer software-engineer-hours.

Comment author: Eliezer_Yudkowsky 24 December 2012 05:00:00AM 5 points [-]

Tried.

Comment author: gelisam 24 December 2012 07:24:29AM 12 points [-]

Are you kidding? Sign me up as a volunteer polyglot programmer, then!

Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn't get any, but rather that you tried to work with volunteers and something else didn't work out.

Comment author: yli 24 December 2012 11:08:28AM *  9 points [-]

Maybe it's just that volunteers that will actually do any work are hard to find. Related.

Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.

Comment author: gelisam 24 December 2012 04:27:20PM 3 points [-]

I understand how that might have happened. Now that I am no longer a heroic volunteer saving my beloved website maiden, but just a potential contributor to an open source project, my motivation has dropped.

It is a strange inversion of effect. The issue list and instructions both make it easier for me to contribute, but since they reveal that the project is well organized, they also demotivate me because a well-organized project makes me feel like it doesn't need my help. This probably reveals more about my own psychology than about effective volunteer recruitment strategies, though.

Comment author: Risto_Saarelma 24 December 2012 07:29:06AM 5 points [-]

The site is open source, you should be able to just write a patch and submit it.

Comment author: Tenoke 24 December 2012 12:25:22AM 10 points [-]

So I finally downvoted Yudkowsky.

Comment author: NancyLebovitz 24 December 2012 01:11:15AM 4 points [-]

What was your line of thought?

Comment author: Tenoke 24 December 2012 01:19:44AM *  35 points [-]

That censorship because of what people think of LessWrong is ridiculous. That the negative effect on the reputation is probably significantly less than what is assumed. And that if EY thought that censorship of content for the sake of LW's image is in order, he should've logically thought that omitting fetishes from his public OKCupid profile (for the record, I've defended the view that this is his right), among other things, is also in order. And some other thoughts of this kind.

Comment author: DanArmak 26 December 2012 07:55:03PM 2 points [-]

laws that are actually enforced against middle-class people

Different countries can have very different laws. Are you going to enforce this policy with reference to U.S. laws only, as they exist in 2012? If not, what is your standard of reference?

As I commented elsewhere, if your goal is to prevent bad PR, it is not obvious to me that this policy is the right way to optimize for it. Perhaps you have thought this out and have good reasons for believing that this policy is best for this goal, but it is not clear to me, so please elaborate on this if you can.

Comment author: buybuydandavis 24 December 2012 10:53:22PM 2 points [-]

If the point was to "make a good impression" by distorting the impression given by people on the list to potential donors, maybe a more effective strategy is to shut up and do it, instead of making an announcement about it and causing a ruckus. "Disappear" the problems quietly and discreetly.

This reminds me of the phyg business. Prompting long discussion threads about how "We are not a phyg! We are not a phyg!" is not recommended behavior if you don't want people to think you're a phyg.

Comment author: handoflixue 24 December 2012 08:46:54PM *  2 points [-]

"anyone talking about a proposed crime on the Internet fails forever as a criminal"

I realize this isn't your TRUE objection, just a bit of a tangential "Public Service Announcement". The real concern is simply PR / our appearance to outsiders, right? But... I'm confused why you feel the need to include such a PSA.

Do we have a serious problem with people saying "Meet under the Lincoln Memorial at midnight, the pass-phrase is Sic semper tyrannis" or "I'm planning to kill my neighbor's dog, can you please debug my plot, I live in Brooklyn at 123 N Stupid Ave"?

You can use Private Messaging to send me actual examples, without causing a public reputation hit. I can't recall ever reading anything like that on this site.

Comment author: Qiaochu_Yuan 24 December 2012 11:13:53AM *  2 points [-]

I wouldn't have posted the following except that I share Esar's concerns about representativeness:

I think this is a good idea. I think using the word "censorship" primes a large segment of the LW population in an unproductive direction. I think various people are interpreting "may be deleted" to mean "must be deleted." I think various people are blithely ignoring this part of the OP (emphasis added):

In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole

In particular, I think people are underestimating how important it is for LW not to look too bad, and also underestimating how bad LW could be made to look by discussions of the type under consideration.

Finally, I strongly agree that

anyone talking about a proposed crime on the Internet fails forever as a criminal[.]

Comment author: wedrifid 24 December 2012 12:26:01AM 2 points [-]

Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.

Someone please send me a link via PM? Or perhaps the author could PM me? Not because the censorship of that class bothers me, but because talking to wedrifid is not posting things on the internet; I'm curious, and there are negligible consequences for talking to me about interesting hypothetical questions.

(Disregard the above if the post or comment was boring.)

Comment author: [deleted] 24 December 2012 12:34:57AM *  23 points [-]

tl;dr: tobacco kills more people than guns and cars combined. Should we <insert violence here>?

PS: fuck the police

Comment author: wedrifid 24 December 2012 12:47:37AM 8 points [-]

tl;dr: tobacco kills more people than guns and cars combined. Should we <insert violence here>?

PS: fuck the police

(I laughed). Thanks nyan. (I hope this kind of satirical summary is considered acceptable.)

Comment author: CronoDAS 24 December 2012 02:45:39AM 7 points [-]

As the author of the offending Discussion post in question, I'd say it's an adequate summary.

Comment author: kodos96 24 December 2012 04:27:56AM *  8 points [-]

I hope this kind of satirical summary is considered acceptable

This kind of uncertainty about what is and is not acceptable is perhaps the primary reason why such censorship policies are evil.

Comment author: Viliam_Bur 26 December 2012 01:24:52AM *  3 points [-]

This is a huge exaggeration!

I mean, yes, in a far mode, censorship creates fear and so on... but let's come back to near mode and ask: "What is the worst consequence of stepping just a little on the wrong side of this uncertain line?"

Well, Eliezer would delete my comment or article, and that's it. It does not really make my legs shake.

My guess is that "tobacco kills more people than guns and cars combined. Should we <insert violence here>?", written literally like this, is acceptable. Probability estimate? It would be 98% in a different discussion on a different day, and perhaps 95% here and now because Eliezer may still be in the deleting mood. So what? If I am wrong, he will delete that comment, and perhaps also my comment for quoting it. And that's all. Am I afraid? No. Actually, I would probably not even notice if that happened.

Generally, I also prefer precise rules to imprecise ones, but there are limits to how precise one can be on topics like this. Trying to make the rules exact (to avoid all harm, but allow all harmless discussion) is a FAI-complete problem. Even real-world laws often have imprecise parts. Also, the more precise the rules, the greater the pressure on moderators to follow them literally; but I would prefer them to use their own judgement.

Comment deleted 24 December 2012 08:09:40PM [-]
Comment author: blacktrance 27 December 2012 06:22:32AM 3 points [-]

I don't have any principled objection to this policy, other than that as rationalists, we want to have fun, and this policy makes LW less fun.

Comment author: fubarobfusco 24 December 2012 10:05:14AM 6 points [-]

Counter-proposal:

We don't contemplate proposals of violence against identifiable people because we're not assholes.

I mean, seriously, what the fuck, people?

Comment author: Manfred 24 December 2012 04:12:23PM 5 points [-]

Generalizations: accurate on average. Wrong in the specific case.

Comment author: Larks 23 December 2012 11:01:04PM 4 points [-]

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them, and talking about them makes them more likely (to be done by states). Is the public-private distinction the important thing - would advocating/talking about state-sanctioned genocide be OK?

Comment author: ikrase 24 December 2012 01:04:55AM 4 points [-]

While an interesting question, I think that the answer to that is reasonably obvious.

Comment author: Eugine_Nier 24 December 2012 01:54:53AM *  3 points [-]

What about capital punishment and/or corporal punishment?

Comment author: shminux 23 December 2012 10:48:40PM 3 points [-]

Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?

Comment author: Eliezer_Yudkowsky 24 December 2012 02:25:27AM 8 points [-]

Yes. This seems like yet another example of "First of all, it's a bad fucking idea, second of all, talking about it makes everyone else look bad, and third of all, if hypothetically it was actually a good idea you'd still be a fucking juvenile idiot for blathering about it on the public Internet." What part of "You fail conspiracies forever" is so hard for people to understand? Talk like this serves no purpose except to serve as fodder for people who claim that <rationalist idea X> leads to violence and is therefore false, and your comment shall be duly deleted once this policy is put into place.

Comment author: Kevin 25 December 2012 05:55:50AM 2 points [-]

What would the response to this have been if instead of "censorship policy" the phrase would have been "community standard"?

Comment author: katydee 29 December 2012 01:09:20AM 4 points [-]

It probably would have been more positive but less honest.

Comment deleted 24 December 2012 12:39:38PM [-]
Comment author: NancyLebovitz 24 December 2012 09:00:27PM 2 points [-]
Comment author: Eliezer_Yudkowsky 24 December 2012 08:03:10PM 2 points [-]

Everyone even slightly famous gets arbitrary green ink. Choosing which green ink to 'complain' about on your blog, when it makes an idea look bad which you would find politically disadvantageous, is not a neutral act. I'm also frankly suspicious of what the green ink actually said, and whether it was, perhaps, another person who doesn't like the "UFAI is possible" thesis saying that "Surely it would imply..." without anyone ever actually advocating it. Why would somebody who actually advocated that, contact Ben Goertzel when he is known as a disbeliever in the thesis?

No, I don't particularly trust Ben Goertzel to play rationalist::nice with his politics. And describing him as a "former researcher at SIAI" is quite disingenuous of you, by the way; he never received any salary from us and is a long-time opponent of these ideas. At one point Tyler Emerson thought it would be a good idea to fund a project of his, but that's it.

Comment author: saturn 24 December 2012 10:05:49PM 7 points [-]

And describing him as a "former researcher at SIAI" is quite disingenuous of you, by the way; he never received any salary from us and is a long-time opponent of these ideas. At one point Tyler Emerson thought it would be a good idea to fund a project of his, but that's it.

If that's the case, it seems like giving him the title Director of Research could cause a lot of confusion. I certainly find it confusing. Maybe that was a different Ben Goertzel?

Comment author: timtyler 28 December 2012 12:24:18AM *  2 points [-]

Reportedly, Ben Goertzel and OpenCog were intended to add credibility through association with an academic:

It has similarly been a general rule with the Singularity Institute that, whatever it is we're supposed to do to be more credible, when we actually do it, nothing much changes. "Do you do any sort of code development? I'm not interested in supporting an organization that doesn't develop code"—> OpenCog—> nothing changes. "Eliezer Yudkowsky lacks academic credentials"—> Professor Ben Goertzel installed as Director of Research—> nothing changes.

Comment author: kodos96 24 December 2012 03:56:25AM 2 points [-]

How about instead of outright censorship, such discussions be required to be encrypted, via double-rot13?
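(For anyone who misses the joke in the proposal above: ROT13 is its own inverse, so applying it twice returns the original text unchanged - "double-rot13" is no encryption at all. A minimal sketch, using Python's built-in `rot_13` codec; the `double_rot13` helper name is illustrative, not from any comment here:)

```python
import codecs

def double_rot13(text: str) -> str:
    """Apply ROT13 twice. Since ROT13 is self-inverse, this returns the input unchanged."""
    return codecs.encode(codecs.encode(text, "rot_13"), "rot_13")

message = "Any post which advocates violence may be deleted"
once = codecs.encode(message, "rot_13")   # actually obscured
twice = double_rot13(message)             # identical to the original

assert once != message
assert twice == message  # the "encryption" is a no-op
```

That is, a "double-rot13 requirement" would leave every discussion exactly as readable as before - which is the point of the quip.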