http://wiki.lesswrong.com/wiki/Deletion_policy

This is my attempt to codify the informal rules I've been working by.

I'll leave this post up for a bit, but strongly suspect that it will have to be deleted not too long thereafter.  I haven't been particularly encouraged to try responding to comments, either.  Nonetheless, if there's something I missed, let me know.


Suggestion: I recommend sending people their deleted posts.

I find it annoying to spend the effort to type a post, only to have it disappear into a bit bucket. If you want it gone, that's your prerogative, but I think it is a breach of etiquette for a forum to destroy information created by a forum user.

Now I assume you found the original post a breach of etiquette, so you may feel that tit for tat is the right policy here. I'd consider an intentional breach of etiquette an unnecessary escalation.

You can still see your own banned comments on your user page. This might be false for posts, I'm not sure.

4ahartell
Judging by Kodos96's user page, the same is the case for posts, i.e., they are still visible after being "censored."

This seems like a good thing to do as a courtesy in cases where it seems reasonable.

If it were an actual policy, you'd want to put some limits on it, e.g. "if the post is longer than X words and/or contains something that was clearly meant to be intelligent thought."

9Kawoomba
I used to do that for a long time on a large-ish subreddit I mod. Eventually it became too much of a burden; the workload was too large. It may be a feasible policy to try on LW, given the (hopefully) very low volume of deleted content.

This sounds like something that could be handled by a script, so as to be an utterly transparent process. In your role as a subreddit mod it wouldn't be so easy, but here the admins have source access.
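For illustration, a minimal sketch of what such a script might look like. The moderation API used here (get_recent_deletions, send_private_message) is entirely hypothetical; a real version would use whatever hooks the site's codebase actually exposes.

    # Sketch only: the api object and its two methods are hypothetical
    # stand-ins for the site's real moderation hooks.
    def return_deleted_content(api, since):
        # For each item deleted since the given timestamp, message the
        # author a copy of their own text so their work is not lost.
        for item in api.get_recent_deletions(since=since):
            body = (
                'Your post "{title}" was removed by a moderator.\n\n'
                "A copy of the text is included below:\n\n{text}"
            ).format(title=item.title, text=item.text)
            api.send_private_message(
                to=item.author,
                subject="Copy of your removed post",
                body=body,
            )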

1Kawoomba
Good idea, that difference escaped my notice.

Concrete suggestions:

1. Bring the policy statements to the forefront; put the lengthy "background" discussion of "free speech" vs. "walled gardens" and the like in a brief FAQ or discussion section at the end. The first line of the policy statement should be the one beginning "Most of the burden of moderation ..."

Reason: Most readers want to know what the policy is — so that should come first. Most of the people who want to argue about the theory of the policy are looking to have an enjoyably clever argument, which the "background" provides — so that should be there, but not in front.

2. Use formatting to emphasize the document's structure. As it stands, there's not enough visual structure for the eye to pick out the little numbers that indicate new points. More notably, the paragraph that separates the "more controversial" items looks structurally like it should be the explanation of the spam item.

3. Readers have heard of the common cases. Spam, harassment, and posting of personal information are things that lots of forums ban; LW is not unusual in this regard. In gist, if it's against Reddit's policy, it doesn't need a lo... (read more)

7Eliezer Yudkowsky
Formatting added.
2fubarobfusco
Yay! Thank you.
3wedrifid
Either that or it isn't specific enough and he could have come out and said what he really meant.
0Barry_Cotter
It was annoying to think I knew what you were referring to by reading this comment in isolation but it was depressing to be right.

I own the "everything-list" Google Group, which has no explicit moderation policy, although I do block spam and the occasional completely off-topic post from newbies who seemingly misunderstood the subject matter of the forum. It worked fine without controversy or anything particularly bad happening, at least in the first decade or so of its existence, when I still paid attention to it. I would prefer if Eliezer also adopted an informal but largely "hands off" policy here. But looking at Eliezer's responses to recent arguments as well as past history, the disagreement seems to be due to some sort of unresolvable differences in priors/values/personality and not amenable to discussion. So I disagree but feel powerless to do anything about it.

[-]Emile140

Interesting. A couple hypotheses:

1) Admins overestimate the effect that certain policies have on behavior (they may underestimate random effects, or assign effects to the wrong policy); just like parents might overestimate the effect of parenting choices, or managers overestimate the impact of their decisions ("we did daily stand-up meetings, and the project was completed on time - the daily stand-up meetings must be the cause!").

2) Eliezer is more concerned about the public image of LessWrong (both because of how it reflects on CFAR and SIAI, and on the kind of people it may attract) than you are (were?) about the everything-list.

For what it's worth, I'm fine with moderation of stupid things like discussing assassinations, and with banning obnoxious trolls and cranks and idiots; the main reason to refrain from those kinds of mod actions would be to avoid scaring naive young newcomers who might see it as an affront against Sacred Free Speech.

Your testimony of a case where you still have quality discussion with very light moderation makes me slightly less in favor of heavy-handed moderation.

(I'm not sure that the moderation here is becoming "stronger" recently, as opposed to merely a bit more explicit)

3) Eliezer's tolerance for "crazy" or stupid posts is so low that he's way more pissed off by even a small number of them existing than other people are.

It seems to me the occasional crazy idea posted here wouldn't reflect that badly on CFAR and SIAI, if they had a policy of "LW is an open forum and we're not responsible for other people's posts", especially if the bad ideas are heavily voted down and argued against, with the authors often apologizing and withdrawing their own posts.

1crap
A crazy idea reflects badly on the ideology that spawned the crazy idea.
1handoflixue
If that were true, LessWrong would have an INCREDIBLY HUGE advantage over almost every major religion. LessWrong hasn't managed to raise armies and invade sovereign nations yet, after all. Thinking in those terms, it makes me strongly suspect that anyone turned away by a single bad post is engaging in some VERY motivated cognition, and probably would not have stayed long. (A high noise:signal ratio, on the other hand, would be genuinely damaging.)
2crap
No one here felt distraught with religion? Not even a little? :)

For what it's worth, I'm fine with moderation of stupid things like discussing assassinations, and with banning obnoxious trolls and cranks and idiots; the main reason to refrain from those kinds of mod actions would be to avoid scaring naive young newcomers who might see it as an affront against Sacred Free Speech.

No, the main reason is to avoid evaporative cooling and slippery slopes, a.k.a., the reasons free speech is such a sacred value.

Keep in mind Eliezer himself would be considered a crank by most "mainstream skeptics".

1Emile
Do you think there's a big risk of evaporative cooling because Eliezer bans too many things? (assuming his current level of banning, not a much higher one) It's true that the infamous Roko case seems to fit the bill, and Wei Dai's concerns make me at least think it's possible - but I would expect a greater risk in the opposite direction, of the quality of discussion being watered down by floods of comments on stupid topics, meaning that people who don't have time to sort through all the clutter may end up giving up participating in most discussions.
0Elithrion
Having spent a few years chatting on karma-less, completely unmoderated fora (spam would be deleted, but nothing else), I can say that this does not seem to occur. The pattern seems to be that when someone says something the forum considers stupid, this is remarked upon, and then they either attempt to improve to be more in line with the general opinion, or leave. People are not really gluttons for punishment - if a community does not welcome them, they (usually) will not continue participating in it - and the ratio of new users to old users is typically very low, so norms are maintained in the medium term (barring major news coverage or something). Although I guess that without the deletion policy, discussion may drift further away from rationality, so if you think most of that would be boring or mindkilling, the policy may be of value.
1handoflixue
Eliezer has pretty blatantly stated that the reasoning was #2.
2Dr_Manhattan
There is a large difference between running a private list and a more accessible forum associated with an organization (the logos on top).

The section on "information hazards" has an actual live link to TVTropes. Irony much?

9Eliezer Yudkowsky
Heh! Irony emphasized.
0Dorikka
This started me on a trope-walk, though I was eventually able to pull myself back to what I was doing. :P Irony indeed.

I agree with this policy.

When a certain episode of Pokemon contained a pattern of red and blue flashes capable of inducing epilepsy, 685 children were taken to hospitals, most of whom had seen the pattern not on the original Pokemon episode but on news reports showing the episode which had induced epilepsy.

At the very least, this needs a citation or two, since the following sources cast doubt on the story as presented:

WebMD's account

CNN's account

Snopes' account

And CSI's account, which includes the following:

At about 6:51, the flashing lights filled the screens. By 7:30, according to the Fire-Defense agency, 618 children had been taken to hospitals complaining of various symptoms.

News of the attacks shot through Japan, and it was the subject of media reports later that evening. During the coverage, several stations replayed the flashing sequence, whereupon even more children fell ill and sought medical attention. The number affected by this “second wave” is unknown.

And then goes on to argue that the large number of cases was due to mass hysteria.

[-][anonymous]150

Please link to the wiki page somewhere so that it's not an orphan. Official policies need to be readily accessible. Also consider making it visible on the main site somewhere, if at all possible.

Linked to the new page from Moderation tools and policies, linked to 'Moderation tools and policies' from the wiki sidebar (section 'Community').

4[anonymous]
Thank you.
1Eliezer Yudkowsky
This can be carried out by non-admins (at least the first part).
4Vladimir_Nesov
It usually doesn't happen.

As I read it, the policy does not address the basilisk and basilisk-type issues, which, while I don't think they should be moderated, are. "Information Hazards" specifically says "not mental health reasons."

[-]evand110

A true basilisk is not a mental health risk, or at least not only such. Whether one such has been found is a separate question (I lean toward no).

3A1987dM
IIRC, allegedly there were a few people with OCD having nightmares after reading that post by Roko.
3evand
My point was that it doesn't cause mental health problems, not that it can't trigger them. Perhaps that's a bad way to put it. If it does, there's something beyond the information hazard going on, either an existing problem being triggered, or a multiple hazard. As I understand it, a basilisk is hazardous because you know the argument, without it needing to corrupt your reasoning abilities. Roko's is alleged to be hazardous even to a rational agent. (I don't think it is, and I think censoring it prevents an interesting debate about why. I don't plan to say any more, given the existing censorship policies. If this is already too much, please let me know and I will edit accordingly.)
0Username
Quantum roulette is a possible candidate.
4Manfred
Well, the "LW basilisk" just turned out to be a knife sharp enough to cut yourself with. And sometimes you need sharp knives.
3wedrifid
It does, inasmuch as it includes the "because Eliezer says so" clause. This particular entry makes all the others more or less redundant. This is perhaps better than only having the "Information Hazard" clause, because Eliezer deleting something based on "Eliezer says so" is at least coherent and unambiguous. It doesn't matter whether a post by Roko is actually dangerous; the "says so" clause can still cover it, and we can just roll our eyes and tolerate Eliezer's quirks.
3RobertLumley
Well, his attempt here is to lay out a bit more than "because Eliezer says so" as a reason.
[-]Emile100

I suspect a good deal of angst around the topic has been from people seeing the issues in online communities as symbolic of real-world issues - opposing policies not because they are bad for an online community, but because they would be bad if applied by a real-world government to a real-world nation; real-world governments come to mind because we have reasons to care more strongly about them, and we hear much more about them. But there are important differences! The biggest is that you can easily leave an online community any time you're not happy about it. I don't think an online community is more similar to a nation than it is to a bridge club, or a company, or a supermarket, or the people making an encyclopedia.

I don't think the concern about the symbolism of censorship is completely wrong; it's quite possible that China could argue that real-world censorship is important for the same reasons it is in online communities!

Somewhat off-topic, but this makes me think that maybe school should teach a bit about "online history" - the history of Usenet and Wikipedia for example.

This seems like a good deletion policy, but doesn't cover all the actual deletions that have been threatened. Edit: specifically, the policy of allowing certain parties to ban direct refutations of their arguments (edit2: from particular users).

At the end, the policy says that the policy does not force the mods to delete anything. Perhaps it should in the same breath also say that it does not prevent them from deleting anything. The judgement of the mods and admins is final and above the policy; the purpose of the policy is to inform them and the readership of the general principles that will be applied.

I was asked to post the following by an anonymous member.

There is a very big issue which this new policy fails to address:

Self defense is a widely advocated legal right in most jurisdictions. For instance, if someone is about to press a button that will activate a bomb which would kill you, and you have no other means of stopping them, in many jurisdictions you have a right to shoot them. Even when the offending party is not legally at fault (e.g. is insane).

This right puts extra burden of moral responsibility on the people that make certain claims. If s

... (read more)
0drethelin
Regardless of whether the authors "accept" this moral burden, to "indicate" that they do would be unwise. If you can get in serious trouble for saying something, the public statements of smart people are a lot less evidence for what they actually think on that topic.

I agree with this policy.

[-]gjm80

Is the Pokemon story actually true? Casual googling suggests probably not, but I haven't investigated carefully enough to have a very strong opinion. Specifically, I didn't find corroboration of the claim that most of the children who went to hospital had seen news reports rather than the original programme.

7ArisKatsaris
This just says that some of the children were stricken later -- if I had to guess, I'd say that the vast majority were affected during the actual show.
0Eliezer Yudkowsky
So noted. Will try to remember to edit at some point.
-4arundelo
"[...] 'Pikachu,' a rat-like creature [...]"

That looks quite wall-of-text-y; it could be made more concise. Also, “We live in a society” -- “we” who? Not all LW users are from the US, or even from the Anglosphere, or even from the Western world. Even though probably every LWer comes from some society with some stupid laws, that sentence still sounds kind of off to me.

[-]Shmi70

It's nice to have written ground rules, even if they are basically common sense.

I think this seems like a basically fine policy.

I will also say that my own experience being a moderator is firmly in agreement with http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/ , and thus in opposition to those who would rather see a totally hands-off approach to moderation.

Why would this post need to be deleted?

Why would this post need to be deleted?

Because people can reply to it and some replies are disagreements.

9Eugine_Nier
So, there might be comments on LW of people disagreeing with Eliezer's policy. The horror.
9Multiheaded
Nah, he likely means that the comments might become so full of censorable examples that the entire branch of discussion would get tainted. I hope not. (I'm moderately against the tightening of censorship policy, BTW, but I understand Eliezer's reasoning, and I'm fine with it.)
[-][anonymous]30

I agree with this policy. It sounds totally benign and ordinary.

I haven't been particularly encouraged to try responding to comments, either.

If you mean comment karma, consider that in cases where people appreciate your responses but strongly disagree with their content, they will downvote you instinctively, as readily as they would furrow their brows: it's an immediately available, low-effort way to scratch the itch of dissenting feelings. Since downvotes seem to give you cold-stabbies, but don't make you reevaluate your positions, instinct-downvotin... (read more)

[This comment is no longer endorsed by its author]
3[anonymous]
Indeed, and we (the LW community) have to learn to tell the difference between deliberate trolls and misguided rationalists for our moderation to be effective. In the same way that replying to a troll is a mistake in that it feeds their attention craving, not replying to a wrong non-troll can be a mistake in that they don't notice their error. Maybe a lower downvote limit (4x karma) would help break the aforementioned habit.
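For concreteness, the cap being suggested might look something like this. This is a sketch under the assumption that the limit is a simple multiple of a user's karma; the function and parameter names are mine, not anything in the site's actual code.

    # Sketch of a karma-proportional downvote cap: a user may cast at
    # most multiplier * karma total downvotes. The comment above
    # suggests 4x; lowering the multiplier tightens the cap.
    def may_downvote(karma: int, downvotes_cast: int, multiplier: int = 4) -> bool:
        return downvotes_cast < multiplier * max(karma, 0)

    # Example: a user with 100 karma who has cast 399 downvotes may
    # still downvote; at 400 they may not.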
5Epiphany
Then there's the possibility that someone enjoys intentionally pretending to be clueless as a means of trolling and further enjoys that it disrupts people's instinct to provide guidance to misguided rationalists.
-1[anonymous]
That would be incredibly hard on the moderators. Thankfully, being smart enough to think of that and dumb enough to be a troll isn't a very plausible interval of human intellect.
6Epiphany
Unfortunately, sometimes gifted people are trolls.

I would repeat the thing about not binding at the top.

Well...

I'm upset by this.

Not sure why, exactly, but yeah, definitely upset by this. Just felt like sharing.

3Luke_A_Somers
If you could figure that out, that would be helpful.
5pleeppleep
Intuitive gut reaction. If I had an argument to make I would have said so. Any case I make would have been formed from backtracking from my initial feeling, and I'm probably not the only commenter here arguing based on an "ick" or "yay" gut reaction to the idea of censorship. I thought it was worth pointing out.
3Epiphany
As I see it, this is sort of like that quote on truth that goes something like "You may as well acknowledge the truth - you're already dealing with it." Censorship was already happening on LessWrong. Now that Eliezer is making an effort to share some of his decision-making process, there is less to fear in a way since you get to have that additional info for guessing what he's likely to do. Fear of the unknown can feel a lot worse than fear of the known.
2pleeppleep
I think you mean the Litany of Gendlin, and I believe some of these rules are being newly implemented, but I could be wrong about that. He can run his site any way he wants, and most of the ideas here are reasonable precautions given his values. That doesn't change the fact that I intuitively don't like them when I read them, and that gut reaction (or possibly its opposite) is probably shared with others here, who probably allow it to color their arguments one way or the other. Just something to keep in mind, is all.
-2Epiphany
Oh thank you. I kept wondering what that quote was. Oh, that is a good point. I was trying to make you feel better.
2[anonymous]
Status quo bias: I'm reasonably sure that if this policy had been in place from Day 1, very few people would have given it a second thought.
2[anonymous]
I remember that one way to combat status quo bias is re-framing. I am about to read the new deletion policy for the first time, but I am going to consciously frame it as "this is a deletion policy already in place for a site I am considering joining" rather than "this is a change to a deletion policy for a site I have already joined."

[Goes to read the policy]

In that frame, I would like the deletion policy, and it wouldn't otherwise discourage me from joining the site. I would appreciate that the moderators would be taking moderation seriously, as opposed to some other sites I know of. In particular, the example about academic conferences is a great illustration of the argument.

My only concern is the broad language used under the sections "Prolific trolls" and "Trollfeeding"; the policy describes the commenters it covers only in qualitative terms. Can the policy be amended to quantify those qualitative standards? Or, if for practical purposes we can't quantify those standards, then include a sentence to emphasize that interpretation of the standard is at the moderator's individual discretion.

LessWrong is focused on rationality and will remain focused on rationality. There's a good deal of side conversation which goes on and this is usually harmless. Nonetheless, if we ask people to stop discussing some other topic instead of rationality, and they go on discussing it anyway, we may enforce this by deleting posts, comments, or comment trees.

This has always been the LW mission, and it's true that some threads are not at all on subject. And then it makes sense to delete them if their net value is even slightly negative, perhaps even if they are... (read more)

I see no definition for the word troll. It seems like a thing that should be obvious, but I've seen people using the word "troll" to describe people who are simply ignorant. I think I'm also picking up on a trend where, if a comment is downvoted, it is considered trolling even when it is simply an unpopular comment by an otherwise likable user. LessWrong seems to use a broader definition of the word "trolling" than I am used to. If you guys have your own twist on "trolling", it would be good to add LessWrong's definition to the wiki.

4Emile
I don't think a formal definition of the word "troll" would be useful; the term is used somewhat informally for the general blob of "problematic users" - trolls, idiots, cranks, aggressive and self-centered users, people who won't shut up about their pet topic, etc. - the borders are somewhat fuzzy, and any attempt to formalize them is likely to be too broad or too narrow. Would you be able to properly formalize the kind of behavior you don't want on a website you run, without being too broad or too narrow? "Troll" points at an unambiguous example of the class of behaviors to be discouraged, but if the policies hit a broader target and also discourage non-trolling obnoxious cranks and idiots, that's a feature, not a bug.

Incidentally, I agree that using "trolling" to describe any downvoted comment (like the "troll toll") is somewhat unfortunate; many downvoted comments are from users who sincerely want to convince everybody that if they would stop being blinded by politically correct groupthink, they would recognize that lizard-men are controlling the government. But then, "troll toll" has a nice ring to it.
-4Epiphany
I can see how this would be more useful from the perspective of the person doing the banning, but I don't see why it would be useful from the perspective of the person who is attempting to avoid being banned. Flexible for one purpose, too vague for the other.

Somebody has probably already done so. Not perfectly, of course. But they've probably already done so. There might even be a description of undesired behavior in an open source context, either as part of a free legal terms of service agreement, or as part of a piece of open source software. It is quite possible that a good free description has already been written and just needs editing. It's also possible to do better than be flexible/vague and provide a list of behaviors (such as the one you created above) that briefly describes the main concerns, without it being perfect, and simply aim to make an improvement on flexible/vague.

The problem is that people with idiotic ideas do not know they are being idiotic, and I think that although some cranks do know that they're wrong and are content trying to scam people, other cranks are just as clueless as their customers, and have no idea that what they're selling is a ripoff. For instance: I'm not religious, but do I consider a priest a crank? No. I consider a priest somebody who genuinely believes the ideas they're selling, not somebody intentionally deceiving people in order to collect donation money. For this reason, using the words "cranks" and "idiots" is probably not likely to work - something like "If you don't bother to support your points with rational arguments and don't update and keep bothering us, we'll boot you." would be more likely to help them realize it's targeted at them.
0Emile
I agree with most of what you say here; there are probably some places where "troll" could have been replaced by something more precise in a way that would be more useful. I agree that it's important to help "borderline problematic users" mend their ways, but I don't think the deletion policy is the best place to do that; a precise and detailed deletion policy risks increasing the amount of nitpicking over whether such-and-such moderator action was really justified by the rules (even if those "rules" are actually just said moderator trying to explain by what principles he acts, not a binding legal document!), or nitpicking about whether such-and-such hypothetical case should be banned or not; neither of those conversations is something I'm particularly interested in reading. So I think it may be more efficient to help good-faith users by improving welcome pages, or talking to them in welcome threads, etc.
-2Epiphany
The not wanting to nitpick is a good point. I don't know whether a more specific definition of troll would necessarily result in more nitpicking. If readers take "troll" by the stereotypical definition (like what ArisKatsaris provided over here), and then somebody gets deemed a troll and censored for saying idiotic things without an intent to annoy (or for some other reason not typically associated with the stereotypical troll), then this could spark controversy, and you still get the nitpicking conversation. Verbiage like "anybody who trolls, but not limited to that" or "we think trolls are this, that, and the other, but not limited to that" may make any nitpicking conversations rather short. "We said it wasn't limited to that. End of conversation."
0ArisKatsaris
Trolls are generally people who post with the hope of provoking a negative reaction (e.g. negative responses, flames, downvotes, censorship, bans). Identifying trolls is often a harder job than defining them.
0Eugine_Nier
So does asking for criticism of your argument count as trolling?
0ArisKatsaris
There's a difference between asking for criticism of a post/argument that you nonetheless hope to be good, and intentionally making a bad argument so that you will be criticized. I think the difference I'm talking about is well understood.
0Eugine_Nier
Basically, would Socrates be considered a troll?
-1Epiphany
Thanks. That looks like the stereotypical definition of troll to me. Is it that you're saying LessWrong does not use the word "troll" differently, and the ambiguity is just due to people having a hard time figuring out who is a troll?
0ArisKatsaris
'LessWrong' is composed of many people. I'm sure that some use it the way I use it, and some have different definitions. I don't think that LessWrong differs in this respect from any other forum or community.