Suggestion: I recommend sending people their deleted posts.
I find it annoying to spend the effort to type a post, only to have it disappear into a bit bucket. If you want it gone, that's your prerogative, but I think it is a breach of etiquette for a forum to destroy information created by a forum user.
Now I assume you found the original post a breach of etiquette, so you may feel that tit for tat is the right policy here. I'd consider an intentional breach of etiquette an unnecessary escalation.
You can still see your own banned comments on your user page. This might be false for posts, I'm not sure.
This seems like a good thing to do as a courtesy in cases where it seems reasonable.
If it were an actual policy, you'd want to put some limits on it, e.g. "if the post is longer than X words and/or contains something that was clearly meant to be intelligent thought."
This sounds like something that could be handled by a script, so as to be an utterly transparent process. In your role as a subreddit mod it wouldn't be so easy, but they have source access.
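To illustrate, here's a minimal sketch of what such a script might look like. Everything here is hypothetical: the function names, the `@example.org` address, and the word-count threshold are placeholders, not anything the site actually implements, and the mail transport is left out entirely.

```python
# Hypothetical sketch: when a post is deleted, send the author a copy.
# The hook name, address, and threshold below are illustrative placeholders.
from email.message import EmailMessage


def compose_deletion_notice(username: str, title: str, body: str) -> EmailMessage:
    """Build a courtesy email containing the deleted post's full text."""
    msg = EmailMessage()
    msg["Subject"] = f"Your post '{title}' was removed"
    msg["To"] = f"{username}@example.org"  # placeholder address scheme
    msg.set_content(
        f"Hi {username},\n\n"
        "A moderator removed your post. For your records, here is its text:\n\n"
        f"{body}\n"
    )
    return msg


def on_post_deleted(username: str, title: str, body: str, min_words: int = 50):
    """Only bother for posts long enough to represent real effort."""
    if len(body.split()) < min_words:
        return None
    return compose_deletion_notice(username, title, body)
```

The `min_words` cutoff corresponds to the "longer than X words" limit suggested above; the actual sending step (SMTP, site mail queue, etc.) would depend on the forum's infrastructure.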
Concrete suggestions:
1. Bring the policy statements to the forefront; put the lengthy "background" discussion of "free speech" vs. "walled gardens" and the like in a brief FAQ or discussion section at the end. The first line of the policy statement should be the one beginning "Most of the burden of moderation ..."
Reason: Most readers want to know what the policy is — so that should come first. Most of the people who want to argue about the theory of the policy are looking to have an enjoyably clever argument, which the "background" provides — so that should be there, but not in front.
2. Use formatting to emphasize the document's structure. As it stands, there's not enough visual structure for the eye to pick out the little numbers that indicate new points. More notably, the paragraph that separates the "more controversial" items looks structurally like it should be the explanation of the spam item.
3. Readers have heard of the common cases. Spam, harassment, and posting of personal information are things that lots of forums ban; LW is not unusual in this regard. In gist, if it's against Reddit's policy, it doesn't need a lo...
I own the "everything-list" Google Group, which has no explicit moderation policy, although I do block spam and the occasional completely off-topic post from newbies who seemingly misunderstood the subject matter of the forum. It worked fine without controversy or anything particularly bad happening, at least in the first decade or so of its existence, when I still paid attention to it. I would prefer if Eliezer also adopted an informal but largely "hands off" policy here. But looking at Eliezer's responses to recent arguments as well as past history, the disagreement seems to be due to some sort of unresolvable differences in priors/values/personality and not amenable to discussion. So I disagree but feel powerless to do anything about it.
Interesting. A couple hypotheses:
1) Admins overestimate the effect that certain policies have on behavior (they may underestimate random effects, or assign effects to the wrong policy); just like parents might overestimate the effect of parenting choices, or managers overestimate the impact of their decisions ("we did daily stand-up meetings, and the project was completed on time - the daily stand-up meetings must be the cause!").
2) Eliezer is more concerned about the public image of LessWrong (both because of how it reflects on CFAR and SIAI, and on the kind of people it may attract) than you are (were?) about the everything-list.
For what it's worth I'm fine with moderation of stupid things like discussing assassinations, and of banning obnoxious trolls and cranks and idiots, and the main reason to refrain from those kind of mod actions would be to avoid scaring naive young newcomers who might see it as an affront against Sacred Free Speech.
Your testimony of a case where you still have quality discussion with very light moderation makes me slightly less in favor of heavy-handed moderation.
(I'm not sure that the moderation here is becoming "stronger" recently, as opposed to merely a bit more explicit)
3) Eliezer's tolerance for "crazy" or stupid posts is so low that he's way more pissed off by even a small number of them existing than other people are.
It seems to me the occasional crazy idea posted here wouldn't reflect that badly on CFAR and SIAI, if they had a policy of "LW is an open forum and we're not responsible for other people's posts", especially if the bad ideas are heavily voted down and argued against, with the authors often apologizing and withdrawing their own posts.
For what it's worth I'm fine with moderation of stupid things like discussing assassinations, and of banning obnoxious trolls and cranks and idiots, and the main reason to refrain from those kind of mod actions would be to avoid scaring naive young newcomers who might see it as an affront against Sacred Free Speech.
No, the main reason is to avoid evaporative cooling and slippery slopes, a.k.a., the reasons free speech is such a sacred value.
Keep in mind Eliezer himself would be considered a crank by most "mainstream skeptics".
When a certain episode of Pokemon contained a pattern of red and blue flashes capable of inducing epilepsy, 685 children were taken to hospitals, most of whom had seen the pattern not on the original Pokemon episode but on news reports showing the episode which had induced epilepsy.
At the very least, this needs a citation or two, since the following sources cast doubt on the story as presented:
And CSI's account, which includes the following:
At about 6:51, the flashing lights filled the screens. By 7:30, according to the Fire-Defense agency, 618 children had been taken to hospitals complaining of various symptoms.
News of the attacks shot through Japan, and it was the subject of media reports later that evening. During the coverage, several stations replayed the flashing sequence, whereupon even more children fell ill and sought medical attention. The number affected by this “second wave” is unknown.
And then goes on to argue that the large number of cases was due to mass hysteria.
Please link to the wiki page somewhere so that it's not an orphan. Official policies need to be readily accessible. Also consider making it visible on the main site somewhere, if at all possible.
Linked to the new page from Moderation tools and policies, linked to 'Moderation tools and policies' from the wiki sidebar (section 'Community').
As I read it, the policy does not address the basilisk and basilisk-type issues, which, while I don't think they should be moderated, are. "Information Hazards" specifically says "not mental health reasons."
A true basilisk is not a mental health risk, or at least not only such. Whether one such has been found is a separate question (I lean toward no).
I suspect a good deal of angst around the topic has been from people seeing the issues in online communities as symbolic of real-world issues - opposing policies not because they are bad for an online community, but because they would be bad if applied by a real-world government to a real-world nation; real-world governments come to mind because we have reasons to care more strongly about them, and we hear much more about them. But there are important differences! The biggest is that you can easily leave an online community any time you're not happy about it. I don't think an online community is more similar to a nation than it is to a bridge club, or a company, or a supermarket, or the people making an encyclopedia.
I don't think the concern about the symbolism of censorship is completely wrong; it's quite possible that China could argue that real-world censorship is important for the same reasons it is in online communities!
Somewhat off-topic, but this makes me think that maybe school should teach a bit about "online history" - the history of Usenet and Wikipedia for example.
This seems like a good deletion policy, but doesn't cover all the actual deletions that have been threatened. Edit: specifically, the policy of allowing certain parties to ban direct refutations of their arguments (edit2: from particular users).
At the end, the policy says that the policy does not force the mods to delete anything. Perhaps it should in the same breath also say that it does not prevent them from deleting anything. The judgement of the mods and admins is final and above the policy; the purpose of the policy is to inform them and the readership of the general principles that will be applied.
I was asked to post the following by an anonymous member.
...There is a very big issue which this new policy fails to address:
Self defense is a widely advocated legal right in most jurisdictions. For instance, if someone is about to press a button that will activate a bomb which would kill you, and you have no other means of stopping them, in many jurisdictions you have a right to shoot them. Even when the offending party is not legally at fault (e.g. is insane).
This right puts extra burden of moral responsibility on the people that make certain claims. If s
Is the Pokemon story actually true? Casual googling suggests probably not, but I haven't investigated carefully enough to have a very strong opinion. Specifically, I didn't find corroboration of the claim that most of the children who went to hospital had seen news reports rather than the original programme.
That looks quite wall-of-text-y. It could be made more concise. Also, "We live in a society" -- "we" who? Not all LW users are from the US, or even from the Anglosphere, or even from the Western world. Even granting that probably every LWer comes from some society with some stupid laws, that sentence still sounds kind of off to me.
I think this seems like a basically fine policy.
I will also say that my own experience being a moderator is firmly in agreement with http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/ , and thus in opposition to those who would rather see a totally hands-off approach to moderation.
Why would this post need to be deleted?
Because people can reply to it and some replies are disagreements.
I agree with this policy. It sounds totally benign and ordinary.
I haven't been particularly encouraged to try responding to comments, either.
If you mean comment karma, consider that in the case where people appreciate your responses, but strongly disagree with their content, they will downvote you instinctively, as soon as they would furrow their brows: it's an immediately available, low effort way to scratch the itch of dissenting feelings. Since downvotes seem to give you cold-stabbies, but don't make you reevaluate your positions, instinct-downvotin...
Well...
I'm upset by this.
Not sure why, exactly, but yeah, definitely upset by this. Just felt like sharing.
LessWrong is focused on rationality and will remain focused on rationality. There's a good deal of side conversation which goes on and this is usually harmless. Nonetheless, if we ask people to stop discussing some other topic instead of rationality, and they go on discussing it anyway, we may enforce this by deleting posts, comments, or comment trees.
This has always been the LW mission, and it's true that some threads are not at all on subject. And then it makes sense to delete them if their net value is even slightly negative, perhaps even if they are...
I see no definition for the word troll. It seems like a thing that should be obvious, but I've seen people using the word "troll" to describe people who are simply ignorant. I think I'm also picking up on a trend where, if a comment is downvoted, it is considered trolling regardless of the fact that it was simply an unpopular comment by an otherwise likable user. LessWrong seems to use a broader definition of the word "trolling" than I am used to. If you guys have your own twist on "trolling" it would be good to add LessWrong's definition to the wiki.
http://wiki.lesswrong.com/wiki/Deletion_policy
This is my attempt to codify the informal rules I've been working by.
I'll leave this post up for a bit, but strongly suspect that it will have to be deleted not too long thereafter. I haven't been particularly encouraged to try responding to comments, either. Nonetheless, if there's something I missed, let me know.