While I certainly have thoughts on all of this, let me point out one aspect of this system which I think is unusually dangerous and detrimental:
The ability (especially for arbitrary users, not just moderators) to take moderation actions that remove content, or prevent certain users from commenting, without leaving a clearly and publicly visible trace.
At the very least (if, say, you’re worried about something like “we don’t want comments sections to be cluttered with ‘post deleted’”), there ought to be a publicly viewable log of all moderation actions. (Consider the lobste.rs moderation log feature as an example of how such a thing might work.) This should apply to removal of comments and threads, and it should definitely also apply to banning a user from commenting on a post / on all of one’s posts.
Let me say again that I consider a moderation log to be the minimally acceptable moderation accountability feature on a site like this—ideally there would also be indicators in-context that a moderation action has taken place. But allowing totally invisible / untraceable moderation actions is a recipe for disaster.
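To make the proposal concrete: a public moderation log is essentially an append-only list of records, one per action. The sketch below is purely illustrative; the class name, field names, and action strings are assumptions, not anything LW2 or lobste.rs actually implements:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModLogEntry:
    """One publicly visible record of a moderation action (hypothetical fields)."""
    timestamp: datetime
    moderator: str   # who acted: a site moderator or the post's author
    action: str      # e.g. "delete_comment", "ban_user_from_post"
    target: str      # affected comment id or user name
    reason: str      # optional public rationale

# An append-only log that anyone could browse.
log: list[ModLogEntry] = []
log.append(ModLogEntry(datetime.now(timezone.utc), "author_1",
                       "delete_comment", "comment_42", "off-topic"))
```

The point of the structure is that deletions and bans leave a durable, inspectable trace even when the comment thread itself stays clean.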
Edit: For another example, note Scott’s register of bans/warnings, which ...
I'm also mystified as to why traceless deletion/banning is a desirable property to have on a forum like this. But (with apologies to the moderators) I think consulting the realpolitik will spare us the futile task of litigating these issues on the merits. Consider it instead a fait accompli, with the objective of attracting a particular writer LW2 wants by catering to his whims.
For whatever reason, Eliezer Yudkowsky wants the ability to block commenters and to tracelessly delete comments on his own work, and he's been quite clear this is a condition for his participation. Lo and behold, precisely these features have been introduced, with suspiciously convenient karma thresholds which allow EY (at his current karma level) to tracelessly delete/ban on his own promoted posts, yet exclude (as far as I can tell) the great majority of other writers with curated/front-page posts from being able to do the same.
Given the popularity of EY's writing (and that LW2 wants to include his future work), the LW2 team are obliged to weigh the (likely detrimental) addition of these features against the likely positives of his future posts. Going for the latter is probably the right judgement call to make, but let's not pretend it is a principled one: we are, as the old saw goes, just haggling over the price.
Yeah, I didn't want to make this a thread about discussing Eliezer's opinion, so I didn't put that front and center, but Eliezer only being happy to crosspost things if he has the ability to delete things was definitely a big consideration.
Here is my rough summary of how this plays into my current perspective on things:
1. Allowing users to moderate their own posts and set their own moderation policies on their personal blogs is something I wanted before we even talked to Eliezer about LW2 the first time.
2. Allowing users to moderate their own front-page posts is not something that Eliezer requested (I think he would be happy with them just being personal posts), but is a natural consequence of wanting to allow users to moderate their own posts, while also not giving up our ability to promote the best content to the front-page and to curated.
3. Allowing users to delete things without a trace was a request by Eliezer, but is also something I thought about independently to deal with stuff like spam and repeated offenders (for example, Eugine has created over 100 comments on one of Ozy's posts, and you don't want all of them to show up as deleted stubs). I expect we wouldn't have built the feature as it currently stands without Eliezer, but I hadn't actually considered a moderation logs page like the one Said pointed out. I actually quite like that idea, and don't expect Eliezer to object too much to it. So that might be a solution that makes everyone reasonably happy.
I actually quite like the idea of a moderation log, and Ben and Ray also seem to like it. I hadn't really considered that as an option, and my model is that Eliezer and other authors wouldn't object to it either, so this seems like something I would be quite open to implementing.
I really like the moderation log idea - I think it could be really good for people to have a place where they can go if they want to learn what the norms are empirically. I also propose there be a similar place which stores the comments explaining why posts are curated.
(Also note that Satvik Beri said to me I should do this a few months ago and I forgot and this is my fault.)
Yeah, I agree it doesn't create the ideal level of transparency. In my mind, a moderation log is more similar to an accounting solution than an educational solution, where the purpose of accounting is not something that is constantly broadcasted to the whole system, but is instead used to backtrack if something has gone wrong, or if people are suspicious that there is some underlying systematic problem going on. Which might get you a lot of the value that you want, for significantly lower UI-complexity cost.
Dividing the site into smaller sub-fiefs where individual users have ultimate moderation power seems to have been a big part of why Reddit (and to some extent, Facebook) got so successful, so I have high hopes for this model.
I don't have an opinion on the moderation policy, but I did want to say thanks for all the hard work in bringing the new site to life.
LessWrong 1.0 was basically dead, and 2.0 is very much alive. Huge respect and well-wishes.
Just want to say this moderation design addresses pretty much my only remaining aversion to posting on LW and I will be playing around with Reign of Terror if I hit the karma. Also really prefer not to leave public traces.
My primary desire to remove the trace is that there are characters so undesirable on the internet that I don't want to be reminded of their existence every time I scroll through my comments section, and I certainly don't want their names to be associated with my content. Thankfully, I have yet to receive any comments anywhere close to this level on LW, but take a quick browse through the bans section of SlateStarCodex and you'll see they exist.
I am in favor of a trace if it were on a moderation log that does not show up on the comment thread itself.
Here's a hypothesis for the crux of the disagreement in this comments section:
There's a minor identity crisis about whether LW is/should primarily be a community blog or a public forum.
If it is to be a community blog, then the focus is in the posts section, and the purpose of moderation should be to attract all the rationality bloggers to post their content in one place.
If it is to be a public forum/reddit (I was surprised at people referring to it like so), then the focus is in the comments section, and the main purpose of moderation should be to protect all viewpoints and keep a bare minimum of civility in a neutral and open discussion.
I quite dislike the idea of people being able to moderate their content in this fashion - that just isn't what a public discussion is in my view - but thanks for being transparent about this change.
What was the logic behind having a karma threshold for moderation? What were you afraid would happen if low karma people could moderate, especially on their personal blog?
Does allowing users to moderate mean the moderation team of the website will not also be moderating those posts? If so, that seems to have two implications: one, this eases the workload of the moderation team; two, this puts a lot more responsibility on the shoulders of those contributors.
Ah, sorry, it looks like I forgot to mention that in the post above. There is a checkbox you can check on your profile that says "I'm happy for LW site moderators to help enforce my policy", which makes it so that the sitewide moderators will try to help with your moderation.
We will also continue enforcing the frontpage guidelines on all frontpage posts, in addition to whatever guidelines the author has set up.
I'm still somewhat uncomfortable with authors being able to moderate front-page comments, but I suppose it could be an interesting experiment to see if they use this power responsibly or if it gets abused.
I think that there should also be an option to collapse comments (as on Reddit), instead of actually deleting them. I would suggest that very few comments are actually so bad that they need to be deleted; most of the time it's simply a matter of reducing the incentive to incite controversy in order to get more people replying to your comment.
Anyway, I'm really hoping that it encourages some of the old guard to post more of their content on Less Wrong.
If a comment of yours is ever deleted, you will automatically receive a PM with the text of your comment, so you don’t lose the content of your comment.
My intuition is that it would be better to let users see their own deleted posts in a grayed-out way, instead of going the route of sending a PM.
If there's a troll, sending them a PM that one of their posts got deleted creates a stronger invitation to respond. That goes especially for deletions without giving reasons.
In addition I would advocate that posts that are deleted ...
Will there be a policy on banned topics, such as e.g. politics, or will that be left to author discretion as part of moderation? Perhaps topics that are banned from promotion / front page (regardless of upvotes and comments) but are fine otherwise?
If certain things are banned, can they please be listed and defined more explicitly? This came up recently in another thread and I wasn't answered there.
I think this is extremely bad. Letting anyone, no matter how prominent, costlessly remove/silence others is toxic to the principle of open debate.
At minimum, there should be a substantial penalty for banning and deleting comments. And not a subtraction, a multiplication. My first instinct would be to use the fraction of users you have taken action against as a proportional penalty to your karma, for all purposes. Or, slightly more complex, take the total "raw score" of karma of all users you've taken action against, divide by the total "...
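The first variant proposed above (scale karma down by the fraction of users the author has taken action against) could be sketched like this. The function name and inputs are illustrative assumptions, not an actual LW2 mechanism:

```python
def moderation_penalty(karma: float, num_acted_against: int, total_users: int) -> float:
    """Hypothetical sketch: multiply karma by (1 - fraction of users
    the author has banned or deleted comments from), for all purposes."""
    if total_users == 0:
        return karma
    fraction = num_acted_against / total_users
    return karma * (1.0 - fraction)

# An author with 2000 karma who has acted against 10 of 100 commenters
# would have an effective karma of 1800 under this scheme.
effective = moderation_penalty(2000, 10, 100)
```

Because the penalty is multiplicative rather than a flat subtraction, high-karma authors pay proportionally more for heavy-handed moderation, which is the commenter's stated intent.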
[I will move this into meta in a few days, but this seemed important enough to have around on the frontpage for a bit]
Here is a short post with some of the moderation changes we are implementing. Ray, Ben, and I are working on some more posts explaining some of our deeper reasoning, so this is just a list with some quick updates.
Even before the start of the open beta, I intended to allow trusted users to moderate their personal pages. The reasoning I outlined in our initial announcement post was as follows:
“We want to give trusted authors moderation powers for the discussions on their own posts, allowing them to foster their own discussion norms, and giving them their own sphere of influence on the discussion platform. We hope this will both make the lives of our top authors better and will also create a form of competition between different cultures and moderation paradigms on Lesswrong.”
And I also gave some further perspectives on this in my “Models of Moderation” post that I posted a week ago.
We now finally got around to implementing the technology for this. But the big question on my mind while working on the implementation has been:
Ray, Ben, Vaniver, and I talked for quite a while about the pros and cons, and considered a bunch of perspectives, but the two major considerations on our minds were:
After a good amount of internal discussion, as well as feedback from some of the top content contributors on LW (including Eliezer), we settled on allowing users above 2000 karma to moderate their own frontpage posts, and allow users above 100 karma to moderate their personal blogs. This strikes me as the best compromise between the different considerations we had.
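The two thresholds described above amount to a simple permission check. This is a minimal sketch with assumed names; the actual site code surely looks different:

```python
# Karma thresholds as described in the post (the constant names are mine).
FRONTPAGE_MOD_THRESHOLD = 2000  # moderate your own frontpage posts
PERSONAL_MOD_THRESHOLD = 100    # moderate your own personal-blog posts

def can_moderate_own_post(karma: int, post_is_frontpage: bool) -> bool:
    """Return whether an author may moderate comments on their own post."""
    threshold = FRONTPAGE_MOD_THRESHOLD if post_is_frontpage else PERSONAL_MOD_THRESHOLD
    return karma >= threshold
```

Under this rule, a 1500-karma author could moderate their personal blog but not their frontpage posts.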
Here are the details about the implementation:
I also want to allow users to create private comments on posts, that are only visible to themselves and the author of the post, and allow authors to make comments private (as an alternative to deleting them). But that will have to wait until we get around to implementing it.
We tested this reasonably thoroughly, but there is definitely a chance we missed something, so let us know if you notice any weird behavior around commenting on posts, or using the moderation tools, and we will fix it ASAP.