Lots of new users have been joining LessWrong recently who seem more filtered for "interest in discussing AI" than for being bought into any particular standards for rationalist discourse. I think there's been a shift in this direction over the past few years, but it's gotten much more extreme in the past few months.
So the LessWrong team is thinking through "what standards make sense for 'how people are expected to contribute on LessWrong'?" We'll likely be tightening up moderation standards, and laying out a clearer set of principles so those tightened standards make sense and feel fair.
In the coming weeks we'll be thinking about those principles as we look over existing users, comments, and posts, asking "are these contributions making LessWrong better?"
Hopefully within a week or two, we'll have a post that outlines our current thinking in more detail.
Generally, expect heavier moderation, especially for newer users.
Two particular changes that should be going live within the next day or so (a rough sketch of both gates follows the list):
- Users will need at least N karma in order to vote, where N is probably somewhere between 1 and 10.
- Comments from new users won't display by default until they've been approved by a moderator.
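To make the two gates concrete, here's a minimal sketch of the kind of checks involved. This is purely illustrative: the `MIN_KARMA_TO_VOTE` constant, the `User`/`Comment` shapes, and the function names are hypothetical stand-ins, not the actual LessWrong/ForumMagnum implementation.

```typescript
// Illustrative sketch only. The types, names, and threshold value below are
// hypothetical, not the actual LessWrong/ForumMagnum code.

interface User {
  karma: number;
  isNewUser: boolean;
}

interface Comment {
  authorIsNewUser: boolean;
  approvedByModerator: boolean;
}

// Assumed threshold; the announcement only says N will be somewhere between 1 and 10.
const MIN_KARMA_TO_VOTE = 5;

// Gate 1: only users with at least N karma may vote.
function canVote(user: User): boolean {
  return user.karma >= MIN_KARMA_TO_VOTE;
}

// Gate 2: comments from new users stay hidden until a moderator approves them.
function isCommentVisibleByDefault(comment: Comment): boolean {
  return !comment.authorIsNewUser || comment.approvedByModerator;
}
```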
Broader Context
LessWrong has always had a goal of being a well-kept garden. We have higher and more opinionated standards than most of the rest of the internet. In many cases we treat some issues as more "settled" than the rest of the internet does, so that instead of endlessly rehashing the same questions we can move on to solving more difficult and interesting ones.
What this translates to in terms of moderation policy is a bit murky. We've been stepping up moderation over the past couple months and frequently run into issues like "it seems like this comment is missing some kind of 'LessWrong basics', but 'the basics' aren't well indexed and easy to reference." It's also not quite clear how to handle that from a moderation perspective.
I'm hoping to get "the basics" better indexed, but meanwhile it's just generally the case that if you participate on LessWrong, you are expected to have absorbed the set of principles in The Sequences (AKA Rationality A-Z).
In some cases you can get away without doing that while participating in local object-level conversations, picking up norms along the way. But if you're getting downvoted and you haven't read them, it's likely you're missing concepts or norms that are treated as basic background on LessWrong. I recommend starting with the Sequences Highlights; note that you don't need to read the Sequences in order, and can instead pick posts that seem fun and jump around based on your interests.
(Note: it's of course pretty important to be able to question all your basic assumptions. But I think doing that productively requires actually understanding why the current background assumptions are the way they are, and engaging with the object-level reasoning.)
There's also a straightforward question of quality. LessWrong deals with complicated questions, and it's a place for making serious progress on them. One model I have of LessWrong is something like a university: there's a role for undergrads, who are learning lots of things but aren't yet expected to contribute to the cutting edge, and there are grad students and professors who conduct novel research. But all of this is predicated on some barrier to entry. Not everyone gets accepted to any given university; you need some combination of intelligence, conscientiousness, etc. to get accepted in the first place.
See this post by habryka for some more models of moderation.
Ideas we're considering, and questions we're trying to answer:
- What quality threshold does content need to hit in order to show up on the site at all? When is the right solution to approve but downvote immediately?
- How do we deal with low-quality criticism? There's something sketchy about rejecting criticism, and there are obvious hazards of groupthink. But a lot of criticism isn't well thought out, or rehashes ideas we've already spent a ton of time discussing, and doesn't feel very productive.
- What are the actual rationality concepts LWers are basically required to understand to participate in most discussions? (for example: "beliefs are probabilistic, not binary, and you should update them incrementally"; see the short worked example after this list)
- What philosophical and/or empirical foundations can we take for granted to build on (e.g. reductionism, meta-ethics)?
- How much familiarity with the existing discussion of AI should you be expected to have in order to participate in comment threads on the topic?
- How does moderation of LessWrong intersect with moderating the Alignment Forum?
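As one concrete illustration of the "update incrementally" example above, here is the standard odds form of Bayes' rule; the numbers in the comment (a 3:1 prior and a likelihood ratio of 2) are purely hypothetical.

```latex
% Odds form of Bayes' rule: posterior odds = prior odds times likelihood ratio.
\[
\frac{P(H \mid E)}{P(\neg H \mid E)}
  \;=\;
\frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
\]
% Hypothetical numbers: prior odds of 3:1 for H (P(H) = 0.75), and evidence E
% that is twice as likely if H is true. Posterior odds = 3 * 2 = 6:1,
% so P(H | E) = 6/7 (about 0.86): an incremental update, not a jump to certainty.
```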
Again, hopefully in the near future we'll have a more thorough writeup about our answers to these. Meanwhile it seemed good to alert people this would be happening.