Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users and it seems good for the LessWrong userbase to have the opportunity to evaluate it and respond. I'm stickying this post for a day-or-so.
Recently there's been a series of posts and comment back-and-forth between Said Achmiz and Duncan Sabien, which escalated enough that it seemed like site moderators should weigh in.
For context, here's a quick recap of recent relevant events as I'm aware of them. (I'm glossing over many details that are relevant, but getting everything exactly right is tricky.)
- Duncan posts Basics of Rationalist Discourse. Said writes some comments in response.
- Zack posts "Rationalist Discourse" Is Like "Physicist Motors", in whose comments Duncan and Said argue some more, and Duncan eventually says "goodbye", which I assume coincides with him banning Said from commenting further on Duncan's posts.
- I publish LW Team is adjusting moderation policy. Lionhearted suggests "Basics of Rationalist Discourse" as a standard the site should uphold. Paraphrasing here, Said objects to a post being set as the site standard if not all non-banned users can discuss it. More discussion ensues.
- Duncan publishes Killing Socrates, a post about a general pattern of LW commenting that alludes to Said but doesn't reference him by name. Commenters other than Duncan do bring up Said by name, and the discussion gets into "is Said net positive/negative for LessWrong?" in a discussion section where Said can't comment.
- @gjm publishes On "aiming for convergence on truth", which further discusses/argues a principle from Basics of Rationalist Discourse that Said objected to. Duncan and Said argue further in the comments. I think it's a fair gloss to say "Said makes some comments about what Duncan did, which Duncan says are false enough that he'd describe Said as intentionally lying about them. Said objects to this characterization" (although exactly how to characterize this exchange is maybe a crux of discussion)
LessWrong moderators got together for ~2 hours to discuss this overall situation, and how to think about it both as an object-level dispute and in terms of some high level "how do the culture/rules/moderation of LessWrong work?".
I think we ended up with fairly similar takes, but getting to the point where we all agree 100% on what happened and what to do next seemed like a longer project, and we each had subtly different frames on the situation. So, some of us (at least Vaniver and I, maybe others) are going to start by posting some top-level comments here. People can weigh in on the discussion. I'm not 100% sure what happens after that, but we'll reflect on the discussion and decide whether to take any high-level mod actions.
If you want to weigh in, I encourage you to take your time even if there's a lot of discussion going on. If you notice yourself in a rapid back and forth that feels like it's escalating, take at least a 10 minute break and ask yourself what you're actually trying to accomplish.
I do note: the moderation team will be making an ultimate call on whether to take any mod actions based on our judgment. (I'll be the primary owner of the decision, although I expect if there's significant disagreement among the mod team we'll talk through it a lot). We'll take into account arguments various people post, but we aren't trying to reflect the wisdom of crowds.
So you may want to focus on engaging with our cruxes rather than with what other random people in the comments think.
At the risk of guessing wrong, and perhaps typical-mind-fallacying, I imagine that you're [rightly?] feeling a lot of frustration, exasperation, and even despair about moderation on LessWrong. You've spent dozens of hours (more?) and tens of thousands of words trying to make LessWrong the garden you think it ought to be (and to protect yourself here against attackers), and just to try to uphold what are, indeed, basic standards for truthseeking discourse. You've written that some small validation goes a long way, so this is me trying to say that I think your feelings have a helluva lot of validity.
I don't think that you and I share exactly the same ideals for LessWrong. PerfectLessWrong!Ruby and PerfectLessWrong!Duncan would be different (or heck, even just the VeryGoodLessWrongs), though I'm also pretty sure that you'd be much happier with my ideal: you'd think it was pretty good, if not perfect. Respectable, maybe adequate. A garden.
And I'm really sad that the current LessWrong feels really really far short of my own ideals (and Ray of his ideals, and Oli of his ideals), etc. And not just short of a super-amazing-lofty-ideal, also short of a "this place is really under control" kind of ideal. I take responsibility for it not being so, and I'm sorry. I wouldn't blame you for saying this isn't good enough and wanting to leave[1], there are some pretty bad flaws.
But sir, you impugn my and my site's honor. This is not a perfect garden, but it is also not a jungle. And there is an awful lot of gardening going on. I take it very seriously that LessWrong is not just any place, and it takes ongoing work to keep it so. This is approximately my full-time job (and that of others too), and while I don't work 80-hour weeks, I feel like I put a tonne of my soul into this site.
Over the last year, I've been particularly focused on what I suspect are existential threats to LessWrong (not even to the ideal, just to the decently-valuable thing we have now). I think this very much counts as gardening. The major one over the last year is how to both have all the AI content (and I do think AI is the most important topic right now) and not have it eat LessWrong and turn it into the AI website rather than the truth-seeking/effectiveness/rationality website, which is actually what I believe is its true spirit[2]. So far, I feel like we're still failing at this. On many days, the Frontpage is 90+% AI posts. It has not been a trivial problem, for many reasons.
The other existential problem, beyond the topic, that I've been anticipating for a long time and that is now heating up, is the deluge of new users flowing to the site because of the rising prominence of AI. Moderation is currently our top focus, but even before that, the first thing we do every day when the team gets in in the morning is review every new post, all first-time submissions from users, and the activity of users who are getting a lot of downvotes. It's not exactly fun, but we do it basically every day[3]. In the interests of greater transparency and accountability, we will soon build a Rejected Content section of the site where you'll be able to view the content we didn't let go live, and I predict that will demonstrate just how much this garden is getting tended, and that counterfactually the quality would be a lot, lot worse. You can see here a recent internal document that describes my sense of priorities for the team.
I think the discourse norms and bad behavior (and I'm willing to say now, in advance of my more detailed thoughts, that there's a lot of badness to how Said behaves) are also serious threats to the site, and we do give those attention too. They haven't felt like the most pressing threats (or, for that matter, opportunities) recently, and I could be making a mistake there, but we do take them seriously. Our focus (which I think has a high opportunity cost) has been turned to the exchanges between you and Said this week; plausibly you've done us a service by drawing our attention to behavior we should be deeming intolerable, and it's easily 50-100 hours of team attention.
It is plausible that the LessWrong team has made a mistake in not prioritizing this stuff more highly over the years (it has been years, though Said and Zack and others have in fact received hundreds of hours of attention), and there are definitely particular projects that I think turned out to be misguided and less valuable than marginal moderation would have been. But I'll claim that it was definitely not an obvious mistake that we haven't addressed the problems you're most focused on.
It is actually on my radar, and I've actively wanted for a while, a system that reliably gets the mod team to show up and say "cut it out" sometimes. I suspect that's what should have happened a lot earlier on in your recent exchanges with Said. I might have liked to say "Duncan, we the mods certify that if you disengage, it is no mark against you" or something. I'm not sure. Ray mentioned the concept of a "Maslow's Hierarchy of Moderation", and I like that idea, and would like to get soon to the higher level where we're actively intervening in these cases. I regret that I, in particular on the team, am not great at dropping what I'm doing to pivot when these threads come up; perhaps I should work on that.
I think a claim you could make is that the LessWrong team should have hired more people so they could cover more of this. Arguing about why we haven't (or why Lightcone as a whole didn't keep more team members on the LessWrong team) is a bigger conversation. I think things would have been worse if LessWrong had been bigger most of the time, and, barring an unusually good candidate, it'd be bad to hire right now.
All this to say: this garden has a lot of shortcomings, but the team works quite hard to keep it at least as good as it is and to try to make it better. Fair enough if it doesn't meet your standards, or isn't how you'd do it; perhaps we're not all that competent. Fair enough.
(And also, you've had a positive influence on us, so your efforts are not completely in vain. We do refer to your moderation post/philosophy even if we haven't adopted it wholesale, and we make use of many of the concepts you've crystallized. For that I am grateful. Those are contributions I'd be sad to lose, but I don't want to push you to offer them to us if doing so is too costly for you.)
I will also claim, though, that a better version of Duncan would be better able to tolerate the shortcomings of LessWrong and improve it too; that even if your efforts to change LW aren't working enough, there are efforts you could make on yourself that would make you better, and better able to benefit from the LessWrong that is.
[2] Something like the core identity of LessWrong is rationality. In alternate worlds, that is the same, but the major topic could be something else.

[3] Over the weekend, some parts of the reviewing get deferred till the work week.