Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users and it seems good for the LessWrong userbase to have the opportunity to evaluate it and respond. I'm stickying this post for a day-or-so.
Recently there's been a series of posts and comment back-and-forth between Said Achmiz and Duncan Sabien, which escalated enough that it seemed like site moderators should weigh in.
For context, here's a quick recap of recent relevant events as I'm aware of them. (I'm glossing over many details that are relevant, since getting everything exactly right is tricky.)
- Duncan posts Basics of Rationalist Discourse. Said writes some comments in response.
- Zack posts "Rationalist Discourse" Is Like "Physicist Motors", in which Duncan and Said argue some more. Duncan eventually says "goodbye", which I assume coincides with banning Said from commenting further on Duncan's posts.
- I publish LW Team is adjusting moderation policy. Lionhearted suggests "Basics of Rationalist Discourse" as a standard the site should uphold. Paraphrasing here, Said objects to a post being set as the site standards if not all non-banned users can discuss it. More discussion ensues.
- Duncan publishes Killing Socrates, a post about a general pattern of LW commenting that alludes to Said but doesn't reference him by name. Commenters other than Duncan do bring up Said by name, and the discussion gets into "is Said net positive/negative for LessWrong?" in a discussion section where Said can't comment.
- @gjm publishes On "aiming for convergence on truth", which further discusses/argues a principle from Basics of Rationalist Discourse that Said objected to. Duncan and Said argue further in the comments. I think it's a fair gloss to say "Said makes some comments about what Duncan did, which Duncan says are false enough that he'd describe Said as intentionally lying about them. Said objects to this characterization" (although exactly how to characterize this exchange is maybe a crux of discussion)
LessWrong moderators got together for ~2 hours to discuss this overall situation, and how to think about it both as an object-level dispute and in terms of some high level "how do the culture/rules/moderation of LessWrong work?".
I think we ended up with fairly similar takes, but getting to the point where we all agree 100% on what happened and what to do next seemed like a longer project, and we each had subtly different frames on the situation. So, some of us (at least Vaniver and I, maybe others) are going to start by posting some top-level comments here. People can weigh in on the discussion. I'm not 100% sure what happens after that, but we'll reflect on the discussion and decide whether to take any high-level mod actions.
If you want to weigh in, I encourage you to take your time even if there's a lot of discussion going on. If you notice yourself in a rapid back and forth that feels like it's escalating, take at least a 10 minute break and ask yourself what you're actually trying to accomplish.
I do note: the moderation team will be making an ultimate call on whether to take any mod actions based on our judgment. (I'll be the primary owner of the decision, although I expect if there's significant disagreement among the mod team we'll talk through it a lot). We'll take into account arguments various people post, but we aren't trying to reflect the wisdom of crowds.
So you may want to focus on engaging with our cruxes rather than with what other random people in the comments think.
Preliminary Verdict (but not "operationalization" of verdict)
tl;dr – @Duncan_Sabien and @Said Achmiz each can write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on either:
(After the two comments they can continue to PM the LW team, although we'll have some limit on how much time we're going to spend negotiating)
Some background:
Said and Duncan are both among the most complained-about users since LW2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I'd be sad to see go.
The LessWrong team has spent hundreds of person hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of "we learned new useful things about site governance") there's a limit to how much it's worth moderating or mediating conflict re: two particular users.
So, something pretty significant needs to change.
A thing that sticks out in both the case of Said and Duncan is that they a) are both fairly law-abiding (i.e. when the mods have asked them for concrete things, they adhere to our rules, and clearly support rule-of-law and the general principle of Well Kept Gardens), but b) both have a very strong principled sense of what a “good” LessWrong would look like and are optimizing pretty hard for that within whatever constraints we give them.
I think our default rules are chosen to catch things that someone might trip over accidentally, if they're mostly trying to be a good stereotypical citizen but occasionally have a bad day. Said and Duncan are both trying pretty hard to be good citizens of a different country than the one the LessWrong team is consciously trying to build. It’s hard to write rules/guidelines that robustly deal with that kind of optimization.
I still don’t really know what to do, but I want to flag that the goal I'll be aiming for here is "make it such that Said and Duncan either have actively (credibly) agreed to stop optimizing in a fairly deep way, or are somehow limited by site tech such that they can't do the cluster of things they want to do that feels damaging to me."

If neither of those strategies turns out to be tractable, banning is on the table (even though I think both of them contribute a lot in various ways and I'd be pretty sad to resort to that option). I have some hope that tech-based solutions can work.
(This is not a claim about which of them is more valuable overall, or better/worse/right-or-wrong-in-this-particular-conflict. There's enough history with both of them being above-a-threshold-of-worrisome that it seems like the LW team should just actually resolve the deep underlying issues, regardless of who's more legitimately aggrieved this particular week)
Re: Said:
One of the most common complaints I've gotten about LessWrong, from both new users as well as established, generally highly regarded users, is "too many nitpicky comments that feel like they're missing the point". I think LessWrong is less fragile than it was in 2018 when I last argued extensively with Said about this, but I think it's still an important/valid complaint.
Said seems to actively prefer a world where the people who are annoyed by him go away, and thinks it’d be fine if this meant LessWrong had radically fewer posts. I think he’s misunderstanding something about how intellectual progress actually works, and about how valuable his comments actually are. (As I said previously, I tend to think Said’s first couple of comments on a post are worthwhile. The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics.)
We've had extensive conversations with Said about changing his approach here. He seems pretty committed to not changing his approach. So, if he's sticking around, I think we'd need some kind of tech solution. The outcome I want here is that in practice Said doesn't bother people who don't want to be bothered. This could involve solutions somewhat specific-to-Said, or (maybe) be a sitewide rule that works out to stop a broader class of annoying behavior. (I'm skeptical the latter will turn out to work without being net-negative, capturing too many false positives, but seems worth thinking about)
Here are a couple ideas:
There's some cluster of ideas surrounding how authors are informed/encouraged to use the banning options. It sounds like the entire topic of "authors can ban users" is worth revisiting so my first impulse is to avoid investing in it further until we've had some more top-level discussion about the feature.
Why is it worth this effort?
You might ask "Ray, if you think Said is such a problem user, why bother investing this effort instead of just banning him?". Here are some areas I think Said contributes in a way that seem important:
Re: Duncan
I've spent years trying to hash out "what exactly is the subtle but deep/huge difference between Duncan's moderation preferences and the LW teams." I have found each round of that exchange valuable, but typically it didn't turn out that whatever-we-thought-was-the-crux was a particularly Big Crux.
I think I care about each of the things Duncan is worried about (i.e. the sorts of things listed in Basics of Rationalist Discourse). But I tend to think the way Duncan goes about trying to enforce such things is extremely costly.
Here's this month/year's stab at it: Duncan cares particularly about strawmans/mischaracterizations/outright-lies getting corrected quickly (i.e. within ~24 hours). (See Concentration of Force for his writeup on at least one set of reasons this matters.) I think there is value in correcting them or telling people to "knock it off" quickly. But,
a) moderation time is limited
b) even in the world where we massively invest in moderation... the thing Duncan cares most about moderating quickly just doesn't seem like it should necessarily be at the top of the priority queue to me?
I was surprised by, and updated on, You Don't Exist, Duncan getting as heavily upvoted as it did, so I think it's plausible this is all a bigger deal than I currently think it is. (That post goes into one set of reasons that getting mischaracterized hurts.) And there are some other reasons this might be important (having to do with mischaracterizations taking off and becoming the de facto accepted narrative).
I do expect most of our best authors to agree with Duncan that these things matter, and to generally want the site to be moderated more heavily somehow. But I haven't actually seen anyone but Duncan argue they should be prioritized nearly as heavily as he wants (i.e. rather than being something you mostly take in stride: downvote, leave a correction, and then try to ignore while focusing on other things).
I think most high-contributing users agree the site should be moderated more (see the significant upvotes on LW Team is adjusting moderation policy), but don't necessarily agree on how. It'd be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
I don't know that that really captured the main thing here. I feel less resolved on what should change on LessWrong re: Duncan. But I (and other LW site moderators) want to be clear that while strawmanning is bad and you shouldn’t do it, we don’t expect to intervene on most individual cases. I recommend strong-downvoting, and leaving one comment stating that the thing seems false.
I continue to think it's fine for Duncan to moderate his own posts however he wants (although, as noted previously, I think an exception should be made for posts that are actively pushing sitewide moderation norms).
Some goals I'd have are:
FWIW I do think it's moderately likely that the LW team writes a post taking many concepts from Basics of Rationalist Discourse and integrating them into our overall moderation policy. (It's maybe doable for Duncan to rewrite the parts that some people object to, and to enable commenting on those posts by everyone. But I think it's kinda reasonable for people to feel uncomfortable with Duncan setting the framing, and it's worth the LW team having a dedicated "our frame on what the site norms are" post anyway.)
In general I think Duncan has written a lot of great posts – many of his posts have been highly ranked in the LessWrong review. I expect him to continue to provide a lot of value to the LessWrong ecosystem one way or another.
I'll note that while I have talked to Duncan for dozens(?) of hours trying to hash out various deep issues without much success, I haven't really tried negotiating with him specifically about how he relates to LessWrong. I am fairly hopeful we can work something out here.
FWIW, that is a claim I'm fully willing and able to justify. It's hard to disclaim all the possible misinterpretations in a brief comment (e.g. "deeply" != "very"), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.