Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users, and it seems good for the LessWrong userbase to have the opportunity to evaluate it and respond. I'm stickying this post for a day or so.
Recently there's been a series of posts, and back-and-forth in the comments, between Said Achmiz and Duncan Sabien, which escalated enough that it seemed like site moderators should weigh in.
For context, here's a quick recap of recent relevant events as I'm aware of them. (I'm glossing over many relevant details, because getting everything exactly right is tricky.)
- Duncan posts Basics of Rationalist Discourse. Said writes some comments in response.
- Zack posts "Rationalist Discourse" Is Like "Physicist Motors", in whose comments Duncan and Said argue some more. Duncan eventually says "goodbye", which I assume coincides with his banning Said from commenting further on his posts.
- I publish LW Team is adjusting moderation policy. Lionhearted suggests "Basics of Rationalist Discourse" as a standard the site should uphold. Paraphrasing here, Said objects to a post being set as the site standard if not all non-banned users can discuss it. More discussion ensues.
- Duncan publishes Killing Socrates, a post about a general pattern of LW commenting that alludes to Said but doesn't reference him by name. Commenters other than Duncan do bring up Said by name, and the discussion gets into whether Said is net positive or net negative for LessWrong, in a comment section where Said can't respond.
- @gjm publishes On "aiming for convergence on truth", which further discusses/argues a principle from Basics of Rationalist Discourse that Said objected to. Duncan and Said argue further in the comments. I think it's a fair gloss to say "Said makes some comments about what Duncan did, which Duncan says are false enough that he'd describe Said as intentionally lying about them. Said objects to this characterization" (although exactly how to characterize this exchange is itself perhaps a crux of the discussion).
LessWrong moderators got together for ~2 hours to discuss this overall situation: how to think about it both as an object-level dispute and in terms of the higher-level question of how LessWrong's culture, rules, and moderation should work.
I think we ended up with fairly similar takes, but getting to the point where we all agree 100% on what happened and what to do next seemed like a longer project, and we each had subtly different frames on the situation. So, some of us (at least Vaniver and I, maybe others) are going to start by posting some top-level comments here. People can weigh in on the discussion. I'm not 100% sure what happens after that, but we'll reflect on the discussion and decide whether to take any high-level mod actions.
If you want to weigh in, I encourage you to take your time, even if there's a lot of discussion going on. If you notice yourself in a rapid back-and-forth that feels like it's escalating, take at least a 10-minute break and ask yourself what you're actually trying to accomplish.
I do note: the moderation team will be making the ultimate call on whether to take any mod actions, based on our judgment. (I'll be the primary owner of the decision, though I expect that if there's significant disagreement among the mod team we'll talk it through at length.) We'll take into account the arguments various people post, but we aren't trying to reflect the wisdom of crowds.
So you may want to focus on engaging with our cruxes rather than with what other people in the comments think.
An arbitrary skeptic is perhaps too high a bar, but what about a reasonable skeptic? I think that, from that perspective (and especially given the “outside view” on similar things attempted in the past), if you don’t have “a reliable training program that demonstrably improves quantifiable real world successes”, you basically just don’t have anything. If someone asks you “do you have anything to show for all of this”, and all you’ve got is what you’ve got, then… well, I think that I’m not showing any even slightly unreasonable skepticism, here.
Well, CFAR was founded 11 years ago. That’s well within the “4–20” range. Are you saying that it’s still too early to see clear results?
Is there any reason to believe that there will be anything like “a reliable training program that demonstrably improves quantifiable real world successes” in five years (assuming AI doesn’t kill us all or what have you)? Has there been any progress? (On evaluation methods, even?) Is CFAR even measuring progress, or attempting to measure progress, or… what?
But you see how these paragraphs are pretty unconvincing, right? Like, at the very least, even if you are indeed seeing all these things you describe, and even if they're real things, you surely can see how there's… basically no way for me, or anyone else who isn't hanging out with you and your in-person acquaintances on a regular basis, to see or know or verify any of this?
Hold on—you’ve lost track of the meta-level point.
The question isn’t whether it’s valuable to poke at these specific things, or whether I’m good at poking at these specific things.
Here’s what you wrote earlier:
Which I summarized/interpreted as:
(You didn’t object to that interpretation, so I’m assuming for now that it’s basically correct.)
But the problem is that it’s not clear that the people in question know what they’re talking about. Maybe they do! But it’s certainly not clear, and indeed there’s really no way for me (or any other person outside your social circle) to know that, nor is there any kind of evidence for it, other than personal testimony/anecdata, which is not worth much.
So it doesn’t make sense to suggest that we (the commentariat of Less Wrong) must, or should, treat such folks any differently from anyone else, such as, say, me. There’s no basis for it. From my epistemic position—which, it seems to me, is an eminently reasonable one—these are people who may have good ideas, or they may have bad ideas; they may know what they’re talking about, or may be spouting the most egregious nonsense; I really don’t have any reason to presume one or the other, no more than they have any reason to presume this of me. (Of course we can judge one another by things like public writings, etc., but in this, the people you refer to are no different from any other Less Wrong participant, including wholly anonymous or pseudonymous ones.)
And that, in turn, means that when you say:
… there is actually no good reason at all why that should mean anything or carry any weight in any kind of decision or evaluation.
(There are bad reasons, of course. But we may take it as given that you are not swayed by any such.)