tl;dr – @Duncan_Sabien and @Said Achmiz each can write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on either:
(After the two comments they can continue to PM the LW team, although we'll have some limit on how much time we're going to spend negotiating)
Said and Duncan are both among the most complained-about users since LW2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I'd be sad to see go.
The LessWrong team has spent hundreds of person-hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of "we learned new...
I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.
I note re:
It'd be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
... that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would've been less likely to leave and would be more likely to return with marginal movement in that direction.
I don't know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like "how would you have felt if we had moved 25% in this direction," I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more "what? No, we're well-adapted to the current environment; we're the ones who've been filtered for."
(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)
It's a bad thing to institute policies without good proxies for executing them. It doesn't matter if the intended objective is good; a policy that isn't feasible to execute sanely makes things worse.
Whether statements about someone's inner state are "unfounded", or whether something is a "strawman", is hopelessly muddled in practice; only open-ended discussion has a hope of resolving that, not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.
But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in the consideration of factual questions? Should we litigate p(doom) next? This is just obviously out of scope; I don't see a principled difference. People should be allowed to be wrong; that's the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).
(So I think it's not just the good proxies needed to execute a policy that are missing in this case; the objective is also bad. It's bad on both levels, hence "hair-raisingly alarming".)
You implied and then confirmed that you consider a policy for a certain objective an aspiration; I argued that the policies I can imagine targeting that objective would be impossible to execute, making things worse through collateral damage. And that, separately, the objective itself seems bad (moderating factual claims).
(In the above two comments, I'm not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn't seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I'm not averse to re-injecting the context into their discussion. But I won't necessarily find that interesting or have things to say about it.)
So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators' arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity) or to implicitly take into account some piece of alleged knowledge. This seems related ...
I think Said and Duncan are clearly channeling this conflict, but the conflict is not about them, and doesn't originate with them. So by having them go away or stop channeling the conflict, you leave it unresolved and without its most accomplished voices, shattering the possibility of resolving it in the foreseeable future. It's the hush-hush strategy of dealing with troubling observations: fixing symptoms instead of researching the underlying issues, however onerous that is proving to be.
(This announcement is also rather hush-hush; it's not a post, and so I've only just discovered it, 5 days later. This leaves it with less scrutiny than I think the transparency of such an important step requires.)
Just want to note that I'm less happy with a LessWrong without Duncan. I very much value Duncan's pushback against what I see as a slow decline in quality, and so I would prefer him to stay and continue doing what he's doing. The fact that he's being complained about makes sense, but is mostly a function of him doing something valuable. I have had a few times where I have been slapped down by Duncan, albeit in comments on his Facebook page, where it's much clearer that his norms are operative, and I've been annoyed; but each of those times, despite being frustrated, I have found that I'm being pushed in the right direction and corrected for something I'm doing wrong.
I agree that it's bad that his comments are often overly confrontational, but there's no way to deliver constructive feedback that doesn't involve a degree of confrontation, and I don't see many others pushing to raise the sanity waterline. In a world where a dozen people were fighting the good fight, I'd be happy to ask him to take a break. But this isn't that world, and it seems much better to actively promote a norm of people saying they don't have energy or time to engage than telling Duncan (and maybe / hopefully others) not to push back when they see thinking and comments which are bad.
Is there any evidence that either Duncan or Said is actually detrimental to the site in general, or is it mostly in their interactions directly with each other? As far as I can see, 99% of the drama here is in their conflicts directly with each other and heavy moderation-team involvement in it.
From my point of view (as an interested reader and commenter), this latest drama appears to have started partly due to site moderation essentially forcing them into direct conflict with each other via a proposal to adopt norms based on Duncan's post while Said and others were and continue to be banned from commenting on it.
From this point of view, I don't see what either Said or Duncan has done to justify any sort of ban, temporary or not.
This decision is based mostly on past patterns with both of them, over the course of ~6 years.
The recent conflict, in isolation, is something where I'd kinda look sternly at them and kinda judge them (and maybe a couple others) for getting themselves into a demon thread*, where each decision might look locally reasonable but nonetheless it escalates into a weird proliferating discussion that is (at best) a huge attention sink and (at worst) gets people into an increasingly antagonistic fight that brings out people's worst instincts. If I spent a long time analyzing I might come to more clarity about who was more at fault, but I think the most I might do for this one instance is ban one or both of them for like a week or so and tell them to knock it off.
The motivation here is from a larger history. (I've summarized one chunk of that history from Said here, and expect to go into both a bit more detail about Said and a bit more about Duncan in some other comments soon, although I think I describe the broad strokes in the top-level-comment here)
And notably, my preference is for this not to result in a ban. I'm hoping we can work something out. The thing I'm laying down in this comment is "we do have to actually work something out."
The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics
I think I want to reiterate my position that I would be sad about Said not being able to discuss Circling (which I think is one of the topics in that fuzzy cluster). I would still like to have a written explanation of Circling (for LW) that is intelligible to Said, and him being able to point out which bits are unintelligible and not feel required to pretend that they are intelligible seems like a necessary component of that.
With regards to Said's 'general pattern', I think there's a dynamic around socially recognized gnosis where sometimes people will say "sorry, my inability/unwillingness to explain this to you is your problem" and have the commons on their side or not, and I would be surprised to see LW take the position that authors decide for that themselves. Alternatively, tech that somehow makes this more discoverable and obvious--like polls or reacts or w/e--does seem good.
I think productive conversations stem from there being some (but not too much) diversity in what gnosis people are willing to recognize, and in the ability for subspaces to have smaller conversations that require participants to recognize some gnosis.
I condemn the restrictions on Said Achmiz's speech in the strongest possible terms. I will likely have more to say soon, but I think the outcome will be better if I take some time to choose my words carefully.
Did we read the same verdict? The verdict says that the end of the ban is conditional on the users in question "credibly commit[ting] to changing their behavior in a fairly significant way", "accept[ing] some kind of tech solution that limits their engagement in some reliable way that doesn't depend on their continued behavior", or "be[ing] banned from commenting on other people's posts".
The first is a restriction on variety of speech. (I don't see what other kind of behavioral change the mods would insist on—or even could insist on, given the textual nature of an online forum where everything we do here is speech.) The third is a restriction of venue, which I claim predictably results in a restriction of variety. (Being forced to relegate your points into a shortform or your own post, won't result in the same kind of conversation as being able to participate in ordinary comment threads.) I suppose the "tech solution" of the second could be mere rate-limiting, but the "doesn't depend on their continued behavior" clause makes me think something more onerous is intended.
(The grandparent only mentions Achmiz because I particularly value his contributions, and because I think many people would prefer that I don't comment on the other case, but I'm deeply suspicious of censorship in general, for reasons that I will likely explain in a future post.)
The tech solution I'm currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I'm leaning towards either "3 comments per post" or "3 comments per post per day". (My ideal world, for Said, is something like "3 comments per post to start, but, if nothing controversial happens and he's not ruining the vibe, he gets to comment more without limit." But that's fairly difficult to operationalize and a lot of dev time for a custom feature limiting one or two particular users.)
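For concreteness, here is a minimal sketch of what such a rate-limit check might look like. All names (RateLimitPolicy, canComment, etc.) and the data shapes are hypothetical; this is not the actual LessWrong implementation.

```typescript
// Hypothetical sketch of a per-post comment rate limit; illustrative only.
type RateLimitPolicy = { maxComments: number; windowHours?: number };

interface Comment {
  userId: string;
  postId: string;
  postedAt: Date;
}

function canComment(
  userComments: Comment[], // all comments by the rate-limited user
  userId: string,
  postId: string,
  policy: RateLimitPolicy,
  now: Date = new Date()
): boolean {
  // With no window, the limit is "N comments per post", ever;
  // with windowHours: 24, it is "N comments per post per day".
  const cutoff = policy.windowHours
    ? new Date(now.getTime() - policy.windowHours * 3600 * 1000)
    : null;
  const count = userComments.filter(
    (c) =>
      c.userId === userId &&
      c.postId === postId &&
      (cutoff === null || c.postedAt.getTime() >= cutoff.getTime())
  ).length;
  return count < policy.maxComments;
}

// "3 comments per post":         canComment(cs, u, p, { maxComments: 3 })
// "3 comments per post per day": canComment(cs, u, p, { maxComments: 3, windowHours: 24 })
```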
I do have a high level goal of "users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so". The question here is "do you want the 'real work' of developing new rationality techniques to happen on LessWrong, or someplace else where Said/etc can't bother you?" (which is what's mostly currently happening).
So, yeah the concrete outcome here is Said not getting to comment everywhere he wants, but he's already not getting to do that, because the relevant content + associated usage-building happens off lesswrong, and then he finds himself in a world where everyone is "sudden...
a high level goal of "users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so".
We already have a user-level personal ban feature! (Said doesn't like it, but he can't do anything about it.) Why isn't the solution here just, "Users who don't want to receive comments from Said ban him from their own posts"? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.
the concrete outcome here is Said not getting to comment everywhere he wants, but he's already not getting to do that, because the relevant content + associated usage-building happens off lesswrong
This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I'm unlikely to guess it; you'll have to clarify.) It's true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by indi...
We already let authors write their own moderation guidelines! It's a blank text box!
Because it's a blank text box, it's not convenient for commenters to read it in detail every time; I expect almost nobody reads it, so these guidelines are not practical to follow.
With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.
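As a sketch of the distinction being drawn, here is one hypothetical data model (the option names are made up, and this is not an existing LessWrong feature): the standard option is a small enum a commenter can classify at a glance, and the blank text box is demoted to an optional caveat.

```typescript
// Hypothetical data model for commenting guidelines; illustrative only.
type StandardGuideline =
  | "open-discussion" // e.g. green: default site norms, criticism welcome
  | "authors-terms";  // e.g. yellow: author curates the vibe more actively

interface PostGuidelines {
  standard: StandardGuideline; // classifiable at a glance, without reading prose
  caveats?: string;            // the old blank text box, now optional detail
}
```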
Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.
I think “duty to be clear” skips over the hard part, which is that “being clear” is transitive: it doesn’t make sense to say whether a post is clear or not clear, only whom it is clear or unclear to.
To use a trivial example: well-taught Physics 201 is clear if you’ve had the prerequisite physics classes or are a physics savant, but not to laymen. Poorly taught Physics 201 is clear to a subset of the people who would understand it if well taught. And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 -> Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post but use fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.
So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “10...
You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct, but not at the limit: there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.
YES. I think this is hugely important, and I think it's a pretty good definition of the difference between a confused person and a crank.
Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they're lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or hi...
This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers.
You're describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.
The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you're right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can't, and no one can, then he might have a point, and the gym gets to learn something new.
If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they're an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren't there in the first place. It's definitely more challenging to jam with dissonant characters like that (especially if they're dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it's important to realize that the problem isn't so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
Ray writes:
Here are some areas I think Said contributes in a way that seem important:
- Various ops/dev work maintaining sites like readthesequences.com, greaterwrong.com, and gwern.com.
For the record, I think the value here is "Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world", and I don't think that comes across in this bullet.
I could imagine an admin feature that literally just lets Said comment a few times on a post, but if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day.
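A minimal sketch of that mechanism, with made-up thresholds and hypothetical names (one possible reading of the idea, not a spec):

```typescript
// Hypothetical: switch a user to a word-count budget on a post once their
// comments there have been significantly downvoted. Thresholds are made up.
interface CommentStats {
  karma: number;
  wordCount: number;
}

function remainingWordBudget(
  commentsOnPost: CommentStats[], // this user's comments on this post
  freeComments = 3,               // "comment a few times" before any limit
  downvoteThreshold = -5,         // what counts as "significantly downvoted"
  wordBudget = 500                // budget to wrap up current points
): number | null {
  const totalKarma = commentsOnPost.reduce((sum, c) => sum + c.karma, 0);
  if (commentsOnPost.length < freeComments || totalKarma >= downvoteThreshold) {
    return null; // no word-count limit in effect
  }
  const wordsUsed = commentsOnPost.reduce((sum, c) => sum + c.wordCount, 0);
  return Math.max(0, wordBudget - wordsUsed);
}
```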
I feel like this incentivizes comments to be short, which doesn't make them less aggravating to people. For example, IIRC people have complained about him commenting "Examples?". This is not going to be hit hard by a rate limit.
'Examples?' is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit "Oh, I don't have any yet, this is speculative, so YMMV".
Spending my last remaining comment here.
I join Ray and Gwern in noting that asking for examples is generically good (and that I've never felt or argued to the contrary). Since my stance on this was called into question, I elaborated:
...If one starts out looking to collect and categorize evidence of their conversational partner not doing their fair share of the labor, then a bunch of comments that just say "Examples?" would go into the pile. But just encountering a handful of comments that just say "Examples?" would not be enough to send a reasonable person toward the hypothesis that their conversational partner reliably doesn't do their fair share of the labor.
"Do you have examples?" is one of the core, common, prosocial moves, and correctly so. It is a bid for the other person to put in extra work, but the scales of "are we both contributing?" don't need to be balanced every three seconds, or even every conversation. Sometimes I'm the asker/learner and you're the teacher/expounder, and other times the roles are reversed, and other times we go back and forth.
The problem is not in asking someone to do a little labor on your behalf. It's having 85+% of your engagement be asking other pe
Noting that my very first lesswrong post, back in the LW1 days, was an example of #2. I was wrong on some of the key parts of the intuition I was trying to convey, and ChristianKl corrected me. As an introduction to posting on LW, that was pretty good - I'd hate to think that's no longer acceptable.
At the same time, there is less room for it now that the community has gotten much bigger, and I'd probably weak-downvote a similar post today, rather than trying to engage with a similar mistake, given how much content there is. Not sure if there is anything that can be done about this, but it's an issue.
fwiw that seems like a pretty great interaction. ChristianKl seems to be usefully engaging with your frame while noting things about it that don't seem to work, seems (to me) to have optimized somewhat for being helpful, and also the conversation just wraps up pretty efficiently. (and I think this is all a higher bar than what I mean to be pushing for, i.e. having only one of those properties would have been fine)
Some evidence for that, though it also seems likely to get upvoted on the basis of "well written and evocative of a difficult personal experience", or because people relate to being outliers and unusual even if they didn't feel alienated and hurt in quite the same way. I'm unsure.
I upvoted it because it made me finally understand what in the world might be going on in Duncan's head to make him react the way he does
I think we have very different models of things, so I will try to clarify mine. My best bubble-site example is not in English, so I will give another one: the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. Just look at the sheer LENGTH of this page!
https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor
There are many more than 3 comments per person there.
From my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. My best discussions with friends go: one shares a model, one asks questions, or shares a different model, or shares an experience, the other reacts, etc., for way more than three comments, more like 30 comments. It's a dialogue. And there are lots of unproductive examples of that on LW, and it's quite possible (as in, I assign it a probability of 0.9) that in first-order effects the rule will cut out unproductive discussions and be positive.
But I find rules that prevent the best things from happening bad in some way that I can't explain clearly. Something like: I'm here to try to go higher. If that's impossible, then why bother?
I also think it's V...
Here's a bit of metadata on this: I can recall offhand 7 complaints from users with 2000+ karma who aren't on the mod team (most of whom had significantly more than 2000 karma, and all of them had some highly upvoted comments and/or posts that are upvoted in the annual review). One of them cites you as being the reason they left LessWrong a few years ago, and ~3-4 others cite you as being a central instance of a pattern that means they participate less on LessWrong, or can't have particularly important types of conversations here.
I also think most of the mod team (at least 4 of them? maybe more) have had such complaints (as users, rather than as moderators)
I think there's probably at least 5 more people who complained about you by name who I don't think have particularly legible credibility beyond "being some LessWrong users."
I'm thinking about my reply to "are the complaints valid tho?". I have a different ontology here.
There are some problems with treating this as pointing in a particular direction. There is little opportunity for people to be prompted to express opposite-sounding opinions, and so only the above opinions are available to you.
I have a concern that Said and Zack are an endangered species that I want there to be more of on LW and I'm sad they are not more prevalent. I have some issues with how they participate, mostly about tendencies towards cultivating infinite threads instead of quickly de-escalating and reframing, but this in my mind is a less important concern than the fact that there are not enough of them. Discouraging or even outlawing Said cuts that significantly, and will discourage others.
Ray pointing out the level of complaints is informative even without (far more effort) judgement on the merits of each complaint. There being a lot of complaints is evidence (to both the moderation team and the site users) that it's worth putting in effort here to figure out if things could be better.
There being a lot of complaints is evidence [...] that it's worth putting in effort here to figure out if things could be better.
It is evidence that there is some sort of problem. It's not clear evidence about what should be done about it, about what "better" means specifically. Instituting ways of not talking about the problem anymore doesn't help with addressing it.
Warning to Duncan
(See also: Raemon's moderator action on Said)
Since we were pretty much on the same page, Raemon delegated to me the writing of this warning to Duncan, and signed off on it.
Generally, I am quite sad if, when someone points/objects to bad behavior, they end up facing moderator action themselves. It doesn’t set a great incentive. At the same time, some of Duncan’s recent behavior also feels quite bad to me, and to not respond to it would also create a bad incentive – particularly if the undesirable behavior results in something a person likes.
Here’s my story of what happened, building off of some of Duncan’s own words and his endorsement of something I said in a previous exchange with him:
Duncan felt that Said engaged in various behaviors that hurt him (confident based on Duncan’s words) and were in general bad (inferred from Duncan writing posts describing why those behaviors are bad). Such bad/hurtful behaviors include strawmanning, psychologizing at length, and failing to put in symmetric effort. For example, Said argued that Duncan banned him from his posts because Said disagreed. I am pretty sympathetic to these accusations against Said (and endorse moderation action agains...
Just noting as a "for what it's worth"
(b/c I don't think my personal opinion on this is super important or should be particularly cruxy for very many other people)
that I accept, largely endorse, and overall feel fairly treated by the above (including the week suspension that preceded it).
Moderation action on Said
(See also: Ruby's moderator warning for Duncan)
I’ve been thinking for a week, and trying to sanity-check whether there are actual good examples of Said doing-the-thing-I’ve-complained-about, rather than “I formed a stereotype of Said and pattern match to it too quickly”, and such.
I think Said is a pretty confusing case though. I’m going to lay out my current thinking here, in a number of comments, and I expect at least a few more days of discussion as the LessWrong community digests this. I’ve pinned this post to the top of the frontpage for the day so users who weren’t following the discussion can decide whether to weigh in.
Here’s a quick overview of how I think about Said moderation:
This sounds drastic enough that it makes me wonder, since the claimed reason was that Said's commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see if there is any sort of measurable increase in comment quality, site mood or good contributors becoming more active moving forward?
Also, is this thing an experiment with a set duration, or a permanent measure? If it's permanent, it has a very rubber room vibe to it, where you don't outright ban someone but continually humiliate them if they keep coming by and wish they'll eventually get the hint.
If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?
Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style seems unlikely to be the right choice. You are a UI designer and you are well-aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution.
But even assuming we did add such a message, there are many other problems:
First, concerning the first half of your comment (re: importance of this information, best way of communicating it):
I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”
Have you checked that users understand that they don’t have an obligation to respond to comments?
If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)
Second, concerning the second half of your comment:
Frankly, this whole perspective you describe just seems bizarre.
Of course I can’t possibly create a formal obligation to respond to comments. Of course ...
(I am not planning to engage further at this point.
My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don't think I am saying particularly complicated things, and I think I've communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them.
My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we'll continue to take some moderator actions until things look better by our models. I think we've both gone far beyond our duty of effort to explain where we are coming from and what our models are.)
(I wrote the following before habryka wrote his message)
While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I've been expressing concerns about in this particular discussion.
The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).
But, I think it'd be fairly tractable to have a message like "btw, if this conversation doesn't seem productive to you, consider downvoting it and moving on with your day [link to some background]" appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)
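For concreteness, a minimal sketch of the trigger condition Raemon describes above; the event shape and thresholds are assumptions, not an existing feature:

```typescript
// Hypothetical trigger for the "consider downvoting and moving on" nudge.
interface DownvoteAndReply {
  actorId: string;  // user who downvoted-and-replied
  targetId: string; // user they replied to
}

function shouldShowNudge(
  eventsInThread: DownvoteAndReply[],
  viewerId: string
): boolean {
  // Case 1: the viewer has downvoted-and-replied to the same user
  // twice in this comment thread.
  const byViewer = eventsInThread.filter((e) => e.actorId === viewerId);
  const viewerRepeats = byViewer.some(
    (e) => byViewer.filter((x) => x.targetId === e.targetId).length >= 2
  );
  // Case 2: roughly two *other* users in the thread have done so.
  const otherActors = new Set(
    eventsInThread.filter((e) => e.actorId !== viewerId).map((e) => e.actorId)
  );
  return viewerRepeats || otherActors.size >= 2;
}
```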
Basically yes, although I note I said a lot of other words here that were all fairly important, including the links back to previous comments. For example, it’s important that I think you are factually incorrect about there being “normatively correct general principles” that people who don’t engage with your comments “should be interpreted as ignorant”.
Well, no doubt most or all of what you wrote was important, but by “important” do you specifically mean “forms part of the description of what you take to be ‘the problem’, which this moderation action is attempting to solve”?
For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!
Or, take the links. One of them is cl...
The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do
Is that really the claim? I must object to it, if that’s so. I don’t think I’ve ever made any false claims about what social norms obtain on Less Wrong (and to the extent that some of my comments were interpreted that way, I was quick to clearly correct that misinterpretation).
Certainly the “normatively correct general principles” comment didn’t contain any such false claims. (And Raemon does not seem to be claiming otherwise.) So, the question remains: what exactly is the relevance of the philosophical disagreement? How is it connected to any purported violations of site rules or norms or anything?
… and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts
I am not sure what this means. I am not a moderator, so it’s not clear to me how I can enforce any norm. (I can exemplify conformance to a norm, of course, but that, in this case, would be me replying to comments on my posts, which is not what we’re talking about here. And I can encourage or even demand conformance to some fa...
… bad in basically the same way we gave you a mod warning about 5 years ago …
I wonder if you find this comment by Benquo (i.e., the author of the post in question; note that this comment was written just months after that post) relevant, in any way, to your views on the matter?
We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent)
I could be misunderstanding all sorts of things about this feature that you've just implemented, but…
Why would you want to limit newer users from being able to declare that rate-limited users should be able to post as much as they like on newer users' posts? Shouldn't I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?
I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here.
Sure, but... I think I don't know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more.
Some quick thoughts:
But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.
I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and the treatment of its citizens. The quote is clearly noncentral for this case.
Thanks, to clarify: I don't intend to make a "how dare the moderators moderate Less Wrong" objection. Rather, the objection is, "How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma." (That's why the grandparent specifies "long-time, well-regarded", "many highly-upvoted contributions", "We were here first", &c.) I'm saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don't want to accept literally any speech (which is why the grandparent mentions "removing low-quality [...] comments" as a legitimate moderator duty).
Note that "permanently restrict the account of" is different from "moderate". For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic, and Achmiz complied. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I'm accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz...
Hmm, I am still not fully sure about the question (your original comment said "I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here", which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said's net contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.
One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like "purchase him out of his right to use LessWrong" or something like that, by offering him like $10k-$100k to change his commenting style or to comment less in ce...
But second, and more importantly, there is a huge bias in karma towards positive karma.
I don't know if it's good that there's a positive bias in karma, but I'm pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall, even if it is the best way to handle Said-type cases in particular.
Seems sad! Seems like there is an opportunity for trade here.
Salaries in Silicon Valley are high and probably just the time for this specific moderation decision has cost around 2.5 total staff weeks for engineers that can make probably around $270k on average in industry, so that already suggests something in the $10k range of costs.
And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.
We can also donate instead, but I don't really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money more than most charities having more money, so I don't really get what this would improve. Also, not everyone cares about donating to charity, and that's fine.
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
I endorse much of what Oliver says in his replies, and I'm mostly burnt out from this convo at the moment, so I can't do the follow-through here I'd ideally like. But it seemed important to publicly state some thoughts here before the moment passed:
Yes, the bar for banning or permanently limiting the speech of a longterm member in Said's reference class is very high, and I'd treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – that longterm users can never have enough negative effects to warrant taking permanent action on. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens.
I don't think the Spirit of LessWrong 2009 actually supports you on the specific claims you're making here.
As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka who founded the LessWrong team and got Eliezer's buy-in, and now we have 6 years of track of reco...
In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.
It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative styles, are just slightly to the side of the...
(Tangentially) If users are allowed to ban other users from commenting on their posts, how can I tell when the lack of criticism in the comments of some post means that nobody wanted to criticize it (which is a very useful signal that I would want to update on), or that the author has banned some or all of their most prominent/frequent critics? In addition, I think many users may be misled by a lack of criticism if they're simply not aware of the second possibility or have forgotten it. (I think I knew it but it hasn't entered my conscious awareness for a while, until I read this post today.)
(Assuming there's not a good answer to the above concerns) I think I would prefer to change this feature/rule to something like allowing the author of a post to "hide" commenters or individual comments, which means that those comments are collapsed by default (and marked as "hidden by the post author") but can be individually expanded, and each user can set an option to always expand those comments for themselves.
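A minimal sketch of the display logic this proposal implies, with hypothetical names (this is a proposed alternative, not how the current ban feature works):

```typescript
// Hypothetical rendering rule for author-hidden comments.
interface CommentView {
  authorHidden: boolean;     // the post author marked this comment as hidden
  manuallyExpanded: boolean; // the reader clicked to expand this one comment
}

interface ReaderPrefs {
  alwaysExpandAuthorHidden: boolean; // per-user setting to always expand
}

// Collapsed comments would carry a "hidden by the post author" marker.
function isCollapsed(view: CommentView, prefs: ReaderPrefs): boolean {
  if (!view.authorHidden) return false;             // ordinary comment
  if (prefs.alwaysExpandAuthorHidden) return false; // reader opted out of hiding
  return !view.manuallyExpanded;                    // collapsed by default
}
```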
And if there is an important critique to be made I’d expect it to be something that more than the few banned users would think of and decide to post a comment on.
This may be true in some cases, but not all. My experience here comes from cryptography where it often takes hundreds of person-hours to find a flaw in a new idea (which can sometimes be completely fatal), and UDT, where I found a couple of issues in my own initial idea only after several months/years of thinking (hence going to UDT1.1 and UDT2). I think if you ban a few users who might have the highest motivation to scrutinize your idea/post closely, you could easily reduce the probability (at any given time) of anyone finding an important flaw by a lot.
Another reason for my concern is that the bans directly disincentivize other critics, and people who are willing to ban their critics are often unpleasant for critics to interact with in other ways, further disincentivizing critiques. I have this impression for Duncan myself which may explain why I've rarely commented on any of his posts. I seem to remember once trying to talk him out of (what seemed to me like) overreacting to a critique and banning the critic on Faceb...
Some UI thoughts as I think about this:
Right now, you see total karma for posts and comments, and total vote count, but not the number of upvotes/downvotes. So you can't actually tell when something is controversial.
One reason for this is that we (once) briefly tried turning this on, and immediately found it made the site much more stressful and anxiety-inducing. Getting a single downvote felt like "something is WRONG!", which didn't feel productive or useful. Another reason is that it can de-anonymize strong votes, because their voting power is a less common number.
But, an idea I just had was that maybe we should expose that sort of information once a post becomes popular enough. Like maybe over 75 karma. [Better idea: once a post has a certain number of votes. Maybe at least 25]. At that point you have more of a sense of the overall karma distribution so individual votes feel less weighty, and also hopefully it's harder to infer individual voters.
Tagging @jp who might be interested.
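A minimal sketch of the visibility rule being floated, using the vote-count threshold from the comment above; the names are hypothetical:

```typescript
// Hypothetical visibility rule: only expose the up/down split once a post has
// enough votes that single votes feel less weighty and strong votes are harder
// to de-anonymize. The threshold of 25 votes is taken from the comment above.
interface VoteTally {
  upvotes: number;
  downvotes: number;
}

function visibleTally(tally: VoteTally, minVoteCount = 25): VoteTally | null {
  const totalVotes = tally.upvotes + tally.downvotes;
  return totalVotes >= minVoteCount ? tally : null; // null => show karma only
}
```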
I support exposing the number of upvotes/downvotes. (I wrote a userscript for GW to always show the total number of votes, which allows me to infer this somewhat.) However that doesn't address the bulk of my concerns, which I've laid out in more detail in this comment. In connection with karma, I've observed that sometimes a post is initially upvoted a lot, until someone posts a good critique, which then causes the karma of the post to plummet. This makes me think that the karma could be very misleading (even with upvotes/downvotes exposed) if the critique had been banned or disincentivized.
First, my read of both Said and Duncan is that they appreciate attention to the object level in conflicts like this. If what's at stake for them is a fact of the matter, shouldn't that fact get settled before considering other issues? So I will begin with that. What follows is my interpretation (mentioned here so I can avoid saying "according to me" each sentence).
In this comment, Said describes as bad "various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on", without specifically identifying Duncan as proposing that norm (tho I think it's heavily implied).
Then gjm objects to that characterization as a straw man.
In this comment Said defends it, pointing out that Duncan's standard of "critics should do some of the work of crossing the gap" is implicitly a rule against "asking people for examples of their claims [without anything else]", given that Duncan thinks asking for examples doesn't count as doing the work of crossing the gap. (Earlier in the conversation Duncan calls it 0% of the work.) I think the point as I have written it here is correct and uncontroversial; I think there is an important difference between the point as I wrot...
Vaniver privately suggested to me that I may want to offer some commentary on what I could’ve done in this situation in order for it to have gone better, which I thought was a good and reasonable suggestion. I’ll do that in this comment, using Vaniver’s summary of the situation as a springboard of sorts.
In this comment, Said describes as bad “various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on”, without specifically identifying Duncan as proposing that norm (tho I think it’s heavily implied).
Then gjm objects to that characterization as a straw man.
So, first of all, yes, I was clearly referring to Duncan. (I didn’t expect that to be obscure to anyone who’d bother to read that subthread in the first place, and indeed—so far as I can tell—it was not. If anyone had been confused, they would presumably have asked “what do you mean?”, and then I’d have linked what I mean—which is pretty close to what happened anyway. This part, in any case, is not the problem.)
The obvious problem here is that “don’t ask people for examples of their claims”—taken literally—is, indeed, a strawman.
The question is, whose problem (to solve) is it?
There a...
In the response I would have wanted to see, Duncan would have clearly and correctly pointed to that difference. He is in favor of people asking for examples [combined with other efforts to cross the gap], does it himself, gives examples himself, and so on. The unsaid
[without anything else]
part is load-bearing and thus inappropriate to leave out or merely hint at. [Or, alternatively, using "ask people for examples" to refer to comments that do only that, as opposed to the conversational move which can be included or not in a comment with other moves.]
I agree that the hypothetical comment you describe as better is in fact better. I think something like ... twenty-or-so exchanges with Said ago, I would have written that comment? I don't quite know how to weigh up [the comment I actually wrote is worse on these axes of prosocial cooperation and revealing cruxes and productively clarifying disagreement and so forth] with [having a justified true belief that putting forth that effort with Said in particular is just rewarded with more branches being created].
(e.g. there was that one time recently where Said said I'd blocked people due to disagreeing with me/criticizing me, and I s...
At the risk of guessing wrong, and perhaps typical-mind-fallacying, I'm imagining that you're [rightly?] feeling a lot of frustration, exasperation, and even despair about moderation on LessWrong. You've spent dozens of hours (more?) and tens of thousands of words trying to make LessWrong the garden you think it ought to be (and to protect yourself here against attackers), and just to try to uphold, indeed, basic standards for truthseeking discourse. You've written that some small validation goes a long way, so this is me trying to say that I think your feelings have a helluva lot of validity.
I don't think that you and I share exactly the same ideals for LessWrong. PerfectLessWrong!Ruby and PerfectLessWrong!Duncan would be different (or heck, even just VeryGoodLessWrongs), though I'm also pretty sure that you'd be much happier with my ideal; you'd think it was pretty good, if not perfect. Respectable, maybe adequate. A garden.
And I'm really sad that the current LessWrong feels really really far short of my own ideals (and Ray of his ideals, and Oli of his ideals), etc. And not just short of a super-amazing-lofty-ideal, also short of a "this place is really under control" kind of ideal. I tak...
But sir, you impugn my and my site's honor
This is fair, and I apologize; in that line I was speaking from despair and not particularly tracking Truth.
A [less straightforwardly wrong and unfair] phrasing would have been something like "this is not a Japanese tea garden; it is a British cottage garden."
I probably rushed this comment out the door in a "defend my honor, set the record straight" instinct that I don't think reliably leads to good discourse and is not what I should be modeling on LessWrong.
I didn't make it to every point, but hopefully you find this more of the substantive engagement you were hoping for.
I did, thanks.
gjm specifically noted the separation between the major issue of whether balance is required, and this other, narrower claim.
I think gjm's comment was missing the observation that "comments that just ask for examples" are themselves an example of "unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself", and so it wasn't cleanly about "balance: required or not?". I think a reasonable reader could come away from that comment of gjm's uncertain whether or not Said simply saying "examples?" would count as an example.
My interpretation of this section is basically the double crux dots arguing over the labels they should have, with Said disagreeing strenuously with calling his mode "unproductive" (and elsewhere over whether labor is good or bad, or how best to minimize it) and moving from the concrete examples to an abstract pattern (I suspect because he thinks the former is easier to defend than the latter).
I should also note here that I don't think you have explici...
The problem is not in asking someone to do a little labor on your behalf. It’s having 85+% of your engagement be asking other people to do labor on your behalf, and never reciprocating, and when people are like, hey, could you not, or even just a little less? being supercilious about it.
But why should this be a problem?
Why should people say “hey, could you not, or even just a little less”? If you do something that isn’t bad, that isn’t a problem, why should people ask you to stop? If it’s a good thing to do, why wouldn’t they instead ask you to do it more?
And why, indeed, are you still speaking in this transactional way?
If you write a post about some abstract concept, without any examples of it, and I write a post that says “What are some examples?”, I am not asking you to do labor on my behalf, I am not asking for a favor (which must be justified by some “favor credit”, some positive account of favors in the bank of Duncan). Quite frankly, I find that claim ridiculous to the point of offensiveness. What I am doing, in that scenario, is making a positive contribution to the discussion, both for your benefit and (even more importantly) for the benefit of other readers and com...
There is no good reason why you should resent responding to a request like “what are some examples”.
Maybe "resent" is doing most work here, but an excellent reason to not respond is that it takes work. To the extent that there are norms in place that urge response, they create motivation to suppress criticism that would urge response. An expectation that it's normal for criticism to be a request for response that should normally be granted is pressure to do the work of responding, which is costly, which motivates defensive action in the form of suppressing criticism.
A culture could make it costless (all else equal) to ignore the event of a criticism having been made. This is an inessential reason for suppressing criticism that can be removed, and therefore should, to make criticism cheaper and more abundant.
The content of criticism may of course motivate the author of a criticized text to make further statements, but the fact of criticism's posting by itself should not. The fact of not responding to criticism is some sort of noisy evidence of not having a good response that is feasible or hedonic to make, but that's Law, not something that can change for the sake of mechanism design.
I interpret a lot of Duncan’s complaints here thru the lens of imaginary injury that he writes about here.
I just want to highlight this link (to one of Duncan’s essays on his Medium blog), which I think most people are likely to miss otherwise.
That is an excellent post! If it was posted on Less Wrong (I understand why it wasn’t, of course EDIT: I was mistaken about understanding this; see replies), I’d strong-upvote it without reservation. (I disagree with some parts of it, of course, such as one of the examples—but then, that is (a) an excellent reason to provide specific examples, and part of what makes this an excellent post, and (b) the reason why top-level posts quite rightly don’t have agree/disagree voting. On the whole, the post’s thesis is simply correct, and I appreciate and respect Duncan for having written it.)
There are some things which cannot be expressed in a non-insulting manner (unless we suppose that the target is such a saint that no criticism can affect their ego; but who among us can pretend to that?).
I did not intend insult, in the sense that insult wasn’t my goal. (I never intend insult, as a rule. What few exceptions exist, concern no one involved in this discussion.)
But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.
So, you ask:
If so, why did you choose to express yourself in this insulting-sounding manner?
The choice was between writing something that was necessary for the purpose of fulfilling appropriate and reasonable conversational goals, but could be written only in such a way that anyone but a saint would be insulted by it—or writing nothing.
I chose the former because I judged it to be the correct choice: writing nothing, simply in order to avoid insult, would have been worse than writing the comment which I wrote.
(This explanation is also quite likely to apply to any past or future comments I write which seem to be insulting in similar fashion.)
But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.
I want to register that I don't believe you that you cannot, if we're using the ordinary meaning of "cannot". I believe that it would be more costly for you, but it seems to me that people are very often able to express content like that in your comment, without being insulting.
I'm tempted to try to rephrase your comment in a non-insulting way, but I would only be able to convey its meaning-to-me, and I predict that this is different enough from its meaning-to-you that you would object on those grounds. However, insofar as you communicated a thing to me, you could have said that thing in a non-insulting way.
These are not unflattering facts about Duncan
Indeed, they are not—or so it would seem. So why would my comment be insulting?
After all, I didn’t write “your stated reason is bizarre”, but “I find your stated reason bizarre”. I didn’t write “it seems like your thinking here is incoherent”, but “I can’t form any coherent model of your thinking here”. I didn’t… etc.
So what makes my comment insulting?
Please note, I am not saying “my comment isn’t insulting, and anyone who finds it so is silly”. It is insulting! And it’s going to stay insulting no matter how you rewrite it, unless you either change what it actually says or so obfuscate the meaning that it’s not possible to tell what it actually says.
The thing I am actually saying—the meaning of the words, the communicated claims—implies unflattering facts about Duncan.[1] There’s no getting around that.
The only defensible recourse, for someone who objects to my comment, is to say that one should simply not say insulting things; and if there are relevant things to say which cannot be said non-insultingly, then they oughtn’t be said… and if anything is lost thereby, well, too bad.
And that would be a consistent point of view, certainly. Bu...
I think it's pretty rough for me to engage with you here, because you seem to be consistently failing to read the things I've written. I did not say it was low-effort. I said that it was possible. Separately, you seem to think that I owe you something that I just definitely do not owe you. For the moment, I don't care whether you think I'm arguing in bad faith; at least I'm reading what you've written.
Here's a potential alternative wording of your previous statement.
Original: (I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.)
New version: I am very confused by your stated reason, and I'm genuinely having trouble seeing things from your point of view. But I would genuinely like to. Here's a version that makes a little more sense to me [give it your best shot]... but here's where that breaks down [explain]. What am I missing?
I claim with very high confidence that this new version is much less insulting (or is not insulting at all). It took me all of 15 seconds to come up with, and I claim that it either conveys the same thing as your original comment (plus added extras), or that the difference is negligible and could be overcome with an ongoing and collegial dialog of a kind that the original, insulting version makes impossible. If you have an explanation for what of value is lost in translation here, I'm listening.
If you care more about not making social attacks than telling the truth, you will get an environment which does not tell the truth when it might be socially inconvenient. And the truth is almost always socially inconvenient to someone.
So if you are a rationalist, i.e. someone who strongly cares about truth-seeking, this is highly undesirable.
Most people are not capable of executing on this obvious truth even when they try hard; the instinct to socially-smooth is too strong. The people who are capable of executing on it are, generally, big-D Disagreeable, and therefore also usually little-d disagreeable and often unpleasant. (I count myself as all three, TBC. I'd guess Said would as well, but won't put words in his mouth.)
I'm sure there is an amount of rudeness which generates more optimization-away-from-truth than it prevents. I'm less sure that this is a level of rudeness achievable in actual human societies. And for whether LW could attain that level of rudeness within five years even if it started pushing for rudeness as normative immediately and never touched the brakes - well, I'm pretty sure it couldn't. You'd need to replace most of the mod team (stereotypically, with New Yorkers, which TBF seems both feasible and plausibly effective) to get that to actually stick, probably, and it'd still be a large ship turning slowly.
A monoculture is generally bad, so having a diversity of permitted conduct is probably a good idea regardless. That's extremely hard to measure, so as a proxy, ensuring there are people representing both extremes who are prolific and part of most important conversations will do well enough.
a rude environment is not only one where people say true things rudely, but also where people say false things rudely
The concern is with requiring the kind of politeness that induces substantive self-censorship. This reduces efficiency of communicating dissenting observations, sometimes drastically. This favors beliefs/arguments that fit the reigning vibe.
The problems with (tolerating) rudeness don't seem as asymmetric; it's a problem across the board, as you say. It's a price to consider for getting rid of the asymmetry of over-the-top substantive-self-censorship-inducing politeness.
You understand that you have a reputation for making comments perceived as social attacks, although you don’t intend them as such.
I have (it would seem) a reputation for making certain sorts of comments, which are of course not intended as “attacks” of any sort (social, personal, etc.), but which are sometimes perceived as such—and which perception, in my view, reflects quite poorly on those who thus perceive said comments.
You don’t care whether or not the other person feels insulted by what you have to say. It’s just not a moral consideration for your commenting behavior.
Certainly I would prefer that things were otherwise. (Isn’t this often the case, for all of us?) But this cannot be a reason to avoid making such comments; to do so would be even more blameworthy, morally speaking, than is the habit on the part of certain interlocutors to take those comments as attacks in the first place. (See also this old comment thread, which deals with the general questions of whether, and how, to alter one’s behavior in response to purported offense experienced by some person.)
...Your aesthetic is that you prefer to accept that what you have to say has an insulting meaning, and to just
Do you mean that … (1) you endorse the need to justify discussion of hypothetical interpretations of statements by showing those interpretations to be related to the statements they interpret, or something like that?
No, absolutely not.
Or (2) that you don’t endorse endless tangents becoming the norm, forgetting about the original statement?
Yeah.
My view is that first it’s important to get clear on what was meant by some claim or statement or what have you. Then we can discuss whatever. (If that “whatever” includes some hypothetical interpretation of the original (ambiguous) claim, which someone in the conversation found interesting—sure, why not.) Or, at the very least, it’s important to get that clarity regardless—the tangent can proceed in parallel, if it’s something the participants wish.
EDIT: More than anything, what I don’t endorse is a norm that says that someone asking “what did you mean by that word/phrase/sentence/etc.?” must provide some interpretation of their own, whether that be a guess at the OP’s meaning, or some hypothetical, or what have you. Just plain asking “what did you mean by that?” should be ok!
Things can be interesting for whatever reason; it doesn’t have to be a standard kind of reason. Prohibiting arbitrary reasons is damaging to the results, in this case I think for no gain.
Totally agreed.
Again, just chiming in, leaving the actual decision up to Ray:
My current take here is indeed that Said's hypothesis, taken fully literally and within your frame, was quite confused and bad.
But also, like, people's frames, especially in the domain of adversarial actions, hugely differ, and I've in the past been surprised by the degree to which some people's frames, despite seeming insane and gaslighty to me at first, turned out to be quite valuable. Most concretely I have in my internal monologue indeed basically fully shifted towards using "lying" and "deception" the way Zack, Benquo and Jessica are using it, because their concept seems to carve reality at its joints much better than my previous concept of lying and deception. This despite me telling them many times that their usage of those terms is quite adversarial and gaslighty.
My current model is that when Said was talking about the preference he ascribes to you, there is a bunch of miscommunication going on, and I probably also have deep disagreements with his underlying model, but I have updated against trying to stamp down on that kind of stuff super hard, even if it sounds quite adversarial to me on first gl...
I think you are mistaken about the process that generated my previous comment; I would have preferred a response that engaged more with what I wrote.
In particular, it looks to me like you think the core questions are "is the hypothesis I quote correct? Is it backed up by the four examples?", and the parent comment looks to me like you wrote it thinking I thought the hypothesis you quote is correct and backed up by the examples. I think my grandparent comment makes clear that I think the hypothesis you quote is not correct and is not backed up by the four examples.
Why does the comment not just say "Duncan is straightforwardly right"? Well, I think we disagree about what the core questions are. If you are interested in engaging with that disagreement, so am I; I don't think it looks like your previous comment.
I don't keep track of people's posting styles and correlate them with their names very well. For most people who post on LW, even if they do it a lot, I have negligible associations beyond "that person sounds vaguely familiar" or "are they [other person] or am I mixing them up?".
I have persistent impressions of both Said and Duncan, though.
I am limited in my ability to look up any specific Said comment or things I've said elsewhere about him because his name tragically shares a spelling with a common English word, but my model of him is strongly positive. I don't think I've ever read a Said comment and thought it was a waste of time, or personally bothersome to me, or sneaky or pushy or anything.
Meanwhile I find Duncan vaguely fascinating like he is a very weird bug which has not, yet, sprayed me personally with defensive bug juice or bitten me with its weird bug pincers. Normally I watch him from a safe distance and marvel at how high a ratio of "incredibly suspicious and hackle-raising" to "not often literally facially wrong in any identifiable ways" he maintains when he writes things. It's not against any rules to be incredibly suspicious and hackle-raising in a pu...
Meanwhile I find Duncan vaguely fascinating like he is a very weird bug
I don't know[1] for sure what purpose this analogy is serving in this comment, and without it the comment would have felt much less like it was trying to hijack me into associating Duncan with something viscerally unpleasant.
My guess is that it's meant to convey something like your internal emotional experience, with regards to Duncan, to readers.
I've tried for a bit to produce a useful response to the top-level comment and mostly failed, but I did want to note that
"Oh, it sort of didn't occur to me that this analogy might've carried a negative connotation, because when I was negatively gossiping about Duncan behind his back with a bunch of other people who also have an overall negative opinion of him, the analogy was popular!"
is a hell of a take. =/
It is only safe for you to have opinions if the other people don't dislike them?
I think you're trying to set up a really mean dynamic where you get to say mean things about me in public, but if I point out anything frowny about that fact you're like "ah, see, I knew that guy was Bad; he's making it Unsafe for me to say rude stuff about him in the public square."
(Where "Unsafe" means, apparently, "he'll respond with any kind of objection at all." Apparently the only dynamic you found acceptable was "I say mean stuff and Duncan just takes it.")
*shrug
I won't respond further, since you clearly don't want a big back-and-forth, but calling people a weird bug and then pretending that doesn't in practice connote disgust is a motte and bailey.
I kind of doubt you care at all, but here for interested bystanders is more information on my stance.
For what it's worth, I had a very similar reaction to yours. Insects and arthropods are a common source of disgust and revulsion, and so comparing anyone to an insect or an arthropod, to me, shows that you're trying to indicate that this person is either disgusting or repulsive.
Poisonous frogs often have bright colors to say "hey don't eat me", but there are also ones that use an "if you don't notice me you won't eat me" strategy. Ex: cane toad, pickerel frog, black-legged poison dart frog.
Welp, guess I shouldn't pick up frogs. Not what I expected to be the main takeaway from this thread but still good to know.
I have not read all the words in this comment section, let alone in all the linked posts, let alone in their comments sections, but/and - it seems to me like there's something wrong with a process that generates SO MANY WORDS from SO MANY PEOPLE and takes up SO MUCH PERSON-TIME for what is essentially two people not getting along. I get that an individual social conflict can be a microcosm of important broader dynamics, and I suspect that Duncan and/or Said might find my "not getting along" summary trivializing, which may even be true, as noted I haven't read all the words - just, still, is this really the best thing for everyone involved to be doing with their time?
This seems like a situation that is likely to end up ballooning into something that takes up a lot of time and energy. So then, it seems worth deciding on an "appetite" up front. Is this worth an additional two hours of time? Six? Sixty? Deciding on that now will help avoid a scenario where (significantly) more time is spent than is desirable.
Here is some information about my relationship with posting essays and comments to LessWrong. I originally wrote it for a different context (in response to a discussion about how many people avoid LW because the comments are too nitpicky/counterproductive) so it's not engaging directly with anything in the OP, but @Raemon mentioned it would be useful to have here.
*
I *do* post on LW, but in a very different way than I think I would ideally. For example, I can imagine a world where I post my thoughts piecemeal pretty much as I have them, where I have a research agenda or a sequence in mind and I post each piece *as* I write it, in the hope that engagement with my writing will inform what I think, do, and write next. Instead, I do a year's worth of work (or more), make a 10-essay sequence, send it through many rounds of editing, and only begin publishing any part of it when I'm completely done, having decided in advance to mostly ignore the comments.
It appears to me that what I write is strongly in line with the vision of LW (as I understand it; my understanding is more an extrapolation of Eliezer's founding essays and the name of the site than a reflection of discussion with current ...
Skimmed all the comments here and wanted to throw in my 2c (while also being unlikely to substantively engage further, take that into account if you're thinking about responding):
Wei Dai had a comment below about how important it is to know whether there’s any criticism or not, but mostly I don’t care about this either because my prior is just that it’s bad whether or not there’s criticism. In other words, I think the only good approach here is to focus on farming the rare good stuff and ignoring the bad stuff (except for the stuff that ends up way overrated, like (IMO) Babble or Simulators, which I think should be called out directly).
But how do you find the rare good stuff amidst all the bad stuff? I tend to do it with a combination of looking at karma, checking the comments to see whether or not there’s good criticism, and finally reading it myself if it passes the previous two filters. But if a potentially good criticism was banned or disincentivized, then that 1) causes me to waste time (since it distorts both signals I rely on), and 2) potentially causes me to incorrectly judge the post as "good" because I fail to notice the flaw myself. So what do you do such that it doesn't matter whether or not there's criticism?
Thanks for weighing in! Fwiw I've been skimming but not particularly focused on the litigation of the current dispute, and instead focusing on broader patterns. (I think some amount of litigation of the object level was worth doing but we're past the point where I expect marginal efforts there to help)
One of the things that's most cruxy to me is what people who contribute a lot of top content* feel about the broader patterns, so, I appreciate you chiming in here.
*roughly operationalized as "write stuff that ends up in the top 20 or top 50 of the annual review"
This set of issues sounds like a huge time sink for all involved.
My own plan is to upvote this post, hide it from my frontpage, and then not follow any further discussion about it closely, or possibly at all, at least until the conclusions are posted. I'd encourage anyone else who is able, and thinks they're at risk of getting sucked in, to do the same.
I trust that Raemon and the rest of the mod team will make good decisions regardless of my (or anyone else's) input on the matter, and I'm very grateful that they are willing to put in the time to do so, so that others may be spared.
My only advice to them is not to put too much of their own time in, to the detriment of their other priorities and sanity. (This too is a decision I trust them to make on their own, but I think it's worth repeating publicly.)
A tiny bit of object-level discourse: I like Duncan's posts, and would be sad to see fewer of them in the future. I mostly don't pay attention to comments by anyone else mentioned here, including Duncan.
Said's way of asking questions, and the uncharitable assumptions he sometimes makes, are among the most off-putting things I associate with LW. I don't find it okay myself, but it seems like the sort of thing that's hard to pin down with legible rules. Like, if he were to ask me "what is it that you don't like, exactly" – I feel like it's hard to pin down.
Edit: So, on the topic of moderation policy, seems like the option that individual users can ban specific other users if they have trouble dealing with their style or just if conflicts happen, that seems like a good solution to me. And I don't think it should reflect poorly on the banner (unless they ban an extraordinary number of other users).
Update May 2024: It's been more than a year since the above comment and in that year, I remember I liked a couple of comments by Said and I don't remember any particular ones that I thought exhibited the above pattern.
Okay, overall outline of thoughts on my mind here:
On blocking users from commenting
I still endorse authors being able to block other users (whether for principled reasons, or just "this user is annoying"). I think a) it's actually really important for authors that the site be fun to use, b) there are a lot of users who are dealbreakingly annoying to some people but not others, and banning them from the whole site would be overkill, and c) authors aren't obligated to lend their own karma/reputation to give space to other people's content. If an author doesn't want your comments on his post, whether for defensible reasons or not, I think it's an okay answer that those commenters make their own post or shortform arguing the point elsewhere.
Yes, there are some trivial inconveniences to posting that criticism. I do track that in the cost. But I think that is outweighed by the effect on authors being motivated to post.
That all said...
Blocking users on "norm-setting posts"
I think it's more worrisome to block users on posts that are making major momentum towards changing site norms/culture. I don't think the censorship effects are that strong or distorting in most c...
First, some background context. When LW2.0 was first launched, the mod team had several back-and-forth with Said over complaints about his commenting style. He was (and I think still is) the most-complained-about LW user. We considered banning him.
Ultimately we told him this:
...As Eliezer is wont to say, things are often bad because the way in which they are bad is a Nash equilibrium. If I attempt to apply it here, it suggests we need both a great generative and a great evaluative process before the standards problem is solved, at the same time as the actually-having-a-community-who-likes-to-contribute-thoughtful-and-effortful-essays-about-important-topics problem is solved, and only having one solved does not solve the problem.
I, Oli and Ray will build a better evaluative process for this online community, that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that's fun and not aggressive. While we have an incentive toward better ideas (weighted karma and curation), it is far from a finished system. We have to build this part as well as the
I think some additional relevant context is this discussion from three years ago, which I think was 1) an example of Said asking for definitions without doing any interpretive labor, 2) appreciated by some commenters (including the post author, me), and 3) reacted to strongly by people who expected it to go poorly, including some mods. I can't quickly find any summaries we posted after the fact.
A way this all feels relevant to current disputes with Duncan is that the thing that is frustrating about Said is not any individual comment, but an overall pattern that doesn't emerge as extremely costly until you see the whole thing. (i.e. if there's a spectrum of how bad behavior is, from 0-10, and things that are a "3" are considered bad enough to punish, someone who's doing things that are bad at a "2.5" or "2.9" level doesn't quite feel worth reacting to. But if someone does them a lot, it actually adds up to being pretty bad.)
If you point this out, people mostly shrug and move on with their day. So, to point it out in a way that people actually listen to, you have to do something that looks disproportionate if you're just paying attention to the current situation. And, also, the people who care strongly enough to see that through tend to be in an extra-triggered/frustrated state, which means they're not at their best when they're doing it.
I think Duncan's response looks very out-of-proportion. I think Duncan's response is out of proportion to some degree (see Vaniver thread for some reasons why. I have some more reasons I ...
Personally, the thing I think should change with Said is that we need more of him, preferably a dozen more people doing the same thing. If there were a competing site run according to Said's norms, it would be much better for pursuing the art of rationality than modern LessWrong is; disagreeable challenges to question-framing and social moves are desperately necessary to keep discussion norms truth-tracking rather than convenience-tracking.
But this is not an argument I expect to be able to win without actually trying the experiment. And even then I would expect at least five years would be required to get unambiguous results.
I am not sure what you mean, didn't Ray respond on the same day that you tagged him?
I haven't read the details of all of the threads, but I interpreted your comment here as "the mod team ignored your call for clarification" as opposed to "the mod team did respond to your call for clarification basically immediately, but there was some <unspecified issue> with it".
EDIT: to elaborate, Ray actually put quite a bit of effort into a back and forth with Said, and eventually asked him to stop commenting/put a pause on the whole conversation. But there wasn't any "this thing that Said was doing before I showed up is not clearing the bar for LW."
Yeah, I think Ray is currently working on figuring out what the actual norms here should be, which I do think just takes a while. Ideally we would have a moderation philosophy pinned down in which the judgement here is obvious, but as moderation disputes go, a common pattern is that if people disagree with a moderation philosophy, they tend to go right up to the edge of the clear rules you have established (in a way I don't really think is inherently bad; in domains where I disagree with the law I also tend to go right up to the edge of what it allows).
This seems like one of those cases, where my sense is there is a bunch of relatively deep disagreement about character and spirit of LessWrong, and people are going right up to the edge of what's allowed, and disputing those edge-cases almost always tends to require multiple days of thought. My model of you thinks that things were pretty clearly over your line...
That does not seem like an accurate summary of this comment?
...My current take is "this thread seems pretty bad overall and I wish everyone would stop, but I don't have an easy succinct articulation of why and what the overall moderation policy is for things like this." I'm trying to mostly focus on actually resolving a giant backlog of new users who need to be reviewed while thinking about our new policies, but expect to respond to this sometime in the next few days.
What I will say immediately to @Said Achmiz is "This point of this thread is not to prosecute your specific complaints about Duncan. Duncan banning you is the current moderation policy working as intended. If you want to argue about that, you should be directing your arguments at the LessWrong team, and you should be trying to identify and address our cruxes."
I have more to say about this but it gets into an effortcomment that I want to allocate more time/attention to.
I'd note: I do think it's an okay time to open up Said's longstanding disagreements with LW moderation policy, but, like, all the previous arguments still apply. Said's comments so far haven't added new information we didn't already consider.
I th
Yeah, as you were typing this I was also typing an edit. My apologies, Ray, for the off-the-cuff wrong summary.
A lot of digital ink has been spilled, and if I were a random commenter I wouldn't think it that valuable to dig into my object level reasoning. But, since I'm the one making the final calls here it seemed important to lay out how I think about the broader patterns in Said's behavior.
I'll start by clarifying my own take on the "what's up with Said and asking for examples?"
I think it is (all else being equal) basically always fine to ask for examples. I think most posts could be improved by having them, and I agree that the process of thinking about concrete examples is useful for sanity-checking that your idea is real at all. And there is something good and rationalistly wholesome about not seeing it as an attack, but just as "hey, this is a useful thing to consider" (whether or not Said is consistent about this interpretation).
My take on "what the problem here is" is not the part where Said asks for examples, but that when Said shows up in a particular kind of thread, I have a pretty high expectation that there will be a resulting long conversation that won't actually clarify anything important.
The "particular kind of thread" is a cluster of things surrounding introspection, inter...
My take on "what the problem here is" is not the part where Said asks for examples, but that when Said shows up in a particular kind of thread, I have a pretty high expectation that there will be a resulting long conversation that won't actually clarify anything important.
Agreed. It reminds me of this excerpt from HPMoR:
..."You should have deduced it yourself, Mr. Potter," Professor Quirrell said mildly. "You must learn to blur your vision until you can see the forest obscured by the trees. Anyone who heard the stories about you, and who did not know that you were the mysterious Boy-Who-Lived, could easily deduce your ownership of an invisibility cloak. Step back from these events, blur away their details, and what do we observe? There was a great rivalry between students, and their competition ended in a perfect tie. That sort of thing only happens in stories, Mr. Potter, and there is one person in this school who thinks in stories. There was a strange and complicated plot, which you should have realized was uncharacteristic of the young Slytherin you faced. But there is a person in this school who deals in plots that elaborate, and his name is not Zabini. And I did warn you that the
I feel fine doing this because I feel comfortable just ignoring him after he’s said those initial things, when a normal/common social script would consider that somewhat rude. But this requires a significant amount of backbone.
I still wish that LW would try my idea for solving this (and related) problem(s), but it doesn't seem like that's ever going to happen. (I've tried to remind LW admins about my feature request over the years, but don't think I've ever seen an admin say why it's not worth trying.) As an alternative, I've seen people suggest that it's fine to ignore comments unless they're upvoted. That makes sense to me (as a second best solution). What about making that a site-wide norm, i.e., making it explicit that we don't or shouldn't consider it rude or otherwise norm-violating to ignore comments unless they've been upvoted above some specific karma threshold?
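As an illustration only, here is a minimal sketch of what encoding such a norm as a site feature might look like; the names and the threshold value below are hypothetical, not an actual LessWrong API or setting.

```typescript
// Hypothetical sketch of the proposed "okay to ignore below a karma
// threshold" convention. All names and values here are illustrative.

const KARMA_THRESHOLD = 5; // example value; the real threshold would be a site-wide setting

interface LWComment {
  id: string;
  authorId: string;
  karma: number;
}

// Under the proposed norm, ignoring a comment is explicitly fine unless it
// has been upvoted above the threshold; only then is a reply (or at least an
// acknowledgement) socially expected of the author.
function responseExpected(comment: LWComment, threshold: number = KARMA_THRESHOLD): boolean {
  return comment.karma >= threshold;
}

// Example: a UI could badge comments that have crossed the threshold.
const example: LWComment = { id: "c1", authorId: "u42", karma: 7 };
console.log(responseExpected(example)); // true
```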
I’d at least want to see a second established user asking for it before I considered prioritizing it more.
I doubt you'll ever see this, because when you're an established / high status member, ignoring other people feels pretty natural and right, and few people ignore you so you don't notice any problems. I made the request back when I had lower status on this forum. I got ignored by others way more than I do now, and ignored others way less than I do now. (I had higher motivation to "prove" myself to my critics and the audience.)
If I hadn't written down my request back then, in all likelihood I would have forgotten my old perspective and wouldn't be talking about this today.
“Getting annoying comments that miss the point” is one of the most cited things people dislike about LW, and forcing authors to engage with them seems like it’d exacerbate it.)
In my original feature request, I had a couple of "agreement statuses" that require only minimal engagement, like "I don’t understand this. I give up." and "I disagree, but don’t want to bother writing out why." We could easily add more, like "I think further engagement won't be productive." or "This isn't material to my main poin...
My actual answer is “To varying degrees, some more than others.” I definitely do not claim any of them have reached the point of ‘we have a thing working well enough we could persuade an arbitrary skeptic our thing is real and important.’ (i.e. a reliable training program that demonstrably improves quantifiable real world successes).
An arbitrary skeptic is perhaps too high a bar, but what about a reasonable skeptic? I think that, from that perspective (and especially given the “outside view” on similar things attempted in the past), if you don’t have “a reliable training program that demonstrably improves quantifiable real world successes”, you basically just don’t have anything. If someone asks you “do you have anything to show for all of this”, and all you’ve got is what you’ve got, then… well, I think that I’m not showing any even slightly unreasonable skepticism, here.
But I think this is a process you should naturally expect to take 4-20 years.
Well, CFAR was founded 11 years ago. That’s well within the “4–20” range. Are you saying that it’s still too early to see clear results?
Is there any reason to believe that there will be anything like “a reliable training program th...
I don't have especially strong opinions about what to do here. But, for the curious, I've had run-ins with both Said and Duncan on LW and elsewhere, so perhaps this is useful background information for folks outside the moderation team looking at this who aren't already aware (I know the mods are aware of basically everything I have to say here because I've talked to some of them about these situations).
Also, before I say anything else, I've not had extensive bad interactions with either Said or Duncan recently. Maybe that's because I've been writing a book instead of making posts of the sort I used to make? Either way, this is a bit historical and is based on interactions from 1+ years ago.
I've faced the brunt of Said's comments before. I've spent a lot of very long threads discussing things with him and finally gave up because it felt like talking to a brick wall. I have a soft ban on Said on my posts and comments, where I've committed to only reply to him once and not reply to his replies to me, since it seems to go in circles and not get anywhere. I often feel frustrated with Said because I feel like I've put in a lot of work in a conversation to just have him ignore what I said, so th...
Sorry for the lack of links above.
I affirm the accuracy of Gordon's summary of our interactions; it feels fair and like a reasonable view on them.
Preamble: I like transparency
I think it is much better when the LessWrong userbase knows more about how site moderation happens, i.e. who does it, what the tools are, what actions and decisions are, who’s responsible for what, how they think about things, etc. While being careful to say that LessWrong is not a democracy and we will not care equally about the judgments of everyone on the site just because they're an active member[1], I think transparency is valuable here for at least these overlapping reasons:
LessWrong team members mostly speak for themselves
I think it's important that LessWrong team members don't have to pretend to all agree with each other some aggregate offic...
This is not directly related to the current situation, but I think is in part responsible for it.
Said claims that it is impossible to guess what someone might mean by something they wrote, if for some reason the reader decided that the writer likely didn't intend the straightforward interpretation parsed by the reader. It's somewhat ambiguous to me whether Said thinks that this is impossible for him, specifically, or impossible for people (either most or all).
Relevant part of the first comment making this point:
(B) Alice meant something other than what it seems like she wrote.
What might that be? Who knows. I could try to guess what Alice meant. However, that is impossible. So I won’t try. If Alice didn’t mean the thing that it seems, on a straightforward reading, like she meant, then what she actually meant could be anything at all.
Relevant part of the second comment:
...“Impossible” in a social context means “basically never happens, and if it does happen then it is probably by accident” (rather than “the laws of physics forbid it!”). Also, it is, of course, possible to guess what someone means by sheer dumb luck—picking an interpretation at random out of some pool of possibilit
I note for any other readers that Said is evincing a confusion somewhere in the neighborhood of the Second Guideline and the typical mind fallacy.
In particular, it's false that I "write and act as though I did hold that belief," in the sense that a supermajority of those polled would check "true" on a true-false question about it, after reading through (say) two of my essays and a couple dozen of my comments.
("That belief" = "Duncan has, I think, made it very clear that a comment that just says 'what are some examples of this claim?' is, in his view, unacceptable.")
It's pretty obvious that it seems to Said that I write and act in this way. But one of the skills of a competent rationalist is noticing that [how things seem to me] might not be [how they actually are] or [how they seem to others].
Said, in my experience, is not versed in this skill, and does not, as a matter of habit, notice "ah, here I'm stating a thing about my interpretation as if it's fact, or as if it's nearly-universal among others."
e.g. an unequivocally true statement would have been something like "But that still leaves the question of why you write and act in a way that indicates to me that you do hold tha...
Let's see if I can give constructive advice to the parties in question.
First, I'll address Said, on the subject of asking for examples (and this will inform some later commentary to Duncan):
It might be helpful, when asking for examples, to include a few words about why, or what kind of examples you're looking for, or other information. This serves multiple purposes: (a) it can help the author choose a good example and good explanation [and avoid wasteful "No, not that kind of example; I wanted [...]" exchanges]; (b) it signals cognitive effort on your part and helps to distinguish you from a "low-effort obtuse fool or nitpicker"; (c) it gives the author feedback about how their post landed with real users [based on you as a sample, plus the votes on your comment suggesting how the rest of the audience feels]; (d) a sort of proof-of-work assures other people that you care at least some amount about getting their reply.
Examples of being a little more specific:
I find Said Achmiz to be vaguely off-putting, abrasive. And yet, I find it difficult not to read his comments when I see them. Even so, reading Said's opinions has ~always left me better off than I was before. Thinking about it, the vibe of Said's posts reminds me of Hanson's, which can only be an endorsement in my view.
Some of Ruby's high-level broader thoughts about Moderation Philosophy, what LessWrong ought to be, kinds of behavior that are okay, what to do with users who are perhaps behaving badly, etc.
Politeness/Cooperativeness/Etc
When I first joined the LessWrong team four years ago, I came in with the feeling that politeness and civility were very important and probably standards of those should be upheld on LessWrong. I wrote Combat vs Nurture and felt that LessWrong broadly ought to be a bit more Nurture-y.
Standard arguments in favor of being nicer/friendlier/etc.:
And further many will claim:
Some counterclaims are:
I'm reminded of a related point here around banning people. When you ban a person, you do not just have the consequence of:
You also:
(Which isn't to say these aren't mirrored for not banning someone. That also alienates people who think the potentially banned behavior is bad for them, people who think that's not acceptable in general, etc.)
All this to say, let's say there's some behavior that's valuable in moderation, e.g. "being Socratic and poking a bit in a way that's a bit uncomfortable" but is bad if you have too much of it. There's a cost to banning the person who does it too much in that it discourages and threatens many of the people who were doing it a good amount. I think that ought to factor into moderation decisions.
Hello Ruby,
I did read some of the other comments here, and also the article you linked about butterflies (which I enjoyed):
Since I am a relatively new member, I have ideas, but not that much experience with regard to the LW site, technically or historically. I do have experience from various other communities and social arenas, and maybe something from those can be applicable here as well. I myself experienced getting down-voted and 'attacked' the moment I let out some butterflies here in the community, which even made me leave feeling hurt and disappointed. Reading Duncan_Sabien's post Killing Socrates, and also seeing another person who wrote about MBTI having their butterfly squished, made me realize I wasn't really evaluating my experience clearly, and I decided to re-activate my account and try to engage a bit more.
The situation with Duncan_Sabien and Said, to me, is similar to how people value PvP (Player versus Player) and PvE (Player versus Environment) in games. Both are useful, but opening up for both all the time can be a bit taxing for everyone involved.
Having a dedicated space, where arguments are more sharp and less catering to emotion or other considerations, is very good....
I think the precious thing lost in the Nurture cluster is not Combat, but tolerance for or even encouragement of unapologetic and uncompromising dissent. This is straightforwardly good if it can be instantiated without regularly spawning infinite threads of back-and-forth arguing (unapologetic and uncompromising).
It should be convenient for people who don't want to participate in that to opt out, and the details of this seem to be the most challenging issue.
Technological Solutions
I find myself increasingly in favor of tech solutions to moderation problems. It can be hard for users to change their behavior based on a warning, but perhaps you can do better than just ban them – instead shape their incentives and save them from their own worst impulses.
Only recently has the team been playing with rate limits as an alternative to bans, which can be used to strongly encourage users to improve their content (if by no other mechanism than incentivizing them to invest more time into fewer posts and comments). I don't think it should be overly hard to detect nascent Demon Threads and then intervene. Slowing them down both gives the participants time to reflect more and handle the emotions that are coming up, and gives the mod team more time to react.
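As a rough illustration of the kind of tooling being described here: the detection rule, the names, and the numbers below are all assumptions made for the sketch, not the team's actual implementation.

```typescript
// Illustrative sketch only; the heuristic and every name here are guesses,
// not LessWrong's actual moderation tooling.

interface ThreadComment {
  authorId: string;
  postedAt: Date;
  wordCount: number;
}

const SLOW_MODE_DELAY_MS = 60 * 60 * 1000; // e.g. one comment per participant per hour

// Flag a thread as a nascent "Demon Thread" if the last several comments are
// two participants trading long replies back and forth at a fast pace.
function isNascentDemonThread(thread: ThreadComment[]): boolean {
  if (thread.length < 6) return false;
  const recent = thread.slice(-6);
  const participants = new Set(recent.map((c) => c.authorId));
  const spanMs =
    recent[recent.length - 1].postedAt.getTime() - recent[0].postedAt.getTime();
  const avgWords =
    recent.reduce((sum, c) => sum + c.wordCount, 0) / recent.length;
  return participants.size === 2 && spanMs < 24 * 60 * 60 * 1000 && avgWords > 300;
}

// When a thread is flagged, apply a per-participant rate limit instead of a
// ban: each participant may post again only after a cooling-off delay.
function nextAllowedPostTime(lastPostAt: Date): Date {
  return new Date(lastPostAt.getTime() + SLOW_MODE_DELAY_MS);
}
```

The point of the sketch is just that the intervention is a delay rather than a ban: it is reversible, and it buys time for both the participants and the mods.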
In general, I'd like to build better tools for noticing places that would benefit from intervention, and have more ready trigger-action plans for making them go better. In this recent case, we were aware of the exchanges but didn't have a go-to thing to do. Some of this was not being sure of our policies regarding certain behaviors, and hashing those out happens much more slowly than the thread proceeds. In my ideal world, we'...
This whole drama is pretty TL;DR, but based on existing vibes I'd rather the rules lean (if a lean is necessary) in favor of overly disagreeable gadflies than overly sensitive people who try to manipulate the conversation by acting wounded.
[ I don't have strong opinions on the actual individuals or the posts they've made and/or objected to. I've both enjoyed and been annoyed by things that each of them have said, but nothing has triggered my "bad faith/useless/ignore" bit on either of them. I understand that I'm more thick-skinned than many, and I care less about this particular avenue of social reinforcement than many, so I will understand if others fall on the "something must be done" side of the line, even though I don't. I'm mostly going to ask structural questions, rather than exploring communication or behavioral preferences/requirements. ]
Is there anything we can learn from the votes (especially from people who are neither the commenter nor the poster) on the possibly-objectionable threads and posts? Is this something moderation needs to address, or is voting sufficient (possibly with some tweaks to algorithm)?
Echoing Adam's point, what is the budget that admins have for time spent policing subtleties of fairly verbose engagement? None of the norms under discussion seem automatable, nor even cheap to detect/adjudicate. This isn't spam, this isn't short, obviously low-value co...
400 comments... :)
When I read Killing Socrates, I had no idea it alluded to Said in any way. The point I took from it was that it is important to treat both commenters and authors as responsible for the building process.
My limited point of view on Duncan_Sabien and Said is the following:
I really loved the above post by Duncan_Sabien. It was amazing, and on my comment on that post, they answered this. It felt reassuring and caring, and fitting for my comment.
I did really enjoy my brief interaction with Said as well. I wrote an idea, and they answered with a valid, but much more solid, critique of a specific point, shooting way above idea-level. That made me first confused, then irritated, then angry, until I decided to just go for what I truly wanted to answer them, which was:
Yes, got it. Thanks for taking the time.
Which, I mean, looks pretty dismissive. Said, however, answered this:
...
Hah. For what it’s worth, I do, actually, agree with the overall thrust of your suggestion. I have made similar suggestions myself, in the past… unfortunately, my understanding is that the LW team basically don’t think that anything like this is workable. I don’t think I agree with their reasoning, but they
My model of the problem boils down to a few basic factors:
General recommendations/thoughts:
I think that the problem could be alleviated with the following combination of site capabilities:
My view is that the leng...
My two cents:
I suspect that Said is really bad at predicting which of his comments will be perceived as rude.
If I had to give him a rule of thumb, it would probably be like this: "Those that are very short, only one or two lines, but demand an answer that requires a lot of thinking or writing. That feels like entitlement to make others spend orders of magnitude more effort than you did. Even if from the Spock-rational perspective this makes perfect sense (asking someone to provide specific examples to their theory can benefit everyone who finds the theory interesting; and why write a long comment when a short one says the same thing), the feeling of rudeness is still there, especially if you do this repeatedly enough that people associate this thing with your name. Even if it feels inefficient, try to expand your comments to at least five lines. For example, provide your own best guess, or a counter-example. Showing the effort is the thing that matters, but too short length is a proxy for low effort." This sounds susceptible to Goodharting, but who knows...
When I think about Duncan, my first association is the "punch bug" thing (the article, the discussion on LW, Duncan's complaint...
I suspect that Said is really bad at predicting which of his comments will be perceived as rude.
If I had to give him a rule of thumb, it would probably be like this: “Those that are very short, only one or two lines, but demand an answer that requires a lot of thinking or writing. That feels like entitlement to make others spend orders of magnitude more effort than you did. Even if from the Spock-rational perspective this makes perfect sense (asking someone to provide specific examples to their theory can benefit everyone who finds the theory interesting; and why write a long comment when a short one says the same thing), the feeling of rudeness is still there, especially if you do this repeatedly enough that people associate this thing with your name. Even if it feels inefficient, try to expand your comments to at least five lines. For example, provide your own best guess, or a counter-example. Showing the effort is the thing that matters, but too short length is a proxy for low effort.” This sounds susceptible to Goodharting, but who knows...
Why waste time say lot word, when few word do trick?
Look, we covered this already. We covered the “effort” part, we covered the “Goodhart...
I also tend to write concisely. A trick I often use is writing statements instead of questions. I feel statements are less imposing, since they lack the same level of implicit demand that they be responded to.
"Perhaps you might point to some examples of how it’s best applied?" ⇒ "I'd be curious to read some examples of how it’s best applied."
By changing from a question to a statement, the request for information is transferred from a single person [me] to anyone reading the comment thread. This results in a diffusion of responsibility, which reduces the implicit imposition placed on the original parent.
Another advantage of using statements instead of questions is that they tend to direct me toward positive claims, instead of just making demands for rigor. This avoids some of the more annoyingly asymmetric aspects of Socratic dialogue.
Clearly, we have different preferences for what a good comment should look like. I am curious, is there a website where your preferred style is the norm? I would like to see how it works in practice.
(I realize that my request may not make sense; websites have different styles of comments. But if there is a website that feels more compatible with your preferences, I'd like to update my model.)
It feels like an argument between a couple where person A says "You don't love me, you never tell me 'I love you' when I say it to you." and person B responds "What do you mean I don't love you? I make you breakfast every morning even though I hate waking up early!". If both parties insist that their love language is the only valid way of showing love, there is no way for this conflict to be addressed.
Maybe person B believes actions speak louder than words and that saying "I love you" is pointless because people can say that even when they don't mean it. And perhaps person B believes that that is the ideal way the world works, where everyone is judged purely based on their actions and 'meaningless' words are omitted, because it removes a layer of obfuscation. But the thing is, the words are meaningless to person B; they are not meaningless to person A. It doesn't matter whether or not the words should be meaningful to person A. Person A as they are right now has a need to hear that verbal affirmation; person A genuinely has a different experience when they hear those words; it's just the way person A (and many people) are wired.
If you want to have that rela...
Still trying to figure out/articulate the differences between the two frames, because it feels like people are talking past each other. I'm not confident and this is imprecise, but it's what I have so far:
Said-like frame (truth seeking as a primarily individual endeavor)
It seems that Duncan has deactivated his account. https://www.lesswrong.com/users/duncan_sabien?mention=user
I have a very strong bias about the actors involved, so instead I'll say:
Perhaps LessWrong 2.0 was a mistake and the site should have been left to go read only.
My recollection was that the hope was to get a diverse diaspora to post in one spot again. Instead of people posting on their own blogs and tumblrs, the intention was to shove everyone back into one room. But with a diverse diaspora, you can have local norms for a cluster of people. Now that everyone is being crammed into one site, there is an incentive to fight over global norms and attempt to enforce them on others.
Hmm. Looks like I was (inadvertently) one of the actors in this whole thing. Unintended and unforeseen. Three thoughts.
(1) At the risk of sounding like a broken record, I just wanna say thanks again to the moderation team and everyone who participates here. I think oftentimes the "behind the scenes coordination work" doesn't get noticed during all the good times and not enough credit is given. I just like to notice it and say it outright. For instance, I went to the Seattle ACX meetup yesterday, which I saw on here (LW), since I check ACX less frequentl...
So this is the fourth time I am trying to write a comment. This comment is far from ideal, but I feel like I did the best that my current skill in writing in English, and in understanding such situations, allows.
1. I find 90% of the practical problems to be Drama: as in, long, repetitive, useless arguments. If it was Facebook and Duncan blocked Said, and then proceeded to block anyone who was too far out of line by Duncan-standards, it would have solved 90% of Duncan-related problems. If he had given up already on making LW his kind of garden, it wo...
One technical solution that occurs to me is to allow explicitly marking a post as half-baked, and therefore only open to criticism that comes along with substantial effort towards improving the post, or fully-baked and open to any criticism. However, I suspect that Duncan won't like this idea, because [edit: I suspect that] he wants to maintain a motte-and-bailey where his posts are half-baked when someone criticizes them but fully-baked when it's time to apportion status.
My current model of this is that the right time to really dig into posts is actually the annual review.
I've been quite sad that Said hasn't been participating much in the annual review, since I do feel like his poking is a pretty good fit for the kind of criticism that I was hoping would come up there, and the whole point of that process is to have a step of "ok, but like, do these ideas actually check out" before something could potentially become canonized.
My apologies! I regret that I’ve mostly not taken part in the annual review. To a large extent this is due to a combination of two things:
The available time I have to comment on Less Wrong (or do anything similar) comes and goes depending on how busy I am with other things; and
The annual review is… rather overwhelming, frankly, since it asks for attention to many posts in a relatively short time.
Also, I don’t have much to say about many (perhaps even most?) posts on Less Wrong. There’s quite a bit of alignment discussion and similar stuff which I simply am not qualified to weigh in on.
Finally, most discussion of a post tends to take place close in time to when it’s first published. To the extent that I tend to find it useful or interesting to comment on any given post, active discussion of it tends to be a major factor in my so finding it. (Indeed, the discussion in the comments is sometimes at least as useful, or even more useful, than the post itself!)
I wish I could promise that I’ll be more active in the annual review process, but that wouldn’t be a fair promise to make. I will say that I hope you don’t intend to shunt all critical discussion into that process; I think that would be quite unfortunate.
Habryka believes (and I agree) that the trade-offs of Said’s style are more suited to the review than to daily commenting.
I think that this is diametrically wrong.
In the field of usability engineering, there are two kinds of usability evaluations: formative and summative.
Formative evaluations are done as early as possible. Not just “before the product is shipped”, but before it’s in beta, or in alpha, or in pre-alpha; before there’s any code—as soon as there’s anything at all that you can show to users (even paper prototypes), or apply heuristic analysis to, you start doing formative evaluations. Then you keep doing them, on each new prototype, on each new feature, continuously—and the results of these evaluations should inform design and implementation decisions at each step. Sometimes (indeed, often) a formative evaluation will reveal that you’re going down the wrong path, and need to throw out a bunch of work and start over; or the evaluation will reveal some deep conceptual or practical problem, which may require substantial re-thinking and re-planning. That’s the point of doing formative evaluations; you want to find out about these problems as soon as possible, not after you’ve ...
The time for figuring out whether the ideas or claims in a post are even coherent, or falsifiable, or whether readers even agree on what the post is saying, is immediately.
Immediately—before an idea is absorbed into the local culture, before it becomes the foundation of a dozen more posts that build on it as an assumption, before it balloons into a whole “sequence”—when there’s still time to say “oops” with minimal cost, to course-correct, to notice important caveats or important implications, to avoid pitfalls of terminology, or (in some cases) to throw the whole thing out, shrug, and say “ah well, back to the drawing board”.
To only start doing all of this many months later is way, way too late.
We have to distinguish between whether comment X is a useful formative evaluation and whether formative evaluations in general are useful, but I do agree with Said that LessWrong can benefit from improved formative evaluations.
I have written some fairly popular LessWrong reviews, and one of the things I've uncovered is that some of the most memorable and persuasive evidence underpinning key ideas is much weaker and more ambiguous than I thought it was when I originally read the post. At LessWrong, we'r...
I think the crux of our disagreement is that you seem to think there's this sort of latent potential for people to overcome their feelings of insult and social attack, and that even low-but-nonzero contributions to the discussion have positive value.
My view is this:
I know that you have definitely contributed some comments (and posts too, in the past) where clearly a substantial number of peopl...
Thank you for the kind words.
However, I’m afraid I disagree with your view. Taking your points in reverse order of importance:
While ideally such comments could and would be ignored or blocked by the people who see them as having such low value, it is about as hard to do this as it is to not feel insulted.
This I find to be a basically irrelevant point. If someone is so thin-skinned that they can’t bear even to ignore/block things they consider to be of low value, but rather find themselves compelled to read them, and then get angry, then that person should perhaps consider avoiding, like… the Internet. In general. This is simply a pathetic sort of complaint.
Now, don’t misunderstand me: if you (I mean the general “you” here, not you in particular) want to engage with what you see as a low-value comment, because you think that’s a productive use of your time and effort, well, by all means—who am I to tell you otherwise? If you feel that here is a person being WRONG on the Internet, and you simply must explain to them how WRONG they are, so that all and sundry can see that they are unacceptably and shamefully WRONG—godspeed, I say. Such things can be both valuable and entertaining...
OK, first of all, let me say that this is an example of Said done well - I really like this comment a lot.
I think most of our disagreement flows from fundamentally different perspectives on how bad it is to make people feel insulted or belittled. In my view, it's easy to hurt people's feelings, that outcome is very destructive, and it's natural for people to make suboptimal choices in reacting to those hurt feelings, especially when the other person knows full well that they routinely provoke that response and choose to do it anyway.
Insulting and harsh posts can still be net valuable (as some of yours are, the majority of Eliezer's, and ~all of the harsh critiques of Gwern's that I've read), but they have to be quite substantial to overcome the cost of harshness. And the positive value has to be high in absolute terms, not just per word: it's very easy to deliver a huge absolute magnitude of harshness ("f*** you!") in very few words, but much harder to provide an equally large total quantity of value in the same word count.
I know from our previous comment thread that you just don't think about insulting commen...
I actually do think it is the crux, because you seem to be rearticulating the point of view that I was ascribing to you.
You think that:
And I am saying:
Sounds pretty cruxy to me.
Once again you miss a (the?) key point.
“What are some examples?” does not constitute “calling out a flaw”—unless there should be examples but aren’t. Otherwise, it’s an innocuous question, and a helpful prompt.
“What are some examples?” therefore will not be perceived as insulting—except in precisely those cases where any perceived insult is the author’s fault.
Of course, I also totally disagree with this:
the value of calling out flaws with brief remarks is small
Calling out flaws with brief remarks is not only good (because calling out flaws is good), but (insofar as it’s correct, precise, etc.), it’s much better than calling out flaws with long comments. It is not always possible, mind you! Condensing a cogent criticism into a brief remark is no small feat. But where possible, it ought to be praised and rewarded.
And I want to note a key disagreement with your construal of this part:
Or if they are, it’s the other person’s fault for being thin-skinned being insulting while you go about it
Things would be different if what I were advocating was something like “if a post is bad, in your view, then it’s ok to say ‘as would be obvious to anyone with half a brain, your post is wrong...
Mm, I still think my original articulation of our crux is fine.
Here, you're mostly making semantic quibbles or just disagreeing with my POV, rather than saying I identified the wrong crux.
However, we established the distinction between an insulting comment (i.e. a comment the target is likely to feel insulted by) and a deliberate insult (i.e. a comment primarily intended to provoke feelings of insult) in a whole separate thread, which most people won't see here. It is "insulting comment" that I meant in the above, and I will update the articulation to make that more clear.
On reflection, I do think both Duncan and Said are demonstrating a significant amount of hair-splitting and less consistent, clear communication than they seem to think. That's not necessarily bad in and of itself - LW can be a place for making fine distinctions and working out unclear thoughts, when there's something important there.
They really only become problematic when they're used as the basis for a callout and as fuel for an endless escalation spiral.
When I think about this situation from both Duncan's and Said's points of view, to the best of my ability, I understand why they'd be angry/frustrated/whatever, and how the search for reasons and rebuttals has escalated to the point where the very human and ordinary flaws of inconsistency and hair-splitting can seem like huge failings.
At this point, I really have lost the ability and interest to track the rounds and rounds of prosecutorial hair-splitting across multiple comment threads. It was never fun, it's not enlightening, and I don't think it's really the central issue at stake. It's more of a bitch eating crackers scenario at this point.
I made an effort to understand Said's point of view, and whatever his qualms with how I've...
On reflection, I do think both Duncan and Said are demonstrating a significant amount of hair-splitting and less consistent, clear communication than they seem to think.
Communication is difficult; communication when subtleties must be conveyed, while there is interpersonal conflict taking place, much more difficult.
I don’t imagine that I have, in every comment I’ve written over the past day, or the past week (or month, or year, or decade), succeeded perfectly in getting my point across to all readers. I’ve tried to be clear and precise, as I always do; sometimes I succeed excellently, sometimes less so. If you say “Said, in that there comment you did not make your meaning very clear”, I think that’s a plausible criticism a priori, and certainly a fair one in some actual cases.
This is, to a greater or lesser degree, true of everyone. I think it is true of me less than of the average person—that is, I think that my writing tends to be clearer than most people’s. (Of course anyone is free to disagree; this sort of holistic judgment isn’t easy to operationalize!)
What I think I can’t be accused of, in general, is:
The most charitable way I can put my point of view is that, even if it is the other person’s fault, I think that you should prioritize figuring out how to cut your rate of being involved in escalation spirals in half.
If we’re referring to my participation in Less Wrong specifically (and I must assume that you are), then I have to point out that it would be very easy for me to cut my rate of being involved in what you call “escalation spirals” (regardless of whether I agree with your characterization of the situations in question) not only in half or even tenfold, but to zero. To do this, I would simply stop posting and commenting here.
The question then becomes whether there’s any unilateral action I can take, any unilateral change I can make, whose result would be that I could continue spending time on participation in Less Wrong discussions in such a way that there’s any point or utility in my doing so, while also to any non-trivial degree reducing the incidence of people being insulted (or “insulted”), escalating, etc.
It seems to me that there is not.
Certainly there are actions that other people (such as, say, the moderators of the site) could take, that would have that sort o...
The thing that makes LW meaningfully different from the rest of the internet is people bothering to pay attention to meaningful distinctions even a little bit.
In my opinion, the internet has fine-grained distinctions aplenty. In fact, where to split hairs and where to twist braids is sort of basic to each political subculture. What I think makes LessWrong different is that we take a liberal/pluralistic (maybe not quite agnostic) view of the categories. We understand them as constructs, "made for man," as Scott put it once, and as largely open to critical investigation rather than just enforcement. We try to create the social basis for that critical investigation to happen productively.
When anonymousaisafety complains of hair-splitting, I think they are saying that, while the distinction between "I categorize Said as a liar" and "Said is a liar" is probably actually 100-1000x as important in your mind as the distinction between "due to" and "for", other people also get to weigh in on that question and may not agree with you, at least not in context.
If you really think the difference between these two very similar phrasings is so huge, and you want that to land with other peop...
As a single point of evidence: it's immediately obvious to me what the difference is between "X is true" and "I think X" (for starters, note that these two sentences have different subjects, with the former's subject being "X" and the latter's being "I"). On the other hand, "you A'd someone due to their B'ing" and "you A'd someone for B'ing" do, actually, sound synonymous to me—and although I'm open to the idea that there's a distinction I'm missing here (just as there might be people to whom the first distinction is invisible), from where I currently stand, the difference between the first pair of sentences looks, not just 10x or 1000x bigger, but infinitely bigger than the difference between the second, because the difference between the second is zero.
(And if you accept that [the difference between the second pair of phrases is zero], then yes, it's quite possible for some other difference to be massively larger than that, and yet not be tremendously important.)
Here, I do think that Duncan is doing something different from even the typical LWer, in that he—so far as I can tell—spends much more time and effort talking about these fine-grained distinctions than do others, in a way...
If so, I find this reasoning unconvincing
Why?
I mostly don't agree that "the pattern is clear"—which is to say, I do take issue with saying "we do not need to imagine counterfactuals". Here is (to my mind) a salient example of a top-level comment which provides an example illustrating the point of the OP, without the need for prompting.
I think this is mostly what happens, in the absence of such prompting: if someone thinks of a useful example, they can provide it in the comments (and accrue social credit/karma for their contribution, if indeed other users found said contribution useful). Conversely, if no examples come to mind, then a mere request from some other user ("Examples?") generally will not cause sudden examples to spring into mind (and to the extent that it does, the examples in question are likely to be ad hoc, generated in a somewhat defensive frame of mind, and accordingly less useful).
And, of course, the crucial observation here is that in neither case was the request for examples useful; in the former case, the request was unnecessary, as the examples would have been provided in any case, and in the latter case, the request was useless, as it failed to elicit...
Said’s response was “that seems less fun to me”
It was not.
I did not say anything like this, nor is this my reason for not participating, nor is this a reasonable summary of what I described as my reasons.
(I have another comment on another one of your listed cruxes, but I just wanted to very clearly object to this one.)
Uh… I’m not quite sure that I follow. Is writing reviews… obligatory? Or even, in any sense, expected? I… wasn’t aware that I had been shirking any sort of duty, by not writing reviews. Is this a new site policy, or one which I missed? Otherwise, this seems like somewhat of an odd comment…
I'll go along with whatever rules you decide on, but that seems like an extremely long time to wait for basic clarifications like "what did you mean by this word" or "can you give a real-world example".
@Duncan_Sabien I didn't actually upvote @clone of saturn's post, but when I read it, I found myself agreeing with it.
I've read a lot of your posts over the past few days because of this disagreement. My most charitable description of what I've read would be "spirited" and "passionate".
You strongly believe in a particular set of norms and want to teach everyone else. You welcome the feedback from your peers and excitedly embrace it, insofar as the dot product between a high-dimensional vector describing your norms and a similar vector describing the criticism is positive.
However, I've noticed that when someone actually disagrees with you -- and I mean disagreement in the sense of "I believe that this claim rests on incorrect priors and is therefore false" -- the level of animosity you've shown in your writing has shocked me.
Full disclosure: I originally messaged the moderators in private about your behavior, but I'm now writing this in public in part because of your continued statements on this thread that you've done nothing wrong.
I think that your responses over the past few days have been needlessly escalatory in a way that Said's weren't. If we go with the Socra...
The actual facts matter.
But escalating to arbitrary levels of nuance makes communication infeasible; robustness to some fuzziness in the facts and their descriptions is crucial. When a particular distinction matters, it's worth highlighting. But highlighting consumes a limited resource: the economy of allocating importance to particular distinctions.
The threat that any of many distinctions might be pointed to as something that must be attended to imposes a minimum cost on all such distinctions; it's a cost across the board.
Anonymousaisafety, with respect, and acknowledging there's a bit of the pot calling the kettle black intrinsic in my comment here, I think your comments in this thread are also functioning to escalate the conflict, as was clone of saturn's top-level comment.
The things your comments are doing that seem escalatory to me include making an initially inaccurate criticism of Duncan ("your continued statements on this thread that you've done nothing wrong"), followed by a renewed criticism of Duncan that doesn't contain even a brief acknowledgement of, or apology for, the original inaccuracy. Acknowledgement and apology are small relational skills that can be immensely helpful in dealing with a conflict smoothly.
None of that has any bearing on the truth-value of your critical claims - it just bears on the manner and context in which you're expressing them.
I think it is possible and desirable to address this conflict in a net-de-escalatory manner. The people best positioned to do so are the people who don't feel themselves to be embroiled in a conflict with Duncan or Said, or who can take genuine emotional distance from any such conflict.
Ray is owning stuff, so this is just me chiming in with some quick takes, but I think it is genuinely important for people to be able to raise hypotheses like "this person is trying to maintain a motte-and-bailey", and to tell people if that is their current model.
I don't currently think the above comment violated any moderation norms I would enforce, though navigating this part of conversational space is super hard, and it's quite possible there are some really important norms in this space that should be enforced and that I am missing. I have a model of a lot of norms in the space already, but the above comment does not violate any of them right now (mostly because it does prefix the statement with "I suspect X", and does not claim any broader social consensus beyond that).
I also think it's good for you to chime in and say that it's false. (You are also correct that it is uncharitable, but assuming that everyone is well-intentioned is IMO neither true nor a required part of good discourse, so the comment being uncharitable seems true but not obviously bad, and I am not sure what you intend by pointing it out. I think we should create justified knowledge of good intentions wherever possible; I just don't think LW comment threads, especially threads about moderation, are a space where achieving that is remotely feasible.)
It asserted that I do, as if fact
I am quite confused. The comment clearly says "I suspect"? That seems like one of the clearest prefixes I know for raising something as a hypothesis, and very clearly signals that something is not being asserted as a fact. Am I missing something?
I would've preferred if you had proposed another alternative wording, so that poll could be run as well, instead of just identifying the feature you think is disanalogous. (If you supply the wording, after all, Duncan can't have twisted it, and your interpretation gets fairly tested.)
Why do LW users need the ability to ban other users from commenting on their posts?
If user X could choose to:
what desirable thing would be missing?
The optional public notice would ensure that X's non-response to Y would not be taken to imply anything like tacit agreement; it would also let other users know that their comments downstream of a Y comment would not be seen by X.
('All comments' in the firs...
I guess an unstated part of my position is that there's a limit to how much control a LW user can reasonably expect to have over other users' commenting, and that if they want more control than my suggested system allows them then they should probably post to their own blog rather than LW. But I get that you (and at least some others) disagree with me, and/or are aware of users who do want more control and are sufficiently valuable to LW to justify catering to their needs in this way. I won't push the point; thanks for engaging.
(FWIW, my biggest issue with the current system is that it's not obvious to most readers when people are banned from commenting on a post, and thus some posts could appear to have an exaggerated level of support/absence of good counterarguments from the LW community.)
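To make the alternative sketched a few comments above concrete, here is a minimal, purely illustrative sketch in TypeScript. Every name and field here is my own assumption, not LessWrong's actual data model: it shows per-post settings that would let an author collapse or hide a specific user's comments instead of banning them, plus the optional public notice so readers can see that a restriction is in place.

```typescript
// Hypothetical sketch of the alternative proposed above: instead of banning
// user Y outright, author X chooses how Y's comments are displayed on X's
// posts, and readers can see that a restriction applies.
// All names and fields are illustrative, not LessWrong's actual data model.

type CommentVisibility =
  | "normal"              // shown like any other comment
  | "collapsedByDefault"  // present, but readers must expand it
  | "hiddenFromAuthor";   // visible to readers, not surfaced to the author

interface PerUserCommentPolicy {
  restrictedUserId: string;  // the user (Y) the policy applies to
  visibility: CommentVisibility;
  publicNotice: boolean;     // if true, the post displays a note that a policy
                             // applies, so X's silence toward Y's comments
                             // isn't read as tacit agreement
}

interface PostModerationSettings {
  postId: string;
  authorId: string;          // the post author (X)
  policies: PerUserCommentPolicy[];
}

// Example: X collapses Y's comments by default and opts into a public notice.
const example: PostModerationSettings = {
  postId: "post-123",
  authorId: "user-X",
  policies: [
    {
      restrictedUserId: "user-Y",
      visibility: "collapsedByDefault",
      publicNotice: true,
    },
  ],
};

// Reader-facing check: should a given commenter's comments start out collapsed?
function startsCollapsed(settings: PostModerationSettings, commenterId: string): boolean {
  return settings.policies.some(
    (p) => p.restrictedUserId === commenterId && p.visibility === "collapsedByDefault"
  );
}

console.log(startsCollapsed(example, "user-Y")); // true
```

The publicNotice flag is the piece aimed at the visibility concern in the comment just above: readers would know not to read the author's silence toward those comments as tacit agreement or as an absence of counterarguments.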
Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users and it seems good for the LessWrong userbase to have the opportunity to evaluate it and respond. I'm stickying this post for a day-or-so.
@Duncan_Sabien's commenting privileges are restored, with a warning. @Said Achmiz is currently under a rate limit (see details).
I have no horse in this race, but from my very much outside perspective it feels like the one who was drama-queening came out on top. Not a great look, but an important lesson. It would make much more sense to me if the consequences were equal or at least obviously commensurable.
Ray is owning this decision, so he can say more if he wants, but as I understand it, the judgements here are at least trying to be pretty independent of the most recent conflict, and are both predominantly rooted in data gathered over many years of complaints, past comment threads, and user interviews.
It would be quite surprising if, on that basis, the consequences came out equal or obviously commensurable, given that these things differ hugely between users (including in this case).
Recently there's been a series of posts and comment back-and-forth between Said Achmiz and Duncan Sabien, which escalated enough that it seemed like site moderators should weigh in.
For context, here's a quick recap of recent relevant events as I'm aware of them. (I'm glossing over many details that are relevant, but getting everything exactly right is tricky.)
LessWrong moderators got together for ~2 hours to discuss this overall situation, and how to think about it both as an object-level dispute and in terms of some high level "how do the culture/rules/moderation of LessWrong work?".
I think we ended up with fairly similar takes, but getting to the point that we all agree 100% on what happened and what to do next seemed like a longer project, and we each had subtly different frames about the situation. So, some of us (at least Vaniver and I, maybe others) are going to start by posting some top-level comments here. People can weigh in on the discussion. I'm not 100% sure what happens after that, but we'll reflect on the discussion and decide whether to take any high-level mod actions.
If you want to weigh in, I encourage you to take your time even if there's a lot of discussion going on. If you notice yourself in a rapid back and forth that feels like it's escalating, take at least a 10 minute break and ask yourself what you're actually trying to accomplish.
I do note: the moderation team will be making an ultimate call on whether to take any mod actions based on our judgment. (I'll be the primary owner of the decision, although I expect if there's significant disagreement among the mod team we'll talk through it a lot). We'll take into account arguments various people post, but we aren't trying to reflect the wisdom of crowds.
So you may want to focus on engaging with our cruxes rather than with what other random people in the comments think.