[I will move this into meta in a few days, but this seemed important enough to have around on the frontpage for a bit]

Here is a short post with some of the moderation changes we are implementing. Ray, Ben, and I are working on some more posts explaining our deeper reasoning, so this is just a list with some quick updates.

Even before the start of the open beta, I intended to allow trusted users to moderate their personal pages. The reasoning I outlined in our initial announcement post was as follows:

“We want to give trusted authors moderation powers for the discussions on their own posts, allowing them to foster their own discussion norms, and giving them their own sphere of influence on the discussion platform. We hope this will both make the lives of our top authors better and will also create a form of competition between different cultures and moderation paradigms on Lesswrong.”

And I also gave some further perspectives on this in my “Models of Moderation” post that I posted a week ago.

We have now finally gotten around to implementing the technology for this. But the big question on my mind while working on the implementation has been:

How should we handle moderation on frontpage posts?

Ray, Ben, Vaniver, and I talked for quite a while about the pros and cons and considered a bunch of perspectives, but the two major considerations on our minds were:

  1. The frontpage is a public forum that should reflect the perspectives of the whole community, as opposed to just the views of the active top-level authors.
  2. We want our best authors to feel safe posting to LessWrong, and our promoting a post from your private blog to the frontpage shouldn’t feel like a punishment (which it might if it also entails losing control over it).

After a good amount of internal discussion, as well as feedback from some of the top content contributors on LW (including Eliezer), we settled on allowing users above 2000 karma to moderate their own frontpage posts, and allowing users above 100 karma to moderate their personal blogs. This strikes me as the best compromise between the different considerations we had.

Here are the details about the implementation:

  • Users above...
    • 100 karma can moderate everything in their personal blog posts section,
      • This functionality should go live in about a week
    • 2000 karma can moderate all of their posts, including frontpage and curated. (We’ll likely increase the ‘2000’ threshold once we import the votes from old LessWrong)
      • This should be live right now
  • Before users can moderate, they have to set one of the three following moderation styles in their user settings:
    • Easy Going - I just delete obvious spam and trolling
    • Norm Enforcing - I try to enforce particular rules (See moderation guidelines)
    • Reign of Terror - I delete anything I judge to be annoying or counterproductive
  • Users can also specify more detailed moderation guidelines, which will be shown at the top of the comment section and at the bottom of the new comment form on the posts they can moderate
  • The specific moderation actions available are:
    • Delete comment
      • Optionally: with a public notice and reason
    • Delete thread without trace (deletes the comment and all of its children, leaving no stub behind)
      • Optionally: With a private reason sent to the author
    • Ban user from commenting on this post
    • Ban user from commenting on any of my posts
  • If a comment of yours is ever deleted, you will automatically receive a PM with its text, so you don’t lose the content of your comment.
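For concreteness, the karma rules above amount to a simple permission check. Here is a minimal sketch in Python; the function and parameter names are hypothetical, and the real site's implementation certainly differs:

```python
# Hypothetical sketch of the karma-threshold moderation rules described above.
# Thresholds mirror the post; everything else is illustrative only.

PERSONAL_THRESHOLD = 100    # can moderate posts in their personal blog section
FRONTPAGE_THRESHOLD = 2000  # can moderate all own posts, incl. frontpage/curated

def can_moderate(user_karma: int, is_post_author: bool,
                 post_is_frontpage: bool, moderation_style_set: bool) -> bool:
    """Return True if the author may moderate comments on this post."""
    if not is_post_author or not moderation_style_set:
        # Authors must first pick a moderation style (Easy Going /
        # Norm Enforcing / Reign of Terror) in their user settings.
        return False
    if post_is_frontpage:
        return user_karma >= FRONTPAGE_THRESHOLD
    return user_karma >= PERSONAL_THRESHOLD
```

So, for example, a 150-karma author could moderate their own personal-blog post but not the same post once promoted to the frontpage.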

I also want to allow users to create private comments on posts that are only visible to themselves and the author of the post, and to allow authors to make comments private (as an alternative to deleting them). But that will have to wait until we get around to implementing it.

We tested this reasonably thoroughly, but there is definitely a chance we missed something, so let us know if you notice any weird behavior around commenting on posts, or using the moderation tools, and we will fix it ASAP.

[Meta] New moderation tools and moderation guidelines
248 comments

While I certainly have thoughts on all of this, let me point out one aspect of this system which I think is unusually dangerous and detrimental:

The ability (especially for arbitrary users, not just moderators) to take moderation actions that remove content, or prevent certain users from commenting, without leaving a clearly and publicly visible trace.

At the very least (if, say, you’re worried about something like “we don’t want comments sections to be cluttered with ‘post deleted’”), there ought to be a publicly viewable log of all moderation actions. (Consider the lobste.rs moderation log feature as an example of how such a thing might work.) This should apply to removal of comments and threads, and it should definitely also apply to banning a user from commenting on a post / on all of one’s posts.

Let me say again that I consider a moderation log to be the minimally acceptable moderation accountability feature on a site like this—ideally there would also be indicators in-context that a moderation action has taken place. But allowing totally invisible / untraceable moderation actions is a recipe for disaster.

Edit: For another example, note Scott’s register of bans/​warnings, which ... (read more)

I'm also mystified as to why traceless deletion/banning are desirable properties to have on a forum like this. But (with apologies to the moderators) I think consulting the realpolitik will spare us the futile task of litigating these issues on the merits. Consider it instead a fait accompli with the objective of attracting a particular writer LW2 wants by catering to his whims.

For whatever reason, Eliezer Yudkowsky wants the ability to block commenters and to tracelessly delete comments on his own work, and he's been quite clear this is a condition for his participation. Lo and behold, precisely these features have been introduced, with suspiciously convenient karma thresholds which allow EY (at his current karma level) to tracelessly delete/ban on his own promoted posts, yet exclude (as far as I can tell) the great majority of other writers with curated/front page posts from being able to do the same.

Given the popularity of EY's writing (and that LW2 wants to include future work of his), the LW2 team is obliged to weigh the (likely detrimental) addition of these features against the likely positives of his future posts. Going for the latter is probably the right judgement call to make, but let's not pretend it is a principled one: we are, as the old saw goes, just haggling over the price.

Yeah, I didn't want to make this a thread about discussing Eliezer's opinion, so I didn't put that front and center, but Eliezer only being happy to crosspost things if he has the ability to delete things was definitely a big consideration.

Here is my rough summary of how this plays into my current perspective on things:

1. Allowing users to moderate their own posts and set their own moderation policies on their personal blogs is something I wanted before we even talked to Eliezer about LW2 the first time.

2. Allowing users to moderate their own front-page posts is not something that Eliezer requested (I think he would be happy with them just being personal posts), but is a natural consequence of wanting to allow users to moderate their own posts, while also not giving up our ability to promote the best content to the front-page and to curated.

3. Allowing users to delete things without a trace was a request by Eliezer, but is also something I thought about independently to deal with stuff like spam and repeated offenders (for example, Eugine has created over 100 comments on one of Ozy's posts, and you don't want all of them to show up as deleted stubs). I expect we wouldn't have built the feature as it currently stands without Eliezer, but I hadn't actually considered a moderation log page like the one Said pointed out, and I actually quite like that idea, and don't expect Eliezer to object too much to it. So that might be a solution that makes everyone reasonably happy.

-1 Ben Pace
As usual, Greg, I will always come to you first if I ever need to deliver a well-articulated sick burn that my victim needs to read twice before they can understand ;-) Edit: Added a smiley to clarify this was meant as a joke.
2 Thrasymachus
Let's focus on the substance, please.

I really like the moderation log idea - I think it could be really good for people to have a place where they can go if they want to learn what the norms are empirically. I also propose there be a similar place which stores the comments explaining why posts are curated.

(Also note that Satvik Beri told me I should do this a few months ago, and I forgot; this is my fault.)

I actually quite like the idea of a moderation log, and Ben and Ray also seem to like it. I hadn't really considered that as an option, and my model is that Eliezer and other authors wouldn't object to it either, so this seems like something I would be quite open to implementing.

6 jimrandomh
I actually think some hesitation and thought is warranted on that particular feature. A naively-implemented auto-filled moderation log can significantly tighten the feedback loop for bad actors trying to evade bans. Maybe if there were a time delay, so moderation actions only become visible when they're a minimum number of days old?
6 Said Achmiz
There is some sense in what you say, but… before we allow concerns like this to guide design decisions, it would be very good to do some reasonably thorough investigating about whether other platforms that implement moderation logs have this problem. (The admin and moderators of lobste.rs, for example, hang out in the #lobsters IRC channel on Freenode. Why not ask them if they have found the moderation log to result in a significant ban evasion issue?)
7 skybrian
I'm just a lurker, but as an FYI, on The Well, hidden comments were marked <hidden> (and clickable) and deleted comments were marked <scribbled> and it seemed to work out fine. I suppose with more noise, this could be collapsed to one line: <5 scribbled>.
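skybrian's collapsing suggestion is straightforward to implement. Here is a minimal sketch in Python; the string representation of comments is hypothetical, not actual site code:

```python
def collapse_scribbled(comments: list[str]) -> list[str]:
    """Fold runs of consecutive deleted ("scribbled") comments into a single
    placeholder line, as in The Well's '<5 scribbled>' display described above."""
    out: list[str] = []
    run = 0  # length of the current run of scribbled comments
    for c in comments:
        if c == "<scribbled>":
            run += 1
            continue
        if run:
            out.append(f"<{run} scribbled>" if run > 1 else "<scribbled>")
            run = 0
        out.append(c)
    if run:  # flush a trailing run
        out.append(f"<{run} scribbled>" if run > 1 else "<scribbled>")
    return out
```

This keeps single deletions visible as individual markers while preventing a long run of spam deletions (the Eugine case mentioned above) from cluttering the thread.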
5 Srdjan Miletic
I agree. There are a few fairly simple ways to implement this kind of transparency.

* When a comment is deleted, change its title to [deleted] and remove any content. This at least shows when censorship is happening and roughly how much.
* When a comment is deleted, do as above, but give users the option to show it by clicking on a "show comment" button or something similar.
* Have a "show deleted comments" button on users' profile pages. Users who want to avoid seeing the kind of content that is typically censored can do so. Those who would prefer to see everything can just enable the option and see all comments.

I think these features would add at least some transparency to comment moderation. I'm still unsure how to make user bans transparent. I'm worried that without doing so, bad admins can just ban users they dislike and give the impression of a balanced discussion with little censorship.
2 Said Achmiz
User bans can be made transparent via the sort of centralized moderation log I described in my other comment. (For users banned by individual users, from their own personal blogs, there should probably also be a specific list, on the user page of the one who did the banning, of everyone they’ve banned from their posts.)
1 Srdjan Miletic
A central log would indeed allow anyone to see who was banned and when. My concern is more that such a solution would be practically ineffective. I think that most people reading an article aren't likely to navigate to the central log and search the ban list to see how many people have been banned by said article's author. I'd like to see a system for flagging up bans which is both transparent and easy to access, ideally so anyone reading the page/discussion will notice if banning is taking place and to what extent. Sadly, I haven't been able to think of a good solution which does that.

Yeah, I agree it doesn't create the ideal level of transparency. In my mind, a moderation log is more similar to an accounting solution than an educational solution, where the purpose of accounting is not something that is constantly broadcasted to the whole system, but is instead used to backtrack if something has gone wrong, or if people are suspicious that there is some underlying systematic problem going on. Which might get you a lot of the value that you want, for significantly lower UI-complexity cost.

7 Said Achmiz
I believe it was Eliezer who (perhaps somewhere in the Sequences) enjoined us to consider a problem for at least five minutes, by the clock, before judging it to be unsolvable—and I have found that this applies in full measure in UX design. Consider the following potential solutions (understanding them to be the products of a brainstorm only, not a full and rigorous design cycle):

1. A button (or other UI element, etc.) on every post, along the lines of “view history of moderation actions which apply to this post”.
2. A flag, attached to posts where moderation has occurred; which, when clicked, would take you to the central moderation log (or the user-specific one), and highlight all entries that apply to the referring post.
3. The same as #2, but with the flag coming in two “flavors”—one for “the OP has taken moderation actions”, and one for “the LW2 admin team has taken moderation actions”.

This is what I was able to come up with in five minutes of considering the problem. These solutions seem to me to be quite unobtrusive, and yet at the same time, “transparent and easy to access”, as per your criteria. I also do not see any fundamental design or implementation difficulties that attach to them. No doubt other approaches are possible; but at the very least, the problem seems eminently solvable, with a bit of effort.
2 habryka
(Just for the historical record, there is a moderation log visibile at lesswrong/moderation which does this, though it's not the single most beautiful page on the site)
3 Lukas Finnveden
Does the log only display some subset of actions, e.g. recent ones? I can only see 10 deleted comments. And the "Users Banned From Users" list is surprisingly short, and doesn't include some bans that I saw on there years ago (which I'd be surprised if the relevant author had bothered to undo). It would be good if the page itself clarified this.
3 habryka
Oops, sorry, there should be load-more buttons there! We recently reworked some associated functionality, and I made sure it updates with recent activity, but apparently lost the load-more buttons. I'll fix it sometime today or tomorrow.
2 Adam Zerner
Why exactly do you find it to be unusually dangerous and detrimental? The answer may seem obvious, but I think that it would be valuable to be explicit.

Dividing the site into smaller sub-fiefs where individual users have ultimate moderation power seems to have been a big part of why Reddit (and to some extent, Facebook) got so successful, so I have high hopes for this model.

  1. I love the idea of having private comments on posts. Sometimes I want to point out nitpicky things like grammatical errors or how something could have been phrased differently. But I don't want to "take up space" with an inconsequential comment like that, and don't want to appear nitpicky. Private comments would solve those sorts of problems. Another alternative feature might be different comment sections for a given post. Like, a "nitpicks" section, a "stupid questions" section, a "thanks" section.
  2. I have the impression that, as Said Achmiz already noted, if deleting a comment required explaining why, people would feel much less aversion to this policy. I feel like there's something particularly frustrating about having a comment of yours just deleted out of thin air without any explanation as to why. It feels very Big Brother-y.
  3. One thing that I like about this is that regardless of whether or not it works, it's an experiment. You can't improve without trying new things. I generally applaud efforts to experiment. It makes me feel excited about the future of Less Wrong. "What cool features wi
... (read more)

I don't have an opinion on the moderation policy, but I did want to say thanks for all the hard work in bringing the new site to life.

LessWrong 1.0 was basically dead, and 2.0 is very much alive. Huge respect and well-wishes.

4 habryka
Thank you! :)

Just want to say this moderation design addresses pretty much my only remaining aversion to posting on LW and I will be playing around with Reign of Terror if I hit the karma. Also really prefer not to leave public traces.

2 PDV
If you don't want to leave public traces, others must assume that we wouldn't like what we saw if the traces were public.
8 alkjash
No, others could be a bit more charitable than that. Looking back at the very few comments I would have considered deleting, I would use it exclusively to remove low-effort comments that could reasonably be interpreted as efforts to derail the conversation into demon threads.
6 Said Achmiz
Consider the possible reasons why you, as the OP, would not want a comment to appear in the comments section of your post. These fall, I think, into two broad categories:

Category 1: Comments that are undesirable because having other people respond to them is undesirable.

Category 2: Comments that are undesirable because having people read them is undesirable (regardless of whether anyone responds to them).

Category 1 (henceforth, C1) includes things like what you just described. Trolling (comments designed to make people angry or upset and thus provoke responses), off-topic comments (which divert attention and effort to threads that have nothing to do with what you want to discuss), low-effort or malicious or intentionally controversial or etc. comments that are likely to spawn “demon threads”, pedantry, nitpicking, nerdsniping, and similar things, all fall into this category as well.

Category 2 (henceforth, C2) is quite different. There, the problem is not the fact that the comment provokes responses (although it certainly might); the problem is that the actual content of the comment is something which you prefer people not to see. This can include everything from doxxing to descriptions of graphic violence to the most banal sorts of spam (CHEAP SOCKS VERY GOOD PRICE) to things which are outright illegal (links to distribution of protected IP, explicit incitement to violence, etc.). And, importantly, C2 also includes things like criticism of your ideas (or of you!), comments which mention things about you that paint you in a bad light (such as information about conflicts of interest), and any number of similar things.

It should be clear from this description that Category 2 cleaves neatly into two subtypes (let’s call them C2a and C2b), the key distinction between which is this: for comments in C2a, you (the OP) do not want people to read them, and readers themselves also do not want to read them; your interests and those of your readers are aligned. But for

My primary reason for wanting to remove the trace is that there are characters on the internet so undesirable that I don't want to be reminded of their existence every time I scroll through my comments section, and I certainly don't want their names to be associated with my content. Thankfully, I have yet to receive any comments anywhere close to this level on LW, but take a quick browse through the bans section of SlateStarCodex and you'll see they exist.

I am in favor of a trace if it were on a moderation log that does not show up on the comment thread itself.

1 Gurkenglas
Wouldn't someone just make a client or mirror like greaterwrong that uses the moderation log to unhide the moderation?
9 Said Achmiz
This is a valid concern, one I would definitely like to respond to. I obviously can’t speak for anyone else who might develop another third-party client for LW2, but as far as GreaterWrong goes—saturn and I have discussed this issue. We don’t feel that it would be our place to do what you describe, as it would violate the LW2 team’s prerogative to make decisions on how to set up and run the community. We’re not trying to undermine them; we’re providing something that (hopefully) helps them, and everyone who uses LW2, by giving members of the community more options for how to interact with it. So you shouldn’t expect to see GW add features like what you describe (i.e. those that would effectively undo the moderation actions of the LW2 team, for any users of GW).
4 gjm
They might. But that would unhide it only for them. For most undesirable comments, the point of deleting them is to keep them out of everyone's face, and that's perfectly compatible with there being other ways of viewing the content on LW that reinstate the comments. What fraction of users who want the ability to delete comments without trace would be satisfied with that, I don't know. (A moderation log wouldn't necessarily contain the full text of deleted comments, anyway, so restoring them might not be possible.)
5 habryka
Yeah, I wasn’t thinking of showing the full text of deleted comments, but just a log of their deletion. This is also how lobste.rs does it.
2 Said Achmiz
You’re right about lobste.rs, but in this case I would strongly suggest that you do show the full text of deleted comments in the moderation log. Hide them behind a disclosure widget if you like. But it is tremendously valuable, for transparency purposes, to have the data be available. It is a technically insignificant change, and it serves all the same purposes (the offending comment need not appear in the thread; it need not even appear by default in the log—hence the disclosure widget); but what you gain is very nearly absolute immunity to accusations of malfeasance, to suspicion-mongering, and to all the related sorts of things that can be so corrosive to an internet community.
5 habryka
Hmm, so the big thing I am worried about is the Streisand effect, with deleted content ending up getting more attention than normal content (which I expect is the primary reason why lobste.rs does not show the original content). Sometimes you also delete things because they reveal information that should not be public (such as doxxing and similar things), and in those situations we obviously still want the option of deleting without showing the original content.

This might be solvable by making the content of the deleted comments only available to people who have an account, or above a certain level of karma, or by making it hard to link to individual entries in the moderation log (though that seems like it destroys a bunch of the purpose of the moderation log).

Currently, I would feel uncomfortable having the content of the old comments be easily available, simply because I expect that people will inevitably start paying more attention to the deleted content section than the average comment with 0 karma, completely defeating the purpose of reducing the amount of attention and influence bad content has.

The world where everyone can see the moderation log, but only people above a certain karma threshold can see the content, seems most reasonable to me, though I still need to think about it. If the karma threshold is something like 100, then this would drastically increase the number of people who could provide information about the type of content that was deleted, while avoiding the problem of deleted content getting tons of attention.
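The karma-gated visibility rule sketched in that last paragraph is simple to express in code. A minimal sketch in Python, with hypothetical names and data shapes (not actual site code):

```python
# Hypothetical sketch of a karma-gated moderation log: every visitor sees
# that a deletion happened, but the deleted text is only shown to logged-in
# users above a karma threshold (100, per the proposal above).

CONTENT_KARMA_THRESHOLD = 100

def render_log_entry(entry: dict, viewer_karma=None) -> str:
    """Render one moderation-log entry; viewer_karma is None for logged-out visitors."""
    line = f"[deleted] comment on '{entry['post']}' by {entry['moderator']}"
    if viewer_karma is not None and viewer_karma >= CONTENT_KARMA_THRESHOLD:
        # Only trusted viewers get the original text appended.
        line += f": {entry['text']}"
    return line
```

This keeps the accounting function of the log (anyone can verify that a deletion occurred) while limiting how much attention the deleted content itself can attract.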
4 Said Achmiz
This view seems to imply some deeply worrying things about what comments you expect to see deleted—and that you endorse being deleted! Consider again my taxonomy of comments that someone might want gone. What you say applies, it seems to me, either to comments of type C1 (comments whose chief vice is that they provoke responses, but have little or no intrinsic value), or to comments of type C2b (criticism of the OP, disagreement, relevant but embarrassing-to-the-author observations, etc.).

The former sort of comment is unlikely to provoke a response if it is in the moderation log and not in the thread. No one will go and dig a piece of pedantry or nitpickery out of the mod-log just to respond to it. Clearly, such comments will not be problematic.

But the latter sort of comment… the latter sort of comment is exactly the type of comment which it should be shameful to delete; the deletion of which reflects poorly on an author; and to whose deletion, attention absolutely should be paid! It is right and proper that such comments, if removed, should attract even more attention than if they remained unmolested. Indeed, if the Streisand effect occurs in such a case, then the moderation log is doing precisely that which it is meant to do.

This category of comment ought not meaningfully inform your overall design of the moderation log feature, as there is a simple way to deal with such cases that doesn’t affect anything else: treat it like any other deleted comment, but instead of showing the text of the comment in the mod-log, display a message (styled and labeled so as to clearly indicate its nature—perhaps in bold red, etc.) to the effect of “The text of this comment has been removed, as it contained non-public information / doxxing / etc.”. (If you were inclined to go above and beyond in your dedication to transparency, you might even censor only part of the offending comment—after all, this approach is good enough for our government’s intelligence organizat
2 Gurkenglas
Whoever provides a mirror would only need the cooperation of some user with 100 karma to circumvent that restriction. Unless you log which users viewed which deleted posts, and track which deleted posts have been published. Then the mirror might become a trading hub where you provide content from deleted posts in exchange for finding out content from other deleted posts. And at some point money might enter into it, incentivizing karma farms.
-4 PDV
Others could, if they are unwise. But they should not. There is no shame in deleting low-effort comments and so no reason to hide the traces of doing so. There is shame in deleting comments for less prosocial reasons, and therefore a reason to hide the traces. The fact that you desire to hide the traces is evidence that the traces being hidden are of the type it is shameful to create.
6 alkjash
I agree that desiring to hide traces is evidence of such a desire, but it's simply not my motivation. The primary reasons I want comments at all are (a) to get valuable corrective feedback and discussion, and (b) as motivation and positive reinforcement to continue writing frequently. There are comments that provide negligible-to-negative amounts of (a), and even leaving a trace of them stands a serious chance of fucking with (b) when I scroll past in the future. These I would like to delete without trace. Now, I would like to have a discussion about whether a negative reaction to seeing even traces of the comments of trolls is a rational aversion to have, but I know I currently have it and would guess that most other writers do as well.
3 Said Achmiz
I think you are seriously missing the point of the concerns that PDV is (and that I am) raising, if you respond by saying “but I don’t plan to use traceless deletion for the bad reason you fear!”. Do I really need to enumerate the reasons why this is so? I mean, I will if asked, but every time I see this sort of really very frustrating naïveté, I get a bit more pessimistic…
7 Raemon
This seems to be missing the point of Alkjash's comment, though. I don't think Alkjash is missing the concerns you and PDV have. PDV said "others can only assume that we wouldn't like what we saw if the traces were public." This sounded to me like PDV could only imagine one reason why someone might delete a comment with no trace. Alkjash provided another possible reason. (FYI, I can list more.) (If PDV was saying 'it's strategically advisable to assume the worst reason', that's... plausible, and would lead me to respond differently.) FYI, I agree with most of your suggested solutions, but think you're only looking at one set of costs and ignoring others.
9 clone of saturn
Making it easier to get away with bad behavior is bad in itself, because it reduces trust and increases the bad behavior's payoff, even if no bad behavior was occurring before. It's also corrosive to any norm that exists against the bad behavior, because "everyone's getting away with this except me" becomes a plausible hypothesis whether or not anyone actually is. I interpret PDV's comments as an attempt to implicitly call attention to these problems, but I think explicitly spelling them out would be more likely to be well-received on this particular forum.
4 PDV
It is strategically necessary to assume that social incentives are the true reason, because social incentives disguise themselves as any acceptable reason, and the corrosive effect of social incentives is the Hamming Problem for group epistemics. (I went into more detail here.)
2 Said Achmiz
Then his comments are simply non-responsive to what I and PDV have said, and make little to no sense as replies to either of our comments. I assumed (as I usually do) compliance with the maxim of relation. Indeed I am, and for good reason: the cost I speak of is one which utterly dwarfs all others. I think here I’m going to say “plausible deniability” and “appearance of impropriety” and hope that those keywords get my point across. If not, then I’m afraid I’ll have to bow out of this for now.
9 dxu
This is a claim that requires justification, not bald assertion--especially in this kind of thread, where you are essentially implying that anyone who disagrees with you must be either stupid or malicious. Needless to say, this implication is not likely to make the conversation go anywhere positive. (In fact, this is a prime example of a comment that I might delete were it to show up on my personal blog--not because of its content, but because of the way in which that content is presented.) Issues with tone aside, the quoted statement strongly suggests to me that you have not made a genuine effort to consider the other side of the argument. Not to sound rude, but I suspect that if you were to attempt an Ideological Turing Test of alkjash's position, you would not in fact succeed at producing a response indistinguishable from the genuine article. In all charitability, this is likely due to differences of internal experience; I'm given to understand that some people are extremely sensitive to status-y language, while others seem blind to it entirely, and it seems likely to me (based on what I've seen of your posts) that you fall into the latter category. In no way does this obviate the existence or the needs of the former category, however, and I find your claim that said needs are "dwarfed" by the concerns most salient to you extremely irritating. Footnote: Since feeling irritation is obviously not a good sign, I debated with myself for a while about whether to post this comment. I decided ultimately to do so, but I probably won't be engaging further in this thread, so as to minimize the likelihood of it devolving into a demon thread. (It's possible that it's already too late, however.)
2 Gurkenglas
Can't you just use AdBlock to hide such comments from your browser?
2 PDV
“I agree that desiring to hide traces is evidence of such a desire, but it's simply not my motivation”

Irrelevant. Stated motivation is cheap talk, not reliable introspectively, let alone coming from someone else. Or, in more detail:

1) Unchecked, this capability being misused will create echo chambers.
2) There is a social incentive to misuse it; lack of dissent increases perceived legitimacy and thus status.
3) Where social incentives to do a thing for personal benefit exist, basic social instincts push people to do that thing for personal benefit.
4) These instincts operate at a level below and before conscious verbalization.
5) The mind's justifier will, if feasible, throw up more palatable reasons why you are taking the action.
6) So even if you believe yourself to be using an action for good reasons, if there is a social incentive to be misusing it, you are very likely misusing it a significant fraction of the time.
7) Even doing this a fraction of the time will create an echo chamber.
8) For good group epistemics, preventing the descent into echo chambers is of utmost importance.
9) Therefore no given reason can be an acceptable reason.
10) Therefore this capability should not exist.

I think giving people the right and responsibility to unilaterally ban commenters on their posts is demanding too much of people's rationality: it forces them to make evaluations in exactly the situations where they're among the most likely to be biased, and tempts them with the power to silence their harshest or most effective critics. I personally don't trust myself to do this, have basically committed to not ban anyone or even delete any comments that aren't obvious spam, and kind of don't trust others who would trust themselves to do this.

Banning someone does not generally silence their harshest critics. It just asks those critics to make a top-level post, which generally has a much better shot at improving the record and discourse in reasonable ways than nested comment replies do.

The thing that banning does is make it so the author doesn't look like he is ignoring critics (which he hasn't by the time he has consciously decided to ban a critic). 

[-][anonymous]148

which generally will actually have any shot at improving the record and discourse in reasonable ways compared to nested comment replies

I would believe this iff banned users were nonetheless allowed (by moderator fiat) to type up a comment saying "I have written a response to this post at [insert link]," which actually shows up in the comment section of the original post.

Otherwise, I'd suspect a large part of the readers of the post will not even know there is such a response disagreeing with the original (because they just stumbled upon the post, or they clicked on a link to it from elsewhere on LW, or were just browsing some tag or the "recent activity" tab, etc).

(Not to mention that posts don't even have a sticker at the bottom saying "the author has banned the following users from commenting on their posts: [...]", which should absolutely appear if the point of allowing authors to ban commenters was actually to improve the record and discourse. 

You have to know to click on a whole different link (which basically gets advertised precisely nowhere on the front page) to gather that info, and unironically I don't even currently remember what that link is... and I think I'... (read more)

5habryka
Oh, also just for the record, we have a pingback section at the bottom of every post, above the comment section, which basically achieves exactly this. If you write a popular response to a post, it will show up right below the post for anyone to see!
1lesswronguser123
I find that feature extremely helpful: a lot of old sequence posts are outdated or have detailed expansions which often ping the original post, and that, alongside precise tagging, makes LessWrong easier to navigate. I don't know how feasible this is, or how much usage it would garner, but a pingback, tagging, or bookmark feature for shortforms would be useful, since people are resorting to expanding various ideas in this format; whereas in the early era of LessWrong there were extremely short posts, which makes searching for them much easier. (Or alternatively, make shortforms more like Twitter and add a low character limit.) Advanced search operators would also be welcome! Thanks.
3habryka
I think some UI thing in the space wouldn't be crazy. If banning was going to be used more frequently, it's something I would consider building (I would put it at the bottom of the comment section, but that's still a reasonable place to find it). We have something kind of close. At the bottom of every comment section you see a link to the moderation log, which allows you to see who is banned from whose posts. If banning was a thing that happened reasonably frequently, changing it to say "Moderation Log (3 users banned from this post, 2 comments deleted)" or something like that, would be reasonable to me. But it happens so rarely that I currently don't think it's worth the development effort (but if people really care, I would accept a PR that adds something like that).
6Said Achmiz
With respect, this is not close at all. The UI element should explicitly list which users have been banned, without making anyone click on anything. (And the list of deleted comments should explicitly list the authors of those comments, and link to the text of those comments in the moderation log.) I agree with @sunwillrise: the current design is absolutely not what the feature would look like if it were actually designed to serve readers and improve discussions.
3habryka
The feature was not designed for this purpose, it was mostly designed so that people who are interested in LW can generally see what kind of moderation actions are happening. I agree that if banning was more frequent I would add a more specific list (which is what I said, and you seem to have just ignored).  I don't think a full list makes sense, just because of clutter, but a number seems pretty reasonable (and ideally some way of highlighting the UI element if indeed there is some kind of relevant thing happening).

The thing that banning does is make it so the author doesn’t look like he is ignoring critics (which he hasn’t by the time he has consciously decided to ban a critic).

… of course banning has this effect, but this is obviously a massively misleading appearance in that case, so why would we want this? You seem to be describing a major downside of allowing authors to ban critics!

Like, suppose that I read a post, see that the author responds to all critical comments, and think: “Cool, this dude doesn’t ignore critics; on the contrary, he replies coherently and effectively to all critical comments; well done! This makes me quite confident that this post in particular doesn’t have any glaring flaws (since someone would point them out, and the author would have no good answer); and, more generally, it makes me think that this author is honest and unafraid to face criticism, and has thought about his ideas, and so is unlikely to be spouting nonsense.”

And then later I find out that actually, this author has banned one or more users from his posts after those users posted seriously critical comments for which the author had no good responses; and I think “… oh.”

Because that initial impres... (read more)

7habryka
Yeah, I agree this would be bad if it happened. I don't currently think it's happening, but see my response to sunwillrise on what I would do if it turned out to be an issue.

I also am really not interested in this discussion with you in particular. You alone are like 30% of the reason why a ban system on this site is necessary. I think this site might have literally died if we had you around and no ban system, so it appears to me that you in particular seem to fail to understand the purpose of a ban system. I could not in good faith encourage someone to post on LW if they did not have the option of banning you and an extremely small number of other users from their posts.
6Said Achmiz
I… don’t understand what this could mean. I didn’t describe some totally different phenomenon; I just described the thing you already said was the purpose of the ban system! How could it not be happening??

No, I understand quite well what you claim is the purpose of a ban system (as you have taken the time to explain your thinking on this, numerous times, and at some length). That is not the source of our disagreement at all.

You think that there are (some? many?) authors whose contributions are valuable (such that it would be better to have those authors’ posts on LW than not to have them), but who experience such severe mental discomfort from being criticized (or even having their ideas challenged or questioned) in a sufficiently direct way that if they expect this to happen in response to posts they write on LW, then they will prefer not to write posts on LW. You believe that this is a loss for the forum, for the “rationalist community”, maybe for humanity as a whole, etc.

Therefore, by letting those authors ban anyone they want from commenting on their posts, you enable those authors to post on LW as they please, without fear of the aforementioned mental discomfort; and this, according to you, is a gain for all relevant parties/groups, and advances the goals of Less Wrong. The counterfactual loss of the comments that will not be posted as a result of such bans is insignificant by comparison (although regrettable ceteris paribus).

There’s nothing confusing or difficult to understand about this view. The only trouble is that it’s thoroughly and egregiously mistaken.

If you don’t want to discuss this with me, well, that’s your right, of course. But I hope you can see why this unwillingness is quite predictable, conditional on the assumption that I’m simply correct about this.
7habryka
No, you have strawmanned my position while asserting facts about my mental state with great confidence. As I have said in another thread, I have uniquely little interest in discussing this with you, so I won't respond further.

It's a localized silencing, which discourages criticism (beyond just the banned critic) and makes remaining criticism harder to find, and yes makes it harder to tell that the author is ignoring critics. If it's not effective at discouraging or hiding criticism, then how can it have any perceived benefits for the author? It's gotta have some kind of substantive effect, right? See also this.

6Ben Pace
In contrast with my fellow mod, I do feel worried about the suppression of criticism as a result of banning. I think that sort of thing is hard to admit to because we generally have pretty hard lines around that sort of thing around here, and it is plausible to me not worth putting any pressure on in this case.

Something on my mind is that sometimes there are people that are extremely unpleasant to deal with? For instance, I know one occasional commenter in this site who I believe stole a bunch of money from another occasional commenter, though they never took it to court. I think that’d be v unpleasant if the former ended up replying to a lot of the latter’s posts.

I would also say that sometimes people can be extremely unpleasant online. I know one person who (I believe) goes around the internet spreading falsehoods attempting to damage the reputation of another person, often directly, and in ways that seem to me a bit delusional, and I think that just based on that behavior it would be reasonable to ban someone.

Perhaps you will say that these are decisions for the mods to make. Perhaps that is so. My guess is that having to send a message to one of the mods making your case is a much much bigger trivial inconvenience than banning the person, and this is more likely to suppress the person’s contributions. I think this is a strong reason to let users decide themselves, and I would like to give them agency in that. Indeed, on social media sites like Twitter the only way to survive is via massive amounts of blocking; one could not conceivably report all cases to mods and get adjudication.

The counterpoint is that banning can be abused, and conversation in spite of unpleasantness and conflict is critical to LessWrong. If a user banned a lot of good contributors to the site, I would probably desire to disable their ability to be able to do that. So I think it a good default to not allow it. But I think at the very least it should not be thought of as a policy wit
-3Said Achmiz
Isn’t this just… comments, but worse? (Because surely not all useful comments are written in direct response to a post; many times, they are responses to other people’s comments, or to the OP’s comments which are written in response to other people’s comments, etc.) Do you really think that an author who banned a commenter from commenting on their posts would be more happy if the commenter could instead write “response posts” which would be displayed under that author’s posts (and above the comments, no less!)??
3Three-Monkey Mind
This sounds like “the blogosphere, but more self-contained” (not unlike Substack).
3Ben Pace
Yes. I currently guess that I would prefer that people I find very unpleasant be able to critique my post but I not have to read it in order to read and engage with the comments of ppl I don’t find v unpleasant. I don’t mind that it happens, I mind that I have to read it when reading through all my other comments. (In case this is the crux: I am imagining the response post to be shown with the same UI as a post on the frontpage, a horizontal box that is much shorter vertically than almost all comments.)
-11Said Achmiz
5habryka
The thing that it changes is the degree to which the author's popularity or quality is being used to give a platform to other people, which I think makes a lot of sense to give the author some substantial control over. If you have a critique that people care about, you can make a top-level post, and if it's good it can stand on its own and get its own karma.

If it's really important for a critique to end up directly associated with a post, you can just ask someone who isn't banned to post a link to it under that post. If you can't find anyone who thinks it's worth posting a link to it who isn't yourself, then I think it's not that sad for your critique to not get seen.

Yes, this puts up a few trivial inconveniences, but the alternative of having people provide a platform to anyone without any choice in the matter, whose visibility gets multiplied proportional to their own reach and quality, sucks IMO a lot more.

My engagement with LW is kind of weird and causes me to not write as many top-level posts as I like, and the ones I do write are site announcements, but if I was trying to be a more standard author on LW, I wouldn't use it without the ability to ban (and for example, would almost certainly ban Said, who has been given multiple warnings by the moderation team, is by far the most heavily complained-about user on the site, and who I would actively encourage many authors to ban if they don't want to have a kind of bad time, but who I think is providing enough value to the site in other contexts that I don't think a site-wide ban would make sense).

the alternative of having people provide a platform to anyone without any choice in the matter, whose visibility gets multiplied proportional to their own reach and quality, sucks IMO a lot more.

On the contrary, this is an actively good thing, and should be encouraged. “If you write about your ideas on a public discussion forum, you also thereby provide a platform to your critics, in proportion to the combination of your own reach and the critics’ popularity with the forum’s membership[1]” is precisely the correct sort of dynamic to enable optimal truth-seeking.

Think about it from the reader’s perspective: if I read some popular, interesting, apparently-convincing idea, the first thing—the first thing!—that I want to know, after doing a “first-pass” evaluation of the idea myself, is “what do other people think about it”? This is not a matter of majoritarianism, note, but rather:

  • something akin to “many eyes make all bugs shallow” (are there problems I missed, but that other people have pointed out?)
  • other people are often more knowledgeable than I am, and more qualified to notice and point out serious problems with an idea or a work
  • heuristic evaluation on the basis of seeing h
... (read more)
3habryka
I agree with many of these things, but of course, bad commenters have driven away many more commenters and authors than banning has driven away good commenters. I don't think the current ban system is generally causing serious, effective critics to be banned, and on net it is increasing the number of serious and effective critics.

If we had a better karma system, I think there are some tools that I might want to make available to people that are better than banning. Maybe things like karma thresholds, or some other way of making it so a user needs to be in particularly good standing to leave a comment. But unfortunately, our current karma system is not robust enough for that, and indeed, leaving many bad comments is still, unfortunately, a way to get lots of karma.
3Said Achmiz
You can’t possibly know that. (Especially since “driven away” includes “discouraged from posting in the first place”.)

I think there are few people who have beliefs as considered on this topic as I do! And maybe no one who has as much evidence as I do (which doesn't mean I am right, people come to dumb beliefs while being exposed to lots of evidence all the time). 

I've conducted informal surveys, have done hundreds of user interviews, have had conversations about their LessWrong posting experiences with almost every core site contributor over the years, have had conversations with hundreds of people who decided not to post on LW but instead post somewhere else, conversations with people who moved from other platforms to LW, conversations with people who moved from LW to other platforms, and many more. 

I have interviewed people in charge of handling these tradeoffs at many of the other big content platforms out there, as well as dozens of people who run smaller forums and online communities. I have pored over analytics and stats and graphs trying to understand what causes people to write here instead of other places, and what causes them to grow as both a commenter and writer. 

All of these form a model of how things work that suggests to me that yes, it is true that bad commenters dri... (read more)

6Wei Dai
I really don't get the psychology of people who won't use a site without being able to unilaterally ban people (or rather, I can only think of uncharitable hypotheses). Why can't they just ignore those they don't want to engage with, maybe with the help of a mute or ignore feature (which can also mark the ignored comments/threads in some way to notify others)?

Gemini Pro's verdict on my feature idea (after asking it to be less fawning):

The refined "Mute-and-Flag" system is a functional alternative, as it solves the author's personal need to be shielded from unwanted interactions and notifications. The continued preference for a unilateral block, then, is not driven by a personal requirement that the Mute-and-Flag system fails to meet. Instead, it stems from a differing philosophy about an author's role and responsibilities for the space they create.

The conflict centers on whether an author is simply a participant who can disengage personally, or if they are the primary curator of the conversational environment they initiate. An author who prefers a block is often motivated by this latter role. They may want to actively "garden" the discussion to maintain quality for all readers, prevent reputational damage by proxy, or because they lack confidence in the community's ability to effectively moderate a disruptive user, even with flags.

Ultimately, the choice between these systems reflects a platform's core design trade-off. A Mute-and-Flag system prioritizes public transparency and community-led moderation. A Unilateral Block system prioritizes authorial control and the ability to directly shape a discussion environment, while accepting the inherent risks of censorship and abuse.

----------------------------------------

My response to this is that I don't trust people to garden their own space, along with other reasons to dislike the ban system. I'm not going to leave LW over it though, but just be annoyed and disappointed at humanity whenever I'm reminded of it.

Why can't they just ignore those they don't want to engage with, maybe with the help of a mute or ignore feature (which can also mark the ignored comments/threads in some way to notify others)?

I get a sense that you (and Said) are really thinking of this as a 1-1 interaction and not a group interaction, but the group dynamics are where most of my crux is.

I feel like all your proposals are like “have a group convo but with one person blanking another person” or “have a 6-person company where one person just ignores another person” and all of my proposals are “have 2 group convos where the ignorer isn’t with the person they’re ignoring”, and I feel like the former is always unnatural and never works and the latter is entirely natural and works just fine.

If you ignore a person in a group convo, it’s really weird. Suppose the person makes some additional comment on a topic you’re discussing and now other people continue it. You are still basically interacting with that person’s comment; you couldn’t ignore it, because it directed the flow of conversation. Or instead perhaps you randomly stop engaging with threads and then ppl learn not to reply to that person because you won’t engage, and then they start to get subtly socially excluded in ways they didn’t realize were happening. These are confusing and unnatural and either can make the conversation “not worth it” or “net costly” to one of the participants.

2Wei Dai
My proposal can be viewed as two distinct group conversations happening in the same place. To recap: instead of a ban list, the author would have a mute list; whenever the muted people comment under their post, that comment would be hidden from the author and marked/flagged in some way for everyone else. Any replies to such muted and flagged comments would themselves be muted and flagged. So conversation 1 is all the unflagged comments, and conversation 2 is all the flagged comments. If this still seems a bad idea, can you explain why in more detail?
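(To make the proposed rule concrete, here is a minimal sketch of the flag-propagation logic. This is purely illustrative, not anything LessWrong actually implements; the `Comment`, `Post`, `is_flagged`, and `visible_to_post_author` names are all invented for the example.)

```python
from dataclasses import dataclass, field
from typing import Optional, Set


@dataclass
class Comment:
    author: str
    parent: Optional["Comment"] = None  # None for a top-level comment


@dataclass
class Post:
    author: str
    muted: Set[str] = field(default_factory=set)  # the author's mute list

    def is_flagged(self, comment: Comment) -> bool:
        # A comment is flagged if its author is muted, or if it is a reply
        # (at any depth) to a flagged comment.
        if comment.author in self.muted:
            return True
        if comment.parent is not None:
            return self.is_flagged(comment.parent)
        return False

    def visible_to_post_author(self, comment: Comment) -> bool:
        # Flagged comments ("conversation 2") are hidden from the post author
        # but would remain visible, with a marker, to everyone else.
        return not self.is_flagged(comment)


post = Post(author="alice", muted={"bob"})
top = Comment(author="bob")                   # muted commenter
reply = Comment(author="carol", parent=top)   # unmuted user replying to him

print(post.is_flagged(top), post.is_flagged(reply))        # True True
print(post.visible_to_post_author(Comment(author="dave")))  # True
```

The key design point the sketch captures is that flagging is inherited through the reply tree, so the two "conversations" stay cleanly separated even when unmuted users reply to muted ones.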
8Wei Dai
TBC, I think the people demanding unilateral bans will find my proposal unacceptable, due to one of my "uncharitable hypotheses", basically for status/ego/political reasons, or subconsciously wanting to discourage certain critiques or make them harder to find, and the LW team in order to appeal to them will keep the ban system in place. One of my purposes here is just to make this explicit and clear (if it is indeed the case).
3habryka
We've considered something similar to this, basically having two comment sections below each post, one sorted to the top one sorted to the bottom. And authors can move things to the bottom comment section, and there is a general expectation that authors don't really engage with the bottom comment section very much (and might mute it completely, or remove author's ability to put things into the top comment section).  I had a few UI drafts of this, but nothing that didn't feel kind of awkward and confusing.  (I think this is a better starting point than having individual comment threads muted, though explaining my models would take a while, and I might not get around to it)
3Wei Dai
Maybe your explanation will change my mind, but your proposal seems clearly worse to me (what if a muted person responds to an unmuted comment? If it gets moved to the bottom, is the context lost? Or are they not allowed to respond to anything in the top section? What epistemic purpose does it serve to allow a person in a potentially very biased moment to unilaterally decide to make a comment or commenter much harder for everyone else to see, as few people would bother to scroll to the bottom?) and also clearly harder to implement.

I feel like you're not providing much evidence against my hypothesis in the sibling thread, that the LW team is forced to appeal to epistemically irrational/statusy motivations by allowing unilateral bans or other schemes that appeal to such motivations to the detriment of epistemics, and is coming up with rationalizations to avoid admitting this.

ETA: TBC, I'm not asking for an explicit admission. If you have to appeal to irrational/statusy motivations to attract certain authors, then you probably can't admit that, since it would hurt their status and drive them away. I just want you to make a conscious calculation about this if you haven't already, and to find out for myself and others whether there are other reasonable hypotheses.
5habryka
I am not sure I am understanding your proposal then. If you want people to keep track of two conversations with different participants, you need the comment threads to be visually separated. Nobody will be able to keep track of who exactly is muted when in one big comment thread, so as long as this statement is true, I can't think of any other way to implement that but to move things into fully separate sections.

And then the whole point of having a section like this (in my mind) is to not force the author to give a platform to random bad takes proportional to their own popularity, without those people doing anything close to proportional work. The author is the person who attracted 95% of the attention in the first place, almost always by doing good work, and that control is the whole reason why we are even considering this proposal, so I don't understand what is to gain by doing this and not moving it to the bottom.

In general it seems obvious to me that when someone writes great content, this should get them some control over the discussion participants and culture of the discussion. They obviously always have that as a BATNA by moving to their own blog or Substack or wherever, and I certainly don't want to contribute to a platform and community that gives me no control over its culture, given that the BATNA of getting to be part of one I do get to shape is alive and real and clearly better by my lights (and no, I do not consider myself universally conflicted out of making any kind of decision about who I want to talk to or what kind of culture I want to create; I generally think discussions and cultures I shape are better than ones I don't, and this seems like a very reasonable epistemic state to me).
4Wei Dai
The point of my proposal is to give authors an out if there are some commenters who they just can't stand to interact with. This is a claimed reason for demanding a unilateral ban, at least for some. If the author doesn't trust the community to vote bad takes down into less visibility, when they have no direct COI, why should I trust the author to do it unilaterally, when they do? Writing great content doesn't equate to rationality when it comes to handling criticism. LW has leverage in the form of its audience, which most blogs can't match, but obviously that's not sufficient leverage for some, so I'm willing to accept the status quo, but that doesn't mean I'm going to be happy about it.
7habryka
Comments almost never get downvoted. Most posts don't get that many comments. Karma doesn't reliably work as a visibility mechanism for comments.  I do think a good karma system would take into account authors banning users, or some other form of author feedback, and then that would be reflected in the default visibility of their comments. I don't have a super elegant way of building that into the site, and adding it this way seems better than none at all, since it seems like a much stronger signal than normal aggregate upvotes and downvotes. I do think the optimal version of this would somehow leverage the karma system and all the voting information we have available into making it so that authors have some kind of substantial super-vote to control visibility, but that is balanced against votes by other people if they really care about it, but we don't have that, and I don't have a great design for it that wouldn't be super complicated. Overall, I think you should model karma as currently approximately irrelevant for managing visibility of comments due to limited volume of comments, and the thread structure making strict karma sorting impossible, so anything of the form "but isn't comment visibility handled by the karma system" is basically just totally wrong, without substantial changes to the karma system and voting.  On the other hand karma is working great for sorting posts and causing post discoverability, which is why getting critics to write top-level posts (which will automatically be visible right below any post they are criticizing due to our pingback system) is a much better mechanism for causing their content to get appropriate attention. Both for readers of the post they are criticizing, and for people who are generally interested in associated content.

Here's my understanding of the situation. The interested parties are: 

  1. Prominent authors: Contribute the most value to the forum and influence over the forum's long term trajectory. They will move to other platforms if they think it will be better for their messages.
  2. Readers: Don't want to see low quality comments that are hard to filter out (though I think when there are a lot of comments, comment karma helps a lot and I'm a lot more concerned about prominent authors leaving than about needing to skim over comments)
  3. Prominent authors concerned with fairness: Authors like Wei who have equally or more valuable content and will prefer a forum that shows that the writer is allowing non-biased commenting from readers even if the reader (like me)  needs to be willing to do a little more work to see this.
  4. Suspected negative value commenters: Think their comments are valuable and being suppressed due to author bias
  5. Intelligent automated systems: Should probably just get everything since they have unlimited patience for reading low quality, annotated comments
  6. Forum developers: Their time is super valuable

Does this sound about right?

[Update: The guidelines above say "Before users can mo... (read more)

7Three-Monkey Mind
You forgot readers who also want to see debunkings of bad posts (without having to maintain a separate list of people who usually debunk bad posts).
5habryka
I think the group that is missing the most are other active commenters. Maybe you meant to include them in "authors" but I think it makes sense to break them out.

The thing that IMO burns people the most is trying to engage in good faith with someone, or investing a lot of effort into explaining things, only to end up in a spot where they feel like that work ended up being mostly used against them in weird social ways, or their reward for sticking their head out and saying anything was being met with sneering. This applies to authors, but it also applies a lot to commenters. 

One of the most central value propositions that makes people want to be on LW instead of the rest of the internet is the fact they don't have to do moderation themselves. Commenters want to participate in curated environments. Commenters love having people engage with what they say with serious engagement (even if combined with intense disagreement) and hate having things rudely dismissed or sneered at. 

Commenters hate having to do norm enforcement themselves, especially from an authority-less position. Indeed, maybe the most common problem I hear from people about other forums, as well as community engagement more generally, is that they feel like they end up spending a lot of their time telling other people to be reasonable, and then this creates social drama and a feeling of needing to rile up troops via a badly codified mechanism of social enforcement, and this is super draining and exhausting, and so they leave. Moderator burnout is also extremely common, especially when the moderators do not feel like they get to act with authority.

By having both the LW moderation team do lots of active opinionated moderation, and allowing authors to do the same, we can create spaces that are moderated and can have any kind of culture, as opposed to just what happens by default on the internet. Realistically, you cannot settle norm disputes in individual comment threads, especially not with a rotating
7[anonymous]
The argument this statement is a part of does not logically follow. Commenters also want to participate in echo chambers and circlejerk over woo sometimes. That doesn't mean LW should be the spot for it. It does mean other places on the internet should probably supply that need, and indeed there is no shortage of such spaces. All else equal, creating an environment a part of the commenters hate is indeed bad. Here, all else is not equal, as has been repeatedly explained.

Are you referring to yourself and the LW team here? You have certainly acted with a ton of authority in the past, are more than welcome to do so in the future, and the community largely has your backs and believes in you. I have a hard time thinking of another major forum where moderators are respected more than on LW.

Except for the most important kind of culture, the culture which sits at the heart of what allows LW to be an epistemically productive environment (to the extent it is), namely one where applause lights, semantic stopsigns, nonsensical woo, vaguely religion-inspired obscurantism, etc. written by the authors get called out by commenters quickly, rapidly, and reliably. 

You are empowering authors to set the culture of the commentary on their posts. This gets things precisely backwards, as has been repeatedly pointed out on this and related threads in the past few days. The culture this site has aspired to and hopefully continues to aspire to is the one that allows for fundamental epistemic norms (i.e., rules of reason as opposed to social rules, i.e., what the first book of the Sequences, Map and Territory, is all about) to be upheld. Sometimes this means empowering authors, sometimes this means empowering commenters. Sometimes this means enforcing certain social rules, sometimes it means making it clear in advance that policing of supposed social norms will not happen if certain conditions are met. Etc. The culture that matters is one that does not unilaterally cede control to aut
5Ben Pace
Not sure if this is a big crux, just stopping by to note a disagreement here: I think often the reason that Alice doesn’t want to talk to Bob can be because Bob was very unpleasant toward Alice in off-site contexts. This is not something that the mods are able to see, but is a valid reason for Alice to find Bob’s (otherwise reasonable seeming) comments very unpleasant.
-1habryka
Nope, I think we have plenty of authority. I was here referring to authors trying to maintain any kind of discussion quality in the absence of our help, and unfortunately, we are very limited in the amount of moderation we can do, as it already takes up a huge fraction of our staff time.  Yes, we all agree on that. Posts are a great tool for pointing out errors with other posts, as I have pointed out many times. Yes, comments are also great, but frequently discussions just go better if you move them to the top level, and the attention allocation mechanisms work so much better. Also de-facto people just almost never ban anyone else from their posts. I agree we maybe should just ban more people ourselves, though it's hard and I prefer the world where instead of banning someone like Said site-wide, we have a middle ground where individual authors who are into his style of commenting can still have him around. But if there is only one choice on this side, then clearly I would ban Said and other people in his reference class, as this site would quickly fall into something close to full abandonment if we did not actively moderate that. Like, I don't believe you that you want the site moderators to just ban many more people from the whole site. It just seems like a dumb loss for everyone. Ok, then tell me, what do you propose we do when people repeatedly get into unproductive conversations, usually generated by a small number of users on the site? Do you want us to just ban them from the site in general? Many times they have totally fine interactions with many sub-parts of the site, they just don't get along with some specific person. Empowering the users who have a history of contributing positively to the site (or at least a crude proxy of that in the form of karma) to have some control of their own seems like the most economical solution.  We could also maintain a ban list where authors can appeal to us to ban a user from their posts, though honestly, I think there ar
4clone of saturn
Do you think anything is ever bad enough that it deserves to be rudely dismissed or sneered at? Or is that unacceptable to you in any possible context?
5habryka
I don't think "how bad something is" is the right dimension that determines whether sneering is appropriate, so I don't think there is a strict level of "badness" that makes sneering OK. I do think there are situations where it's appropriate, though very few on LW. Brainstorming some hypothetical situations:  * An author showed up posting some LLM slop that didn't get caught in our content review. A user explains why the LLM slop doesn't make any sense. The author responds with more LLM slop comments. It seems pretty reasonable to rudely dismiss the author who posted the LLM slop (though my guess is culturally it would still be better to do it with less of a sneering motion, but I wouldn't fault someone very much for it, and rudeness seems very appropriate). * If an organizational account was around that kept posting comments and posts that said things like "we (organization X) do not support this kind of work" and did other kinds of markety things, and someone had already written about why intellectual discourse with organizational accounts doesn't really make much sense, then I think I can imagine reacting with sneering to be appropriate (though mostly the right choice is of course to just ban that kind of stuff at the moderation level) Maybe that helps? I don't know, there are not that many circumstances where it feels like the right choice. Mostly, where I've seen cultures of sneering, things tend to go off the rails quite badly, and it's one of the worst attractors in internet culture space.
5Ben Pace
Not Habryka, but I find that dismissal is regularly appropriate, and sometimes will be rude in-context (though the rudeness should not itself be the goal).  I think sneering is often passive-aggressive whereas I think it's healthy for aggression to be overt / explicit rather than hidden behind plausible-deniability / pretense. Obfuscation is anti-communication, and I think it's common that sneering is too (e.g. one bully communicating to other bullies that something is worthy of scorn all-the-while seeming relatively innocuous to a passerby).
1Edwin Evans
The guidelines above say "Before users can moderate, they have to set one of the three following moderation styles on their profile...". But I don't see this displayed on user profiles. Is "Norm Enforcing" or "Reign of Terror" displayed anywhere? Also I don't think "Easy Going" really captures the "I Don't Put Finger on Scales" position.  If the author's policy is displayed somewhere and I just didn't find it then this seems good enough to me as a Reader.  I hope there is a solution that can make authors both like Eliezer and Wei happy. It will be nice to make Commenters happy also and I've thought less about that.
3habryka
The place where it gets displayed is below the comment box when you start typing something:  It's confusing for it to say "profile". It should ideally say "user settings", as the goal of that sentence was to explain to authors where they can set these and not to explain to readers where to find these. I'll edit it.
2[anonymous]
Do the readers not want to see low quality comments, or is it more so the authors who don't want to see them? And do the readers care primarily about the comments, or more so about the posts? The answer is probably "both" to both questions. Or at least, there are subsets of both groups that care about each. There is also another important party, which I think people like Said, Zack M Davis, me, (maybe?) Viliam, etc. belong to, which is "commenters/authors concerned with the degradation of epistemic standards" caused by happy death spirals over applause lights, semantic stopsigns, postrationalist woo, etc.
1Edwin Evans
Brainstorming: I wonder if it will be possible to have a subtle indicator at the bottom of the comment section for when comments have been silently modified by the author (such as a ban being triggered). I think this may still be unfair to party 1, so perhaps there could instead be badges in prominent author profiles that indicate whether they fall into the "gardener" or "equal scales" position (plus perhaps a setting for users that is off by default but will allow them to see a note for when an article has silent moderations/restrictions by author) or a way for authors to display that they haven't made any silent edits/restrictions?

Said's comment that triggered this debate is 39/34, at the top of the comments section of the post and #6 in Popular Comments for the whole site, but you want to allow the author to ban Said from future commenting, with the rationale "you should model karma as currently approximately irrelevant for managing visibility of comments". I think this is also wrong generally as I've often found karma to be very helpful in exposing high quality comments to me, and keeping lower quality comments less visible toward the bottom, or allowing me to skip them if they occur in the middle of threads.

I almost think the nonsensical nature of this justification is deliberate, but I'm not quite sure. In any case, sigh...

For the record, I had not read that instance of banning, and it is only just at this late point (e.g. after basically the whole thread has wrapped [edit: the whole thread, it turned out, had not wrapped]) that I read that thread and realized that this whole thread was downstream of that. All my comments and points so far were not written with that instance in mind but on general principle.

(And if you're thinking "Surely you would've spoken to Habryka at work about this thread?" my response is "I was not at work! I am currently on vacation." Yes, I have chosen — and enjoyed! — spending my vacation arguing the basic principles of moderation, criticism, and gardening.)

Initial impressions re: that thread:

  • For the record I had read Said's comment in "Top Comments" and not the original post (I'd read the opening 2-3 paragraphs), and had hit weak-agree-vote on Said's comment. I was disappointed to see a post endorsing religions and was grateful for a comment that made a good point (I especially agree with the opening sentence) that I could agree with and express my continued anti-religion stance.
  • I don't think Said's comment was otherwise good at engaging with the post (note that I agree-u
... (read more)
7Said Achmiz
(Apologies in advance for extreme slowness of replies; I am currently rate-limited such that I can only post one comment per day, on the whole site.) ---------------------------------------- Well… I hate to criticize when someone’s saying good things about me, but… frankly, I think that you shouldn’t’ve done that (vote on a comment without reading what it’s responding to, that is). I certainly disapprove of it. Indeed, I think that this highlights a serious mistake in the design of the “Top Comments” feature. For comparison, GreaterWrong’s Recent Comments view does not allow you to vote on the comments (the vote buttons are not displayed at all, although the current karma and agreement totals are); you must click through to the comment (either in context, or as a permalink) in order to vote on it. (This is no accident; @clone of saturn and I discussed this particular design choice, and we agreed not to display vote buttons in Recent Comments—nor in search results listings, nor when viewing comments from a user’s page, etc.—because we did not want to encourage users to vote on comments without having seen them in their conversational context. It seemed to us that allowing users to vote from such auxiliary views, where comments were displayed shorn of their context, would create unfortunate dynamics, contribute to the development of echo chambers, etc.) If you would have written the same comment, then why is it bad when I write it? And if it wouldn’t’ve been as bad, because you’d’ve written it differently… then what makes you sure that it would’ve been as good? (EDIT: In other words: if you say something differently, then you’ve said something different. This isn’t an ironclad rule—there are such things as logically equivalent statements with no difference in connotation or valence; and there are also such things as ideas that, in order to be communicated effectively, must be described from multiple angles. But it is a strong heuristic. This is also why I am so

Plenty of authors are “willing to engage” with “critics”—as long as the “critics” are the sort that take as an axiom that the author’s work is valuable, important, and interesting, and that the author himself is intelligent, well-intentioned, well-informed, and sane; and as long as their “criticism” is of the sort that says “how very fascinating your ideas are; I would love to learn more about your thinking, but I have not yet grasped your thesis in its fullness, and am confused; might you deign to enlighten me?” (in other words, “here’s a prompt for you to tell us more about your amazing ideas”). (You might call this “intellectual discussion as improv session”—where, as in any improv, the only allowed replies are “yes, and…”.)

It is challenging and unpleasant to be in an interaction with someone who is exuding disgust and contempt for you, and it's not a major weakness in people that they disprefer conversations like that.

A good thing to do in such situations is to do post-level responses rather than comment-level replies. I've seen many post-level back-and-forths where people disrespect the other person's opinions (e.g. Scott Alexander / Robin Hanson on healthcare, Scott Alexander... (read more)

It is challenging and unpleasant to be in an interaction with someone who is exuding disgust and contempt for you, and it’s not a major weakness in people that they disprefer conversations like this.

There are two mistakes here, I’d say.

First: no, it absolutely is a “major weakness in people” that they prefer to avoid engaging with relevant criticism merely on the basis of the “tone”, “valence”, etc., of the critics’ words. It is, in fact, a huge weakness. Overcoming this particular bias is one of the single biggest personal advances in epistemic rationality that one can make.

(Actually, I recently read a couple of tweets by @Holly_Elmore, with whom I certainly haven’t always agreed, but who describes this sort of thing very aptly.)

Second: you imply a false dichotomy between the “improv session” sort of faux-criticism I describe, and “exuding disgust and contempt”. Those are not the only options! It is entirely possible to criticize someone’s ideas, very harshly, while exhibiting (and experiencing) no significant emotionally-valenced judgment of the person themselves.

One of the best things that I’ve read recently was “ArsDigita: From Start-Up to Bust-Up” by Philip Greenspun, which... (read more)

9Drake Morrison
Or put another way, "Your strength as a rationalist is the extent to which it takes more Charisma to persuade you of false things, and less Charisma to persuade you of true things"   I do think many people could be served by trying to find the truth in harsh criticisms, to wonder if part of the sting is the recognition the critic was right. Your example of ArsDigita was quite helpful in getting a concrete demonstration of the value of that kind of critique.  The thing is, Greenspun failed. People are not empty-machines of perfect reasoning. There's an elephant in our brains. If the critique is to land, if it is to change someone's mind or behavior, it has to get through to the elephant.  Indeed. It is also possible (I claim) to give pointed criticism while remaining friendly. The elephant doesn't like it when words look like they come from an enemy. If you fail to factor in the elephant, and your critique doesn't land, that is your own mistake. Just as they have failed to see the value of the critique, you have failed to see the weight of the elephant.  The executives and other board members of ArsDigita failed, but if Greenspun could have kept their ear by being friendlier, and thereby increased the chances of changing their minds or behavior, Greenspun also failed at rationality.  If it is rational to seek the truth of criticism even when it hurts, then it is also rational to deliver your criticism in a friendly way that will actually land. Or put another way, your strength as a rationalist is the extent to which it takes less Wisdom to notice your plans will fail.
3habryka
FWIW, I mostly don't buy this framing. I think people being passively-aggressively hostile towards you in the way some LW commenters seem to valorize is reasonably well-correlated with indeed just not understanding your core points, not being valuable to engage with, and usually causing social dynamics in a space to go worse.  To be clear, this is a very small minority of people! But I think mostly when people get extremely frustrated at this extremely small minority of people, they pick up on it indeed being very rarely worth engaging with them deeper, and I don't think the audience ends up particularly enlightened either (the associated comment threads are ones I glance over most reliably, and definitely far far underperform the marginal top-level posts in terms of value provided to the reader, which they usually trade off against). I think people definitely have some unhealthy defensiveness, but the whole framing of "oh, you just need to placate the dumb elephant in people's brains" strikes me as a very bad way to approach resolving that defensiveness successfully. It matters whether you surround yourself with sneering people, it really has a very large effect on you and your cognition and social environment and opportunities to trade.
3Drake Morrison
Agreed. I was trying to point out how refusing to be friendly, even from a cynical point of view, is counterproductive. 
7habryka
Look, Said, you obviously call people stupid and evil. Maybe you have successfully avoided saying those literal words, but your comments frequently drip of derision, and that derision is then indeed followed up with calls for the targets of that derision to leave and to stop doing things.  Those preferences are fine, I think there do indeed exist many stupid and evil people, but it just seems absurd to suggest that paragraphs like this is not equivalent to calling people "stupid" or "evil": This is in direct reference to the preferences of the other people in the conversation! This is not some kind of far-mode depiction of humanity. This is you representing the preferences of your fellow commenters and posters.  You obviously do not respect these preferences! You obviously think they are dumb and stupid! And IDK, I think if you owned that and said it in straightforward words the conversation might go better, but it seems completely and absurdly farcical to pretend these words do not involve those judgements.
[-][anonymous]101

but it just seems absurd to suggest that paragraphs like this is not equivalent to calling people "stupid" or "evil":

It's... obviously not equivalent to saying people are dumb or evil? 

It is equivalent to saying people have soft egos. But that doesn't mean they are dumb or evil. I know plenty of smart and good people who have trouble receiving any meaningful criticism. Heck, I used to (in my opinion) be one of those people when I was younger!

I suspect the proportion of people with soft egos is significantly larger than the proportion of people who are stupid and evil.

2habryka
No, if you meant to say that they have soft egos without implying that they are dumb and stupid you would use different words. Seriously, actually imagine someone standing in front of you saying these words. Of course they are implying the recipients of those words are at least stupid! It is generally considered a mark of derision and implication of stupidity to frame your interlocutors' preferences in exaggerated tones, using superlatives and universals. "They prefer to not have obvious gaps in their reasoning pointed out", "they prefer that people treat all of their utterances as deserving of nothing less".  If someone wanted to just communicate that people have a complicated and often sensitive relationship to criticism, without judging them as stupid or evil, they would at the very least omit those superlatives. The sentences would say:  "People are often hesitant to have gaps in their reasoning pointed out, and they almost universally prefer others treating what they say with curiosity, kindness and collaboration, instead of direct and un-veiled criticism...". That sentence does not drip with derision! It's not hard! And the additional words and superlatives do exactly one thing, they communicate that derision.  Indeed, it is exactly this extremely frustrating pattern, where passive aggressiveness gets used to intimidate conversational partners and force them into dumb comment threads of attrition, while somehow strenuously denying any kind of judgement is being cast, that makes all of these conversations so frustrating. People aren't idiots. People can read the subtext. I can read the subtext, and I really have very little patience for people trying to claim it isn't there.
7Zack_M_Davis
Yes—the words communicate what Achmiz actually means: not just the fact that people often have a sensitive relationship to criticism, but that he judges them negatively for it. Is that a banned opinion? Is "I think less of people who have a sensitive relationship to criticism" not something that Less Wrong commenters are allowed to think?
6habryka
No, but it's a thing that Said for some reason was denying in his comments above: It is clear that Said has and expresses strong negative feelings about the people he is writing to. This is totally fine, within reasonable means. However, writing paragraphs and whole comments like the above, and then somehow trying to claim that he does not make claims about his interlocutor being "stupid or evil or any such thing", seems just totally absurd to me.
4Said Achmiz
I disagree with your characterization (and am entirely willing to continue defending my position on this matter), but see my other just-written comment about why this may be irrelevant. I thus defer any more substantive response on this point, for now (possibly indefinitely, if you agree with what I say in the linked comment).
4[anonymous]
What does this have to do with anything I wrote in my previous comment? I said he means people have "soft egos." What relation is there between that and them having a "complicated and often sensitive relationship to criticism"? I don't think Said believes people have a "complicated and often sensitive relationship to criticism"; I think he believes they generally cannot receive any meaningful criticism. "You have a complicated relationship to criticism" simply has a completely different meaning than "You can't take criticism." You are reading subtext... that isn't there? Obviously? Frankly, for all you're commenting about frustrating patterns and lack of patience, from my perspective it's a lot more frustrating to deal with someone that makes up interpretations of words that do not align with the text being used (as you are doing here) than with someone who thinks everyone has weak egos. "He thinks I'm stupid or evil" vs "He thinks I can't engage with people who say I have obvious gaps in my reasoning" have both different connotations and different denotations.

FWIW I regularly read a barely-veiled contempt/derision into Said's comments for many people on LessWrong, including in the passage that Habryka quotes. My guess is that we should accept that some people strongly read this and some people do not, and move on with the conversation, rather than insist that there is an 'obvious' reading of intent/emotion.

(To be clear I am willing to take the side of the bet that the majority of people will read contempt/derision for other commenters into Said's comments, including the one you mention. Open to setting up a survey on this if you feel confident it will not show this.)

5[anonymous]
Given the current situation, I think it's understandable for me not to commit to anything beyond the immediate short-term as relates to this site. I'd rather not write this comment either, but you've made a good-faith and productive offer, so it'd be rude of me to go radio silent (even though I should,[1] and will, after this one). But as long as I'm here... I also read something-describable-as-contempt in that Said comment, even though it's not the word I'd ideally use for it.  But, most importantly, I think it's "contempt for their weak egos"[2] and not "contempt for their intelligence or morality." And this is both the original point of discussion and the only one I have presented my case on, because it's the only one I care about (in this convo). 1. ^ Or might have to 2. ^ Because of how this prevents them from having good epistemics/ contributing meaningfully to a truth-seeking forum

"contempt for their weak egos"

Look, man, it's definitely "contempt for them" not just "contempt for their weak egos". 

It's not like Said is walking around distinguishing between people's egos and the rest of their personality or identity. If someone wanted to communicate "contempt for your weak ego, because of how it prevents you from having good epistemics/contributing meaningfully to a truth-seeking forum" you would use very different words. You would say things like "I have nothing against you as a whole, but I do have something against this weak ego of yours, which I think is holding you back". 

In as much as you are just trying to say "contempt for them, because of their weak egos", then sure, whenever someone acts contemptuous they will have some reason. In this case the reason is "I judge your ego to be weak" but that doesn't really change anything.

3habryka
No, I don't really think that is how communication works. I think if we have a conversation in which different people repeatedly interpret the same word to have drastically different meanings, then the thing to do is to settle on the meaning of those words, and if necessary ask participants in conversations to disambiguate and use new words, not to just ignore this and move on.  I do not think much hope for good conversations lies along the path of trying to just accept that for some people the word "grube" means "a large golden sphere" and to another person means "an imminent threat to punch the other person", if "grube" is a common topic of discussion. At the very least both parties need to mutually recognize both interpretations, even if they do not come naturally to them. Yes, I agree it's not crucial to settle what the "most obvious" reading is in all circumstances, but it's actually really important that people in the conversation have at least some mutual understanding of how other people interpret what they say, and adjust accordingly. (In this case, I don't think any actual communication failure at the level that sunwillrise is describing is happening.)
6habryka
Seriously, if you are incapable of understanding and parsing the subtext that is present in that comment, I do not think you are capable of participating productively in at least this online discussion.  I am really really not making things up here. I am confident if you run the relevant sections of text by any remotely representative subset of the population, you will get close to full consensus that the relevant section invokes substantial judgement about both the intelligence and moral character of the people involved. It's really not hard. It's not a subtle subtext. 
-7[anonymous]
-5habryka
1Zack_M_Davis
I think we need to disambiguate "stupid" here. It's not implying that they're low-IQ. It's implying that their ego is interfering with their intellectual performance, effectively making them stupid.
3habryka
You can of course make a point about something making someone worse without implying they are evil and stupid in the judgement-related meanings of those words, which are clearly being invoked here.  I am not calling people "stupid" in the relevant sense if I say that they are sleep deprived, even if yes, the sleep deprivation is making them currently less smart.  We are talking here about the degree to which Said and other commenters invoke derision as part of their writing. Your comment... seems weirdly intentionally dense at trying to somehow redefine those words to be about their purely denotative meaning, which is indeed the exact thing I am complaining about here. Please stop.
5Zack_M_Davis
To be clear, I agree that the comment in question is expressing judgement and derision! I can see how you might think I was playing dumb by commenting on the denotation of stupid without clarifying that, but hopefully the fact that I am willing to clarify that after it's been pointed out counts for something?
2habryka
But I don't think you clarified. You offered the distinction between two separate value-neutral definitions of stupidity, which I think we both knew were not what the topic at hand was about.  If you had said "I think we need to disambiguate between the object-level effects of people shielding themselves from criticism, which might in effect make them stupider, and the underlying judgement of people as 'unworthy of engagement with' and associated derision", then I would not have objected at all. Indeed, I think that distinction seems helpful! But coming into a discussion where the topic at hand is clearly the judgement and derision dimension, and proposing a distinction orthogonal to that, reads to me as an attempt at making it harder to point at the judgement and derision dimension. Which is a very common tactic, indeed it is the central tactic associated with passive aggression.
3[anonymous]
You are the one who is trying to label Said's words as saying his interlocutors are "stupid" or "evil." You are the one who is trying to bring the connotations of those words into play when the most (and frankly, only) reasonable interpretation of Said's literal language, which you quoted[1], is not aligned with what a neutral outside observer would understand as being "people are stupid/evil." Frankly, I really don't like doing this kind of thing generally because it kinda sucks, but since I lack a lab setup where I can ask this question to 100 different volunteers and do some empirical study on it, the next-best alternative was this: Asking GPT-4o about this (feel free to replicate it, I tried different prompts and ran it multiple times with the same general answer) Me: "Of course people have such preferences! Indeed, it’s not shocking at all! People prefer not to have their bad ideas challenged, they prefer not to have obvious gaps in their reasoning pointed out, they prefer that people treat all of their utterances as deserving of nothing less than “curious”, “kind”, “collaborative” replies (rather than pointed questions, direct and un-veiled criticism, and a general “trial by fire”, “explore it by trying to break it” approach)?! Well… yeah. Duh. Humans are human. No one is shocked."  Consider the following two interpretations:  1. the writer is saying (most) people are stupid or evil  2. the writer is saying (most) people have soft egos  Which interpretation seems more likely? GPT-4o: Between the two interpretations: 1. The writer is saying (most) people are stupid or evil 2. The writer is saying (most) people have soft egos Interpretation 2 — that the writer is saying most people have soft egos — is much more likely. Here's why: * The tone of the passage isn't moralizing (calling people evil) or condescending (labeling them as stupid). Instead, it takes a matter-of-fact, even somewhat sympathetic view: "Humans are human. No one is shocked." * Th
-3[anonymous]
Moreover, saying this (as a mod) to an outsider who tried meaningfully to help the discussion out by pointing out how words can have multiple meanings seems to be in really bad taste. Calling it "intentionally dense" is also... very strange and doesn't make sense in context?
7Ben Pace
Sometimes rationalists try to actively avoid paying attention to dynamics that are irrelevant to truthseeking (e.g. try to avoid paying attention to status dynamics when discussing whether a claim is true or false), but active ignorance can be done in an appropriate, healthy way, and also in an inappropriate, pathological way.  Here, in trying to ignore subtext and focus on the denotative meaning, Zack here basically failed to respond to Habryka's request to focus on the implicit communication, and then Habryka asked him to not do that. (By Zack's reply I believe he is also non-zero self-aware of what cognitive tactic he was employing. I think such self-awareness is healthy.)
0Zack_M_Davis
The cognitive tactics go both ways. Team Said has an incentive to play dumb about the fact that comments from our team captain often feature judgemental and derisive subtext. It makes sense for Habryka to point that out. (And I'm not going to deny it after it's been pointed out, gross.) But at the same time, Team Hugbox Censorship Cult has an incentive to misrepresent the specifics of the judgement and derision: "called people stupid and evil" is a more compelling pretext for censorship (if you can trick stakeholders into believing it) than "used a contemptuous tone while criticizing people for evading criticism."
4[anonymous]
@Ben Pace And the question of whether Said, in that (and other) comments, was calling people "stupid or evil," is the only point of discussion in this thread. As Habryka said at the beginning: Which I responded to by saying: Then the whole thing digressed into whether there is "contempt" involved, which seems to be very logically rude from the other conversation participants (in particular, one of the mods), the following dismissive paragraph in particular: It... doesn't change anything if Said is calling people "stupid or evil" or if he's calling them something else? That's literally the only reason this whole argumentative thread (the one starting here) exists. Saying "sure" while failing to acknowledge you're not addressing the topic at hand is a classic instance of logical rudeness. I suppose it is "absurd", showcases "you are [not] capable of participating productively in at least this online discussion", "weirdly dense," "intentionally dense," a "skill issue," "gaslighting," etc, to focus on whatever is being actually debated and written instead of on long-running grievances mods have against a particular user.  Habryka is free to express whatever views he has on the Said matter, but I would have hoped and expected that site norms would not allow him to repeatedly insult (see above) and threaten to ban another user who has (unlike Habryka) followed those conversational norms instead of digressing into other matters.
0habryka
Look, I gave you an actual moderator warning to stop participating in this conversation. Please knock it off, or I will give you at least a temporary ban for a week until some other moderators have time to look at this. The whole reason I am interested in at least giving you a temporary suspension from this thread is that you are not following reasonable conversational norms (or at least, in this narrow circumstance, appear extremely ill-suited to discussing the subject matter at hand, in a way that might look like being intentionally dense, or could just be a genuine skill issue; I don't know, I feel genuinely uncertain). It is indeed not a norm on LessWrong to not express negative feelings and judgements! There are bounds to it, of course, but the issue of contention is passive-aggression, not straightforward aggression. In any case, after reviewing a lot of your other comments for a while, I think you are overall a good commenter and have written many really helpful contributions, and I think it's unlikely any long-term ban would make sense, unless we end up in some really dumb escalation on this thread. I'll still review things with the other mods, but my guess is you don't have to be very worried about that. I am, however, actually asking you as a mod to stay out of this discussion (and this includes inline reacts), as I do really think you seem much worse on this topic than others (and this seems confirmed by sanity-checking with other people who haven't been participating here).
-9Said Achmiz
4Said Achmiz
Well, let’s recap a bit, because it’s easy to get lost in a game of Telephone with long threads like this. There was a claim about my comments: I replied: To which a response was: And Zack wrote: This whole tangent began with a claim that if someone’s comments on your posts are sufficiently unpleasant toward you personally, then it’s reasonable to “want a certain level of distance from” this person (which idea apparently justifies banning them from your posts—a leap of logic I remain skeptical about, but never mind). And I’d started writing, in this reply to Zack, a comment about how I took issue with this or that characterization of my writing on LW, but then it occurred to me to ask a question instead (which is mostly for Ben, I guess, but also for anyone else who cares to weigh in on this): Just how load-bearing is this argument? I mean, what if I banned someone because I just don’t like their face; or, conversely, because I disagree with their political views, even though I have absolutely no feelings about them personally, nor any opinions about their behavior? Is that ok? As I understand it, the LW system would have zero problem with this, right? I can ban literally any member from my posts for literally any reason, or for no reason at all—correct? I could ban some new guy who just joined yesterday and hasn’t written so much as a single comment and about whom I know absolutely nothing? If all of the above is true, then what exactly is the point of litigating the subtle tonal nuances of my comments? I mean, we can keep arguing about whether I do or do not say this, or imply that, or whether this or the other descriptor can accurately be applied to my comments, and so on… by all means. But is there a purpose to it? Or was this just a red herring?
2habryka
Because I think it is more likely than not that I want to give you a site-wide ban and would like to communicate reasons for that, and hear counterarguments, before I do it. The other reason I am participating in this is to avoid a passive-aggressive culture taking hold on LessWrong. The combination of obvious passive aggression with denial that any such aggression is taking place is one of the things that people have most consistently complained about from you and a few other commenters, and one way to push back on that is to point out the dynamic and enforce norms of reasonable discourse. No, you can't ban people for any reason. As we've said 10+ times in this discussion and previous discussions of this, if someone was going completely wild with their banning we would likely step in and tell them to knock it off. In general we will give authors a bunch of freedom, and on the margin I would like authors to moderate much more actively, but we are monitoring what people get banned for, and if things trend in a worrying direction, we will either adjust people's moderation power, or tell individual authors to change how they do things, or stop promoting that author's posts to the frontpage.
-6Said Achmiz
4Ben Pace
(I had drafted a long reply to this which still needed more work, but I've rather gone over the limit of how much time to spend arguing about moderation on LessWrong this month, so I decided not to finish it. Nonetheless, FWIW, I thought this was a good comment that made some good counterpoints, and I upvoted it. I think you're right that it is often a weakness to be strongly affected by criticism, and that post-replies have many weaknesses compared to arguing in the comments, but I would want to defend that there are many worthy environments for public writing about how the world works where it makes sense for people with that weakness to optimize at all for comfort over criticism, and also that it's not a weakness in many contexts to use contempt/disgust to track real threats and people who aren't worth talking to; it's just accurate.)
3habryka
We luckily have shortform for that!
2Ben Pace
No apology necessary! I am grateful for the slowdown in rate of replies, I am becoming busier again. But thanks for flagging.
2Ben Pace
Just noting briefly that I've gone back and read the whole post; I stand by the agree-react on your comment, and think I was correct in my assumption that his post did not provide a strong counterargument to the point you made at the top of your comment.
4habryka
I agree it's working fine for that specific comment thread! But it's just not really true for most posts, which tend to have fewer than 10 comments, and where voting activity after 1-2 rounds of replies gets very heavily dominated by the people who are actively participating in the thread, which, especially as things get heated, causes vote distributions to end up very random and to stop working as a signal.

The popular comments section is affected by net karma, though I think it's a pretty delicate balance. My current guess is that indeed the vast majority of people who upvoted Said's comment didn't read Gordon's post, and upvoted Said's comment because it seemed like a dunk on something they didn't like, irrespective of whether that actually applied to Gordon's post in any coherent way. I think the popular comments section is on-net good, but in this case it seems to me to have failed (for reasons largely unrelated to other things discussed in this thread), and it has happened a bunch of times that it promoted contextless responses to stuff that made the overall discussion quality worse.

Fundamentally, the amount of sorting you can do in a comment section is just very limited. I feel like this isn't a very controversial or messy point. On any given post you can sort maybe 3-4 top-level threads into the right order, so karma is supplying at most a few bits of prioritization for the order. In the context of post lists, you are often sorting lists of posts hundreds of items long, and karma is the primary determinant of whether something gets read at all. I am not saying it has absolutely no effect, but clearly it's much weaker (and indeed, it absolutely does not reliably prevent bad comments from getting lots of visibility, and does not remotely reliably cause good comments to get visibility, especially if you wade into domains where people have stronger pre-existing feelings and are looking for anything to upvote that looks vaguely like their side).
2Wei Dai
BTW my old, now defunct user script LW Power Reader had a feature to adjust the font size of comments based on their karma, so that karma could literally affect visibility despite "the thread structure making strict karma sorting impossible". So you could implement that if you want, but it's not really relevant to the current debate, since karma obviously affects visibility even without sorting, in the sense that people can read the number and decide to skip the comment or not.
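The mechanism Wei Dai describes is easy to sketch. Here is a minimal illustration of the kind of karma-to-font-size mapping such a user script might use (the mapping logic shown in Python; every constant and the function name are hypothetical, not taken from the actual LW Power Reader):

```python
import math

def font_size_for_karma(karma: int, base: float = 14.0) -> float:
    """Map a comment's karma to a font size in pixels.

    Log-scaled so a 100-karma comment isn't ten times larger than a
    10-karma one, and clamped so extreme scores stay readable.
    All constants here are illustrative.
    """
    # signed log compression: +/- log(1 + |karma|)
    delta = math.copysign(math.log1p(abs(karma)), karma)
    return max(9.0, min(24.0, round(base + 1.5 * delta, 1)))
```

A user script would then apply the result as each comment element's `font-size`, making high-karma comments literally more visible while skimming, even without reordering the thread.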
2Ben Pace
We did that on April 1st in 2018 I believe.
2Wei Dai
Assuming your comment was serious (which on reflection I think it probably was), what about a modification to my proposed scheme, that any muted commenter gets an automatic downvote from the author when they comment? Then it would stay at the bottom unless enough people actively upvoted it? (I personally don't think this is necessary because low quality comments would stay near the bottom even without downvotes just from lack of upvotes, but I want to address this if it's a real blocker for moving away from the ban system.)
2habryka
I don't currently like the muted comment system for many practical reasons, though I like it as an idea!  We could go into the details of it, but I feel a bit like stuff is getting too anchored on that specific proposal, and explaining why I don't feel excited about this one specific solution out of dozens of ways of approaching this feels like it would both take a long time, and not really help anyone. Though if you think you would find it valuable I could do it. Let me know if you want to go there, and I could write more. I am pretty interested in discussing the general principles and constraints though, I've just historically not gotten that much out of discussions where someone who hasn't been trying to balance a lot of the complicated design considerations comes in with a specific proposal, but have gotten a lot of value out of people raising problems and considerations (and overall appreciate your thoughts in this thread).
4Wei Dai
Yeah I think it would help me understand your general perspective better if you were to explain more why you don't like my proposal. What about just writing out the top 3 reasons for now, if you don't want to risk investing a lot of time on something that might not turn out to be productive?

In my mind things aren't neatly categorized into "top N reasons", but here are some quick thoughts: 

(I.) I am generally very averse to having any UI element that shows on individual comments. It just clutters things up quickly and requires people to scan each individual comment. I have put an enormous amount of effort into trying to reduce the number of UI elements on comments. I much prefer organizing things into sections which people can parse once, and then assume everything has the same type signature.

(II.) I think a core thing I want UI to do in the space is to hit the right balance between "making it salient to commenters that they are getting more filtered evidence" and "giving the author social legitimacy to control their own space, combined with checks and balances". 

I expect this specific proposal to end up feeling like a constant mark of shame that authors are hesitant to use because they don't feel the legitimacy to use it, and most importantly, to make it very hard for them to get feedback on whether others judge them for how they use it, inducing paranoia and anxiety, which I think would make the feature largely unused. I think in that world it isn't really helping anyone.

1Wei Dai
To reduce clutter you can reuse the green color bars that currently indicate new comments, and make them red for muted comments. Authors might rarely ban commenters because the threat of banning drives them away already. And if the bans are rare then what's the big deal with requiring moderator approval first? I would support letting authors control their space via the mute-and-flag proposal, adding my weight to its social legitimacy, and I'm guessing others who currently are very much against the ban system (thus helping to deprive it of social legitimacy) would also support it, or at least not attack it much, in the future. I, and I think others, would be against any system that lets authors unilaterally exert very strong control over the visibility of comments, such as by moving them to a bottom section. But I guess you're actually talking about something else, like how comfortable the UX makes the author, thus encouraging them to use it more. It seems like you're saying you don't want the muting to be too in-your-face, because that makes authors uncomfortable and reluctant to use it? Or you simultaneously want authors to have a lot of control over comment visibility, but don't want that fact to be easily visible (and the current ban system accomplishes this)? I don't know, this just seems very wrong to me, like you want authors to feel a social legitimacy that doesn't actually exist; i.e., if most people support giving authors more control, then why would it be necessary to hide it?

To reduce clutter you can reuse the green color bars that currently indicate new comments, and make it red for muted comments.

No, the whole point of the green bars is to be a very salient indicator that only shows in the relatively rare circumstance where you need it (which is when you revisit a comment thread you previously read and want to find new comments). Having a permanent red indicator would break in like 5 different ways: 

  • It would read as a temporary indicator, because that's the pattern we established with colored indicators all across the site. All the color we have is part of dynamic elements.
  • It would introduce a completely new UI color, which has so far only been used in the extremely narrow context of downvoting.
  • Because the color has only been used for downvoting, it would feel like a mark of shame, making the social dynamics a bunch worse.
  • How would you then indicate that a muted comment is new?
  • The green bar is intentionally very noticeable, and red is even more attention-grabbing, making it IMO even worse than a small icon somewhere on the comment in terms of clutter.

To be clear, I still appreciate the suggestion, but I don't think it's a good one in this context.

As an aside, I think one UI preference I suspect Habryka holds more strongly than Wei Dai does here is that the UI look the same to all users. For similar reasons why WYSIWYG is helpful for editing, when it comes to muting/threading/etc. it's helpful for people to all be looking at the same page, so they can easily model what others are seeing. Having some people see a user's comments but the author not, or key commenters not, is quite costly for social transparency and for understanding social dynamics.

4Wei Dai
My proposal was meant to address the requirement that some authors apparently have to avoid interacting with certain commenters. All proposals dealing with this imply multiple conversations and people having to model different states of knowledge in others, unless those commenters are just silenced altogether, so I'm confused why it's more confusing to have multiple conversations happening in the same place when those conversations are marked clearly. It seems to me like the main difference is that Habryka just trusts authors to "garden their spaces" more than I do, and wants to actively encourage this, whereas I'm reluctantly trying to accommodate such authors. I'm not sure what's driving this difference though. People on Habryka's side (so far only he has spoken up, but there are clearly more, given voting patterns) seem very reluctant to directly address the concern that people like me have, that even great authors are human and likely quite strongly biased when it comes to evaluating strong criticism (unless they've done so somewhere I haven't seen). Maybe it just comes down to differing intuitions and there's not much to say? There's some evidence available though, like Said's highly upvoted comment nevertheless triggering a desire to ban Said. Has Habryka seen more positive evidence that I haven't?
5habryka
No, what are you talking about? The current situation, where people can make new top-level posts, which get shown below the post itself via the pingback system, does not involve any asymmetric states of knowledge. Indeed, there are lots of ways to achieve this without requiring asymmetric states of knowledge. Having two comment sections, with one marked as "off-topic" or something like that, also doesn't require any asymmetric states of knowledge. Unmoderated discussion spaces are not generally better than moderated discussion spaces, including on the groupthink dimension! There is no great utopia of discourse that can be achieved simply by withholding moderation tools from people. Bandwidth is limited and cultural coordination is hard, and this means that there are harsh tradeoffs to be made about which ideas and perspectives will end up presented. I am not hesitant to address the claim directly; it is just the case that on LessWrong, practically no banning ever takes place of anyone who wouldn't also end up being post-banned by the moderators, and so de facto this effect just doesn't seem real. Yes, maybe there are chilling effects that don't produce observable effects, which is always important to think about with this kind of stuff, but I don't currently buy it. The default thing that happens when you leave a place unmoderated is that the conversation gets dominated by whoever has the most time and stamina and social resilience, and the overall resulting diversity of perspectives trends to zero. Post authors are one obvious group to moderate spaces, especially with supervision from site moderators. There are lots of reasonable things to try here, but a blanket "I don't trust post authors to moderate" is simply making an implicit statement that unmoderated spaces are better, because on the margin LW admins don't have either the authority or the time to moderate everyone's individual posts. Authors are rightly pissed if we just show up and …
3Wei Dai
In the discussion under the original post, some people will have read the reply post, and some won't (perhaps including the original post's author, if they banned the commenter in part to avoid having to look at their content), so I have to model this. Sure, let's give people moderation tools, but why trust authors with unilateral powers that can't be overriden by the community, such as banning and moving comments/commenters to a much less visible section?
4habryka
"Not being able to get the knowledge if you are curious" and "some people have of course read different things" are quite different states of affairs!  I am objecting to the former. I agree that of course any conversation with more than 10 participants will have some variance in who knows what, but that's not what I am talking about.
3Wei Dai
It would be easy to give authors a button to let them look at comments that they've muted. (This seems so obvious that I didn't think to mention it, and I'm confused by your inference that authors would have no ability to look at the muted comments at all. At the very least they can simply log out.)
4habryka
I mean, kind of. The default UI experience of everyone will still differ by a lot (and importantly, between people who will meaningfully be "in the same room"), and the framing of the feature as "muted comments" indeed does not communicate that. The exact degree to which it would make the dynamics more confusing would depend on the saliency of the author UI, but of course commenters will have no idea what the author UI looks like, and so can't form accurate expectations about how likely the author is to end up making the muted comments visible to them. Contrast this with a situation with two comment sections. The default assumption is that the author and the users see the exact same thing. There is no uncertainty about whether maybe the author has things collapsed by default whereas the commenters do not. People know what everyone else is seeing, and it's communicated in the most straightforward way. I don't even really know what I would do to communicate to commenters what the author sees (it's not an impossible UI challenge; you can imagine a small screenshot on the tooltip of the "muted" icon that shows what the author UI looks like, but that doesn't feel to me like a particularly elegant solution). One of the key things I mean by "the UI looking the same for all users" is maintaining common knowledge about who is likely to read what, or at least the rough process that determines what people read and what context they have. If I give the author some special UI where some things are hidden, then in order to maintain common knowledge I now need to show the users what the author's UI looks like (and show the author what the users are being shown about the author UI, but this mostly would take care of itself, since all authors will be commenters in other contexts).
3Ben Pace
I’m not certain that this is the crux, but I’ll try again to explain why I think it’s good to give people that sort of agency. I am probably repeating myself somewhat. I think incompatibilities often drive people away (e.g. at LessOnline I have let people know they can ask certain people not to come to their sessions, as it would make them not want to run the sessions, and this is definitely not due to criticism but to conflict between the two people). That’s one reason why I think this should be available. I think bad commenters also drive people away. There are bad commenters who seem fine when inspecting any single comment, but when inspecting longer threads and longer patterns they’re draining energy and provide no good ideas or arguments. Always low-quality criticisms, stated maximally aggressively, not actually good at communication/learning. I can think of many examples. I think it’s good to give individuals some basic level of agency over these situations, and not require active input from mods each time. This is for cases where the incompatibility is quite individual, or where the user’s information comes from off-site interactions, and also just because there are probably a lot of incompatibilities and we already spend a lot of time each week on site moderation. And furthermore, people are often quite averse to bringing up personal incompatibilities with strangers (i.e. in a DM to the mods, who they've never interacted with before and don't know particularly well). Some people will not have the principles to tend their garden appropriately, and will inappropriately remove people with good critiques. That’s why it’s important that they cannot prevent the user from writing posts or quick takes about their content. Most substantial criticisms on this site have come in post and quick-take form, such as Wentworth’s critiques of other alignment strategies, or the sharp left turn discourse, or Natalia’s critiques of Guzey’s sleep hypotheses / SMTM’s lithium hypothesis.
6Wei Dai
This is something I currently want to accommodate but not encourage people to use moderation tools for, but maybe I'm wrong. How can I get a better sense of what's going on with this kind of incompatibility? Why do you think "definitely not due to criticism but to conflict"? It seems like this requires a very different kind of solution than either local bans or mutes, which most people don't or probably won't use, so can't help in most places. Like maybe allow people to vote on commenters instead of just comments, and then their comments get a default karma based on their commenter karma (or rather the direct commenter-level karma would contribute to the default karma, in addition to their total karma which currently determines the default karma). I'm worried about less "substantial" criticisms that are unlikely to get their own posts, like just pointing out a relatively obvious mistake in the OP, or lack of clarity, or failure to address some important counterargument.
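Wei Dai's "vote on commenters" idea can be made concrete with a simple blend. Below is a rough sketch of how a new comment's starting score might combine direct commenter-level karma with site-wide total karma (Python; the weights, log compression, clamping, and function names are all hypothetical, not an existing LessWrong mechanism):

```python
import math

def _compress(karma: int) -> float:
    """Log-compress karma so prolific users don't dominate by volume alone."""
    return math.copysign(math.log1p(abs(karma)), karma)

def default_comment_karma(total_karma: int, commenter_karma: int,
                          w_commenter: float = 0.7) -> int:
    """Starting score for a new comment by this user.

    Weights votes on the user *as a commenter* more heavily than their
    site-wide total karma, then clamps to a small range so the comment's
    fate is still decided mostly by votes on the comment itself.
    """
    blended = (w_commenter * _compress(commenter_karma)
               + (1 - w_commenter) * _compress(total_karma))
    return max(-1, min(5, round(blended)))
```

Under this sketch, a user whose comments are consistently downvoted starts new comments near zero even if their total karma is high, which is the asymmetry being pointed at here.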
5Ben Pace
I mean, I've mostly gotten a better sense of it by running lots of institutions and events and having tons of complaints bubble up. I know it's not just because of criticism because (a) I know from first principles that conflicts exist for reasons other than criticism of someone's blogposts, and (b) I've seen a bunch of these incompatibilities. Things like "bad romantic breakup" or "was dishonorable in a business setting" or "severe communication style mismatch", amongst other things. You say you're not interested in using "moderation tools" for this. What do you have in mind for how to deal with this, other than tools for minimizing interaction between two people? It's a good idea, and maybe we should do it, but I think it doesn't really address the thing of unique/idiosyncratic incompatibilities. Also, it would be quite socially punishing for someone to know that they're publicly labelled net-negative as a commenter, rather than simply that their individual comments so far have been considered poor contributions; making a system this individually harsh is a cost to be weighed, and it might overall push away high-quality contributors more than it helps. It seems, then, that making it so that a short list of users is not welcome to comment on a single person's posts is much less likely to cause these things to be missed. The more basic mistakes can be noticed by a lot of people. If it's a mistake that only one person can notice due to their rare expertise or unique perspective, I think they can get a lot of karma by making it a whole quick take or post. Like, just to check: are we discussing a potential bad future world if this feature gets massively more use? Like, right now there are a ton of very disagreeable and harsh critics on LessWrong and there are very few absolute bans. I'd guess absolute bans are on the order of 30-100 author-commenter pairs over the ~7 years we've had this, with weekly logged-in users being ~4,000 these days. The effect size so far seems quite small.
3habryka
I think better karma systems could potentially be pretty great, though I've historically always found it really hard to find something much better, mostly for complexity reasons. See this old shortform of mine on a bunch of stuff that a karma system has to do simultaneously:  https://www.lesswrong.com/posts/EQJfdqSaMcJyR5k73/habryka-s-shortform-feed?commentId=8meuqgifXhksp42sg 
0Said Achmiz
Uh… no. No, I absolutely do not think of this as a 1-1 interaction. No, in a threaded comment system, each subthread (at the very least, each thread starting with a top-level comment!) is its own group conversation.
7Ben Pace
But it is one that a user the author finds unpleasant may join at any time, suddenly becoming a conversation with them in it. Users from different threads regularly cross-pollinate.
-10Said Achmiz
-3M. Y. Zuo
If most other commentators all accept seeing each other’s input… then why should a small minority’s opinion or preferences matter enough to change what the overwhelming majority can see or comment on, anywhere on this site? I can’t think of any successful forum whatsoever where that is the case, other than those where the small minority is literally paying the majority somehow. If it were a whitelist system where everyone is forbidden from commenting by default there might be a sensible argument here… but under the current norm it can only cause more issues down the road. Over the long time frame there will definitely be some who exploit it to play tricks… and once that takes hold I’m pretty sure LW will go down the tubes, since even for the very virtuous and respectable… nobody is 100% confident that their decisions are free from any sort of politicking or status games whatsoever. And obviously for Duncan I doubt anyone is even 95% confident.
5Ben Pace
Because the post author is the person who gets the majority of the credit for the conversation existing in the first place. Something that makes them not-post-at-all is very costly for everyone.
5Said Achmiz
This logic also applies to commenters whose top-level comments start discussions.
2Ben Pace
True! What's relevant about the current setup is that banned users can post anywhere else on the site equally well after they're banned from a particular author's posts (e.g. quick takes, posts, open thread), whereas previously there was nowhere the author could post in a way that would reliably keep the unpleasant-to-them user from being able to post in reply.
3M. Y. Zuo
That doesn’t seem true in my experience. For example I recently wanted to post a comment asking a question about the new book that’s been heavily promoted and I found, only after writing it out, that So8res inexplicably banned me from commenting. And I can’t see any other place where I could post a specific question about that book “equally well”.
2habryka
I think a shortform would work fine?
5Drake Morrison
So like, do you distrust writers using substack? Because substack writers can just ban people from commenting. Or more concretely, do you distrust Scott to garden his own space on ACX? Giving authors the ability to ban people they don't want commenting is so common that it feels like a Chesterton's Fence to me. 

So like, do you distrust writers using substack? Because substack writers can just ban people from commenting. Or more concretely, do you distrust Scott to garden his own space on ACX?

It's normally out of my mind, but whenever I'm reminded of it, I'm like "damn, I wonder how many mistaken articles I read and didn't realize it because the author banned or discouraged their best (would-be) critics." (Substack has other problems though, like a lack of karma, which makes it hard to find good comments anyway; I would want to fix that first.)

Giving authors the ability to ban people they don't want commenting is so common that it feels like a Chesterton's Fence to me.

It could also just be a race to the bottom to appeal to unhealthy motivation, kind of like YouTube creating Shorts to compete with TikTok.

6Three-Monkey Mind
Substack seems like it has the usual rigors of the rest of the Internet — namely, one isn’t going to see strong objections to articles posted there in the comments section. This isn’t very rigorous. LW’s rigors are higher if authors aren’t banning, or can’t ban, people from commenting on their posts because if someone tries to advance an attractive but underbaked/wrongheaded/etc. idea, he/she will be rebutted right there in the comments. This makes things posted to LW, in general, better than most of the rest of the Internet. I want LW to be better than the most of the rest of the Internet.
5Drake Morrison
Agreed on wanting LW to be better than the rest of the Internet. My model is something like: the site dies without writers -> writers only write if they enjoy writing on the platform -> writers don't enjoy writing without the ability to ban commenters that they find personally aversive -> give authors the ability to ban commenters on their posts. I'm cognizant of the failure mode where good critiques get banned. Empirically, however, I don't think that's a problem here on LW. Long, well-written critiques are some of the most upvoted posts on the site. I think it's fine if the critique lives in the larger LW archipelago, and not on the island of one author. The rigor lives in the broader site, not just in an individual post's comment section.
3Said Achmiz
If the critique “lives in the larger LW archipelago”, then: 1. It won’t be highly upvoted, because… 2. Almost nobody will read it; and therefore… 3. It won’t be posted in the first place. You don’t get to have both (a) well-written criticism being commonplace, and (b) writers never having to read comments that they find “personally aversive”. Pick one.
1Drake Morrison
Doesn't the existence proof of long, well-written and highly upvoted critiques disprove your point? There are plenty of comments that are critiques, and the author of the post doesn't ban them, because the critique wasn't cruel. Even if an author starts constantly banning anyone who disagrees with them, they'll get a reputation pretty fast. It feels to me like you are vigorously defending against a failure mode that is already handled by existing reputational effects like karma, and by people being free to write their own post in response.
-3Said Achmiz
… no? Why in the world would it? How can the existence of things in the current regime disprove my point about what would happen in a different regime?
3Drake Morrison
Both regimes share the property wherein someone can disagree and write a lengthy critique as a top-level post. This empirically does happen, and they are sometimes highly upvoted and widely read. Hence existence proof. The regimes are not different in this regard. 
5[anonymous]
"Lengthy critique" ≠ "good critique." Really, this has been covered before. Not every[1] good, useful, or even necessary critical comment can be turned into a post in a way that makes sense. See the example in the linked comment: 1. ^ I'd even go further and say "not even a large portion of"
4FeepingCreature
I think you failed to establish that the long, well-written and highly-upvoted critiques lived in the larger LW archipelago, so there's a hole in your existence proof. On that basis, I would surmise that on priors Said assumed you were referring to comments or on-site posts.

There are so many critical posts just here on LessWrong that I feel like we are living in different worlds. The second most upvoted post on the entire site is a critique, and there's dozens more about everything from AI alignment to discussion norms.

2Three-Monkey Mind
I thought commenters on Substack posts were exclusively people paying money to the post’s author until a week or two ago. I figured I’d been wrong all along when I saw this one post getting negative comments from commenters scolding the author for not abiding by their thede’s taboos which they consider universal.
0Said Achmiz
I absolutely didn’t trust Scott to garden his own space on SSC (and correctly so, in retrospect, as his pattern of bans was quite obviously jam-packed with political bias). I don’t read ACX comments much, but I don’t expect that anything’s changed since the SSC days, in this regard. (And this is despite the fact that I both respect Scott as a writer and thinker, and like him as a person.) I don’t even trust myself to moderate a forum that I run (where, despite being the sole administrator of the site, I am not only formally excluded from having moderation privileges, but I don’t even pick the moderators).[1] It’s not a Chesterton’s fence at all, because (a) it’s very new (it wasn’t like this before the blog era!), and (b) we know perfectly well why it came about (hint: the answer is “politics”). ---------------------------------------- 1. Now, why do you think I set things up like that? Specifically, what do I personally gain from this setup (i.e., setting aside answers like “I have a principled belief that this is the correct way to run a forum”)? ↩︎
5habryka
Comment threads are conversations! If you have one person in a conversation who can't see other participants, everything gets confusing and weird. Points brought up in an adjacent thread suddenly have to be relitigated or repeated. The conversation has trouble moving forward, it's hard to build any kind of common knowledge of assumptions, and people get annoyed at each other for not knowing the same things, because they aren't reading the same content.  I would hate comment sections on LW where I had no idea which other comments my interlocutors have read. I don't always assume that every person I am responding to has read all other comments in a thread, but I do generally assume they have read most of them, and that this is informing whatever conversation we are having in this subthread.  Comment threads are a useful mechanism for structuring local interactions, but whole comment sections proceed as a highly interconnected conversation, not as individual comment threads, and splintering the knowledge and participation graph of that seems much more costly than the alternatives to me. (Contrast LW with Twitter, where the conversations generally proceed much more just based on a single thread, or quote-threads, and my experience there is that whenever I respond to someone, I just end up having the same conversation 10 times, as opposed to on LW, where I generally feel that when I respond to someone, I won't have 10 more people chime in with the same confusion.)

Comment threads are conversations! If you have one person in a conversation who can't see other participants, everything gets confusing and weird.

The additional confusion seems pretty minimal, if the muted comments are clearly marked so others are aware that the author can't see them. (Compare to the baseline confusion where I'm already pretty unsure who has read which other comments.)

I just don't get how this is worse than making it so that certain perspectives are completely missing from the comments.

1Three-Monkey Mind
What’s the “and-flag” part for? So the mute gets recorded in https://www.lesswrong.com/moderation?
3Wei Dai
Each muted comment/thread is marked/flagged by an icon, color, or text, to indicate to readers that the OP author can't see it, and if you reply to it, your reply will also be hidden from the author.

I quite dislike the idea of people being able to moderate their content in this fashion - that just isn't what a public discussion is in my view - but thanks for being transparent about this change.

6habryka
Yeah, I agree that there is an important distinction between a public discussion that you know isn't censored in any way, and one that is intentionally limited in what can be said. I would be worried about a world where the majority of frontpage posts were non-public in the sense you said, but I do think that the marginal non-fully-public conversation doesn't really cause much damage, as long as it's easy to create a public conversation in another thread that isn't limited in the same way. I do think it's very important for users to see whether a post is moderated in any specific way, which is why I tried to make the moderation guidelines at the top of the comment thread pretty noticeable.

Here's a hypothesis for the crux of the disagreement in this comments section:

There's a minor identity crisis about whether LW is/should primarily be a community blog or a public forum.

If it is to be a community blog, then the focus is in the posts section, and the purpose of moderation should be to attract all the rationality bloggers to post their content in one place.

If it is to be a public forum/reddit (I was surprised at people referring to it like so), then the focus is in the comments section, and the main purpose of moderation should be to protect all viewpoints and keep a bare minimum of civility in a neutral and open discussion.

6Said Achmiz
No, I don’t think that’s the crux. In fact, I’ll go further and say that believing these two things are somehow distinct is precisely what I disagree with. Ever read the sequences? Probably you have. Now go back through those posts, and count how many times Eliezer is responding to something a commenter said, arguing with a commenter, using a commenter’s argument as an example, riffing on a commenter’s objection… and then go back and read the comments themselves, and see how many of them are full of critical insight. (Robin Hanson’s comments alone are a gold mine! And he’s only the first of many.) Attracting “rationality bloggers” is not just useless, but actively detrimental, if the result is that people come here to post “rationality content” which is of increasingly questionable value and quality—because it goes unchallenged, unexamined, undiscussed. “Rationality content” which cannot stand up to (civil, but incisive) scrutiny is not worthy of the name! LW should be a community blog and a public forum, and if our purpose is the advancement of “rationality” in any meaningful sense whatsoever, then these two identities are not only not in conflict—they are inseparable.
9habryka
While it seems clearly correct to me that all content should have a space to be publicly discussed at some point, it is not at all clear to me that all of that needs to happen simultaneously. If you create an environment where people feel uncomfortable posting their bad ideas and initial guesses on topics, for fear of being torn to shreds by critical commenters, then you simply won’t see that content on this site. And often this means those people will not post that content anywhere, or post it privately on Facebook, and then a critical step in the idea pipeline will be missing from this community. Most importantly, the person you are using as the central example here, namely Eliezer, has always deleted comments and banned people, and was only comfortable posting his content in a place where he had control over the discussion. The amazing comment sections you are referring to are not the result of a policy of open discussion, but of a highly moderated space in which unproductive contributions got moderated and deleted.
6Said Achmiz
… good? I… am very confused, here. Why do you think this is bad? Do you want to incentivize people to post bad ideas? Why do you want to see that content here? What makes this “step in the idea pipeline”—the one that consists of discussing bad ideas without criticism—a “critical” one? Maybe we’re operating under some very different assumptions here, so I would love it if you could elaborate on this. This is only true under a very, very different (i.e., much more lax) standard of what qualifies as “unproductive discussion”—so different as to constitute an entirely other sort of regime. Calling Sequence-era OB/LW “highly moderated” seems to me like a serious misuse of the term. I invite you to go back to many of the posts of 2007-2009 and look for yourself.
8Gurkenglas
Weren't you objecting to the poster tracelessly moderating at all, rather than the standard they intended to enforce? Surely present-you would object to a reinstatement of OB as it was?
8habryka
People being able to explore ideas strikes me as a key part of making intellectual progress. This involves discussing bad arguments and ideas, and involves discussing people’s initial hunches about various things that might or might not turn out to be based in reality, or point to good arguments. I might continue this discussion at some later point in time, but am tapping out for at least today, since I need to deal with a bunch of deadlines. I also notice that I am pretty irritated, which is not a good starting point for a productive discussion.
5Said Achmiz
Fair enough. And thanks for the elaboration—I have further thoughts, of course, but we can certainly table this for now.

What was the logic behind having a karma threshold for moderation? What were you afraid would happen if low karma people could moderate, especially on their personal blog?

8habryka
The karma threshold for personal blogs is mostly just to avoid bad first interactions for posters and commenters. If you create a post that is super incendiary, and then you go on and delete all comments that disagree with you on it, then we would probably revoke your moderation privileges, or have to ban you or something like that, or delete the posts, which seems like a shitty experience for everyone. And similarly as a commenter, it’s a pretty shitty experience to have your comment deleted. And if you have someone who doesn’t have any experience with the community and who just randomly showed up from the internet, then either of these seems pretty likely to happen, and it seemed better to me to avoid them from the start by requiring a basic level of trust before handing out the moderation tools.
1Elizabeth
That makes sense. Why such a high threshold for front page posts?
5habryka
Allowing someone to moderate their own frontpage posts is similar to them being a site-wide moderator. They can now moderate a bunch of public discussion that is addressed to the whole community. That requires a large amount of trust, and so a high karma threshold seemed appropriate.
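The thresholds discussed above (100 karma for moderating your own personal-blog posts, 2000 for moderating your own frontpage posts, as stated in the announcement) amount to a simple permission check. The following is a hypothetical sketch of that rule; the function and variable names are my own, not the actual LessWrong implementation:

```python
# Hypothetical sketch of the karma-threshold rules described in the post.
# Names and structure are illustrative, not the actual LessWrong code.

PERSONAL_BLOG_THRESHOLD = 100   # karma needed to moderate your own personal-blog posts
FRONTPAGE_THRESHOLD = 2000      # karma needed to moderate your own frontpage posts

def can_moderate_own_post(author_karma: int, is_frontpage: bool) -> bool:
    """Return True if an author with this karma may moderate a given post of theirs."""
    threshold = FRONTPAGE_THRESHOLD if is_frontpage else PERSONAL_BLOG_THRESHOLD
    return author_karma >= threshold

print(can_moderate_own_post(150, is_frontpage=False))  # True: above the personal-blog threshold
print(can_moderate_own_post(150, is_frontpage=True))   # False: below the frontpage threshold
```

The higher frontpage bar directly encodes the "large amount of trust" point: the same author can moderate their personal blog long before they can moderate frontpage discussion.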

Does allowing users to moderate mean the moderation team of the website will not also be moderating those posts? If so, that seems to have two implications: one, this eases the workload of the moderation team; two, this puts a lot more responsibility on the shoulders of those contributors.

Ah, sorry, looks like I forgot to mention that in the post above. There is a checkbox you can check on your profile that says "I'm happy for LW site moderators to help enforce my policy", which then makes it so that the sitewide moderators will try to help with your moderation.

We will also continue enforcing the frontpage guidelines on all frontpage posts, in addition to whatever guidelines the author has set up.

6ryan_b
No worries, thank you for the clarification. I would like to state plainly that I am in favor of measures taken to mitigate the workload of the moderation team: I would greatly prefer shouldering some of the burden myself and dealing with known-to-be-different moderation policies from some contributors in exchange for consistent, quality moderation of the rest of the website.

I'm still somewhat uncomfortable with authors being able to moderate front-page comments, but I suppose it could be an interesting experiment to see if they use this power responsibly or if it gets abused.

I think that there should also be an option to collapse comments (as per Reddit), instead of actually deleting them. I would suggest that very few comments are actually so bad that they need to be deleted, most of the time it's simply a matter of reducing the incentive to incite controversy in order to get more people replying to your comment.

Anyway, I'm really hoping that it encourages some of the old guard to post more of their content on Less Wrong.

-2PDV
I don't think it's an interesting experiment. The outcome is obvious: it will be abused to silence competing points of view.

I think this is extremely bad. Letting anyone, no matter how prominent, costlessly remove/silence others is toxic to the principle of open debate.

At minimum, there should be a substantial penalty for banning and deleting comments. And not a subtraction, a multiplication. My first instinct would be to use the fraction of users you have taken action against as a proportional penalty to your karma, for all purposes. Or, slightly more complex, take the total "raw score" of karma of all users you've taken action against, divide by the total "... (read more)
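PDV's first proposal, a multiplicative karma penalty proportional to the fraction of users an author has taken moderation action against, could be sketched as follows. This is purely an illustration of the commenter's proposal (the second, "raw score" variant is cut off in the comment and not reproduced); nothing like it is implemented on the site:

```python
# Hypothetical sketch of PDV's proposed penalty: an author's effective karma
# is scaled down by the fraction of users they have banned or deleted.
# This illustrates the proposal only; it is not anything LessWrong implements.

def effective_karma(raw_karma: int, num_users_actioned: int, num_commenters: int) -> float:
    """Scale karma by the fraction of commenters NOT moderated against."""
    if num_commenters == 0:
        return float(raw_karma)  # no commenters, no penalty
    fraction_actioned = num_users_actioned / num_commenters
    return raw_karma * (1.0 - fraction_actioned)

print(effective_karma(3000, 0, 50))  # 3000.0: no penalty if nobody was banned
print(effective_karma(3000, 5, 50))  # banning 10% of commenters scales karma down by 10%
```

Because the penalty is multiplicative rather than a flat subtraction, it costs high-karma authors proportionally more, which is the point of PDV's "not a subtraction, a multiplication" framing.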

0ChristianKl
Nobody is silenced here in the sense that their ability to express themselves gets completely removed. If someone has a serious objection to a given post they are free to write a rebuttal to the post on their personal page. This policy rewards people for writing posts instead of writing comments and that's a good choice. The core goal is to get more high quality posts and comments are a lesser concern.
3PDV
People absolutely are silenced by this, and the core goal is to get high quality discussion, for which comments are at least as important as posts. Writing a rebuttal on your personal page, if you are low-status, is still being silenced. To be able to speak, you need not just a technical ability to say things, but an ability to say them to the audience that cares. Under this moderation scheme, if I have a novel, unpopular dissenting view against a belief that is important to the continuing power of the popular, they can costlessly prevent me from getting any traction.
9habryka
No, you can still get traction, if your argument is good enough. It just requires that your rebuttal itself, on the basis of its own content and quality, attracts enough attention to be read, instead of you automatically getting almost as much attention as the original author got just because you are the first voice in the room. If you give exposure to whoever first enters a conversation opened by someone with a lot of trust, then you will have a lot of people competing just to be the first ones dominating that discussion, because it gives their ideas a free platform. Bandwidth is limited, and you need to allocate bandwidth by some measure of expected quality, and authors should feel free to not have their own trust and readership given to bad arguments, or to people furthering an agenda that is not aligned with what they want. There should be some mechanisms by which the best critiques of popular content get more attention than they would otherwise, to avoid filter bubble effects, but critiques should not be able to just get attention by being aggressive in the comment section of a popular post, or by being the first comment, etc. If we want to generally incentivize critiques, then we can do that via our curation policies, and by getting people to upvote critiques more, or maybe by other technical solutions, but the current situation does not strike me as remotely the best at giving positive incentives towards the best critiques.
1aNeopuritan
If a nobody disagrees with Yudkowsky (while being less wrong than him), they'll be silenced for all practical purposes. And I do think there was a time when people signalled by going against him, which was the proof of non-phyggishness. Phygs are bad. You could try red-letter warnings atop posts saying, "there's a rebuttal by a poster banned from this topic: [link]", but I don't expect you will, because the particular writer obviously won't want that.
6ChristianKl
Comments on the personal page show up for people who browse Popular Posts/Community and also for people who look at the Daily list of posts. Giving people with a history of providing value contributions (=high status) a better ability to have an audience is desirable.
5aNeopuritan
Definitely put on the Ialdabaoth hat. You do not in any circumstances have to consciously devise any advantage to hand to high-status people, because they already get all conceivable advantages for free.
4ChristianKl
High-status people get advantages for free because it's beneficial for agents to give them advantages. For a high-status person it's easy to stay away and publish their content on their own blog and have an audience there. This makes it more important to incentivize them to contribute. Companies have bonus systems to reward the people who already have the most success in the company, because it's very important to keep high performers happy.
3FeepingCreature
I think this only works if your standards for posts are in sync with those of the outside world. Otherwise, you're operating under incompatible status models and cannot sustain your community standards against outside pressure; you will always be outcompeted by the outside world (who can pretty much always offer more status than you can simply by volume) unless you can maintain the worth of your respect, and you cannot do that by copying outside appraisal.
If a comment of yours is ever deleted, you will automatically receive a PM with the text of your comment, so you don’t lose the content of your comment.

My intuition is that it would be better to allow users to see posts of their own that were deleted in a grayed-out way, instead of going the route of sending a PM.

If there's a troll, sending the troll a PM that one of their posts got deleted creates a stronger invitation to respond. That especially goes for deletions without giving reasons.

In addition I would advocate that posts that are deleted ... (read more)

2habryka
We considered the grayed-out way, but it was both somewhat technically annoying, and I did also feel like it is justified to notify people if one of their comments was deleted, without them having to manually check the relevant section of the comment area. The PM comes from a dummy account, and I think makes it clear that there is no use in responding. But unsure whether that was what you were pointing to with "stronger invitation to respond". And yep, all deleted comments are visible to sunshines.
2ChristianKl
If you have a person who writes a trolling post out of anger, the event of them getting a PM that their post was deleted triggers the anger again. This can lead to more engagement. On the other hand, just greying out the post doesn't produce engagement with the topic in the same strength. Given that we don't have that many angry trolls at the moment, I however don't think this is an important issue.

Will there be a policy on banned topics, such as e.g. politics, or will that be left to author discretion as part of moderation? Perhaps topics that are banned from promotion / front page (regardless of upvotes and comments) but are fine otherwise?

If certain things are banned, can they please be listed and defined more explicitly? This came up recently in another thread and I wasn't answered there.

5habryka
We have the frontpage post and commenting guidelines here, which are relatively explicit: https://www.lesserwrong.com/posts/tKTcrnKn2YSdxkxKG/frontpage-posting-and-commenting-guidelines