Lots of new users have been joining LessWrong recently, who seem more filtered for "interest in discussing AI" than for being bought into any particular standards for rationalist discourse. I think there's been a shift in this direction over the past few years, but it's gotten much more extreme in the past few months. 

So the LessWrong team is thinking through "what standards make sense for 'how people are expected to contribute on LessWrong'?" We'll likely be tightening up moderation standards, and laying out a clearer set of principles so those tightened standards make sense and feel fair. 

In the coming weeks we'll be thinking about those principles as we look over existing users, comments, and posts, asking "are these contributions making LessWrong better?"

Hopefully within a week or two, we'll have a post that outlines our current thinking in more detail. 

Generally, expect heavier moderation, especially for newer users. 

Two particular changes that should be going live within the next day or so:

  • Users will need at least N karma in order to vote, where N is probably somewhere between 1 and 10.
  • Comments from new users won't display by default until they've been approved by a moderator.

Broader Context

LessWrong has always had a goal of being a well-kept garden. We have higher and more opinionated standards than most of the rest of the internet. In many cases we treat some issues as more "settled" than the rest of the internet, so that instead of endlessly rehashing the same questions we can move on to solving more difficult and interesting questions.

What this translates to in terms of moderation policy is a bit murky. We've been stepping up moderation over the past couple months and frequently run into issues like "it seems like this comment is missing some kind of 'LessWrong basics', but 'the basics' aren't well indexed and easy to reference." It's also not quite clear how to handle that from a moderation perspective. 

I'm hoping to improve on getting "the basics" better indexed, but meanwhile it's just generally the case that if you participate on LessWrong, you are expected to have absorbed the set of principles in The Sequences (AKA Rationality: A-Z).

In some cases you can get away without doing that while participating in local object-level conversations, and pick up norms along the way. But if you're getting downvoted and you haven't read them, it's likely you're missing a lot of concepts or norms that are considered basic background reading on LessWrong. I recommend starting with the Sequences Highlights, and I'd also note that you don't need to read the Sequences in order; you can pick some random posts that seem fun and jump around based on your interests.

(Note: it's of course pretty important to be able to question all your basic assumptions. But I think doing that in a productive way requires actually understanding why the current set of background assumptions is the way it is, and engaging with the object-level reasoning.)

There's also a straightforward question of quality. LessWrong deals with complicated questions. It's a place for making serious progress on those questions. One model I have of LessWrong is something like a university – there's a role for undergrads who are learning lots of stuff but aren't yet expected to be contributing to the cutting edge. There are grad students and professors who conduct novel research. But all of this is predicated on there being some barrier-to-entry. Not everyone gets accepted to any given university. You need some combination of intelligence, conscientiousness, etc to get accepted in the first place.

See this post by habryka for some more models of moderation.

Ideas we're considering, and questions we're trying to answer:

  • What quality threshold does content need to hit in order to show up on the site at all? When is the right solution to approve but downvote immediately? 
  • How do we deal with low quality criticism? There's something sketchy about rejecting criticism. There are obvious hazards of groupthink. But a lot of criticism isn't well thought out, or is rehashing ideas we've spent a ton of time discussing and doesn't feel very productive.
  • What are the actual rationality concepts LWers are basically required to understand to participate in most discussions? (for example: "beliefs are probabilistic, not binary, and you should update them incrementally")
  • What philosophical and/or empirical foundations can we take for granted to build off of (e.g. reductionism, meta-ethics)?
  • How much familiarity with the existing discussion of AI should you be expected to have to participate in comment threads about that? 
  • How does moderation of LessWrong intersect with moderating the Alignment Forum?

Again, hopefully in the near future we'll have a more thorough writeup about our answers to these. Meanwhile it seemed good to alert people this would be happening.

LW Team is adjusting moderation policy
185 comments. Some comments are truncated due to high volume.
Pinned by Raemon

I'm about to process the last few days' worth of posts and comments. I'll be linking to this comment as a "here are my current guesses for how to handle various moderation calls".

[-]Raemon3923

Succinctly explain the main point.

When we're processing many new user-posts a day, we don't have much time to evaluate each post. 

So, one principle I think is fairly likely to become a "new user guideline" is "Make it pretty clear off the bat what the point of the post is." In ~3 sentences, try to make it clear who your target audience is, and what core point you're trying to communicate to them. If you're able to quickly gesture at the biggest-bit-of-evidence or argument that motivates your point, even better. (Though I understand sometimes this is hard).

This isn't necessarily how you have to write all the time on LessWrong! But your first post is something like an admissions-essay and should be optimized more for being legibly coherent and useful. (And honestly I think most LW posts should lean more in this direction)

In some sense this is similar to submitting something to a journal or magazine. Editors get tons of submissions. For your first couple posts, don't aim to write something that takes a lot of work for us to evaluate.

Corollary: Posts that are more likely to end up in the reject pile include...

  • Fiction, especially if it looks like it's trying to make some kind of p
... (read more)
[-]Ruby154

I've always liked the pithy advice "you have to know the rules to break the rules", which I do consider valid in many domains.

Before I let users break generally good rules like "explain what your point is up front", I want to know that they could keep to the rule before they break it. The posts of many first-time users give me the feeling that their author isn't being rambly on purpose; they don't know how to write otherwise (or aren't willing to).

[-]Raemon3628

Some top-level post topics that get much higher scrutiny:

1. Takes on AI

Simply because of the volume of it, the standards are higher. I recommend reading Scott Alexander's Superintelligence FAQ as a good primer to make sure you understand the basics. Make sure you're familiar with the Orthogonality Thesis and Instrumental Convergence. I recommend both Eliezer's AGI Ruin: A List of Lethalities  and Paul Christiano's response post so you understand what sort of difficulties the field is actually facing.

I suggest going to the most recent AI Open Questions thread, or looking into the FAQ at https://ui.stampy.ai/ 

 

2. Quantum Suicide/Immortality, Roko's Basilisk and Acausal Extortion. 

In theory, these are topics that have room for novel questions and contributions. In practice, they seem to attract people who seem... to be looking for something to be anxious about? I don't have great advice for these people, but my impression is that they're almost always trapped in a loop where they're trying to think about it in enough detail that they don't have to be anxious anymore, but that doesn't work. They just keep finding new subthreads to be anxious about.

For Acausal Trade, I do ... (read more)

1Sherrinford
Maybe an FAQ for the intersection of #1, #2 and #3, "depressed/anxious because of AI", might be a good thing to be able to link to, though?
1Celarix
Bit of a shame to see this one, but I understand this one. It's crunch time for AGI alignment and there's a lot on the line. Maybe those of us interested in self-help can go to/post their thoughts on some of the rationalsphere blogs, or maybe start their own. I got a lot of value out of the more self-help and theory of mind posts here, especially Kaj Sotala's and Valentine's work on multiagent models of mind, and it'd be cool to have another place to continue discussions around that.
[-]Raemon3222

A key question when I look at a new user on LessWrong trying to help with AI is, well, are they actually likely to be able to contribute to the field of AI safety? 

If they are aiming to make direct novel intellectual contributions, this is in fact fairly hard. People have argued back and forth about how much raw IQ, conscientiousness or other signs of promise a person needs to have. There have been some posts arguing that people are overly pessimistic and gatekeeping-y about AI safety.

But, I think it's just pretty importantly true that it takes a fairly significant combination of intelligence and dedication to contribute. Not everyone is cut out for doing original research. Many people pre-emptively focus on community building and governance because that feels easier and more tractable to them than original research. But those areas still require you to have a pretty good understanding of the field you're trying to govern or build a community for.

If someone writes a post on AI that seems like a bad take, which isn't really informed by the real challenges, should I be encouraging that person to make improvements and try again? Or just say "idk man, not everyone is cut out for this?"

H... (read more)

Draft in progress. Common failure modes for AI posts that I want to reference later:

Trying to help with AI Alignment

"Let's make the AI not do anything." 

This is essentially a very expensive rock. Other people will be building AIs that do do stuff. How does your AI help the situation over not building anything at all?

"Let's make the AI do [some specific thing that seems maybe helpful when parsed as an english sentence], without actually describing how to make sure they do exactly or even approximately that english sentence"

The problem is a) we don't know how to point an AI at doing anything at all, and b) your simple english sentence includes a ton of hidden assumptions.

(Note: I think Mark Xu sort of disagreed with Oli on something related to this recently, so I don't know that I consider this class of solution is completely settled. I think Mark Xu thinks that we don't currently know how to get an AI to do moderately complicated actions with our current tech, but, our current paradigms for how to train AIs are likely to yield AIs that can do moderately complicated actions)

I think the typical new user who says things like this still isn't advancing the current paradigm though,... (read more)

[-]Ruby131

Here's a quickly written draft for an FAQ we might send users whose content gets blocked from appearing on the site.


The “My post/comment was rejected” FAQ

Why was my submission rejected?

Common reasons that the LW Mod team will reject your post or comment:

  • It fails to acknowledge or build upon standard responses and objections that are well-known on LessWrong. 
    • The LessWrong website is 14 years old and the community behind it older still. Our core readings [link] are over half a million words. So understandably, there’s a lot you might have missed!
    • Unfortunately, as the amount of interest in LessWrong grows, we can’t afford to let cutting-edge content get submerged under content from people who aren’t yet caught up to the rest of the site.   
  • It is poorly reasoned. It contains some mix of bad arguments and obviously bad positions that it does not feel worth the LessWrong mod team or LessWrong community’s time or effort responding to.
  • It is difficult to read. Not all posts and comments are equally well-written and make their points as clearly. While established users might get more charity, for new users, we require that we (and others) can easily follow what you’re say
... (read more)

It fails to acknowledge or build upon standard responses and objections that are well-known on LessWrong. 

  • The LessWrong website is 14 years old and the community behind it older still. Our core readings [link] are over half a million words. So understandably, there’s a lot you might have missed!
  • Unfortunately, as the amount of interest in LessWrong grows, we can’t afford to let cutting-edge content get submerged under content from people who aren’t yet caught up to the rest of the site.

I do want to emphasize a subtle distinction between "you have brought up arguments that have already been brought up" and "you are challenging basic assumptions of the ideas here". I think challenging basic assumptions is good and well (like here), while bringing up "but general intelligences can't exist because of no-free-lunch theorems" or "how could a computer ever do any harm, we can just unplug it" is quite understandably met with "we've spent 100s or 1000s of hours discussing and rebutting that specific argument, please go read about it <here> and come back when you're confident you're not making the same arguments as the last dozen new users".

I would like to make sure new users are s... (read more)

2Ruby
Can be made more explicit, but this is exactly why the section opens with "acknowledge [existing stuff on topic]".
8Ben Pace
Well, I don't think you have to acknowledge existing stuff on this topic if you have a new and good argument.  Added: I think the phrasing I'd prefer is "You made an argument that has already been addressed extensively on LessWrong" rather than "You have talked about a topic without reading everything we've already written about on that topic".
7Ruby
I do think there is an interesting question of "how much should people have read?" which is actually hard to answer. There are people who don't need to read as much in order to say sensible and valuable things, and some people that no amount of reading seems to save.

The half a million words is the Sequences. I don't obviously want a rule that says you need to have read all of them in order to post/comment (nor do I think doing so is a guarantee), but also I do want to say that if you make mistakes that the Sequences would teach you not to make, that could be grounds for not having your content approved. A lot of the AI newbie stuff I'm disinclined to approve is the kind that makes claims that are actually countered in the Sequences, e.g. the orthogonality thesis, treating the AI too much like humans, various fallacies involving words.
3Ruby
How do you know you have a new and good argument if you don't know the standard things said on the topic? And relatedly, why should I or other readers on LW assume that you have a new and good argument without any indication that you know the arguments in general? This is aimed at users making their very first post/comment. I think it is likely a good policy/heuristic for the mod team: when judging a post that claims "AIs won't be dangerous because X", it tells me early on that you're not wasting my time if you show you're already aware of all the standard arguments.

In a world where every day a few dozen people who started thinking about AI two weeks ago show up on LessWrong and want to give their "why not just X?", I think it's reasonable to say "we want you to give some indication that you're aware of the basic discussion this site generally assumes".
-4Portia
I find it hilarious that you can say this, while simultaneously, the vast majority of this community is deeply upset they are being ignored by academia and companies, because they often have no formal degrees or peer-reviewed publications, or other evidence of having considered the relevant science. LessWrong fails the standards of these fields and areas. You routinely re-invent concepts that already exist. Or propose solutions that would be immediately rejected as infeasible if you tried to get them into a journal. Explaining a concept your community takes for granted to outsiders can help you refresh it, understand it better, and spot potential problems. A lot of things taken for granted here are rejected by outsiders because they are not objectively plausible. And a significant number of newcomers, while lacking LW canon, will have other relevant knowledge. If you make the bar too high, you deter them. All this is particularly troubling because your canon is spread all over the place, extremely lengthy, and individually usually incomplete or outdated, and filled in implicitly from prior forum interactions. Academic knowledge is more accessible that way.
8Ruby
Writing hastily in the interests of time, sorry if not maximally clear.

It's very much a matter of how many newcomers there are relative to existing members. If the number of existing members is large compared to newcomers, it's not so bad to take the time to explain things. If the number of newcomers threatens to overwhelm the existing community, it's just not practical to let everyone in. Among other factors, certain conversations are possible because you can assume that most people have a certain background and, even if they disagree, at least know the things you know. The need for getting stricter is because of the current (and forecasted) increase in new users. This means we can't afford to become 50% posts that ignore everything our community has already figured out.

LessWrong is an internet forum, but it's in the direction of a university/academic publication, and such publications only work because editors don't accept everything.
7lc
Source?
2the gears to ascension
my guess is that that claim is slightly exaggerated, but I expect sources exist for an only mildly weaker claim. I certainly have been specifically mocked for my username in places that watch this site, for example.
5lc
This is an example of people mocking LW for something. Portia is making a claim about LW users' internal emotional states; she is asserting that they care deeply about academic recognition and feel infuriated they're not getting it. Does this describe you or the rest of the website, in your experience?
2the gears to ascension
lens portia's writing out of frustrated tone first and it makes more sense. they're saying that recognition is something folks care about (yeah, I think so) and aren't getting to an appropriate degree (also seems true). like I said in my other comment - tone makes it harder to extract intended meaning.
7lc
Well, I disagree. I have literally zero interest in currying the favor of academics, and think Portia is projecting a respect and yearning for status within universities onto the rest of us that mostly doesn't exist. I would additionally prefer if this community were able to set standards for its members without having to worry about or debate whether or not asking people to read canon is a status grab.
5the gears to ascension
sure. I do think it's helpful to be academically valid sometimes though. you don't need to care, but some do some of the time somewhat. maybe not as much as the literal wording used here. catch ya later, anyhow.
3the gears to ascension
strong agree, single upvote: harsh tone, but reasonable message. I hope the tone doesn't lead this point to be ignored, as I do think it's important. but leading with mocking does seem like it's probably why others have downvoted. downvote need not indicate refusal to consider, merely negative feedback to tone, but I worry about that, given the agree votes are also in the negative.
4Portia
Thank you. And I apologise for the tone. I think the back of my mind was haunted by Shoggoth with a Smiley face giving me advice for my weekend plans, and that emotional turmoil came out the wrong way. I am in the strange position of being on this forum, and in academia, and seeing both sides engage in the same barrier-keeping behaviour, each calling it elitist and misguided in the other but a necessary way to ensure quality and affirm your superior identity in your own group. That is jarring. I've found valuable and admirable practices and insights in both, else I would not be there.
1Thoth Hermes
Any group that bears a credential, or performs negative selection of some kind, will bear the traits you speak of. 'Tis the nature of most task-performing groups human society produces. Alas, one cannot escape it, even coming to a group that once claimed to eschew credentialism. Nonetheless, it is still worthwhile to engage with these groups intellectually.
1papetoast
I cannot access www.lesswrong.com/rejectedcontent (404 error). I suspect you guys forgot to give access to non-moderators, or you meant www.lesswrong.com/moderation (But there are no rejected posts there, only comments)
2Raemon
We didn't build that yet but plan to soon. (I think what happened was Ruby wrote this up in a private google doc, I encouraged him to post it as a comment so I could link to it, and both of us forgot it included that explicit link. Sorry about that, I'll edit it to clarify)
Pinned by Raemon

Here's my best guess for overall "moderation frame", new this week, to handle the volume of users. (Note: I've discussed this with other LW team members, and I think there's rough buy-in for trying this out, but it's still pretty early in our discussion process, other team members might end up arguing for different solutions)

I think to scale the LessWrong userbase, it'd be really helpful to shift the default assumptions of LessWrong to "users by default have a rate limit of 1 comment per day" and "1 post per week."

If people get somewhat upvoted, they... (read more)

8Ben Pace
Natural times I expect this to be frustrating are when someone's written a post, got 20 comments, and tries to reply to 5 of them, but is locked after the first one. 1 per day seems too strong there. I might say "unlimited daily comments on your own posts". I also think I'd prefer a cut-off after which you're trusted to comment freely. Reading the positive-selection post (which I agree with), I think some bars here could include having written a curated post or a post with 200+ karma or having 1000 karma on your account.
4Raemon
I'm not particularly attached to these numbers, but fyi the scale I was originally imagining was "after the very first upvote, you get something like 3 comments a day, and after like 5-10 karma you don't have a rate limit." (And note, initially you get one post and one comment, so you get to reply to your post's first comment.)

I think in practice, in the world where you receive 4 comments but a) your post hasn't been upvoted much and b) none of your responses to the first three comments got upvoted, my expectation is you're a user I'd indeed prefer to slow down, read up on site guidelines, and put more effort into subsequent comments.

I think having 1000 karma isn't actually a very high bar, but yeah, I think users with 2+ posts that either have 100+ karma or are curated should get a lot more leeway.
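To make that scale concrete, here is a minimal sketch of the tiering described above. The field names and exact thresholds are illustrative assumptions, not the actual site implementation.

```python
# Illustrative sketch of the tiering described above; thresholds and field
# names are assumptions, not the real LessWrong implementation.

def daily_comment_allowance(karma: int, has_any_upvote: bool) -> float:
    """Map a new user's standing to a per-day comment allowance."""
    if karma >= 5:            # "after like 5-10 karma you don't have a rate limit"
        return float("inf")   # no rate limit
    if has_any_upvote:        # "after the very first upvote ... something like 3 comments a day"
        return 3
    return 1                  # starting point: one comment (and one post)

# Example: a brand-new user whose first comment got an upvote
print(daily_comment_allowance(karma=2, has_any_upvote=True))  # -> 3
```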
2Ben Pace
Ah good, I thought you were proposing a drastically higher bar.
6Raemon
Here are some principles that are informing some of my thinking here, some pushing in different directions:

  • Karma isn't that great a metric – I think people often vote for dumb reasons, and they vote highest in drama-threads that don't actually reflect important new intellectual principles. I think there are maybe ways we can improve on the karma system, and I want to consider those soon. But I still think karma-as-is is at least a pretty decent proxy metric to keep the site running smoothly and scaling.
  • Because karma is only a proxy metric, I'd still expect moderator judgment to play a significant role in making sure the system isn't going off the rails in the immediate future.
  • Each comment comes with a bit of an attentional cost. If you make a hundred comments and get 10 karma (and no downvotes), I think you're most likely not a net-positive contributor (i.e. each comment maybe costs 1/5th of a karma in attention or something like that; see the sketch below).
  • In addition, I think highly upvoted comments/posts tend to be dramatically more valuable than weakly upvoted comments/posts (i.e. a 50 karma comment is more than 10 times as valuable as a 5 karma comment, most of the time, with an exception IMO for drama threads).

The current karma system kinda encourages people to write lots of comments that get slightly upvoted and gives them the impression of being an established regular. I think in most cases users with a total average karma of ~1-2 are typically commenting in ways that are persistently annoying in some way, in a way that'd be sort of fine with each individual comment but adds up to some kind of "death by a thousand cuts" thing that makes the site worse.

On the other hand, lots of people drawn to LessWrong have a lot of anxiety and scrupulosity issues, and I generally don't want people overthinking this and spending a lot of time worrying about it. My hope is to frame the thing more around positive rewards than punishments.
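As a toy illustration of the attention-cost bullet above (the 1/5th-of-a-karma-per-comment figure is the rough guess from the comment, not a real site parameter):

```python
# Toy illustration of the "each comment costs ~1/5th of a karma in attention" guess above.
ATTENTION_COST_PER_COMMENT = 0.2  # assumed figure from the comment, not a real parameter

def net_contribution(total_karma: int, num_comments: int) -> float:
    """Karma earned minus the estimated attention cost of the comments."""
    return total_karma - ATTENTION_COST_PER_COMMENT * num_comments

print(net_contribution(total_karma=10, num_comments=100))  # -> -10.0: likely not net-positive
print(net_contribution(total_karma=50, num_comments=20))   # -> 46.0: clearly net-positive
```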
5Elizabeth
I suggest not counting people's comments on their own posts towards the rate limit or the “barely upvoted” count. This both seems philosophically correct, and avoids penalizing authors of medium-karma posts for replying to questions (which often don’t get much if any karma).
3Raemon
Yeah that probably makes sense.
4Vladimir_Nesov
There should be fast tracks that present no practical limits to the new users. First few comments should be available immediately upon registration, possibly regenerating quickly. This should only degrade if there is downvoting or no upvoting, and the limits should go away completely according to an algorithm that passes backtesting on first comments made by users in good standing who started commenting within the last 3-4 years. That is, if hypothetically such a rate-limiting algorithm were to be applied 3 years ago to a user who started commenting then, who later became a clearly good contributor, the algorithm should succeed in (almost) never preventing that user from making any of the comments that were actually made, at the rate they were actually made. If backtesting shows that this isn't feasible, implementing this feature is very bad. Crowdsource moderation instead, allow high-Karma users to rate-limit-vote on new users, but put rate-limit-level of new users to "almost unlimited" by default, until rate-limit-downvoted manually.
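A minimal sketch of the backtest being proposed here: replay the actual timestamps of a known-good contributor's early comments through a candidate rate limiter and count how many would have been blocked. The limiter interface and the one-comment-per-rolling-day stand-in policy are assumptions for illustration, not an existing tool.

```python
from datetime import datetime, timedelta
from typing import Callable, List

# A candidate limiter: given timestamps of comments already allowed,
# decide whether a new comment at `when` would be permitted.
RateLimiter = Callable[[List[datetime], datetime], bool]

def one_per_day(allowed: List[datetime], when: datetime) -> bool:
    """Stand-in policy for illustration: at most one comment per rolling 24 hours."""
    return all(when - t >= timedelta(days=1) for t in allowed)

def backtest(limiter: RateLimiter, actual_comment_times: List[datetime]) -> int:
    """Replay a good contributor's real comment history through the limiter and
    count how many of their comments would have been blocked (the proposal is
    that this number should be ~0 for users who turned out to be good)."""
    allowed: List[datetime] = []
    blocked = 0
    for t in sorted(actual_comment_times):
        if limiter(allowed, t):
            allowed.append(t)
        else:
            blocked += 1
    return blocked

# Example: three comments in one afternoon -> a strict 1/day limit would block two.
times = [datetime(2023, 4, 1, 14), datetime(2023, 4, 1, 16), datetime(2023, 4, 1, 18)]
print(backtest(one_per_day, times))  # -> 2
```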
2Ruby
I'm less optimistic than Ray about rate limits, but still think they're worth exploring. I think getting the limits/rules correct will be tricky since I do care about the normal flow of good conversation not getting impeded.  I think it's something we'll try soon, but not sure if it'll be the priority for this week.
2Vladimir_Nesov
Imagine a system that lets a user write their comments or posts in advance, and then publishes comments according to these limits automatically. Then these limits wouldn't be enough. On the other hand, if you want to write a comment, you want to write it right away, instead of only starting to write it the next day because you are out of commenting chits. It's very annoying if the UI doesn't allow you to do that and instead you need to write it down in a file on your own device, make a reminder to go back to the site once the timeout is up, and post it at that time, all the while remaining within the bounds of the rules. Also, being able to reply to responses to your comments is important, especially when the responses are requests for clarification, as long as that doesn't turn into an infinite discussion. So I think commenting chits should accumulate to a maximum of at least 3-4, even if it takes a week to get there, possibly even more if it's been a month. But maybe an even better option is for all but one of these to be "reply chits" that are weaker than full "comment chits" and only work for replies-to-replies to your own comments or posts. While the full "comment chits" allow commenting anywhere. I don't see a way around the annoyance of feasibility of personally managed manual posting schedule workarounds other than implementing the queued-posting feature on LW, together with ability to manage the queue, arranging the order/schedule in which the pending comments will be posted. Which is pretty convoluted, putting this whole development in question.
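Here is a minimal sketch of the chit idea as a small token bucket; the regeneration rate, the cap, and the omission of a separate "reply chit" pool are simplifying assumptions, not a worked-out design.

```python
# Sketch of "commenting chits" as a small token bucket that regenerates slowly
# and caps at a few stored chits. All numbers are the rough figures suggested
# above, not a worked-out design; a separate, weaker pool of "reply chits"
# (usable only in your own threads) could be layered on top and is omitted here.

class ChitBucket:
    def __init__(self, regen_per_day: float = 0.5, max_chits: int = 4):
        self.chits = 1.0                  # one comment available right away
        self.regen_per_day = regen_per_day
        self.max_chits = max_chits

    def tick(self, days: float) -> None:
        """Accumulate chits over time, up to the cap."""
        self.chits = min(self.max_chits, self.chits + days * self.regen_per_day)

    def try_comment(self) -> bool:
        """Spend a chit if one is available."""
        if self.chits >= 1:
            self.chits -= 1
            return True
        return False

bucket = ChitBucket()
print(bucket.try_comment())  # True  - the first comment posts immediately
print(bucket.try_comment())  # False - out of chits for now
bucket.tick(days=2)          # two days later, one more chit has regenerated
print(bucket.try_comment())  # True
```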
2Raemon
LessWrong already stores comments you write in local storage, so you can edit them over the course of the day and post them later. I… don’t see a reason to actively facilitate users having an easier time posting as often as possible, and not sure I understand your objection here.
4Vladimir_Nesov
An obvious issue that could be fixed by the UI but isn't, that can be worked around outside the UI, is deliberate degradation of user experience. The blame is squarely on the developers, because it's an intentional decision by the developers. This should be always avoided, either by not creating this situation, or by fixing the UI. If this is not done, users will be annoyed, I think justifiably. When users want to post, not facilitating that annoys them. If you actually knew that we want them to go away, you could've banned them already. You don't actually know, that's the whole issue here, so some of them are the reason there is a site at all, and it's very important to be a good host for them.

After chatting for a bit about what to do with low-quality new posts and comments, while being transparent and inspectably fair, the LW Team is currently somewhat optimistic about adding a section to lesswrong.com/moderation which lists all comments/posts that we've rejected for quality. 

We haven't built it yet, so for the immediate future we'll just be strong-downvoting content that doesn't meet our quality bar. And for the immediate future, if existing users in good standing want to defend particular pieces as worth inclusion, they can do so here.

This ... (read more)

What are the actual rationality concepts LWers are basically required to understand to participate in most discussions?

I am partial to having this bar be set pretty high, like 80-100% of the Sequences level. I remember years ago when I finished the Sequences, I spent several months practicing everyday rationality in isolation, and only then deigned to visit LessWrong and talk to other rationalists. I was pretty disappointed with the average quality level, and felt like I dodged a bullet by spending those months thinking alone rather than with the wider community.

It also seems like average quality has decreased over the years.

Predictable confusion some will have: I’m talking about average quality here. Not 90th percentile quality posters.

6MondSemmel
I think I'd prefer setting the bar lower, and instead using downvotes as a filter for merely low-quality (rather than abysmal-quality) content. For instance, most posts on LW receive almost no comments, so I'd suspect that filtering for even higher quality would just dry up the discussion even more.

The main reason I don’t reply to most posts is because I’m not guaranteed an interesting conversation, and it is not uncommon that I’d just be explaining a concept which seems obvious if you’ve read the sequences, which aren’t super fun conversations to have compared to alternative uses of my time.

For example, the other day I got into a discussion on LessWrong about whether I should worry about claims which are provably useless, and was accused of ignoring inconvenient truths for not doing so.

If the bar to entry was a lot higher, I think I’d comment more (and I think others would too, like TurnTrout).

4MondSemmel
Maybe we have different experiences because we tend to read different LW content? I skip most of the AI content, so I don't have a great sense of the quality of comments there. If most AI discussions get a healthy amount of comments, but those comments are mostly noise, then I can certainly understand your perspective.
7Ben Pace
In my experience actively getting terrible comments can be more frustrating than a lack-of-comments is demotivating.
3awg
Agreed. I think this also trends exponentially with the number of terrible comments. It is possible to be overwhelmed to death and have to completely relocate/start over (without proper prevention). One thing that I think in the long term might be worth considering is something like the SomethingAwful approach: a one-time payment per account that is high enough to discourage trolls but low enough for most anyone to afford in combination with a strong culture and moderation (something LessWrong already has/is working on).

Hey, first just wanted to say thanks and love and respect. The moderation team did such an amazing job bringing LW back from nearly defunct into the thriving place it is now. I'm not so active in posting now, but check the site logged out probably 3-5 times a week and my life is much better for it.

After that, a few ideas:

(1) While I don't 100% agree with every point he made, I think Duncan Sabien did an incredible job with "Basics of Rationalist Discourse" - https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1 - perhaps a boiled-down canonical version of that could be created. Obviously the pressure to get something like that perfect would be high, so maybe something like "Our rough thoughts on how to be a good a contributor here, which might get updated from time to time". Or just link Duncan's piece as "non-canonical for rules but a great starting place." I'd hazard a guess that 90% of regular users here agree with at least 70% of it? If everyone followed all of Sabien's guidelines, there'd be a rather high quality standard.

(2) I wonder if there's some reasonably precise questions you could ask new users to check for understanding and could be there... (read more)

[-]Ruby158

I did have the idea of there being regions with varying standards and barriers, in particular places where new users cannot comment easily and places where they can.

4TekhneMakre
This feels like a/the natural solution. In particular, what occurred to me was:

1. Make LW about rationality again.
2. Expand the Alignment Forum:
   2.1. By default, everything is as it is currently: a small set of users post, comment, and upvote, and that's what people see by default.
   2.2. There's another section that's open to whoever.

The reasoning being that the influx is specifically about AI, not just a big influx.

The idea of AF having both a passing-the-current-AF-bar section and a passing-the-current-LW-bar section is intriguing to me. With some thought about labeling etc., it could be a big win for non-alignment people (since LW can suppress alignment content more aggressively by default), and a big win for people trying to get into alignment (since they can host their stuff on a more professional-looking dedicated alignment site), and no harm done to the current AF people (since the LW-bar section would be clearly labeled and lower on the frontpage).

I didn’t think it through very carefully though.

2niplav
I like this direction, but I'm not sure how broadly one would want to define rationality: Would a post collecting quotes about intracranial ultrasound stimulation for meditation enhancement be rationality related enough? What about weird quantified self experiments? In general I appreciate LessWrong because it is so much broader than other fora, while still staying interesting.
2TekhneMakre
Well, at least we can say, "whatever LW has been, minus most AI stuff".
[-]Raemon1510

I do agree Duncan's post is pretty good, and while I don't think it's perfect I don't really have an alternative I think is better for new users getting a handle on the culture here.

I'd be willing to put serious effort into editing/updating/redrafting the two sections that got the most constructive pushback, if that would help tip things over the edge.

3Ilio
If you could add Q&As, this could turn into a certificate that one has verifiably read (or copy&pasted) what LessWrong expects its users to know before writing under some tag (with a specific set of Q&As for each main tag). Of course LW could also certify that this or that trusted user is welcome on all tags, and choose what tags can or can’t appear on the front page.

I vaguely remember being not on board with that one and downvoting it. Basics of Rationalist Discourse doesn't seem to get to the core of what rationality is, and seems to preclude other approaches that might be valuable. Too strict and misses the point. I would hate for this to become the standard.

I don't really have an alternative I think is better for new users getting a handle on the culture here

Culture is not systematically rationality (dath ilan wasn't built in a day). Not having an alternative that's better can coexist with this particular thing not being any good for the same purpose. And a thing that's any good could well be currently infeasible to make, for anyone.

Zack's post describes the fundamental difficulty with this project pretty well. Adherence to most rules of discourse is not systematically an improvement for processes of finding truth, and there is a risk of costly cargo cultist activities, even if they are actually good for something else. The cost could be negative selection by culture, losing its stated purpose.

6Ben Pace
I would vastly prefer new users to read it than to not read anything at all.
1Said Achmiz
There is no way that a post which some (otherwise non-banned) members of the site are banned from commenting on should be used as an onboarding tool for the site culture. The very fact of such bannings is a clear demonstration of the post’s unsuitability for purpose.
[-]gjm2821

How does the fact of such bannings demonstrate the post's unsuitability for purpose?

I think it doesn't. For instance, I think the following scenario is clearly possible:

  • There are users A and B who detest one another for entirely non-LW-related reasons. (Maybe they had a messy and distressing divorce or something.)
  • A and B are both valuable contributors to LW, as long as they stay away from one another.
  • A and B ban one another from commenting on their posts, because they detest one another. (Or, more positively, because they recognize that if they start interacting then sooner or later they will start behaving towards one another in unhelpful ways.)
  • A writes an excellent post about LW culture, and bans B from it just like A bans B from all their posts (and vice versa).

If you think that Duncan's post specifically shouldn't be an LW-culture-onboarding tool because he banned you specifically from commenting on it, then I think you need reasons tied to the specifics of the post, or the specifics of your banning, or both.

(To be clear: I am not claiming that you don't have any such reasons, nor that Duncan is right to ban you from commenting on his frontpaged posts, nor that Duncan's "basics" post is good, nor am I claiming the opposite of any of those. I'm just saying that the thing you're saying doesn't follow from the thing you're saying it follows from.)

-2Said Achmiz
I suspect that you know perfectly well that the sort of scenario you describe doesn’t apply here. (If, by some chance, you did not know this: I affirm it now. There is no “messy and distressing divorce”, or any such thing; indeed I have never interacted with Duncan, in any way, in any venue other than on Less Wrong.) The other people in question were, to my knowledge, also banned from commenting on Duncan’s posts due to their criticism of “Basics of Rationalist Discourse” (or due to related discussions, on related topics on Less Wrong), and likewise have no such “out of band” relationship with Duncan. (All of this was, I think, obvious from context. But now it has been said explicitly. Given these facts, your objection does not apply.)
[-]gjm2019

I did not claim that my scenario describes the actual situation; in fact, it should be very obvious from my last two paragraphs that I thought (and think) it likely not to.

What I claimed (and still claim) is that the mere fact that some people are banned from commenting on Duncan's frontpage posts is not on its own anything like a demonstration that any particular post he may have written isn't worthy of being used for LW culture onboarding.

Evidently you think that some more specific features of the situation do have that consequence. But you haven't said what those more specific features are, nor how they have that consequence.

Actually, elsewhere in the thread you've said something that at least gestures in that direction:

the point is that if the conversational norms cannot be discussed openly [...] there's no reason to believe that they're good norms. How were they vetted? [...] the more people are banned from commenting on the norms as a consequence of their criticism of said norms, the less we should believe that the norms are any good!

I don't think this argument works. (There may well be a better version of it, along the lines of "There seem to be an awful lot of people Duncan... (read more)

-4Said Achmiz
This is false. There certainly is a conflict, if “the way Duncan found disagreeable” is “robust”. Sorry, no. It’s the very argument in question. The “styles of argument” are “the ones that are directly critical of the heart of the claims being made”.
[-]gjm1812

It is at best debatable that "this is false". Duncan (who, of course, is the person who did the banning) explicitly denies that you were banned for criticizing his proposed norms. Maybe he's just lying, but it's certainly not obvious that he is and it looks to me as if he isn't.

Duncan has also been pretty explicit about what he dislikes about your interactions with him, and what he says he objects to is definitely not simply "robust disagreement". Again, of course it's possible he's just lying; again, I don't see any reason to think he is.

You are claiming very confidently, as if it's a matter of common knowledge, that Duncan banned you from commenting on his frontpage posts because he can't accept direct criticism of his claims and proposals. I do not see any reason to think that that is true. You have not, so far as I can see, given any reason to think it is true. I think you should stop making that claim without either justification or it-seems-to-me-that hedging.

(Since it's fairly clear[1] that this is a matter of something like enmity rather than mere disagreement and in such contexts everything is liable to be taken as a declaration of What Side One Is On, I will say that I th... (read more)

8Vladimir_Nesov
As a decoupled aside, something not being a matter of common knowledge is not grounds for making claims of it less confidently, it's only grounds for a tiny bit of acknowledgment of this not being common knowledge, or of the claim not being expected to be persuasive in isolation.
4gjm
I agree. If you are very certain of X but X isn't common knowledge (actually, "common knowledge" in the technical sense isn't needed, it's something like "agreed on by basically everyone around you") then it's fine to say e.g. "I am very certain of X, from which I infer Y", but I think there is something rude about simply saying "X, therefore Y" without any acknowledgement that some of your audience may disagree with X. (It feels as if the subtext is "if you don't agree with X then you're too ignorant/stupid/crazy for me to care at all what you think".) In practice, it's rather common to do the thing I'm claiming is rude. I expect I do it myself from time to time. But I think it would be better if we didn't.
4Vladimir_Nesov
My point is that this concern is adequately summarized by something like "claiming without acknowledgment/disclaimers", but not "claiming confidently" (which would change credence in the name of something that's not correctness). I disagree that this is a problem in most cases (acknowledgment is a cost, and usually not informative), but acknowledge that this is debatable. Similarly to the forms of politeness that require more words, as opposed to forms of politeness that, all else equal, leave the message length unchanged. Acknowledgment is useful where it's actually in doubt.
2gjm
In this case, Said is both (1) claiming the thing very confidently, when it seems pretty clear to me that that confidence is not warranted, and (2) claiming it as if it's common knowledge, when it seems pretty clear to me that it's far from being common knowledge.
1Said Achmiz
But of course he would deny it. As I’ve said, that’s the problem with giving members the power to ban people from their posts: it creates a conflict of interest. It lets people ban commenters for simply disagreeing with them, while being able to claim that it’s for some other reason. Why would Duncan say “yeah, I banned these people because I don’t like it when people point out the flaws in my arguments, the ways in which something I’ve written makes no sense, etc.”? It would make him look pretty bad to admit that, wouldn’t it? Why shouldn’t he instead say that he banned the people in question for some respectable reason? What downside is there, for him? And given that, why in the world would we believe him when he says such things? Why would we ever believe any post author who, after banning a commenter who’s made a bunch of posts disagreeing with said author, claims that the ban was actually for some other reason? It doesn’t make any sense at all to take such claims seriously! The reason why the “ban people from your own post(s)” feature is bad is that it gives people an incentive to make such false claims, not just to deceive others (that would be merely bad) but—much worse!—to deceive themselves about their reasons for issuing bans. The obvious reason to think so is that, having written something which is deserving of strong criticism—something which is seriously flawed, etc.—both letting people point this out, in clear and unmerciful terms, and banning them but admitting that you’ve banned them because you can’t take criticism, is unpleasant. (The latter more so than the former… or so we might hope!) Given the option to simply declare that the critics have supposedly violated some supposed norm (and that their violation is so terrible, so absolutely intolerable, that it outweighs the benefit of permitting their criticisms to be posted—quite a claim!), it would take an implausible, an almost superhuman, degree of integrity and force of will to resist doing ju
8gjm
We would believe it because:

  • on the whole, people are more likely to say true things than false things
  • Duncan has said at some length what he claims to find unpleasant about interacting with you, it isn't just "Said keeps finding mistakes in what I have written", and it is (to me) very plausible that someone might find it unpleasant and annoying
  • (I'm pretty sure that) other people have disagreed robustly with Duncan and not had him ban them from commenting on his posts.

You don't give any concrete reason for disbelieving the plausible explanations Duncan gives, you just say -- as you could say regardless of the facts of the matter in this case -- that of course someone banning someone from commenting on their posts won't admit to doing so for lousy reasons. No doubt that's true, but that doesn't mean they all are doing it for lousy reasons.

It seems pretty obvious to me where enmity could come from. You and Duncan have said a bunch of negative things about one another in public; it is absolutely commonplace to resent having people say negative things about you in public. Maybe it all started with straightforward disagreement about some matter of fact, but where we are now is that interactions between you and Duncan tend to get hostile, and this happens faster and further than (to me) seems adequately explained just by disagreements on the points ostensibly at issue. (For the avoidance of doubt, I was not at all claiming that whatever enmity there might be started somewhere other than LW.)
[-]Dagon1714

[ I don't follow either participant closely enough to have a strong opinion on the disagreement, aside from noting that the disagreement seems to use a lot of words, and not a lot of effort to distill their own positions toward a crux, as opposed to attacking/defending.   ]

on the whole, people are more likely to say true things than false things

In the case of contentious or adversarial discussions, people say incorrect and misleading things. "more likely true than false" is a uselessly low bar for seeking any truth or basing any decisions on.

9Said Achmiz
This is a claim so general as to be meaningless. If we knew absolutely nothing except “a person said a thing”, then retreating to this sort of maximally-vague prior might be relevant. But we in fact are discussing a quite specific situation, with quite specific particular and categorical features. There is no good reason to believe that the quoted prior survives that descent to specificity unscathed (and indeed it seems clear to me that it very much does not). It’s slightly more specific, of course—but this is, indeed, a good first approximation. Of course it is! What is surprising about the fact that being challenged on your claims, being asked to give examples of alleged principles, having your theories questioned, having your arguments picked apart, and generally being treated as though you’re basically just some dude saying things which could easily be wrong in all sorts of ways, is unpleasant and annoying? People don’t like such things! On the scale of “man bites dog” to the reverse thereof, this particular insight is all the way at the latter end. The whole point of this collective exercise that we’re engaged in, with the “rationality” and the “sanity waterline” and all that, is to help each other overcome this sort of resistance, and thereby to more consistently and quickly approach truth. Let’s see some examples, then we can talk. If Alice criticizes one of Bob’s posts, and Bob immediately or shortly thereafter bans Alice from commenting on Bob’s posts, the immediate default assumption should be that the criticism was the reason for the ban. Knowing nothing else, just based on these bare facts, we should jump right to the assumption that Bob’s reasons for banning Alice were lousy. If we then learn that Bob has banned multiple people who criticized him robustly/forcefully/etc., and Bob claims that the bans in all of these cases were for good reasons, valid reasons, definitely not just “these people criticized me”… then unless Bob has some truly heroic ev
[-]gjm1813

You continue to assert, with apparent complete confidence, a claim about Duncan's motivations that (1) Duncan denies, (2) evidently seems to at least two people (me and dxu) to be far from obviously true, and (3) you provide no evidence for that engages with any specifics at all. The trouble with 3 is that it cuts you off from the possibility of getting less wrong. If in fact Duncan's motivations were not as you think they are, how could you come to realise that?

(Maybe the answer is that you couldn't, because you judge that in the situation we're in the behaviour of someone with the motivations you claim is indistinguishable from that of someone with the motivations Duncan claims, and you're willing to bite that bullet.)

I don't agree with your analysis of the Alice/Bob situation. I think that in the situation as described, given only the information you give, we should be taking seriously at least these hypotheses: (1) Bob is just very ban-happy and bans anyone who criticizes him, (2) Bob keeps getting attacked in ban-worthy ways, but the reason that happens is that he's unreasonably provoking them, (3) Bob keeps getting attacked in ban-worthy ways, for reasons that don't reflect b... (read more)

-3Said Achmiz
Of course he denies it. I already explained that we’d expect him to deny it if it were true. Come on! This is extremely obvious stuff. Why would he not deny it? And if indeed he’d deny it if it were true, and obviously would also deny it if it were false, then it’s not evidence. Right? Bayes! Yes, many people on Less Wrong have implausible degrees of “charity” in their priors on human behavior. But of course it does no such thing! It means merely that I have a strong prior, and have seen no convincing evidence against. Same way I come to realize anything else: updating on evidence. (But it’d have to be some evidence!) Pretty close to indistinguishable, yeah. (1) is the obvious default (because it’s quite common and ordinary). (2) seems to rest on the meaning of “unreasonably”; I think we can mostly conflate it with (3). And (3) certainly happens but isn’t anywhere close to the default. Also, your (1) says “anyone”, but it could also be “anyone over a certain threshold of criticism strength/salience/etc.”. That makes it even more the obvious default. Well, for one thing, “what did Bob say” can’t be given much weight, as I noted above. The interpretation of third parties seems mostly irrelevant. If Carol observes the situation, she can reach her own conclusion without consulting Dave. Dave’s opinion shouldn’t be any kind of meaningful input into Carol’s evaluation. As for “what did Alice’s criticism look like”, sure. We have to confirm that there aren’t any personal insults in there, for instance. Easy enough. Yes, of course! I agree completely! In the quoted bit, Duncan says pretty much exactly what we’d expect him to say if he were very annoyed at being repeatedly questioned, challenged, and contradicted by some other commenter, in ways that he found himself unable to convincingly respond to, and which inability made him look bad. It makes sense that Duncan would, indeed, describe said commenter’s remarks in tendentious ways, using emotionally charge
[-]gjm1415

You say that if you were wrong about Duncan's motivations then you would discover "by updating on evidence" but I don't understand what sort of evidence you could possibly see that would make you update enough to make any difference. (Again, maybe this is a bullet you bite and you're content with just assuming bad faith and having no realistic way to discover if you're wrong.)

Although you say "Bayes!" it seems to me that what you're actually doing involves an uncomfortable amount of (something like) snapping probabilities to 0 and 1. That's a thing everyone does at least a bit, because we need to prune our hypothesis spaces to manageable size, but I think in this case it's making your reasoning invalid.

E.g., you say: Duncan would deny your accusation if it were true, and he would deny it if it were false, hence his denial tells us nothing. But that's all an oversimplification. If it were true, he might admit it; people do in fact not-so-infrequently admit it when they do bad things and get called out on it. Or he might deny it in a less specific way, rather than presenting a concrete explanation of what he did. Or he might just say nothing. (It's not like your accusation had been m... (read more)

5Raemon
I think I do want to ask everyone to stop this conversation, because it seems weirdly anchored on one particular example that, as far as I can tell, was basically a central example of what we wanted the author-moderation norms to be for in Meta-tations on Moderation, and they shouldn't be getting dragged through a trial-like thing for following the rules we gave them. If I had an easy lock-thread button I'd probably have hit that ~last night. We do have lock-thread functionality but it's a bit annoying to use.

they shouldn't be getting dragged through a trial-like thing for following the rules we gave them

They don't need to be personally involved. The rules protect authors' posts; they don't give the author immunity from being discussed somewhere else.

This situation is a question that merits discussion, with implications for general policy. It might have no place in this particular thread, but it should have a place somewhere convenient (perhaps some sort of dedicated meta "subreddit", or under a meta tag). Not discussing particular cases restricts allowed forms of argument, distorts understanding in systematic ways.

2gjm
Replying just to acknowledge that I've seen this and am entirely content to drop it here.
1M. Y. Zuo
Tangentially, isn't there already plenty of onboarding material that's had input from most of the moderating team? Simply excluding the stuff that hasn't been vetted by a large majority (or unanimity) of the team seems straightforward.
3Said Achmiz
Apologies, but I have now been forbidden from discussing the matter further. Please feel free to contact me via private message if you’re interested in continuing the discussion. But if you want to leave things here, that’s also perfectly fine. (Strictly speaking, the ball at this point is in my court, but I wouldn’t presume to take the discussion to PM unilaterally; my guess is that you don’t think that’s particularly worth the effort, and that seems to me to be a reasonable view.)
dxu154

This is a claim so general as to be meaningless. If we knew absolutely nothing except “a person said a thing”, then retreating to this sort of maximally-vague prior might be relevant. But we in fact are discussing a quite specific situation, with quite specific particular and categorical features. There is no good reason to believe that the quoted prior survives that descent to specificity unscathed (and indeed it seems clear to me that it very much does not).

The prior does in fact survive, in the absence of evidence that pushes one's conclusion away from it. And this evidence, I submit, you have not provided. (And the inferences you do put forth as evidence are—though this should be obvious from my previous sentence—not valid as inferences; more on this below.)

it isn’t just “Said keeps finding mistakes in what I have written”

It’s slightly more specific, of course—but this is, indeed, a good first approximation.

This is a substantially load-bearing statement. It would appear that Duncan denies this, that gjm thinks otherwise as well, and (to add a third person to the tally) I also find this claim suspicious. Numerical popularity of course does not determine the truth (or... (read more)

8Said Achmiz
Categories like “conflicts of interest”, “discussions about who should be banned”, “arguments about moderation in cases in which you’re involved”, etc., already constitute “evidence” that push the conclusion away from the prior of “on the whole, people are more likely to say true things than false things”, without even getting into anything more specific.

You’ve misunderstood. My point was that “Said keeps finding mistakes in what I have written” is a good first approximation (but only that!) of what Duncan allegedly finds unpleasant about interacting with me, not that it’s a good first approximation of Duncan’s description of same.

A single circumspectly disagreeing comment on a tangential, secondary (tertiary? quaternary?) point, buried deep in a subthread, having minimal direct bearing on the claims in the post under which it’s posted. “Robust disagreement”, this ain’t. (Don’t get me wrong—it’s a fine comment, and I see that I strong-upvoted it at the time. But it sure is not anything at all like an example of the thing I asked for examples of.)

Please do. So far, the example count remains at zero.

Given that you did not, in fact, find an example, I think that this question remains unmotivated.

Most people don’t bother to think about other people’s posts in sufficient detail and sufficiently critically to have anything much to say about them. Of the remainder, some agree with Duncan. Of the remainder of those, many don’t care enough to engage in arguments, disagreements, etc., of any sort. Of the remainder of those, many are either naturally disinclined to criticize forcefully, to press the criticism, to make points which are embarrassing or uncomfortable, etc., or else are deterred from doing so by the threat of moderation. That cuts the candidate pool down to a small handful.

Separately, recall that Duncan has (I think more than once now) responded to similar situations by leaving (or “leaving”) Less Wrong. (What is the significance of his choice to
9dxu
The strength of the evidence is, in fact, a relevant input. And of the evidential strength conferred by the style of reasoning employed here, much has already been written.

Then your response to gjm's point seems misdirected, as the sentence you were quoting from his comment explicitly specifies that it concerns what Duncan himself said. Furthermore, I find it unlikely that this is an implication you could have missed, given that the first quote-block above speaks specifically of the likelihood that "people" (Duncan) may or may not say false things with regards to a topic in which they are personally invested; indeed, this back-and-forth stemmed from discussion of that initial point!

Setting that aside, however, there is a further issue to be noted (one which, if anything, is more damning than the previous), which is that—having now (apparently) detached our notion of what is being "approximated" from any particular set of utterances—we are left with the brute claim that "'Said keeps finding mistakes in what Duncan has written' is a good approximation of what Duncan finds unpleasant about interacting with Said"—a claim which I don't see how you could defend having positive knowledge of, much less its truth value! After all, neither of us has telepathic access to Duncan's inner thoughts, and so the claim that his ban of you was motivated by some factor X—a factor whose influence he in fact explicitly denies—is speculation at best, and psychologizing at worst.

I appreciate the starkness of this response. Specifically, your response makes it quite clear that the word "robust" is carrying essentially the entirety of the weight of your argument. However, you don't appear to have operationalized this anywhere in your comment, and (unfortunately) I confess myself unclear as to what you mean by it. "Disagreement" is obvious enough, which is why I was able to provide an example on such short notice, but if you wish me to procure an example of wh
1Said Achmiz
Please see my reply to gjm.

Yes. A strong default. I stand by what I said.

A high one. This seems to me to be only an ordinarily high “default” confidence level, for things like this.

See my above-linked reply to gjm, re: “the opinions of onlookers”.

People on Less Wrong downvote for things other than “this is wrong”. You know this. (Indeed, this is wholly consonant with the designed purpose of the karma vote.)

Likewise see my above-linked reply to gjm. I refer there to the three quote–reply pairs above that one.

I must object to this. I don’t think what I’ve accused Duncan of can be fairly called “misconduct”. He’s broken no rules or norms of Less Wrong, as far as I can tell. Everything he’s done is allowed (and even, in some sense, encouraged) by the site rules. He hasn’t done anything underhanded or deliberately deceptive, hasn’t made factually false claims, etc. It does not seem to me that either Duncan, or Less Wrong’s moderation team, would consider any of his behavior in this matter to be blameworthy. (I could be wrong about this, of course, but that would surprise me.)

Yes, of course. Duncan has said as much, repeatedly. It would be strange to disbelieve him on this. Just as obviously, I don’t agree with his characterization! (As before, see my above-linked reply to gjm for more details.)

This seems clearly wrong to me. The operation is of course commutative; it doesn’t matter in the least whose name goes where. In any engagement between Alice and Bob, Alice can decide that Bob is engaging unproductively, at the same time as Bob decides that Alice is engaging unproductively. And of course Bob isn’t going to decide that it’s he who is the one engaging unproductively with Alice (and vice-versa). And both formulations can be summarized as “Bob decides that he is unlikely to engage in productive discussion with Alice” (regardless of whether Bob or Alice is allegedly to blame; Bob, clearly, will hold the latter view; Alice, the former). In any case,
4Duncan Sabien (Deactivated)
This is incoherent. Said is hiding the supposer with this use of passive voice. A coherent rewrite of this sentence would be either: […] or: […] Both of these sentences are useless, since the first is just saying "I, Said, allege what I allege" and the second is just saying "what Duncan alleges is not what he alleges." (Or I guess, as a third version, what dxu or others are alleging?)

I note that Said has now done something between [accusing me of outright lying] and [accusing me of being fully incompetent to understand my own motivations and do accurate introspection] at least four or five times in this thread. I request moderator clarification on whether this is what we want happening a bunch on LessWrong. @Raemon
8Raemon
My current take is "this thread seems pretty bad overall and I wish everyone would stop, but I don't have an easy succinct articulation of why and what the overall moderation policy is for things like this." I'm trying to mostly focus on actually resolving a giant backlog of new users who need to be reviewed while thinking about our new policies, but expect to respond to this sometime in the next few days.

What I will say immediately to @Said Achmiz is "The point of this thread is not to prosecute your specific complaints about Duncan. Duncan banning you is the current moderation policy working as intended. If you want to argue about that, you should be directing your arguments at the LessWrong team, and you should be trying to identify and address our cruxes." I have more to say about this but it gets into an effortcomment that I want to allocate more time/attention to.

I'd note: I do think it's an okay time to open up Said's longstanding disagreements with LW moderation policy, but, like, all the previous arguments still apply. Said's comments so far haven't added new information we didn't already consider. I think it is better to start a new thread rather than engaging in this one, because this thread seems to be doing a weird mix of arguing moderation-abstract-policies while also trying to prosecute one particular case in a way that feels off.
3Said Achmiz
But that seems to me to be exactly what I have been doing. (Why else would I bother to write these comments? I have no interest in any of this except insofar as it affects Less Wrong.) And how else can I do this, without reference to the most salient (indeed, the only!) specific example in which I have access to the facts? One cannot usefully debate such things in purely abstract fashion! (Please note that as I have said, I have not accused Duncan of breaking any site rules or norms; he clearly has done no such thing.)
7Raemon
You currently look like you're doing two things – arguing about what the author-moderation norms should be, and arguing whether/how we should adopt a particular set of norms that Duncan advocated. I think those two topics are getting muddied together and making the conversation worse.

My answer to the "whether/how should we adopt the norms in Basics of Rationalist Discourse?" is addressed here. If you disagree with that, I suggest replying to that with your concrete disagreement on that particular topic. I think if you also want to open up "should LW change our 'authors can moderate content' policy", I think it's better to start a separate thread for that.

Duncan's blocking-of-you-and-others so far seems like a fairly central example of what the norms were intended to protect, on purpose, and so far you haven't noted any example relating to the Duncan thread that seems... at all particularly unusual for how we expected authors to use the feature? Like, yes, you can't be confident whether an author blocks someone due to them disagreeing, or having a principled policy, or just being annoyed. But we implemented the rules because "commenters are annoying" is actually a central existential threat to LessWrong.

If we thought it was actually distorting conversation in a bad way, we'd re-evaluate the policy. But I don't see reason to think that's happening (given that, for example, Zack went ahead and wrote a top-level post about stuff. It's not obvious this outcome was better for Duncan, so we might revisit the policy for that reason, but not for 'important arguments are getting suppressed' reasons).

Part of the whole point of the moderation policy is that it's not the job of individual users to have to defend their right to use the moderation tools, so I do now concretely ask you to stop arguing about Duncan-in-particular.

You currently look like you’re doing two things – arguing about what the author-moderation norms should be, and arguing whether/how we should adopt a particular set of norms that Duncan advocated. I think those two topics are getting muddied together and making the conversation worse.

These two things are related, in the way to which I alluded in my very first comment on this topic. (Namely: the author-moderation feature shouldn’t exist [in its current form], because it gives rise to situations like this, where we can’t effectively discuss whether we should do something like adopting Duncan’s proposed norms.) I’m not just randomly conflating these two things for no reason!

My answer to the “whether/how should we adopt the norms in Basics of Rationalist Discourse?” is addressed here. If you disagree with that, I suggest replying to that with your concrete disagreement on that particular topic.

Uh… sorry, I don’t see how that comment is actually an answer to that question? It… doesn’t seem to be…

Duncan’s blocking-of-you-and-others so far seems like a fairly central example of what the norms were intended to protect, on purpose, and so far you haven’t noted any example relating

... (read more)
0Said Achmiz
Sorry, that wasn’t meant to be ambiguous; I thought it would be clear that the intended meaning was more like (see below for details) the latter (“Duncan alleges that he”), and definitely not the former—since, as you say, the former interpretation is tautological. (Though, yes, it also covers third parties, under the assumption—which so far seems to be borne out—that said third parties are taking as given what you [Duncan] claim re: what you find unpleasant.)

No, not quite. Consider the following three things, which are all different:

(a) “Alice’s description of something which Alice says she finds unpleasant”

(b) “The thing Alice claims to find unpleasant, according to Alice’s description of it”

(c) “The thing Alice claims to find unpleasant, in (claimed, by someone who isn’t Alice) reality (which may differ from the thing as described by Alice)”

Obviously, (a) is of a different kind from (b) and (c). I was noting that I was not referring to (a), but instead to (c). (An example: Alice may say “wow, that spider really scared me!”. In this case, (a) is “that spider” [note the double quote marks]; (b) is a spider [supposedly]; and (c) may be, for example, a harvestman [also supposedly].)

In other words: there’s some phenomenon which you claim to find unpleasant. We believe your self-report of your reaction to this thing. It remains, however, to characterize the thing in question. You offer some characterization. It seems to me that there’s nothing either incoherent or unusual about me disputing the characterization—without, in the process, doubting your self-report, accusing you of lying, claiming that you’re saying something other than what you’re saying, etc.

Well, as I’ve said (several times), I don’t think that you’re lying. (You might be, of course; I’m no telepath. But it seems unlikely to me.)

Take a look, if you please, at my description of your perspective and actions, found at the end of this comment. As I say there, it’s my hope that you’ll find t
2Duncan Sabien (Deactivated)
It was not; I both strong downvoted and, separately, strong disagreed. (I missed the call to end the conversation; sorry for replying.)
4Raemon
(I’ve basically asked Said to stop replying here and would prefer everyone else to stop replying to as well)
4Duncan Sabien (Deactivated)
(Adhering to Ray's request of making <1 reply per hour, though in this case I was already planning to do so.) The above fails to note something analogous to "arrested while driving ≠ arrested for driving." It is not in fact the case that anyone was blocked for disagreeing with or criticizing the things that I had written, though it is true that a couple of people have been blocked while disagreeing or criticizing. EDIT: I went and looked up the fancy words for this: post hoc, ergo propter hoc. What they were blocked for was not disagreement. I shall not enumerate the dozens-if-not-hundreds of people who have disagreed with me often and at length (and even sometimes with some vehemence) without being blocked, but I'll note that you can find multiple instances of people on my block list disagreeing with me previously in ways that were just fine. Metaphor: if you were to disagree with someone while throwing bricks at them, subsequently going "aHA! They blocked me for disagreeing!" would be disingenuous.
-3Said Achmiz
I didn’t say anything about “blocked for disagreeing [or criticizing]”. (Go ahead, check!) What I said was: […] To deny this, it seems to me, is untenable.
7Duncan Sabien (Deactivated)
Here Said is, as far as I can tell, arguing that "blocked for disagreeing or criticizing" is not straightforwardly synonymous with "blocked due to disagreeing or criticizing." In any event, none of the people in question were blocked for disagreeing or criticizing, and (saying it the other way too, just in case I'm missing some meaningful semantic difference) none of them were blocked due to disagreeing or criticizing, either. I again mention that it's not at all hard to find instances of people disagreeing with me or criticizing me or my ideas quite hard, without getting blocked, and that there are even plentiful instances of several of the blocked people having done so in the past (which did not, in the past, result in them getting blocked).
Raemon3319

I think the important bit from Said's perspective is that these people were blocked for reasons not-related-to whether they had something useful to say about those rules, so we may be missing important things.

I'll reiterate Habryka's take on "I do think if we were to canonize some version of the rules, that should be in a place that everyone can comment on." And I'd go on to say: on that post we also should relax some of the more opinionated rules about how to comment. i.e. we should avoid boxing ourselves in so that it's hard to criticize the rules in practice.

I think a separate thing Said cares about is that there is some period for arguing about the rules before they "get canonized." I do think there should be at least some period for this, but I'm not worried about it being particularly long because

a) the mod team has had tons of time to think about what norms are good and people have had tons of time to argue, and I think this is mostly going to be cementing things that were already de-facto site norms, 

b) people can still argue about the rules after the fact (and I think comments on The Rules post, and top-level posts about site norms, should have at least some more leeway about how to argue. I think there'll probably still be some norms like 'don't do ad hominem attacks', but I don't expect that to actually cause an issue)

That said, I certainly don't promise that everyone will be happy with the rules; the process here will not be democratic, but rather the judgment of the LW mod team.

3Duncan Sabien (Deactivated)
Strong upvote, strong agree.
3Duncan Sabien (Deactivated)
Or an indication that some otherwise non-banned members of the site are actually kind of poor at exhibiting one or more of the basics of rationalist discourse and have been tolerated on LW for other reasons unrelated to their quality as thinkers, reasoners, and conversational partners. For instance, they might think that, because they can't think of a way, this means that there literally exists no way for a thing to be true (or be prone to using exaggerated language that communicates that even though it doesn't reflect their actual belief). (The Basics post was written because I felt it was needed on LW, because there are people who engage in frequent violation of good discourse norms and get away with it because it's kind of tricky to point at precisely what they're doing that's bringing down the quality of the conversations. That doesn't mean that my particular formulation was correct (I have already offered above to make changes to the two weakest sections), but it is not, in fact, the case that [a user who's been barred from commenting but is otherwise still welcome on LW as a whole] is necessarily in possession of good critique. Indeed they might be, but they might also be precisely the kind of user who was the casus belli of the post in the first place.)

All of this is irrelevant, because the point is that if the conversational norms cannot be discussed openly (by people who otherwise aren’t banned from the site for being spammers or something similarly egregious), then there’s no reason to believe that they’re good norms. How were they vetted? How were they arrived at? Why should we trust that they’re not, say, chock-full of catastrophic problems? (Indeed, the more people[1] are banned from commenting on the norms as a consequence of their criticism of said norms, the less we should believe that the norms are any good!)

Of all the posts on the site, the post proposing new site norms is the one that should be subjected to the greatest scrutiny—and yet it’s also the post[2] from which more critics have been banned than from almost any other. This is an extremely bad sign.


  1. Weighted by karma (as a proxy for “not just some random person off the street, but someone whom the site and its participants judge to have worthwhile things to say”). (The ratio of “total karma of people banned from commenting on ‘Basics of Rationalist Discourse’” to “karma of author of ‘Basics of Rationalist Discourse’” is approximately 3:1. If karma represents

... (read more)

For what it's worth, the high-level point here seems right to me (I am not trying to chime in on the rest of the discussion about whether the ban system is a good idea in the first place).

If we canonize something like Duncan's post I agree that we should do something like copy over a bunch of it into a new post, give prominent attribution to Duncan at the very top of the post, explain how it applies to our actual moderation policy, and then we should maintain our own ban list. 

I think Duncan's post is great, but I think when we canonize something like this it doesn't make sense for Duncan's ban list to carry over to the more canonized version.

This is certainly well and good, but it seems to me that the important thing is to do something like this before canonizing anything. Otherwise, it’s a case of “feel free to discuss this, but nothing will come of it, because the decision’s already been made”.

The whole point of community discussion of something like this is to serve as input into any decisions made. If you decide first, it’s too late to discuss. This is exactly what makes the author-determined ban lists so extraordinarily damaging in a case like this, where the post in question is on a “meta” topic (setting aside for the moment whether they’re good or bad in general).

1Thoth Hermes
The most interesting thing you said is left to your footnote: I would love to see if this pattern is also present in other posts.  There should be some mechanism to judge posts / comments that have a mix of good and bad karma, especially since there are two measures of this now. In the olden-days it still would have been possible to do, since an overall score of "0" could still be a very high-quality information signal, if the total number of votes is high. This is only even more the case now. At issue is whether low-quality debate is ever fruitful. That piece of data in your footnote suggests that the issue might instead be whether or not there even is low-quality debate, or at least whether or not we can rely on moderators' judgement calls or the sum of all (weighted) votes to make such calls.
2Said Achmiz
Certainly we can’t rely on the judgment of post authors! (And I am not even talking about this case in particular, but—just in general, for post authors to have the power to make such calls introduces a massive conflict of interest, and incentivizes ego-driven cognitive distortions. This is why the “post authors can ban people from their posts” feature is so corrosive to anything resembling good and useful discussion… truly, it seems to me like an egregious, and entirely unforced, mistake in system design.)
1Thoth Hermes
The mistake in system design started with the implementation of downvoting, but I have more complicated reasoning for this. If you have a system that implements downvoting, the reason for having that feature in place is to prevent ideas that are not easily argued away from being repeated. I tend to be skeptical of such systems because I tend to believe that if ideas are not easily argued away, they are more likely to have merit to them. If you have a post which argues for the enforcement of certain norms which push down specific kinds of ideas that are hard to argue away, one of which is the very idea that this ought to be done, it creates the impression that there are certain dogmas which can no longer be defended in open dialogue. 
2Said Achmiz
I am not sure that I’d go quite that far, but I certainly sympathize with the sentiment. And with this, I entirely agree.
3Said Achmiz
Note, by the way, that these two things are not at all mutually exclusive. It might, indeed, be the case that the post was motivated by some kinds of people/critiques—people who have, or critiques which are, good critiques. (Indeed that’s one of the most important and consequential sorts of criticism of the post: that among its motivations were one or more bad motivations, which we should not endorse, and which we should, in fact, oppose, as following them would have bad effects.)

(I think this is a thread that, if I had a "slow mode" button to make users take longer to reply, I'd probably have clicked it right about now. I don't have such a button, but Said and @Duncan_Sabien can you guys a) hold off for a couple hours on digging into this and b) generally take ~an hour in between replies here if you were gonna keep going)

2the gears to ascension
who are the banned users? I'm not sure how to access the list and would like it mildly immortalized in case you change it later.
2Raemon
See lesswrong.com/moderation 
2the gears to ascension
I don't see any bans related to that post there.
2Ben Pace
See the lower section, on user bans.
1the gears to ascension
Ah. So, these people have banned Duncan from commenting on their frontpage posts? Or Duncan has banned them from commenting on his frontpage posts? I guess you're implying the latter. Makes sense.
4the gears to ascension
shrug. people can make posts in reply. Zack has done so. no great loss, I think - if anything, it created slightly more discussion.
1Nicholas / Heather Kross
Agreed w.r.t. "basic questions" we could ask new users. The subreddit /r/ControlProblem now makes people take a quiz before they can post, to filter for people who e.g. know and care about the orthogonality thesis. (The quiz is pretty easy to pass if you're familiar with the basic ideas of AI safety/alignment/DontKillEveryoneism.)

New qualified users need to remain comfortable (trivial inconveniences like interacting with a moderator at all are a very serious issue), and for new users there is not enough data to do safe negative selection on anything subtle. So talking of "principles in the Sequences" is very suspicious; I don't see an operationalization that works for new users and doesn't bring either negative selection or trivial inconvenience woes.

4MondSemmel
Agreed. Talking from the perspective of a very occasional author, suppose I post a LW essay and then link it in r/slatestarcodex. I want it to be as easy as possible for readers to comment on my essay wherever I link it. If it's impossible or even just inconvenient for them to simply comment on the essay, they might not do so. In which case, why would the author post their stuff on LW, specifically?
6Steven Byrnes
I also see that as a downside (although not the end of the world—it depends on the filters). Like, is LW a blogging platform? Or is it a discussion forum for a particular online community? Right now we kinda have it both ways—when describing why I post on LW, I might say something like “it’s a very nice blogging platform, and also has a great crowd of regular readers & commenters who I tend to know and like”. But the harder it is for random people to comment on my posts, the less it feels like a “blogging platform”, and the more it feels like I’m just talking within a gated community, which isn’t necessarily what I’m going for.

Right now I have a pretty strong feeling that I don’t want to start a substack / wordpress / whatever and cross-post everything to LW, mostly for logistical reasons (more annoying to post, need to fix typos in two places), plus it splits up the comment section. But I do get random people opening a LW account to comment on my posts sometimes, and I like that †, and if that stops being an option it would be a marginal reason for me to switch to “separate blog + crossposting”. Wouldn’t be the end of the world, just wanted to share. Hmm, I might also / alternatively mitigate the problem by putting an “email-me” link / invitation at the bottom of all my posts.

Random thought: Just like different authors get to put different moderation guidelines on their own posts, maybe different authors could also get to put different barriers-to-new-user-comments on their own posts?? I haven’t really thought it through, it’s just an idea that popped into my head.

† Hmm, actually, I’m happy about the pretty-low-friction ability of anyone to comment on my LW posts in the case of e.g. obscure technical posts, and neuroscience posts, and random posts. I haven’t personally written posts that draw lots of really bad takes on AI, at least not so far, and I can see that being very annoying.
2Elizabeth
Some authors would view the moderation as a feature, not a bug.
2Raemon
I think avoiding the negative selection failure modes is an important point. I'm mulling over how to think about it. Do you have a thing you're imagining with "positive selection" that you expect to work?

Stop displaying users' Karma totals, so that there is no numbers-go-up reward for posting lots of mediocre stuff; instead, count the number of comments/posts in some upper quantile by Karma (which should cash out as something like 15+ Karma for comments). Use that number where Karma is currently used, like vote weights. (Also, display the number of comments below some negative threshold like -3 in the last few months.)

6evand
Something like an h-index might be better than a total.
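For concreteness, here is a minimal sketch (purely illustrative; the example karma values and the 15-point cutoff are just the numbers floated in this thread, not anything the site implements) of how the threshold-count metric and an h-index would differ:

```python
def count_above_threshold(karmas, threshold=15):
    # Nesov's proposal: number of comments/posts at or above a karma cutoff
    return sum(1 for k in karmas if k >= threshold)

def h_index(karmas):
    # Largest h such that the user has h items with karma >= h
    h = 0
    for i, k in enumerate(sorted(karmas, reverse=True), start=1):
        if k >= i:
            h = i
        else:
            break
    return h

karmas = [40, 22, 18, 15, 9, 7, 3, 1]   # made-up comment karma for one user
print(count_above_threshold(karmas))    # 4
print(h_index(karmas))                  # 6
```

An h-index is harder to inflate with a handful of viral comments than a raw total, though (as noted in the replies below) either metric still needs some guard against spuriously upvoted threads.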
4Vladimir_Nesov
In some ways this sounds better than either my proposal or Raemon's. But there is still the spurious upvoting issue, so a metric should be able to not get too excited about a few highly upvoted things.
2Raemon
Mmm. Yeah, something in that space makes sense. FYI, a similar idea I've been thinking about is "you see the total karma of the user's top ~20 comments/posts", which you can initially improve by writing somewhat-good comments but you'll quickly max that out. That metric emphasizes "what was their best content like?", your metric is something like "how much 'at least pretty solid' content do they have?", and I'm not sure which is better.
4Vladimir_Nesov
There are some viral posts (including on community drama) where (half of) everything gets unusually highly upvoted, compared to normal. So the inclination I get is to recalibrate thresholds according to the post's popularity and the current year, to count fewer such comments (that's too messy, isn't worth it, but other things should be robust to this effect). This is why I specifically proposed the number of 15+ Karma comments, not their total Karma. Also, the total number still counts as some sort of "total contribution", as opposed to the less savory "user quality".
Ben1713

One of the things I really like about LW is the "atmosphere", the way people discuss things. So very well done at curating that so far. I personally would be nervous about over-pushing "The Sequences". I didn't read much of them until a while into my LW time, but I think I picked up the vibes fine without them.

I think the commenting guidelines are an excellent feature which have probably done a lot of good work in making LW the nice place it is (all that "explain, not persuade" stuff). I wonder how much difference it would make if the first time a new user posts a comment they could be asked to tick a box saying "I read the commenting guidelines".

Would it make sense to have a "Newbie Garden" section of the site? The idea would be to give new users a place to feel like they're contributing to the community, along with the understanding that the ideas shared there are not necessarily endorsed by the LessWrong community as a whole. A few thoughts on how it could work:

  • New users may be directed toward the Newbie Garden (needs a better name) if they try to make a post or comment, especially if a moderator deems their intended contribution to be low-quality. This could also happen by default for all users with karma below a certain threshold.
  • New users are able to create posts, ask questions, and write comments with minimal moderation. Posts here won't show up on the main site front page, but navigation to this area should be made easy on the sidebar.
  • Voting should be as restricted here as on the rest of the site to ensure that higher-quality posts and comments continue trickling to the top.
  • Teaching the art of rationality to new users should be encouraged. Moderated posts that point out trends and examples of cognitive biases and failures of rationality exhibited in recent newbie contributions, and that advise on how to correct for
... (read more)
7Vaniver
I think this works at universities because teachers are paid to grade things (they wouldn't do it otherwise) and students get some legible-to-the-world certificate once they graduate.  Like, we already have a wealth of curriculum / as much content for newbies as they can stand; the thing that's missing is the peer reading group and mentors. We could probably construct peer reading groups (my simple idea is that you basically put people into groups based on what <month> they join, varying the duration until you get groups of the right size, and then you have some private-to-that-group forum / comment system / whatever), but I don't think we have the supply of mentors. [This is a crux--if someone thinks they have supply here or funding for it, I want to hear about it.] 
7Elizabeth
I think the peer thing is pretty good, and recreates the blind-leading-the-blind aspect of early lesswrong. 
7the gears to ascension
You are implying that blind-leading-the-blind is good, not bad, here, correct? I'm interested to hear more of your thoughts on why that will result in collective intelligence and not collective decoherence; it seems plausible to me, but some swarm algorithms work and some don't.
7Elizabeth
I'm claiming that blind-leading-the-blind can work at all, and is preferable to a low-karma section containing both newbies and long time members whose low karma reflects quality issues. Skilled mentorship is almost certainly better, but I don't think that's available at the necessary scale. 
Ruby161

A few observations triggered the LessWrong team focusing on moderation now. The initial trigger was our observations of new users signing up, plus how the distribution of quality of new submissions had worsened (I found myself downvoting many more posts), but some analytics help drive home the picture:

(Note that not every week does LessWrong get linked in the Times...but then again....maybe it roughly does from this point onwards.)

LessWrong data from Google Analytics: traffic has doubled Year-on-Year

Comments from new users won't display by default until they've been approved by a moderator.

I'm pretty sad about this from a new-user-experience perspective, but I do think it would have made my LW experience much better these past two weeks.

it's just generally the case that if you participate on LessWrong, you are expected to have absorbed the set of principles in The Sequences (AKA Rationality A-Z)

Some slight subtlety: you can get these principles in other ways; for example, great scientists or builders, or people who've read Feynman, can pick them up elsewhere. But I think the sequences give them really well and also help a lot with setting the site culture.

How do we deal with low quality criticism? There's something sketchy about rejecting criticism. There are obvious hazards of groupthink. But a lot of criticism isn't well thought out, or is rehashing ideas we've spent a ton of time discussing and doesn't feel very productive.

My current guess is something in the genre of "Schelling place to discuss the standard arguments" to point folks to. I tried to start one here, responding to basic AI x-risk questions people had.

you can get these principles in other ways

I got them via cultural immersion. I just lurked here for several months while my brain adapted to how the people here think. Lurk moar!

I've been looking at creating a GPT-powered program which can automatically generate a test of whether one has absorbed the Sequences. It doesn't currently work that well and I don't know whether it's useful, but I thought I should mention it. If I get something that I think is worthwhile, then I'll ping you about it.

Though my expectation is that one cannot really meaningfully measure the degree to which a person has absorbed the Sequences.

nim102

I too expect that testing whether someone can talk as if they've absorbed the sequences would measure password-guessing more accurately than comprehension.

The idea gets me wondering whether it's possible to design a game that's easy to learn and win using the skills taught by the sequences, but difficult or impossible without them. Since the sequences teach a skill, it seems like we should be able to procedurally generate novel challenges that the skill makes it easy to complete.
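As a toy illustration of what such procedurally generated challenges could look like (entirely hypothetical, not an existing LW feature), here is a generator for base-rate problems whose correct answer can be checked mechanically:

```python
import random

def generate_base_rate_question(seed=None):
    """Generate a random base-rate problem and its correct Bayesian answer."""
    rng = random.Random(seed)
    prevalence = rng.choice([0.001, 0.01, 0.05])    # P(condition)
    sensitivity = rng.choice([0.80, 0.90, 0.99])    # P(positive | condition)
    false_pos = rng.choice([0.05, 0.10, 0.20])      # P(positive | no condition)

    p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
    posterior = sensitivity * prevalence / p_positive   # Bayes' rule

    question = (f"A condition affects {prevalence:.1%} of people. A test detects it "
                f"{sensitivity:.0%} of the time, with a {false_pos:.0%} false-positive rate. "
                f"Given a positive result, how likely is it that the condition is present?")
    return question, posterior

question, answer = generate_base_rate_question(seed=0)
print(question)
print(f"Correct answer: {answer:.1%}")
```

Whether scoring well on such drills actually measures having absorbed the Sequences, rather than just arithmetic facility, is of course the open question raised above.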

As someone who's gone through the sequences yet isn't sure whether they "really" "fully" understand them, I'd be interested in taking and retaking such a test from time to time (if it was accurate) to quantify any changes to my comprehension.

3tailcalled
I think the test could end up working as an ideology measure rather than a password-guessing game. It's tricky to me, because the Sequences teach a network of ideas that are often not directly applicable to problem-solving/production tasks, but rather relevant for analyzing or explaining ideas. It's hard to really evaluate an analysis/explanation without considering its downstream applications, but also it's hard to come up with a task that is simultaneously:

  • Big enough that it has both analysis/explanation and downstream applications,
  • Small enough that it can be done quickly as a single person filling out a test.

This isn't directly related to the moderation issue, but there are a couple of features I would find useful, given the recent increase in post and comment volume:

  • A way to hide the posts I've read (or marked as read) from my own personal view of the front page. (Hacker News has this feature)

  • Keeping comment threads I've collapsed, collapsed across page reloads.

I support a stricter moderation policy, but I think these kinds of features would go a long way in making my own reading experience as pleasant as it's always been.

5the gears to ascension
Re: #1, could you comment on what additional functionality on top of this button would help you? CSS below (the repetition is to elevate the CSS specificity, overriding even styles from sites that use !important in their code); apply to all sites with the Stylus extension in Chrome:

a:not(:visited):not([href="#"])[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href],
a:not([href="#"]):not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href] * {
  color: #32a1ce !important;
}

a:visited[href]:not([href="#"]):visited[href]:visited[href]:visited[href]:visited[href],
a:visited[href]:not([href="#"]):visited[href]:visited[href]:visited[href]:visited[href] * {
  color: #939393 !important;
}
5Max H
Oh, yes, that's basically what I'm looking for, not sure how I missed it. Thanks! I think a bulk toggle for read / unread would still potentially be useful, but this is most of what I want.
nim149

Thank you for the transparency!

Comments from new users won't display by default until they've been approved by a moderator.

It sounds like you're getting ready to add a pretty significant new workload to the tasks already incumbent upon the mod team. Approving all comments from new users seems like a high volume of work compared to my impression of your current duties, and it seems like the moderation skill threshold for new user comment approval might potentially be lower than it is for moderators' other duties.

You may have already considered this possibility and ruled it out, but I wonder if it might make sense to let existing users above a given age and karma threshold help with the new user comment queue. If LW is able to track who approved a given comment, it might be relatively easy to take away the newbie-queue-moderation permissions from anybody who let too many obviously bad ones through.

I would be interested in helping out with a newbie comment queue to keep it moving quickly so that newbies can have a positive early experience on lesswrong, whereas I would not want to volunteer for the "real" mod team because I don't have the requisite time and skills for reliably s... (read more)

9Zac Hatfield-Dodds
It's also quite plausible to me that carefully prompted language models, with a few dozen carefully explained examples and detailed instructions on the decision criteria, would do a good job at this specific moderation task. Less clear what the payoff period of such an investment would be so I'm not actually recommending it, but it's an option worth considering IMO.
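A rough sketch of what that could look like, assuming a hypothetical call_llm() helper that wraps whichever model API is used; the prompt and criteria below are illustrative only, not an actual moderation policy:

```python
TRIAGE_PROMPT = """You are helping triage first-time comments on a discussion forum.
Criteria (illustrative): the comment engages with the post's actual argument, contains
no personal attacks, and is not spam or low-effort filler. Reply with exactly one word
-- APPROVE, REJECT, or UNSURE -- followed by a one-sentence justification."""

def triage_new_user_comment(comment_text: str, post_excerpt: str) -> str:
    # call_llm is a stand-in for a real model API call; in practice the prompt
    # would also include the few dozen carefully explained examples suggested above.
    reply = call_llm(
        system=TRIAGE_PROMPT,
        user=f"Post excerpt:\n{post_excerpt}\n\nNew comment:\n{comment_text}",
    )
    verdict = reply.strip().split()[0].upper().strip(",.")
    # Anything other than a clear APPROVE/REJECT falls back to human review.
    return verdict if verdict in {"APPROVE", "REJECT"} else "UNSURE"
```

Even in this sketch, the model only pre-sorts the queue; borderline cases still land with a human moderator.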
8Ruby
Agree, I've said to the team that I think we could get some mileage out of this kind of thing.
4dxu
Were such a proposal to be adopted, I would be likewise willing to participate.
3Ben Pace
I really appreciate the thought here (regardless of whether it works out) :) 
Shmi137

One thing I would love to see that is missing on a lot of posts is a summary upfront that makes the context and the main argument (or just the content) clear to the reader. (Zvi's posts are an excellent example of this.) At least from the newbies. Good writers, like Eliezer and Scott Alexander, can produce quality posts without a summary. Most people posting here are not in that category. It is not wrong to post a stream of consciousness or an incomplete draft, but at least spend 5 minutes writing up the gist in a paragraph upfront. If you can't be bothered, or do not have the skill of summarizing…

By the way, GPT will happily do it for you if you paste your text into the prompt, as a whole or in several parts. GPT-4/Bing can probably also evaluate the quality of the post and give feedback on how well it fits into the LW framework and what might be missing or can be improved. Maybe this part can even be automated.

Scott Garrabrant once proposed being able to add abstracts to posts that would appear if you clicked on posts on the frontpage. Then you could read the summary, and only read the rest of the post if you disagreed with it.

I'm not entirely sure what I want the longterm rule to be, but I do think it's bad for the comment section of Killing Socrates to be basically discussing @Said Achmiz specifically where Said can't comment. It felt a bit overkill to make an entire separate overflow post for a place where Said could argue back, but it seemed like this post might be a good venue for it.

I will probably weigh in here with my own thoughts, although not sure if I'll get to it today.

6Said Achmiz
I appreciate the consideration. I don’t know that I particularly have anything novel or interesting to say about the post in question; I think it mostly stands (or, rather, falls) on its own, and any response I could make would merely repeat things that I’ve said many times. I could say those things again, but what would be the point? Nobody will hear them who hasn’t already heard. (In any case, some decent responses have already been written by other commenters.) There is one part (actually a quote from Vaniver) which I want to object to, specifically in the context of my work: In my professional (design) experience, I have found the above to be completely untrue. My work is by no means perfect now, nor was it perfect when I started; nor will I claim that I’ve learned nothing and have not improved. But it’s simply not the case that I started out “repeatedly producing disappointing work” and only then (and thereby) learned to make good work. On the contrary, I started out with a strong sense and a good understanding of what bad design was, and what made it bad; and then I just didn’t do those things. Instead of doing bad and wrong things, I did good and correct things. Knowing what is good and what is bad, and why, made that relatively straightforward. (Is there a “rationality lesson” to be drawn from this? I don’t know; perhaps, perhaps not. But it stands as a non-metaphorical point, either way.)

By "not visible" do you mean "users won't see them at all" or "collapsed by default" like downvoted comments?

You mean for the new comments by unapproved users? Those won't be visible at all.

I'm noticing two types of comments I would consider problematic increasing lately -- poorly thought out or reasoned long posts, and snappy reddit-esque one-line comments. The former are more difficult to filter for, but dealing with the second seems much easier to automate -- for example, have a filter which catches any comment below a certain length to be approved manually (potentially with exceptions for established users).
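A sketch of such a filter might look like this (the length and karma cutoffs are arbitrary placeholders, not proposed values):

```python
MIN_AUTO_LENGTH = 140        # characters; placeholder threshold
ESTABLISHED_KARMA = 100      # placeholder cutoff for "established users"

def needs_manual_approval(comment_text: str, author_karma: int) -> bool:
    """Flag very short comments from non-established users for manual review."""
    too_short = len(comment_text.strip()) < MIN_AUTO_LENGTH
    established = author_karma >= ESTABLISHED_KARMA
    return too_short and not established
```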

There's also a general attitude that goes along with that-- in general, not reading full posts, nitpicking things to be snarky about... (read more)

Do you keep metrics on moderated (or just downvoted) posts from users, in order to analyze whether a focus on "new users" or "low total karma" users is sufficient?  

I welcome a bit stronger moderation, or at least encouragement of higher-minimum-quality posts and comments.  I'm not sure that simple focus on newness or karma for the user (as opposed to the post/comment) is sufficient.  

I don't know whether this is workable, but encouraging somewhat stronger downvoting norms, as opposed to ignoring and moving on, might be a way to distribute this gardening work a bit, so it's not all on the moderators.

Nice to hear the high standards you continue to pursue. I agree that LessWrong should set itself much higher standards than other communities, even than other rationality-centred or -adjacent communities.

My model of this big effort to raise the sanity waterline and prevent existential catastrophes contains three concentric spheres. The outer sphere is all of humanity; ever-changing yet more passive. Its public opinion is what influences most of the decisions of world leaders and companies, but this public opinion can be swayed by other, more directed force... (read more)

Disclaimer: I myself am a newer user from last year.

I think trying to change downvoting norms and behaviours could help a lot here and save you some workload on the moderation end. Generally, poor quality posters will leave if you ignore and downvote them. Recently, there has been an uptick in these posts, and of the ones I have seen, many are upvoted and engaged with. To me, that says users here are too hesitant to downvote. Of course, that raises the question of how to do that, and whether doing so is undesirable because it will broadly repel many new users, some of whom will not be "bad". Overall though, I think encouraging existing users to downvote should help keep the well-kept garden.

0Legionnaire
I think more downvoting being the solution depends on the goals. If our goal is only to maintain the current quality, that seems like a solution. If the goal is to grow in users and quality, I think diverting people to a real-time discussion location like Discord could be more effective. E.g. a new user coming to this site might not have any idea a particular article exists that they should read before writing and posting their 3-page thesis on why AI will/won't be great, only to have their work downvoted (it is insulting and off-putting to be downvoted), and in the end we may miss out on persuading/gaining people. In a chat, a quick back and forth could steer them in the right direction right off the bat.
2dxu
Agreed—which raises to mind the following question: does LW currently have anything like an official/primary public chatroom (whether hosted on Discord or elsewhere)? If not, it may be worth creating one, announcing it in a post (for visibility), and maintaining a prominently visible link to it on e.g. the sidebar (which is what many subreddits do).

I've always found it a bit odd that Alignment Forum submissions are automatically posted to LW. 

If you apply some of these norms, then imo there are questionable implications, i.e. it seems weird to say that one should have read the sequences in order to post about mechanistic interpretability on the Alignment Forum.

5habryka
The AI Alignment Forum was never intended as the central place for all AI Alignment discussion. It was founded at a time when basically everyone involved in AI Alignment had read the sequences, and the goal was to just have any public place for any alignment discussion.

Now that the field is much bigger, I actually kind of wish there was another forum where AI Alignment people could go to, so we would have more freedom in shaping a culture and a set of background assumptions that allow people to make further strides and create a stronger environment of trust.

I personally am much more interested in reading about mechanistic interpretability from people who have read the sequences. That one in particular is actually one of the ones where a good understanding of probability theory, causality and philosophy of science seems particularly important (again, it's not that important that someone has acquired that understanding via the sequences instead of some other means, but it does actually really benefit from a bunch of skills that are not standard in the ML or general scientific community).

I expect we will make some changes here in the coming months, maybe by renaming the forum or starting off a broader forum that can stand more on its own, or maybe just shutting down the AI Alignment Forum completely and letting other people fill that niche.
4the gears to ascension
similarly, I've been frustrated that medium quality posts on lesswrong about ai often get missed in the noise. I want alignmentforum longform scratchpad, not either lesswrong or alignmentforum. I'm not even allowed to post on alignmentforum! some recent posts I've been frustrated to see get few votes and generally less discussion:

  • https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency - this one deserves at least 35 imo
  • www.lesswrong.com/posts/fzGbKHbSytXH5SKTN/penalize-model-complexity-via-self-distillation
  • https://www.lesswrong.com/posts/bNpqBNvfgCWixB2MT/towards-empathy-in-rl-agents-and-beyond-insights-from-1
  • https://www.lesswrong.com/posts/LsqvMKnFRBQh4L3Rs/steering-systems
  • ... many more open in tabs I'm unsure about.
6Gordon Seidoh Worley
There's been a lot of really low quality posts lately, so I know I've been having to skim more and read fewer things from new authors. I think resolving general issues around quality should help valuable stuff rise to the top, regardless of whether it's on AF or not.
4Garrett Baker
[Justification for voting behavior, not intending to start a discussion. If I were I would have commented on the linked post] I’ve read the model distillation post, and it is bad, so strong disagree. I don’t think that person understands the arguments for AI risk and in particular don’t want to continuously reargue the “consequentialism is simpler, actually” line of discussion with someone who hasn’t read pretty basic material like risks from learned optimization.
2the gears to ascension
I still think this one is interesting and should get more attention, though: https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency
2the gears to ascension
fair enough. I've struck it from my comment.

What's a "new user"? It seems like this category matters for moderation but I don't see a definition of it. (Maybe you're hoping to come up with one?)

8Raemon
There's maybe two tiers of new user:

  • User that has never posted or commented before
  • User that has posted or commented a couple times, but not been given a stamp-of-approval by moderators, which means we don't have to pay special attention to them anymore.

Until a user has been approved, moderators at least glance at every comment and post they make.

Stackoverflow has a system where users with more karma get more power. When it comes to the job of deciding whether or not to approve comments of new users, I don't see why that power should be limited to a handful of mods. Have you thought about giving that right out at a specific amount of karma?

Nice! Through much of 2022, I was pretty worried that Lesswrong would eventually stop thriving for some reason or another. This post is a strong update in the direction of "I never had anything to worry about, because the mods will probably adapt ahead of time and stay well ahead of the curve on any issue".

I'm also looking forward to the results of the Sequences requirement. I've heard some good things about rationality engines contributing to humans solving alignment, but I'm not an expert on that approach.

How do we deal with low quality criticism?

Criticism that's been covered before should be addressed by citing prior discussion and flagging the post as a duplicate, unless the critic can point out some way their phrasing is better.
Language models are potentially very capable of making the process of citing dupes much more efficient. I'm going to talk to AI Objectives about this stuff at some point in the next week, and this is one of the technologies we're planning on discussing.
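One way this might work, sketched with a hypothetical embed_text() helper standing in for whatever embedding model is actually used (the 0.85 similarity threshold is an arbitrary illustrative choice):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_likely_duplicates(new_text: str, prior_discussions: dict, threshold: float = 0.85):
    """Return prior discussions whose embeddings are close to the new submission's."""
    new_vec = embed_text(new_text)  # embed_text is a stand-in, not a real API
    scored = [(title, cosine_similarity(new_vec, embed_text(body)))
              for title, body in prior_discussions.items()]
    return sorted([(t, s) for t, s in scored if s >= threshold],
                  key=lambda pair: pair[1], reverse=True)
```

The output would be a ranked list of candidate prior discussions that a commenter (or moderator) could cite when flagging a duplicate.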

(less relevant to the site, but general advice: In situations where a bad critic is... (read more)

2Raemon
This is a cool suggested use of language models. I'll think about whether/how to implement it on LW.

Cheering over here! This seems like a tricky problem and I'm so happy about how you seem to be approaching it. :)

I'm especially pleased with the stuff about "people need to read the sequences, but shit the sequences are long, which particular concepts are especially crucial for participation here?", as opposed to wishing people would read the sequences and then giving up because they're long and stylistically polarizing (which is a mental state I've often found myself occupying).

re: discussing criticism - I'd love to see tools to help refer back to previous discussions of criticism and request clarification of the difference. Though of course this can be an unhelpful thought-stopper, I think more often than not it's simply context retrieval and helps the new criticism clarify its additions. ("paper does not clarify contributions"?)

A thing I'm finding difficult at the moment is that criticism just isn't really indexed in a consistent way, because, well, everyone has subtly different frames on what they're criticizing and why.

It's possible there should be, like, 3 major FAQs that just try to be a comprehensive index on frequent disagreements people have with LW consensus, and people who want to argue about it are directed there to leave comments, and maybe over time the FAQ becomes even more comprehensive. It's a lot of work, but might be worth it anyway.

(I'm imagining such an FAQ mostly linking to external posts rather than answering everything itself)

My main takeaway is that I'm going to be co-authoring posts with people I'm trying to get into AI safety, so they aren't stonewalled by moderation.

Will the karma thing affect users who've joined before a certain period of time? Asking this because I joined quite a while ago but have only 4 karma right now.

Raemon:
It's likely to apply to users who joined after a cutoff of roughly 4 months ago (i.e. when ChatGPT was released). But this is just a guess, and we may make changes to the policy.

I'm strongly in favor of the Sequences requirement. If I had been firmly encouraged/pressured into reading the Sequences when I joined LW around March/April 2022, my life would have been much better and more successful by now. I suspect this would be the case for many other people. I've spent a lot of time thinking about ways that LW could set people up to steer themselves (and each other) towards self-improvement, like the Battle School in Ender's Game, but it seems like it's much easier to just tell people to read the Sequences.

Something that I'm worried abo... (read more)

mako yass:
It seems to me that building trust by somehow confirming that a person understands certain important background knowledge makes sense (some might call this knowledge a "religious story": those stories that inspire a certain social order wherever they're common knowledge), but I haven't ever seen a nice, efficient social process for confirming the presence of knowledge within a community. It always seems very ad hoc. The processes I've seen either demand very uncritical, entry-level understandings of the religious stories, or just randomly misfire sometimes, or are vulnerable to fakers who have no deep or integrated understanding of the stories; or sometimes there will be random holes in people's understandings of the stories that cause problems even when everyone is acting in good faith. Maybe this stuff just inherently requires good old-fashioned time and effort, and I should stop looking for an easy way through.

Thanks. It's gotten to the point where I have completely hidden the "AI" tag from my list of latest posts.

Highly recommend this; I wish the UI were more discoverable. For people who want this themselves: you can change how posts are weighted for you by clicking "customize feed" to the right of "front page". You'll be shown some default tags, and can also add more. If you hover over a tag you can set it to hidden, reduced (posts with the tag are treated as having 50% less karma), promoted (+25 karma), or a custom amount.

It occurs to me that I don't know how these tag modifiers stack on a given post; maybe staff can clarify?

All of the additive modifiers that apply (e.g. +25 karma for each applicable tag the post has) are added together and applied first; then all of the multiplicative modifiers (i.e. the "reduced" option) are multiplied together and applied; then time decay (which is also multiplicative) is applied last. The function name is filterSettingsToParams.
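
A minimal sketch of that ordering (this is my reading of the description above, not the real filterSettingsToParams; the field names and the timeDecay argument are illustrative placeholders):

```typescript
// Sketch of how tag filter modifiers might stack, per the description above.
interface TagModifier {
  additiveKarma: number;   // e.g. +25 for a "promoted" tag, 0 otherwise
  multiplier: number;      // e.g. 0.5 for a "reduced" tag, 1 otherwise
}

function adjustedScore(
  baseKarma: number,
  modifiers: TagModifier[],
  timeDecay: number,       // multiplicative decay factor, applied last
): number {
  const additive = modifiers.reduce((sum, m) => sum + m.additiveKarma, 0);
  const multiplicative = modifiers.reduce((prod, m) => prod * m.multiplier, 1);
  return (baseKarma + additive) * multiplicative * timeDecay;
}

// Example: a 100-karma post with one promoted (+25) and one reduced (x0.5) tag
// would be scored as (100 + 25) * 0.5 * timeDecay.
```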

Raemon:
I think @jimrandomh is most likely to know.

Ben Pace:
I have it at -200.


I wonder if there would be a use for an online quiz, of the sort that asks 10 questions picked randomly from several hundred possible questions, and which records the time taken to complete the quiz and the number of times that person has started an attempt at it (with uniqueness of person approximated by IP address, email address or, ideally, LessWrong username)?

Not as prescriptive as tracking which sequences someone has read, but perhaps a useful guide (as one factor among many) to the time a user has invested in getting up to date on what's already been written here about rationality?
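
A hedged sketch of the mechanics being suggested (random selection from a question pool, plus per-person attempt and timing records; everything here is hypothetical, including using the username as the identity key):

```typescript
// Sketch of the suggested quiz: pick 10 random questions from a large pool,
// and record attempt count and completion time per (approximate) identity.
interface QuizQuestion {
  id: string;
  prompt: string;
}

interface QuizAttempt {
  userKey: string;       // username, email, or IP address as a fallback
  startedAt: number;     // ms since epoch
  completedAt?: number;
  attemptNumber: number;
}

const attemptsByUser = new Map<string, QuizAttempt[]>();

function startQuiz(
  pool: QuizQuestion[],
  userKey: string,
  n = 10,
): { questions: QuizQuestion[]; attempt: QuizAttempt } {
  // Fisher-Yates shuffle on a copy, then take the first n questions.
  const shuffled = [...pool];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const prior = attemptsByUser.get(userKey) ?? [];
  const attempt: QuizAttempt = {
    userKey,
    startedAt: Date.now(),
    attemptNumber: prior.length + 1,
  };
  attemptsByUser.set(userKey, [...prior, attempt]);
  return { questions: shuffled.slice(0, n), attempt };
}

// Marks the attempt finished and returns the time taken in milliseconds.
function completeQuiz(attempt: QuizAttempt): number {
  const completedAt = Date.now();
  attempt.completedAt = completedAt;
  return completedAt - attempt.startedAt;
}
```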