LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
RobertM made this table for another discussion on this topic. It looks like the actual average is more like 8 as of last month, though on a noticeable uptick.
You can see that the average used to be < 1.
I'm slightly confused about this, because the number of users we have to process each morning is consistently more like 30 (and it's pretty common to see numbers more like 60), and I feel like we reject more than half, probably more than 3/4, for being LLM slop. But that might be conflating some clusters of users, as well as the fact that it's annoying to do this task, so we often put it off a bit and that results in them bunching up.
[edit: Robert reminds me this doesn't include comments, which were another 80 last month]
Again you can look at https://www.lesswrong.com/moderation#rejected-posts to see the actual content and verify numbers/quality for yourself.
We get something like 10-20 new users a day who write a post describing themselves as a case study of having discovered an emergent, recursive process while talking to LLMs. The writing generally looks AI-generated. The evidence usually amounts to a sort of standard "prompt the LLM into roleplaying an emergently aware AI."
It'd be kinda nice if there were a canonical post specifically talking them out of their delusional state.
If anyone feels like taking a stab at that, you can look at the Rejected Section (https://www.lesswrong.com/moderation#rejected-posts) to see what sort of stuff they usually write.
They felt to me like "comments that were theoretically fine, but they had the smell of 'the first very slight drama-escalation that tends to lead to Demon Threads'".
Mod note: I get the sense that some commenters here are bringing a kind of... naive political-partisanship background vibe (mostly not too overt, but off enough that I felt the need to comment). I don't have a specific request, but make sure to read the Political Prerequisites sequence, and I recommend trying to steer toward "figure out useful new things," or at least having the most productive version of the conversation you're trying to have.
(that doesn't mean there won't/shouldn't be major frame disagreements or political fights here, but, like, lean away from drama on the margin)
I think the original just also had very large paragraphs and not-actual-footnotes
I do sure wish that abstract were either Actually Short™, or broken into paragraphs. (I'm assuming you didn't write it, but it's usually easy to find natural paragraph breaks on the author's behalf.)
(hurray for thoughtful downvote explanations)
I don't think this post is trying to hide Nate's identity, he's just using his longstanding LessWrong account. Evidence: his name's on the book cover!
I think this is actually already part of the LessWrong-style-rationalist zeitgeist. Taste, aesthetics, Focusing, and belief reporting are some keywords to look at.
(I think this post also seems to not understand what LessWrong's conception of rationality is about, although I'm not 100% sure what you're assuming about it. Vlad's comment seems like a good starting point for that)
Huh, the crosspost is coming from Zvi's WordPress blog, which looks different: https://thezvi.wordpress.com/2025/07/08/balsa-update-springtime-in-dc/
But I just copy-pasted the Substack version in.