
Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences
Step by Step Metacognition
Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Raemon's Shortform · 23 karma · 8y · 578 comments

Comments
Balsa Update: Springtime in DC
Raemon · 6h

Huh, the crosspost is coming from Zvi's WordPress blog, which looks different: https://thezvi.wordpress.com/2025/07/08/balsa-update-springtime-in-dc/

But I just copy-pasted the Substack version in.

Raemon's Shortform
Raemon · 6h

RobertM made this table for another discussion on this topic; it looks like the actual average is maybe more like "8, as of last month", although on a noticeable uptick.

You can see that the average used to be < 1.

I'm slightly confused about this, because the number of users we have to process each morning is consistently more like 30, and I feel like we reject more than half (probably more than 3/4) for being LLM slop. But that might be conflating some clusters of users, as well as "it's annoying to do this task, so we often put it off a bit, and that results in them bunching up." (Although it's pretty common to see numbers more like 60.)

[edit: Robert reminds me this doesn't include comments, which were another 80 last month]

Again, you can look at https://www.lesswrong.com/moderation#rejected-posts to see the actual content and verify numbers/quality for yourself.

Raemon's Shortform
Raemon · 8h

We get like 10-20 new users a day who write a post describing themselves as a case study of having discovered an emergent, recursive process while talking to LLMs. The writing generally looks AI-generated. The evidence is usually a fairly standard instance of prompting an LLM into roleplaying an emergently aware AI.

It'd be kinda nice if there were a canonical post specifically talking them out of their delusional state.

If anyone feels like taking a stab at that, you can look at the Rejected Section (https://www.lesswrong.com/moderation#rejected-posts) to see what sort of stuff they usually write.

Applying right-wing frames to AGI (geo)politics
Raemon · 8h

They felt to me like comments that were theoretically fine, but which had the smell of "the first very slight drama-escalation that tends to lead to Demon Threads."

Applying right-wing frames to AGI (geo)politics
Raemon · 9h · Moderator Comment

Mod note: I get the sense that some commenters here are bringing a kind of... naive political partisanship background vibe (mostly not too overt, but it felt off enough that I felt the need to comment). I don't have a specific request, but make sure to read the Political Prerequisites sequence, and I recommend trying to steer towards "figure out useful new things", or at least having the most productive version of the conversation you're trying to have.

(that doesn't mean there won't/shouldn't be major frame disagreements or political fights here, but, like, lean away from drama on the margin)

Balsa Update: Springtime in DC
Raemon · 10h

I think the original just also had very large paragraphs and not-actual-footnotes.

Energy-Based Transformers are Scalable Learners and Thinkers
Raemon · 11h

I do sure wish that abstract were either Actually Short™ or broken into paragraphs. (I'm assuming you didn't write it, but it's usually easy to find natural paragraph breaks on the authors' behalf.)

Shutdown Resistance in Reasoning Models
Raemon · 1d

(hurray for thoughtful downvote explanations)

A case for courage, when speaking of AI danger
Raemon · 1d

I don't think this post is trying to hide Nate's identity; he's just using his longstanding LessWrong account. Evidence: his name's on the book cover!

Art, rationality, and the "feeling" for rightness
Raemon · 1d

I think this is actually already part of the LessWrong-style-rationalist zeitgeist. Taste, aesthetics, focusing, and belief reporting are some keywords to look at.

(I also think this post seems not to understand what LessWrong's conception of rationality is about, although I'm not 100% sure what you're assuming about it. Vlad's comment seems like a good starting point for that.)

124"Buckle up bucko, this ain't over till it's over."
4d
21
112"What's my goal?"
7d
7
29Hiring* an AI** Artist for LessWrong/Lightcone
9d
6
32Social status games might have "compute weight class" in the future
2mo
7
50What are important UI-shaped problems that Lightcone could tackle?
2mo
22
133Anthropic, and taking "technical philosophy" more seriously
4mo
29
59"Think it Faster" worksheet
5mo
8
86Voting Results for the 2023 Review
5mo
3
99C'mon guys, Deliberate Practice is Real
5mo
25
88Wired on: "DOGE personnel with admin access to Federal Payment System"
5mo
45
Load More
Wikitag Contributions

Guide to the LessWrong Editor · 3mo
Guide to the LessWrong Editor · 3mo
Guide to the LessWrong Editor · 3mo
Guide to the LessWrong Editor · 3mo · (+317)
Sandbagging (AI) · 3mo
Sandbagging (AI) · 3mo · (+88)
AI "Agent" Scaffolds · 3mo
AI "Agent" Scaffolds · 3mo · (+340)
AI Products/Tools · 3mo · (+121)
Language Models (LLMs) · 4mo