LessWrong is the central discussion space for the Rationalist subculture. The Rationalists write extensively about what they expect from the world, so I in turn have expectations of them. Each point below pairs my expectation with what I actually see.

Track Records and Accountability

I imagine systems where you could hover over someone's username to see their prediction track record. Perhaps there would be prediction market participation stats or calibration scores displayed prominently. High status could be tied to demonstrated good judgment through special user flair for accurate forecasters or annual prediction competitions.
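To make that concrete, here is a minimal sketch, purely hypothetical, of the kind of calibration number such a hover card could compute. It uses the standard Brier score; nothing like this exists on the site.

```python
# Hypothetical sketch of a calibration score for a user's track record.
# Not an actual LessWrong feature. Uses the Brier score: the mean squared
# distance between stated probabilities and outcomes (0 = perfect).

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """forecasts: (stated probability, did it happen?) pairs."""
    return sum((p - happened) ** 2 for p, happened in forecasts) / len(forecasts)

# A user who said 90%, 20%, and 70% on three resolved questions:
record = [(0.9, True), (0.2, False), (0.7, True)]
print(f"Brier score: {brier_score(record):.3f}")  # 0.047; always guessing 50% gives 0.25
```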

I do not see this. There is karma, which is costly to accumulate but not that hard to game by writing lots of comments and articles. Likewise there is the yearly review (currently underway), which looks back at older posts and judges whether they have stood the test of time.

Neither of these is a clear track record for me. I chatted to Oliver Habryka and he said “I think contributing considerations is usually much more valuable than making predictions.” As I understand it, he sees getting top contributors to build forecasting track records as less useful than careful writing and the careful reading of such writing. I see forecasting as a useful focus and a clearer way to rank individuals.

Where do you fall?

Anonymous Ranking Mechanisms

I expected LessWrong to have a system for anonymously voting on replies, letting users signal disagreement or point out errors without social cost. Sometimes it feels very expensive to write a disagreeing comment, but an upvote or downvote is very cheap.

This is one area where reality has matched or exceeded expectations. The LessWrong implementation, which separates agreement voting from quality voting, is elegant and effective. The emoji reactions you can add are great too. In this regard I find LessWrong a pleasure to comment on, and I often wish Twitter or Substack had similar features.
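As a minimal sketch of the idea (my reconstruction, not LessWrong's actual data model), the two axes amount to keeping separate tallies per comment:

```python
# Sketch of two-axis voting: quality (karma) and agreement are tallied
# separately, so "well-argued but wrong" is an expressible judgement.
# My reconstruction of the idea, not LessWrong's actual schema.

from dataclasses import dataclass

@dataclass
class CommentVotes:
    karma: int = 0       # quality axis: is this worth reading?
    agreement: int = 0   # epistemic axis: do voters think it is true?

votes = CommentVotes()
votes.karma += 1        # upvote: a well-argued comment...
votes.agreement -= 1    # ...that I nonetheless think is mistaken
print(votes)            # CommentVotes(karma=1, agreement=-1)
```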

Experimental Culture

My vision of LessWrong includes a thriving culture of hands-on experimentation and empirical investigation: regular posts about chemistry experiments, engineering projects, and systematic data collection about important life decisions. Given the trust within the community, coordinated efforts to gather useful data, through surveys about career outcomes, child-raising approaches, or life satisfaction in different cities, seem natural.

Instead of the hands-on experimentation I expected, what I see is a culture heavily focused on long-form theoretical posts. People write extensive pieces filtered through their own models and frameworks. While these can be valuable, they seem to crowd out more empirical, experimental content.

I think it would be healthier for rationalists to do experiments at home, write short posts about personal experience, and make falsifiable claims about geopolitics.

There are some notable successes here, like the group that tried to do its own vaccine work, but such efforts feel like the minority to me.

Consensus-Building Tools

The platform could feature sophisticated argument-mapping software[1] and tools for synthesising perspectives across multiple posts. Visual representations of debate structures, methods to track how positions evolve over time, and systems for identifying key points of agreement and disagreement would be standard features. These tools would facilitate building toward shared understanding rather than just collecting individual perspectives.
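As a minimal sketch of what I mean (illustrative only; no such tool exists on LessWrong), an argument map is just claims linked by support and attack edges:

```python
# Illustrative sketch of an argument map: claims linked by support/attack
# edges, rendered as an indented tree so cruxes are visible at a glance.
# Hypothetical structure, not an existing LessWrong feature.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    supports: list["Claim"] = field(default_factory=list)  # pro arguments
    attacks: list["Claim"] = field(default_factory=list)   # con arguments

def render(claim: Claim, prefix: str = "", depth: int = 0) -> None:
    """Print the debate structure as an indented tree."""
    print("  " * depth + prefix + claim.text)
    for sub in claim.supports:
        render(sub, "[+] ", depth + 1)
    for sub in claim.attacks:
        render(sub, "[-] ", depth + 1)

root = Claim("LessWrong should display forecasting track records")
root.supports.append(Claim("Forecasting accuracy is a clear ranking signal"))
root.attacks.append(Claim("Contributing considerations beats making predictions"))
render(root)
```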

At present most discussions remain in traditional comment threads, making it difficult to track the evolution of ideas or find points of consensus across multiple posts. The wiki is sparsely populated and not good for understanding the overall flow of discussion.

Dialogues were an interesting experiment, but they don’t seem to have worked.

AI, Crypto, and X-Risk Content

The platform could host comprehensive coverage of existential risks, emerging technologies, and complex coordination challenges. This would encompass detailed analysis of AI development trajectories, cryptocurrency governance mechanisms, and other technological risks. Regular updates and careful tracking of developments in these fields would keep the community informed and engaged.

This is another area where reality somewhat matches expectations. There's extensive discussion of AI safety, existential risks, and emerging technologies. However, the focus can sometimes feel narrow, with certain topics (like AI alignment) receiving intense attention while others (like biosecurity or environmental risks) get less coverage. I would like to see more discussion of geopolitics.

Unexpected Successes

While I've focused on gaps between expectations and reality, there are also areas where the rationalist community has shown strengths I hadn't anticipated:

Moderation

In some ways, moderation seems much better than I expected. I disagree with LessWrong moderation decisions more rarely than with those of the EA Forum, and LessWrong seems to be consumed by drama less often. Often when drama is consuming the EA Forum, LessWrong seems... fine. It is neither annoyingly politically correct, nor endlessly edgy, nor full of racism. This is an impressive achievement.

In particular, I like rate limiting as a tool. If people want to stay engaged they can, but there is an incentive for them to fix their behaviour rather than disappear and come back when the ban is over, as bad as ever.

Community Space Management

The Lighthaven campus has been run remarkably well, which wasn't something I would have predicted. I didn't think there was a particular reason to expect rationalists to excel at managing physical spaces, yet they've created a lovely conference venue.

Props!

Platform Aesthetics

LessWrong's design aesthetic is also not something I expected. I like the layout, the effort to keep the screen uncluttered, and the AI art. The Best of LessWrong page is quite beautiful.

Why These Gaps Matter

Rationalists are big on meaning what they say. If they mean what they say in the Sequences, I would like to see more track records, more contact with reality in what the community writes about, and better ways of having discussions than long-form posts.

What do you think?

Have I judged fairly? What have I missed or got wrong? Why is LessWrong like this, do you think?

  1. ^

    Some people claim I am obsessed with argument mapping. I am not sure they are wrong. Somehow it seems so obvious to me that it's a thing I want. How do people disagree and where does their disagreement flow from? 

Comments (4)

(COI: I’m a lesswrong power-user)

Instead of the hands-on experimentation I expected, what I see is a culture heavily focused on long-form theoretical posts.

FWIW if you personally want to see more of those you can adjust the frontpage settings to boost the posts with a "practical" tag. Or for a dedicated list: https://www.lesswrong.com/tag/practical?sortedBy=magic. I agree that such posts are currently a pretty small fraction of the total, for better or worse. But maybe the absolute number is a more important metric than the fraction?

I’ve written a few “practical” posts on LW, and I generally get very useful comments on them.

Consensus-Building Tools

I think these mostly have yet to be invented.

Consensus can be SUPER HARD. In my AGI safety work, on ~4 occasions I’ve tried to reconcile my beliefs with someone else, where it wound up being the main thing I was doing for about an entire month, just to get to the point where I could clearly articulate what the other person believed and why I disagreed with it. As for actually reaching consensus with the other person, I gave up before getting that far! (See e.g. here, here, here)

I don't really know what would make that kind of thing easier but I hope someone figures it out!

“It is more valuable to provide accurate forecasts than add new, relevant, carefully-written considerations to an argument”

On what margin? In what context? I hope we can all think of examples where one is the more valuable, and examples where the other is. If Einstein had predicted the Eddington experiment results but not explained the model underlying his prediction, I don’t think anyone would have gotten much out of it, and probably nobody would have bothered doing the Eddington experiment in the first place.

Manifold, Polymarket, etc. already exist, and I'm very happy they do!! I think LessWrong is filling a different niche, and that's fine.

High status could be tied to demonstrated good judgment through special user flair for accurate forecasters or annual prediction competitions.

As for reputation, I think the idea is that you should judge a comment or post by its content and not by the karma of the person who wrote it. Comment karma and post karma are on display, but by contrast user karma is hidden behind a hover or click. That seems good to me. I myself write posts and comments of widely varying quality, and other people sure do too.

An important part of learning is feeling free and safe to be an amateur messing around with half-baked ideas in a new area—overly-central “status” systems can sometimes discourage that kind of thing, which is bad. (Cf. academia.)

(I think there’s a mild anticorrelation between my own posts’ karma and how objectively good and important they are, see here, so it’s a good thing that I don’t care too much about karma!) (Of course the anticorrelation doesn’t mean high karma is bad, rather it’s from conditioning on a collider.)
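A toy simulation makes the collider point concrete (an illustration under assumed numbers, not Byrnes' own): if karma and importance are independent, but posts only get noticed when their sum is high, the noticed posts show a negative correlation.

```python
# Toy simulation of conditioning on a collider (Berkson's paradox).
# Assumption for illustration: karma and importance are independent
# standard normals, but a post is only "noticed" if their sum is high.

import random

random.seed(0)
noticed = []
for _ in range(100_000):
    karma = random.gauss(0, 1)
    importance = random.gauss(0, 1)
    if karma + importance > 1.0:          # selection on the collider
        noticed.append((karma, importance))

n = len(noticed)
mean_k = sum(k for k, _ in noticed) / n
mean_i = sum(i for _, i in noticed) / n
cov = sum((k - mean_k) * (i - mean_i) for k, i in noticed) / n
var_k = sum((k - mean_k) ** 2 for k, _ in noticed) / n
var_i = sum((i - mean_i) ** 2 for _, i in noticed) / n
print(f"correlation among noticed posts: {cov / (var_k * var_i) ** 0.5:.2f}")
# ~ -0.6: negative, even though the underlying traits are independent
```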

For long-time power users like me, I can benefit from the best possible “reputation system”, which is actually knowing most of the commenters. That’s great because I don’t just know them as "good" or "bad", but rather "coming from such-and-such perspective" or "everything they say sounds nuts, and usually is, but sometimes they have some extraordinary insight, and I should especially be open-minded to anything they say in such-and-such domain".

If there were a lesswrong prediction competition, I expect that I probably wouldn’t participate because it would be too time-consuming. There are some people where I would like them to take my ideas seriously, but such people EITHER (1) already take my ideas seriously (e.g. people really into AGI safety) OR (2) would not care whether or not I have a strong forecasting track-record (e.g. Yann LeCun).

There’s also a question about cross-domain transferability of good takes. If we want discourse about near-term geopolitical forecasting, then of course we should platform people with a strong track record of near-term geopolitical forecasting. And if we want discourse about the next ML innovation, then we should platform people with a strong track record of coming up with ML innovations. I’m most interested in neither of those, but rather AGI / ASI, which doesn’t exist yet. Empirically, in my opinion, “ability to come up with ML innovations” transfers quite poorly to “ability to have reasonable expectations about AGI / ASI”. I’m thinking of Yann LeCun for example. What about near-term geopolitical forecasting? Does that transfer? Time will tell—mostly when it’s already too late. At the very least, there are skilled forecasters who strongly disagree with each other about AGI / ASI, so at least some of them are wrong.

(If someone in 1400 AD were quite good at predicting the next coup or war or famine, I wouldn’t expect them to be particularly good at predicting how the industrial revolution would go down. Right? And I think AGI / ASI is kinda like the latter.)

So anyway, probably best to say that we can’t predict a priori who is going to have good takes on AGI, just based on track-record in some different domain. So that’s yet another reason to not have a super central and visible personal reputation system, IMO.

I think there’s a mild anticorrelation between [Steven Byrnes'] posts’ karma and how objectively good and important they are...

I agree that this is true of posts that deviate from trendy topics and/or introduce new ideas, and it is especially true of your posts.

For long-time power users like me, I can benefit from the best possible “reputation system”, which is actually knowing most of the commenters.

As another power user, I feel this benefit too.

There’s also a question about cross-domain transferability of good takes.

Agreed. That isn't a difference between contributing "considerations" and "predictions" (using Habryka's reported distinction). There are people who contribute good analysis about geopolitics. Others contribute good analysis about ML innovations. Does that transfer to analysis about AGI / ASI? Time will tell - mostly when it's already too late. We will try anyway.

In terms of predicting the AI revolution, the most important consideration is what will happen to power. Will it be widely or narrowly distributed? How much will be retained by humans? More importantly, can we act in the world to change any of this? These are similar to geopolitical questions, so I welcome analysis and forecasts from people with a proven track record in geopolitics.

The industrial revolution is a good parallel. Nobody in 1760 (let alone 1400) predicted the detailed impacts of the industrial revolution. Some people predicted that population and economic growth would increase. Adam Smith had some insights into power shifts (Claude adds Benjamin Franklin, François Quesnay and James Steuart). That's about the best I expect to see for the AI revolution. It's not nothing.

“I think contributing considerations is usually much more valuable than making predictions.”

I think he's absolutely right. Seeing the predictions of top predictors should be a feature of forecasting sites, but I think the crossover with the more conceptual and descriptive posts on LessWrong is pretty minimal.