Before we get to the receipts, let's talk epistemics.
This community prides itself on "rationalism." Central to that framework is the commitment to evaluate how reality played out against our predictions.
"Bayesian updating," as we so often remind each other.
If we're serious about this project, we should expect our community to maintain rigorous epistemic standards—not just individually updating our beliefs to align with reality, but collectively fostering a culture where those making confident pronouncements about empirically verifiable outcomes are held accountable when those pronouncements fail.
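To make that concrete, here is a minimal sketch of what a single Bayesian update looks like in practice. Every number in it is made up purely for illustration; none of them is a claim about anyone's actual credences.

```python
# A minimal sketch of one Bayesian update. All numbers are hypothetical,
# chosen only to illustrate the mechanics, not anyone's actual beliefs.

# H = "China will not field a frontier-class model in the near term"
# E = the evidence actually observed (a competitive Chinese reasoning model)

prior_h = 0.90          # hypothetical prior confidence in H
p_e_given_h = 0.05      # hypothetical: chance of seeing E if H were true
p_e_given_not_h = 0.80  # hypothetical: chance of seeing E if H were false

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"prior:     {prior_h:.2f}")     # 0.90
print(f"posterior: {posterior_h:.2f}") # 0.36
```

The specific numbers don't matter; the discipline does. Evidence that strongly favors the negation of a confident prediction should produce a large, visible drop in credence, and that visible drop is exactly what seems to be missing.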
With that in mind, I've compiled in the appendix a selection of predictions about China and AI from prominent community voices. The pattern is clear: a systematic underestimation of China's technical capabilities, its prioritization of AI development, and its ability to advance despite (or perhaps because of) government involvement.
What's interesting isn't just that these predictions were wrong. It's that they were confidently wrong in a specific direction, and, crucially, that this wrongness has gone largely unacknowledged.
How many of you incorporated these assumptions into your fundamental worldview? How many based your advocacy for AI "pauses" or "slowdowns" on the belief that Western labs were the only serious players? How many discounted the possibility that misalignment risk might manifest first through a different technological trajectory than the one pursued by OpenAI or Anthropic?
If you're genuinely concerned about misalignment, China's rapid advancement represents exactly the scenario many claimed to fear: potentially less-aligned AI development accelerating outside the influence of Western governance structures. This seems like the most probable vector for the "unaligned AGI" scenarios many have written extensive warnings about.
And yet, where is the community updating? Where are the post-mortems on these failed predictions? Where is the reconsideration of alignment strategies in light of demonstrated reality?
Collective epistemics require more than just nodding along to the concept of updating. They require actually doing the work when our predictions fail.
What do YOU think?
Appendix:
Bad Predictions on China and AI
"No...There is no appreciable risk from non-Western countries whatsover" - @Connor Leahy
"China has neither the resources nor any interest in competing with the US on developing artificial general intelligence" = @Eva_B
dear ol @Eliezer Yudkowsky
Anonymous (I guess we know why)
https://www.lesswrong.com/posts/ysuXxa5uarpGzrTfH/china-ai-forecasts
https://www.lesswrong.com/posts/KPBPc7RayDPxqxdqY/china-hawks-are-manufacturing-an-ai-arms-race
Various folks on Twitter...
Good Predictions / Open-mindedness
https://www.lesswrong.com/posts/xbpig7TcsEktyykNF/thiel-on-ai-and-racing-with-china
Comments:
My first thought is, it's not clear why you care about this. This is your first post ever, and your profile has zero information about you. Do you consider yourself a Less Wrong rationalist? Are you counting on the rationality community to provide crucial clarity and leadership regarding AI and AI policy?
My second thought is, if a big rethink is needed, it should also include the fact that in Trump 2.0, the US elected a revolutionary regime whose policies include AI accelerationism. I don't think anyone saw that coming either, and I think that's more consequential than DeepSeek-r1. Maybe a Chinese startup briefly got ahead of its American rivals in the domain of reasoning LLMs; but most of the contenders are still within American borders, and US AI policy is now ostensibly in the hands of a crypto VC who is a long-time buddy of Elon's.
OK, thanks for the information! By the way, I would say that most people active on Less Wrong disagree with some of the propositions that are considered to be characteristic of the Less Wrong brand of rationalism. Disagreement doesn't have to be a problem. What set off my alarms was your adversarial debut - the rationalists are being irrational! Anyway, my opinion on that doesn't matter since I have no authority...