Before we get to the receipts, let's talk epistemics.
This community prides itself on "rationalism." Central to that framework is the commitment to evaluate how reality played out against our predictions.
"Bayesian updating," as we so often remind each other.
If we're serious about this project, we should expect our community to maintain rigorous epistemic standards—not just individually updating our beliefs to align with reality, but collectively fostering a culture where those making confident pronouncements about empirically verifiable outcomes are held accountable when those pronouncements fail.
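For concreteness, here is a minimal sketch of what a single Bayesian update looks like when you actually do it, rather than just invoke it. Everything in it is hypothetical: the hypothesis, the evidence, and all the probabilities are made-up illustration, not anyone's actual credences. The point is only that an update is a concrete, checkable operation.

```python
# Toy illustration of one Bayesian update (all numbers are hypothetical).
# Hypothesis H: "China fields a frontier-competitive model within a few years."
# Evidence E: a Chinese lab releases a model near the frontier.

prior_h = 0.20              # hypothetical prior credence in H
p_e_given_h = 0.80          # how expected the evidence is if H is true
p_e_given_not_h = 0.10      # how expected the evidence is if H is false

# Bayes' rule: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"prior: {prior_h:.2f} -> posterior: {posterior_h:.2f}")
# With these made-up numbers the posterior lands around 0.67:
# a large, visible shift that you would expect people to acknowledge.
```

With numbers like these, a single piece of evidence triples your credence. If your stated beliefs don't move after the evidence arrives, either your likelihoods were wrong or you aren't actually updating.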
With that in mind, I've compiled in the appendix a selection of predictions about China and AI from prominent community voices. The pattern is clear: a systematic underestimation of China's technical capabilities, prioritization of AI development, and ability to advance despite (or perhaps because of) government involvement.
The interesting thing isn't just that these predictions were wrong. It's that they were confidently wrong in a specific direction, and, crucially, that wrongness has gone largely unacknowledged.
How many of you incorporated these assumptions into your fundamental worldview? How many based your advocacy for AI "pauses" or "slowdowns" on the belief that Western labs were the only serious players? How many discounted the possibility that misalignment risk might manifest first through a different technological trajectory than the one pursued by OpenAI or Anthropic?
If you're genuinely concerned about misalignment, China's rapid advancement represents exactly the scenario many claimed to fear: potentially less-aligned AI development accelerating outside the influence of Western governance structures. This seems like the most probable vector for the "unaligned AGI" scenarios many have written extensive warnings about.
And yet, where is the community updating? Where are the post-mortems on these failed predictions? Where is the reconsideration of alignment strategies in light of demonstrated reality?
Collective epistemics require more than just nodding along to the concept of updating. They require actually doing the work when our predictions fail.
What do YOU think?
Appendix:
Bad Predictions on China and AI
"No...There is no appreciable risk from non-Western countries whatsover" - @Connor Leahy
"China has neither the resources nor any interest in competing with the US on developing artificial general intelligence" = @Eva_B
dear ol' @Eliezer Yudkowsky
Anonymous (I guess we know why)
https://www.lesswrong.com/posts/ysuXxa5uarpGzrTfH/china-ai-forecasts
https://www.lesswrong.com/posts/KPBPc7RayDPxqxdqY/china-hawks-are-manufacturing-an-ai-arms-race
Various folks on Twitter...
Good Predictions / Open-mindedness
https://www.lesswrong.com/posts/xbpig7TcsEktyykNF/thiel-on-ai-and-racing-with-china
"My first thought is, it's not clear why you care about this. This is your first post ever, and your profile has zero information about you. Do you consider yourself a Less Wrong rationalist? Are you counting on the rationality community to provide crucial clarity and leadership regarding AI and AI policy? "
I tried posting in the past but was limited by the karma wall. But thanks for questioning my motives.
I am a game theorist and researcher, and yes, I consider myself broadly aligned with rationalism, though with a strong preference for skeptical consequentialism over overconfident utilitarianism. Is there no place for consequentialists here?
"Are you counting on the rationality community to provide crucial clarity and leadership regarding AI and AI policy?"
The rationalist community is extremely influential in both AI development and AI policy. Do you disagree?
"My second thought is, if a big rethink is needed, it should also include the fact that in Trump 2.0, the US elected a revolutionary regime whose policies include AI accelerationism."
This is (a) irrelevant to the post, which is about why we didn't update re: China, and (b) naive and borderline defensive.
Not very rational of you.
If you couldn't forecast that Republicans would favor less regulation, I don't know, man, you probably shouldn't be forecasting publicly. That's more a statement about the quality of your Bayes machine than about the world.
"Maybe a Chinese startup briefly got ahead of its American rivals in the domain of reasoning LLMs; but most of the contenders are still within American borders, and US AI policy is now ostensibly in the hands of a crypto VC who is a long-time buddy of Elon's."
This is literally cope. Go on Twitter for five seconds and you'll find people freaking out about Qwen. Have you heard of Manus? Again, this says more about your estimation engine than about the world.