Before we get to the receipts, let's talk epistemics.
This community prides itself on "rationalism." Central to that framework is the commitment to evaluate how reality played out against our predictions.
"Bayesian updating," as we so often remind each other.
If we're serious about this project, we should expect our community to maintain rigorous epistemic standards—not just individually updating our beliefs to align with reality, but collectively fostering a culture where those making confident pronouncements about empirically verifiable outcomes are held accountable when those pronouncements fail.
With that in mind, I've compiled in the appendix a selection of predictions about China and AI from prominent community voices. The pattern is clear: a systematic underestimation of China's technical capabilities, prioritization of AI development, and ability to advance despite (or perhaps because of) government involvement.
The interesting thing isn't just that these predictions were wrong. It's that they were confidently wrong in a specific direction, and, crucially, that the wrongness has gone largely unacknowledged.
How many of you incorporated these assumptions into your fundamental worldview? How many based your advocacy for AI "pauses" or "slowdowns" on the belief that Western labs were the only serious players? How many discounted the possibility that misalignment risk might manifest first through a different technological trajectory than the one pursued by OpenAI or Anthropic?
If you're genuinely concerned about misalignment, China's rapid advancement represents exactly the scenario many claimed to fear: potentially less-aligned AI development accelerating outside the influence of Western governance structures. This seems like the most probable vector for the "unaligned AGI" scenarios many have written extensive warnings about.
And yet, where is the community updating? Where are the post-mortems on these failed predictions? Where is the reconsideration of alignment strategies in light of demonstrated reality?
Collective epistemics require more than just nodding along to the concept of updating. They require actually doing the work when our predictions fail.
What do YOU think?
Appendix:
Bad Predictions on China and AI
"No...There is no appreciable risk from non-Western countries whatsover" - @Connor Leahy
"China has neither the resources nor any interest in competing with the US on developing artificial general intelligence" = @Eva_B
dear ol' @Eliezer Yudkowsky
Anonymous (I guess we know why)
https://www.lesswrong.com/posts/ysuXxa5uarpGzrTfH/china-ai-forecasts
https://www.lesswrong.com/posts/KPBPc7RayDPxqxdqY/china-hawks-are-manufacturing-an-ai-arms-race
Various folks on Twitter...
Good Predictions / Open-mindedness
https://www.lesswrong.com/posts/xbpig7TcsEktyykNF/thiel-on-ai-and-racing-with-china
I don't think it's fair to say I made a bad prediction here.
Here's the full context of my quote: "The report clocks in at a cool 793 pages with 344 endnotes. Despite this length, there are only a handful of mentions of AGI, and all of them are in the sections recommending that the US race to build it.
In other words, there is no evidence in the report to support Helberg's claim that "China is racing towards AGI."
Nonetheless, his quote goes unchallenged into the 300-word Reuters story, which will be read far more than the 800-page document. It has the added gravitas of coming from one of the commissioners behind such a gargantuan report.
I’m not asserting that China is definitively NOT rushing to build AGI. But if there were solid evidence behind Helberg’s claim, why didn’t it make it into the report?"
Here's my tweet mentioning Gwern's comment. It's not clear that DeepSeek falsifies what Gwern said here:
V3 and R1 are impressive but didn't advance the absolute capabilities frontier. Maybe the capabilities/cost frontier, though we don't actually know how compute-efficient OAI, Anthropic, GDM are.
I think this part of @gwern's comment doesn't hold up as well:
I still don't think DS is evidence that "China" is racing toward AGI. The US isn't racing toward AGI either. Some American companies are, with varying levels of support from the government. But there's a huge gap between that and Manhattan Project levels of direct govt investment, support, and control.
However, overall, I do think that DS has gotten the CCP more interested in AGI and changed the landscape a lot.
ok so what criteria would you use to suggest that your statements/gwern's statements were falsified?
What line can we agree on today, while things still feel uncertain, so that later we're not fighting over terminology but working from the same ground truth?