Before we get to the receipts, let's talk epistemics.

This community prides itself on "rationalism." Central to that framework is the commitment to evaluate how reality played out against our predictions. 

"Bayesian updating," as we so often remind each other.

If we're serious about this project, we should expect our community to maintain rigorous epistemic standards—not just individually updating our beliefs to align with reality, but collectively fostering a culture where those making confident pronouncements about empirically verifiable outcomes are held accountable when those pronouncements fail.

With that in mind, I've compiled in the appendix a selection of predictions about China and AI from prominent community voices. The pattern is clear: a systematic underestimation of China's technical capabilities, prioritization of AI development, and ability to advance despite (or perhaps because of) government involvement.

The interesting question isn't just that these predictions were wrong. It's that they were confidently wrong in a specific direction, and—crucially—that wrongness has gone largely unacknowledged.

How many of you incorporated these assumptions into your fundamental worldview? How many based your advocacy for AI "pauses" or "slowdowns" on the belief that Western labs were the only serious players? How many discounted the possibility that misalignment risk might manifest first through a different technological trajectory than the one pursued by OpenAI or Anthropic?

If you're genuinely concerned about misalignment, China's rapid advancement represents exactly the scenario many claimed to fear: potentially less-aligned AI development accelerating outside the influence of Western governance structures. This seems like the most probable vector for the "unaligned AGI" scenarios many have written extensive warnings about.

And yet, where is the community updating? Where are the post-mortems on these failed predictions? Where is the reconsideration of alignment strategies in light of demonstrated reality?

Collective epistemics require more than just nodding along to the concept of updating. They require actually doing the work when our predictions fail.

What do YOU think?

Appendix: 

Bad Predictions on China and AI

"No...There is no appreciable risk from non-Western countries whatsover" - @Connor Leahy 

"China has neither the resources nor any interest in competing with the US on developing artificial general intelligence" = @Eva_B 

https://www.lesswrong.com/posts/z4MDDwwnWKnv2ZzdK/the-agi-race-between-the-us-and-china-doesn-t-exist

dear ol' @Eliezer Yudkowsky 

Anonymous (I guess we know why)

https://www.lesswrong.com/posts/ysuXxa5uarpGzrTfH/china-ai-forecasts

@garrison 

https://www.lesswrong.com/posts/KPBPc7RayDPxqxdqY/china-hawks-are-manufacturing-an-ai-arms-race

@Zvi 

Various folks on twitter...

@Greg C 

Good Predictions / Open-mindedness

@Ben Pace 

https://www.lesswrong.com/posts/xbpig7TcsEktyykNF/thiel-on-ai-and-racing-with-china 

@Lao Mein 

@Migwyn 

@sammyboiz 


28 comments

I think,

  • People were wrong about Chinese AI being too far behind to matter.
  • People were surprised that Chinese AI turned out to be open source rather than secretive-Manhattan-project-ish.

But

  • The claim that getting the West to slow down or pause AI is ineffective because China won't follow is still your own theory.

I gave you a strong upvote because I strongly agree with the direction of your post and I think it's under-discussed.

But I wish you would make it clearer that only the first two points are observed realities that people should adjust their beliefs to. The third point is your own belief, which may be wrong.

 

PS: Please don't say "This is literally cope" to people. That sounds like an X/Twitter comment. People on LessWrong avoid talking like that.

Please, before you criticize anyone, ask an AI to help translate your criticism into LessWrong flavoured corporate-speak. /s

ok I will moderate my tone. I was a competitive debater and irrationality makes me upset. I thought this was a safe space for high standards wrt logic, but I can modulate. Thank you for the feedback.

There is a narrow point: people were wrong about this narrow prediction, "the CCP is scared of AI."

The broader point is that I perceive (and could be wrong) epistemic rot: a community dedicated to rationalism seems incapable of updating. The comments I've seen so far are by and large consistent with that intuition. Folks seem defensive, and more concerned about my interest/tone than the thing at hand: a lot of people made decisions based on (in retrospect) bad expectations about the world. Which is fine; it happens all the time. But the thing that matters isn't the old predictions, it's identifying them, understanding why and where they came from, and then updating.

If we want to talk about the narrow question of "is China ready to pause AI," it obviously is not entirely knowable. But the bigger issue, the one I think more important, is whether we are capable of updating, because we need that capacity to actually investigate the narrow question going forward.

:) thank you for saying you'll moderate your tone. It's rare I manage to criticize someone and they reply with "ok" and actually change what they do.

My first post on LessWrong was A better “Statement on AI Risk?”. I felt it was a very good argument for the government to fund AI alignment, and I tried really hard to convince people to turn it into an open letter.

Some people told me the problem with my idea is that asking for more AI alignment funding is the wrong strategy. The right strategy is to slow down and pause AI.

I tried to explain that when politicians reject pausing AI, they just need the easy belief of "China must not win," or "if we don't do it someone else will." But for politicians to reject my open letter, they would need the difficult belief of being 99.999% sure of no AI catastrophe and 99.95% sure most experts are wrong.

But I felt my argument fell on deaf ears because the community was so dead set on pausing AI that they didn't want to spend time on anything other than pausing AI. It was very frustrating.

(I was talking to people in private messages and emails)

I feel you and I are on the same team haha. Maybe we might even work together sometime.

Thanks

I don't think it's fair to say I made a bad prediction here. 

Here's the full context of my quote: "The report clocks in at a cool 793 pages with 344 endnotes. Despite this length, there are only a handful of mentions of AGI, and all of them are in the sections recommending that the US race to build it. 

In other words, there is no evidence in the report to support Helberg’s claim that "China is racing towards AGI.” 

Nonetheless, his quote goes unchallenged into the 300-word Reuters story, which will be read far more than the 800-page document. It has the added gravitas of coming from one of the commissioners behind such a gargantuan report. 

I’m not asserting that China is definitively NOT rushing to build AGI. But if there were solid evidence behind Helberg’s claim, why didn’t it make it into the report?"

Here's my tweet mentioning Gwern's comment. It's not clear that DeepSeek falsifies what Gwern said here: 

  1. the scientific culture of China is 'mafia' like (Hsu's term, not mine) and focused on legible easily-cited incremental research, and is against making any daring research leaps or controversial breakthroughs...

    but is capable of extremely high quality world-class followup and large scientific investments given a clear objective target and government marching orders

V3 and R1 are impressive but didn't advance the absolute capabilities frontier. Maybe the capabilities/cost frontier, though we don't actually know how compute efficient OAI, Anthropic, GDM are. 

I think this part of @gwern's comment doesn't hold up as well:

2. there is no interest or investment in an AI arms race, in part because of a "quiet confidence" (ie. apathy/lying-flat) that if anything important happens, fast-follower China can just catch up a few years later and win the real race. They just aren't doing it. There is no Chinese Manhattan Project. There is no race. They aren't dumping the money into it, and other things, like chips and Taiwan and demographics, are the big concerns which have the focus from the top of the government, and no one is interested in sticking their necks out for wacky things like 'spending a billion dollars on a single training run' without explicit enthusiastic endorsement from the very top.

I still don't think DS is evidence that "China" is racing toward AGI. The US isn't racing toward AGI either. Some American companies are, with varying levels of support from the government. But there's a huge gap between that and Manhattan Project levels of direct govt investment, support, and control.

However, overall, I do think that DS has gotten the CCP more interested in AGI and changed the landscape a lot. 

I'm not convinced that these were bad predictions for the most part.

The main prediction: 1) China lacks compute. 2) CCP values stability and control -> China will not be the first to build unsafe AI/AGI.

Both of these premises are unambiguously true as far as I'm aware. So, these predictions being bad suggests that we now believe China is likely to build AGI without realizing it threatens stability/control, and with minimal compute, before USA? All while refusing to agree to any sort of deal to slow down? Why? Seems unlikely.

American companies, on the other hand, are still explicitly racing toward AGI, are incredibly well resourced, have strong government support, and have a penchant for disruption. The current administration also cares less about stability than any other in recent history.

So, from my perspective, USA racing to AGI looks even more dangerous than before, almost desperate. Whereas China is fast following, which I think everyone expected? Did anyone suggest that China would not be able to fast-follow American AI?

The argument has historically been that existential risk from AI came from some combination of a) SOTA models, and b) open source.

China is now publishing SOTA open source models.  Oh and they found a way to optimize around their lack of GPUs.

Are you sure you aren't under the influence of cognitive dissonance/selective memory? 

I think LW consensus has been that the main existential risk is AI development in general. The only viable long-term option is to shut it all down. Or at least slow it down as much as possible until we can come up with better solutions. DeepSeek from my perspective should incentivize slowing down development (if you agree with the fast follower dynamic. Also by reducing profit margins generally), and I believe it has.

Anyway, I don't see how this relates to these predictions. The predictions are about China's interest in racing to AGI. Do you believe China would now rather have an AGI race with USA than agree to a pause?

DeepSeek from my perspective should incentivize slowing down development (if you agree with the fast follower dynamic. Also by reducing profit margins generally), and I believe it has.

Any evidence of DeepSeek marginally slowing down AI development?

And the response to 'shut it down' has always been "what about China, or India, or the UAE, or Europe?", to which the response was "they want to pause because XYZ."

Well, you now have proof, not speculation, that they are not pausing. They don't find your arguments persuasive. What to do?!

Which is why the original post was about updating. Something you don't seem very interested in doing. Which is irrational. So is this forum about rationality or about AI risk? I would think the latter flows from the former, but I don't see much evidence of the former.

I think Alibaba has not made any crazy developments yet. So let's consider DeepSeek. I think almost nobody had heard of DeepSeek before v3. Before v3, predicting strong AI progress in China would probably sound like "some AI lab in China will appear from nowhere and do something great. I don't know who or what or when or where, but it will happen soon." That was roughly my opinion, at least in my memory. Maybe making that kind of prediction does not match the tastes of people who are good at predicting things? It's an awfully vague claim to make, I guess.

There was time between v3 and r1 when folks could have commented more loudly that DeepSeek was ascendant. What would this have accomplished? I suppose it would have shown some commitment to truth and awareness of reality. I am guessing people who are against the international AI race are a bit reluctant to point out stuff that would accelerate the race. I guess at some point the facts can't be avoided.

Cope. Leadership in AI has been an explicit policy goal since "Made in China 2025". The predictions were that "the CCP prioritizes stability," "the CCP prioritizes censorship," and "China is behind in AI". Are you willing to admit that these are all demonstrably untrue as of today? Let's start there.

Here's an article from 2018(!) in the South China Morning Post.  

"Artificial intelligence (AI) has come to occupy an important role in Beijing’s ‘Made in China 2025’ blueprint. China wants to become a global leader in the field by 2030 and now has an edge in terms of academic papers, patents and both cross-border and global AI funding.

The fact that you were ignorant or dismissive of their strategy is independent of the facts that they a) stated the goal publicly, and b) are now in the lead.

https://multimedia.scmp.com/news/china/article/2166148/china-2025-artificial-intelligence/index.html

I am not entirely sure what specific thing the rationalists were wrong about (the quotes are about various things) and what specifically is the correct version we should update to.

For example, Eliezer's quote seems to be about how China would prefer a narrow AI (that can be controlled by the Chinese Communist Party) over a general AI, for completely selfish reasons. Do you believe that this is wrong?

"Various things" - ugh

ok how about this ONE.

"No...There is no appreciable risk from non-Western countries whatsover" - @Connor Leahy 

At the time of that recording, the race to AGI was being stoked exclusively by actors in the USA (Anthropic, OpenAI, etc).

It is now 2 years later. They got what they asked for, and now other actors have joined the race as well.

idk what to tell you man, you picked one line in one podcast from two years ago. I go on a lot of podcasts, I think out loud a lot, and I update a lot. Do you want people to have to qualify every comment they make on a podcast with "In my current best estimate, at this specific point in time, which may be invalidated or updated within the next 2 years, I would say that X is a currently appreciably-accurate assessment about the state of the world"?

Yes, we are in fact in a worse world than we were 2 years ago. Things that I was trying to advocate against doing (such as hyperstitioning the AGI race into existence) have in fact happened. Trust me, I (and I assume everyone else) know, and you will see this reflected in more recent podcast appearances of mine.

I can't speak for others, but for me it feels like "Wow look at this guy who in 2006 said 'we are not in a recession right now', but it's 2008 now, and we are in fact in a recession!" 

I do not think your combative and sensationalistic attitude is conducive to productive community sense making, as others have already told you in other comments. If you are just trying to get epic digs in ("ok how about this ONE."), take it to X, I respect that LessWrong has higher norms of good faith.

I think the main reason is that until a few years ago, not much AI research came out of China. Gwern highlighted this repeatedly.

Exactly.  @gwern was wrong. And yet...

What is some moderately strong evidence that China (by which I mean Chinese AI labs and/or the CCP) is trying to build AGI, rather than "just": build AI that is useful for whatever they want their AIs to do and not fall behind the West while also not taking the Western claims about AGI/ASI/singularity at face value?

I was not wrong? I argued with Connor about this back in the day on EAI. If you take the PISA scores at face value and account for their higher population, China has like 30x more people above 145 IQ. Steve Hsu has a nice post about this[1]

This appears to be very antimemetic but seems to be true and the world sure is behaving like this is true. 

If human capital matters, they have more and better human capital than America. But human capital will be obsolete soon. 

And I do think much of the idea that China cannot catch up is based on some idiotic, racist notion that Asians lack creativity. And this is increasingly becoming obviously massive cope as China catches up and surpasses in industry after industry. And no, semiconductors are not special. They will catch up just fine. 

I think timelines are short, so I would still be betting on an "American god," but if they are longer than I think, China is a contender, and increasingly so in worlds where timelines are long and human capital still matters.

  1. ^

    https://infoproc.blogspot.com/2008/06/asian-white-iq-variance-from-pisa.html?m=1
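The tail arithmetic behind a claim like "30x more people above 145 IQ" can be sketched with a normal-distribution model. Note that every parameter below (means, SDs, populations, and especially the shifted Chinese mean) is an illustrative assumption for the sketch, not a figure taken from Hsu's post, and the resulting ratio swings widely with those choices:

```python
import math

def tail_count(population: int, mean: float, sd: float, cutoff: float = 145.0) -> float:
    """Expected number of people above `cutoff`, assuming IQ ~ Normal(mean, sd)."""
    z = (cutoff - mean) / sd
    tail_prob = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) for a standard normal
    return population * tail_prob

# Illustrative assumptions only, not figures from the cited post:
us = tail_count(335_000_000, mean=100, sd=15)
china = tail_count(1_410_000_000, mean=105, sd=15)  # hypothetical shifted mean
print(f"US ~{us:,.0f}, China ~{china:,.0f}, ratio ~{china / us:.1f}x")
```

Because 145 is far out in the tail, small changes to the assumed mean or SD move the counts by large factors, which is why estimates of this ratio depend so heavily on how seriously one takes the PISA numbers.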

Are you saying they are suicidal?

Are you saying they accept your frame? Because it appears they do not.

I think I object-level disagree with you on the China vector of existential risk: I think it is a self-fulfilling prophecy and that it does not engage with the current AI situation in China.

If you were object-level correct about China I would agree with the post, but I just think you're plain wrong.

Here's a link to a post that makes some points about the general epistemic situation around China: https://www.lesswrong.com/posts/uRyKkyYstxZkCNcoP/careless-talk-on-us-china-ai-competition-and-criticism-of

Do you disagree that entities in China are now pushing the state of the art in an open source way?

If you disagree, then sure, you don't have to update.  But I'd argue you aren't paying attention.

If you agree, then how did you update?

If your point is that using 'us vs. them' framing makes things worse, that may or may not be correct, but from the perspective of existential risk the object-level determination re China is irrelevant versus what "they" represent: a repeated game where defection by any one of N players leads to ruin (from the doomer perspective), and where folks in China just represent one member of a very large set.

Does that make sense?

So I guess the point then becomes more about open-source development by other countries, China among them, and that people did not correctly predict this would happen. 

Something like: distillation techniques for LLMs would be used by other countries and then proliferated, and the rationality community as a whole did not take this into account? 

I'll agree with you that Bayes points should be lost on predicting the theory of mind of nation-states; it is quite clear that they would be interested in this from a macro-analysis perspective (I say in hindsight, of course).

I'm not sure that DeepSeek is SOTA in terms of inherent development; it seems to me that they're using some of the existing work from OpenAI, DeepMind & Anthropic, but I might be wrong here. Is there anything else that you're pointing at?
