As the title says. I'm more interested in "up-to-date" than "comprehensive".


There are new Huawei Ascend 910C CloudMatrix 384 systems that form scale-up worlds comparable to GB200 NVL72. A large scale-up world is key to running long reasoning inference for large models much faster and cheaper than is possible on systems with significantly smaller world sizes, like the current H100/H200 NVL8. (It also makes training easier, though that is less essential unless RL training really does scale to the moon.)

Apparently TSMC produced ~2.1M compute dies for these systems in 2024-2025, which at two dies per chip is ~1.05M chips, and an Ascend 910C chip delivers 0.8e15 dense BF16 FLOP/s (compared to 2.5e15 for a GB200 chip). So the total compute is about that of ~350K GB200 chips (not dies or superchips), which is close to the 400K-500K GB200 chips that will be installed at the Abilene site of Crusoe/Stargate/OpenAI in 2026. There also seems to be potential to produce millions more without TSMC.
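A quick sanity check of this arithmetic, using only the figures quoted above (the two-dies-per-chip assumption is implied by the 2.1M-dies-to-~1M-chips conversion in the source):

```python
# Rough GB200-equivalence check for the Ascend 910C fleet.
# All inputs are the figures quoted above, not independent estimates.

ascend_dies = 2.1e6        # TSMC-produced compute dies, 2024-2025
dies_per_chip = 2          # assumed: each Ascend 910C packages two dies
ascend_chips = ascend_dies / dies_per_chip   # ~1.05M chips

ascend_flops = 0.8e15      # dense BF16 FLOP/s per Ascend 910C chip
gb200_flops = 2.5e15       # dense BF16 FLOP/s per GB200 chip

gb200_equivalent = ascend_chips * ascend_flops / gb200_flops
print(f"{ascend_chips:.2e} Ascend chips ~= {gb200_equivalent:,.0f} GB200 chips")
# ~336,000 GB200-chip equivalents, consistent with the ~350K figure
# and in the same ballpark as the 400K-500K planned for Abilene
```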

These systems are 2.3x less power-efficient per FLOP/s than GB200 NVL72. They use a 7nm process instead of Blackwell's 4nm, the scale-up network uses optical transceivers instead of copper, and more chips are needed for the same compute, so they are probably significantly more expensive per FLOP/s. But if there is enough funding and the 2.1M compute dies from TSMC are used to build a single training/inference system (about 2.5 GW), there is in principle some potential for parity between the US and China at the level of a single frontier AI company for late-2026 compute (with no direct implications for 2027+ compute; in particular, the Nvidia Rubin buildout will begin around that time).
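The ~2.5 GW figure implies an all-in per-chip power draw, which can be backed out from the numbers above (again a rough sketch using only the quoted figures; "all-in" here lumps networking and cooling into the per-chip number):

```python
# Implied all-in power per Ascend 910C chip, from the ~2.5 GW
# single-system figure and ~1.05M chips (2.1M dies, two per chip).

total_power_w = 2.5e9               # ~2.5 GW for the whole system
ascend_chips = 2.1e6 / 2            # ~1.05M chips
ascend_flops = 0.8e15               # dense BF16 FLOP/s per chip

watts_per_chip = total_power_w / ascend_chips
watts_per_pflops = watts_per_chip / (ascend_flops / 1e15)
print(f"~{watts_per_chip/1e3:.1f} kW per chip all-in, "
      f"~{watts_per_pflops:,.0f} W per dense BF16 PFLOP/s")
# ~2.4 kW per chip; dividing the W/PFLOP figure by the quoted 2.3x
# gives the implied GB200 NVL72 efficiency for comparison
```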

(The relevance is that whatever the plans are, they need to be grounded in what's technically feasible, and this piece of news changed my mind on what might be technically feasible in 2026 on short notice. The key facts are systems with a large scale-up world size, and enough compute dies to match the compute of Abilene site in 2026, neither of which was obviously possible without more catch-up time, by which time the US training systems would've already moved on to an even greater scale.)

While far from what I hoped for, this is the closest to what I hoped for that I managed to find so far: https://www.chinatalk.media/p/is-china-agi-pilled 

Overall, the Skeptic makes the stronger case — especially when it comes to China’s government policy. There’s no clear evidence that senior policymakers believe in short AGI timelines. The government certainly treats AI as a major priority, but it is one among many technologies they focus on. When they speak about AI, they also more often than not speak about things like industrial automation as opposed to how Dario would define AGI. There’s no moonshot AGI project, no centralized push. And the funding gaps between leading Chinese AI labs and their American counterparts remain enormous.

The Believer’s strongest argument is that the rise of DeepSeek has changed the conversation. We’ve seen more policy signals, high-level meetings, and new investment commitments. These suggest that momentum is building. But it remains unclear how long this momentum can be maintained, and whether it will really translate into AGI moonshots. While Xi talks about “two bombs one satellite”-style mobilization in the abstract, he hasn’t channeled that idea into any concerted AGI push, and there are no signs of any “whole nation” 举国 effort to centralize resources. Rather, the DeepSeek frenzy again is translating into application-focused development, with every product from WeChat to air conditioning now offering DeepSeek integrations.

This debate also exposes a flaw in the question itself: “Is China racing to AGI?” assumes a monolith where none exists. China’s ecosystem is a patchwork — startup founders like Liang Wenfeng and Yang Zhilin dream of AGI while policymakers prioritize practical wins. Investors, meanwhile, waver between skepticism and cautious optimism. The U.S. has its own fractures on how soon AGI is achievable (Altman vs. LeCun), but its private sector’s sheer financial and computational muscle gives the race narrative more bite. In China, the pieces don’t yet align.

While browsing through Concordia AI's report (linked by @Mitchell_Porter ), I stumbled on an essay by Yin Hejun (China's Minister of Science and Technology) from ~1y ago, which Concordia's AI Safety in China Substack summarizes as:

Background: Minister of the Ministry of Science and Technology (MOST) YIN Hejun (阴和俊) published an essay on AI in CAC’s magazine, “China Cyberspace (中国网信).” The essay outlines China’s previous efforts in AI development, key accomplishments, and plans moving forward. 

Discussion of governance and dialogue: Generally, the essay seeks to emphasize the importance of balance in innovation and legislation, suggesting that China should “place equal emphasis on development and governance” and “avoid suppressing innovation due to improper governance.” It also discussed AI ethics governance, with one paragraph citing China’s recent science and technology (S&T) ethics-related policies and noting that China has been advancing AI legislation in an orderly manner. Another paragraph suggested expanding international cooperation on AI governance, favorably referencing the UK AI Safety Summit and noting several dialogues between China and the UK, France, and Global South. At the same time, the article also prominently argued that AI is the “largest variable in the restructuring of overall national competitiveness and the new focus of global great power competition.” 

Implications: This essay is a microcosm of the Chinese government’s complex attitude towards AI. It sees tremendous potential in AI’s development for national power and social benefit and also advocates for ethical governance, in part because the latter is viewed as compatible with development. Simultaneously, China is open to international cooperation and also views AI development as a strategic priority and an area where it aims to establish a leading position. Discussion on frontier AI safety is mostly lacking from the article.

And in Concordia's slides:

➢ MOST Minister YIN Hejun’s (阴和俊) essay in a CAC-overseen magazine highlights the complexity and ambivalence of these views.
- The essay argues that AI is key to national power, as the “largest variable in the restructuring of overall national competitiveness and the new focus of global great power competition.”
- Yin called for improving the AI governance system under the idea that “development is the greatest security” and also to put “equal emphasis on development and governance.” 
- At the same time, he supports promoting AI ethics and expanding international cooperation on AI governance.

I don't speak Chinese, so I Google-translated the essay to skim/read it. It seems to fit the narrative of "China wants to accelerate AI just as it would like to accelerate any useful technology, but they're not particularly buying into singularity/AGI/ASI." A term that Google Translate rendered as "universal AI" is mentioned twice (once rendered as "general AI"), mostly in the context of language models, but without much elaboration. There's no "let's get AI that can do everything for us"; it's more like "let's get AI so that we are better at this and this and that".

(Although some local Chinese governments did announce policies on "AGI": 1, 2.)


(I weakly predict that I'm going to be using this thread as a dump for whatever new info on this topic I find worth sharing.)

Complex and ambivalent views seem like the correct sort of views for governments to hold at this point.

I don't speak Chinese, so I Google-translated the essay to skim/read it.

I also don't speak Chinese, but my impression is that machine translations of high-context languages like Chinese need to be approached with considerable caution: a lot of context on (e.g.) past guidance from the CCP may be needed to interpret what they're saying there. I'm only ~70% on that, though, happy to be corrected by someone more knowledgeable on the subject.

That's an informative article. 

There's lots of information about AI safety in China at Concordia AI, e.g. this report from a year ago. But references to the party or the government seem to be scarce, e.g. in that 100-page report, the only references I can see are on slide 91. 
