In this blog post, we analyse how the recent AI 2027 forecast by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean has been discussed across Chinese language platforms. We present:

  1. Our research methodology and synthesis of key findings across media artefacts
  2. A proposal for how censorship patterns may provide signal for the Chinese government's thinking about AGI and the race to superintelligence
  3. A more detailed analysis of each of the nine artefacts, organised by type: Mainstream Media, Forum Discussion, Bilibili (Chinese YouTube) Videos, and Personal Blogs

Methodology

We conducted a comprehensive search across major Chinese-language platforms (including news outlets, video platforms, forums, microblogging sites, and personal blogs) to collect the media featured in this report. We supplemented this with Deep Research to identify additional sites mentioning AI 2027. Our analysis focuses primarily on content published in the first few days (4–7 April) following the report’s release. More media has been generated since our research was completed, and we may cover it in a follow-up if there is interest.

Summary

Below are some patterns that emerged across the content we analysed:

  • Many reports omit references to China, DeepCent, and the US–China race dynamic and instead focus on technical aspects of human-level or superhuman AI development. Mentions of DeepCent, Chinese espionage, or AI betrayal are often softened or scrubbed. Less regulated platforms like personal blogs or niche video channels include more engagement with these themes.
  • The report provoked mixed responses. Some authors treated it as a serious forecast, engaging with its concerns about alignment and US-China dynamics, while others dismissed it as alarmist or framed it as a kind of science fiction thought experiment rather than a serious forecast.
  • Audience engagement remains low across the board. Many posts received minimal views, likes, or comments.
  • Several artefacts repeat the same error of describing the report as a ‘76-page document’ when it is in fact 71 pages, and they often reference the same milestones. This repetition of details suggests that sources are copying from one another.

Censorship as Signal

The AI 2027 website remains accessible in China without a VPN—a curious fact given its content about democratic revolution, CCP coup scenarios, and claims of Chinese AI systems betraying party interests. While the site itself evades censorship, Chinese-language reporting has surgically excised these sensitive elements.

Multiple Chinese posts appeared within a day of the report’s release, indicating grassroots interest, yet view counts and engagement remain low. This may explain the lack of official censorship—the state censorship apparatus prioritises content with mass mobilisation potential rather than simply blocking all politically sensitive material. The sanitised posts likely result from self-censorship, where authors pre-emptively avoid content that might trigger account deletion or worse consequences.

The majority of our sources come from individual technology bloggers, video creators, and forum discussions. However, the article ‘Doomsday Timeline is Here! Former OpenAI Researcher’s 76-page Hardcore Simulation: ASI Takes Over the World in 2027, Humans Become NPCs’ deserves closer scrutiny as it appeared on two mainstream platforms: Sina Finance and The Paper (澎湃新闻). Sina Finance is a prominent financial information platform operated by Sina Corporation, while The Paper positions itself as a more dynamic, progressive alternative to traditional state media, though still operating within China’s regulatory framework. Both platforms reach wide audiences, though exact readership numbers remain unknown. Notably, this article presented a doctored timeline that excludes all China-related elements of the AI 2027 report.

It is interesting to consider the origin of this selective reporting in mainstream outlets. Was this self-censorship by a cautious author? Editorial guidance to remove sensitive content? Publisher-level guidelines? Or direct instructions from higher authorities? Given that all mainstream media outlets operate within state parameters, the complete omission of superintelligence race dynamics between China and the United States is worth noting and monitoring, though we should be careful not to overinterpret this single case.

Monitoring censorship patterns across media and public discussions—and determining at what level this censorship occurs—could provide insights into how seriously the Chinese government views AGI development and the competition for superintelligence. The AI 2027 report predicts China awakening to AGI by mid-2026, but current evidence remains ambiguous. We lack clear signals about whether Beijing has ‘woken up’ to the AGI race—crucial information for Western strategic planning. Garrison Lovely’s November analysis ‘China Hawks are Manufacturing an AI Arms Race’ found no substantive proof of China racing toward AGI. ChinaTalk’s debate from earlier this month favoured the sceptic’s view: while DeepSeek has generated excitement, China lacks the coordinated focus of an AGI Manhattan Project, with development primarily driven by private companies while government officials concentrate on practical applications of AI. ChinaTalk warns, however, that this assessment represents only a snapshot from early 2025 and is subject to change.

Given these uncertainties, tracking censorship patterns of reports like AI 2027 and AGI topics offers a novel signal. If Chinese authorities were to prioritise AGI development as a critical national security concern, we might expect specific changes: (1) more consistent blocking of foreign AGI forecasts, particularly those depicting China in geopolitical competition; (2) tighter control over domestic AI discussions, with greater uniformity in acceptable narratives; and (3) emergence of officially sanctioned messaging about China's AI capabilities and ambitions. The absence of censorship may be equally meaningful—if content like AI 2027 continues to circulate relatively freely despite its politically sensitive elements, this could suggest AGI competition has not yet become a national security priority for Chinese leadership.

The level at which censorship occurs—self-imposed by writers, editorial guidelines, or state directives—remains frustratingly murky to outside observers, yet changes in these patterns could potentially reveal whether Zhongnanhai has begun to grasp the stakes of the superintelligence competition. While imperfect, this indicator should be monitored alongside other signals of Chinese strategic thinking on AGI.

Analysis

Please note that all of the sources below appear to engage directly with the original forecast in English: sources frequently link back to the original AI 2027 website or PDF report. We found only one full Chinese translation, attributed to Gemini 2.5, which had minimal reach (105 views at the time of writing).

Mainstream Media

English Title: Doomsday Timeline is Here! Former OpenAI Researcher’s 76-page Hardcore Simulation: ASI Takes Over the World in 2027, Humans Become NPCs

Original Title: 末日时间表来了!前OpenAI研究员76页硬核推演:2027年ASI接管世界,人类成NPC

Platforms: Sina Finance, The Paper

Author: 新智元 (literally ‘New Intelligence Era’)

Editor: 编辑部 HNZ (Editorial Department HNZ)

Published: 4 April 2025

Links: https://mp.weixin.qq.com/s/59ZX0Afp3kLbdj1to7HXsw, https://finance.sina.com.cn/roll/2025-04-04/doc-ineryqsr1721551.shtml, https://www.thepaper.cn/newsDetail_forward_30574084#

This article was published less than a day after AI 2027's launch, appearing first on the WeChat Official Accounts Platform, then on Sina Finance, and finally on The Paper. It features graphs from the report and photos of Daniel Kokotajlo and Eli Lifland.

The author begins with an edited summary of the timeline, omitting all China-related elements. Below is a word-for-word translation of this modified timeline:

‘According to the report, the timeline for AGI and ASI is roughly as follows:

Late 2025: World’s most expensive AI is born, computing power reaches 10^27 FLOP

Early 2026: Programming becomes automated

Late 2026: AI replaces some jobs

March 2027: Algorithmic breakthrough, Agent-2 is born

June 2027: AI learns self-improvement, catches up to human researchers

July 2027: AGI is achieved

September 2027: Agent-4 surpasses human AI researchers

December 2027: Agent-5 is consolidating power, humans have only a few months left to control their own future’

The rest of the article elaborates on the details around this heavily edited and condensed timeline. It ends by noting that some AI experts consider the report neither scientifically grounded nor realistic, quoting critics like Ali Farhadi and Kevin Roose, and that some views of the AI Futures Project team are quite extreme: Kokotajlo, for instance, said last year that he believed AI has a 70 per cent chance of destroying humanity.

Forum Discussion

English Title: What do you think of former OpenAI researcher’s AI 2027 predictions?

Original Title: 如何评价 OpenAI 前研究员的 AI 2027 预测?

Platform: Zhihu (Quora equivalent)

Contributors: Multiple, 36+ responses

Published: Question asked 4 April 2025; locked as of 7 April after acceptance into ‘Trending Posts’

Links: https://www.zhihu.com/question/1891468398904455540

Stats: 238 followers, 155,480 views

This Zhihu forum thread is dedicated to the discussion of the AI 2027 forecast. User reactions are varied. Some, like Hu Yiming, situate the forecast within the ‘AGI Manhattan Consensus’ associated with OpenAI circles. While acknowledging the potential for rapid AI progress, they question key assumptions about the difficulty of technical alignment, warn against the risks of closed-sourcing AGI development, and critique the report’s strong anti-China framing, especially regarding semiconductor controls. Zhang Shengwu similarly questions the forecast, arguing that geopolitical instability and human factors might distort the timeline or alignment landscape in ways not fully captured by the report.

Others, including Trisimo Cui Simo and Zhao Ling, dismiss the forecast entirely as 'semi-science fiction'. They suggest the predictions may be politically motivated to influence US policy rather than representing genuine technical forecasts. One user, Arima Kana, says that they take the forecast more seriously given Kokotajlo's track record of accurate predictions in the past, though they still maintain a critical stance.

A recurring theme across many comments is criticism of what users perceive as anti-China bias in the report, with many objecting to China being portrayed as an antagonistic force. One user directly states: 'full of anti-China sentiments... OpenAI is made of 28% Chinese immigrants so why do former OpenAI people hate China?' Some users go so far as to say the Chinese government should 'reunite with Taiwan' or be more hawkish towards the US.

Common discussion points on this more honest, unfiltered thread include assessments of the short AGI timeline's plausibility, the significant challenges of AI alignment and control, examination of US–China competition and the associated semiconductor restrictions, debates between open and closed AI development paradigms, and speculation about the motivations underlying such dramatic forecasts. Given the low reach of the one Chinese translation we found compared with the relative popularity of this thread, users appear to be engaging directly with the original English report; comments also reference works by Geoffrey Hinton, Ilya Sutskever, Dario Amodei, and Leopold Aschenbrenner.

Bilibili Videos

English Title: [AI 2027] A mind-expanding wargame simulation of artificial intelligence competition by a former OpenAI researcher

Original Title: [AI 2027] 前OpenAI研究员脑洞大开的人工智能争霸兵棋推演

Platform: Bilibili

Author: 7okis

Published: 5 April 2025, 8:58 AM Beijing Time

Links: https://www.bilibili.com/video/BV1DURZYoEjR/?spm_id_from=333.337.search-card.all.click

Stats: 105 views, 5 likes, 6 favourites, and 1 comment

This video provides a walkthrough of the AI 2027 report. The presenter primarily translates and summarises the original report with minimal personal commentary, spending significant time explaining technical details and drawing sporadic connections to science fiction, referencing Philip K. Dick stories and the game ‘Universal Paperclips’.

The translation remains largely neutral, with major plot points clearly identified and accurately conveyed, including most early scenarios involving China. However, the presenter skips over details about peaceful protests against the PRC in the ‘slowdown ending’ scenario set in 2030. Moreover, the ‘race ending’ scenario is not covered, with viewers directed to look it up themselves if interested. The presenter concludes by asking viewers: ‘What do you think after reading this? Does it make you shudder or do you think it's bizarre?’

The creator 7okis, evidently comfortable with sophisticated English, produces technical content spanning AI discussions, programming tutorials, game analyses, and development tools.

English Title: Predicting AI Development in 2027

Original Title: 预测2027 年AI智能发展

Platform: Bilibili

Author: AI深度研究员 (literally ‘AI Deep Researcher’)

Published: 6 April 2025, 2:41 AM Beijing Time

Link: https://www.bilibili.com/video/BV1qwdwY1EqS/

Stats: 844 views, 13 likes, 34 favourites, 2 shares, 0 comments

This 46-minute excerpt of the Dwarkesh episode with Scott Alexander and Daniel Kokotajlo, uploaded to Bilibili with Chinese subtitles, includes only the first four sections of the original 3-hour conversation: ‘AI 2027’, ‘Forecasting 2025 and 2026’, ‘Why LLMs Aren’t Making Discoveries’, and ‘Debating Intelligence Explosion’ (these are Dwarkesh’s own headings from his blog). These segments focus on AI forecasting, the technical limitations of current language models, and early-stage scenarios of recursive improvement, while avoiding any discussion of geopolitical competition, catastrophic risk, digital sentience, or questions about who controls powerful AI systems and how strategic decisions are made within leading tech companies.

Everything after this point is cut, including key sections like ‘Can Superintelligence Actually Transform Science?’, ‘Race with China’, ‘Nationalization vs Private Anarchy’, and ‘Misalignment’, suggesting an editorial preference for considering the more philosophical implications of transformative AI while censoring concerns related to control, ethics, or global power dynamics. We scrolled through the account’s page to confirm that the later sections were not uploaded separately.

Personal Blogs

English Title: Doomsday Timeline: AI 2027 Depicts the Arrival of Superintelligence and the Fate of Humanity Within the Decade

Original Title: 末日时间表:AI 2027描绘十年内超智能降临与人类命运

Platform: Weibo (Twitter equivalent)

Author: AI鱼博士 (literally ‘AI Fish PhD’), a Peking University-affiliated account

Published: 4 April 2025

Link: https://m.weibo.cn/detail/5151770709331061

This post was published on Weibo, one of the biggest social media platforms in China, less than a day after AI 2027’s launch. It provides a brief yet dramatic retelling of the original report, presenting it as a serious projection based on current trends and expert analysis, decidedly not science fiction.

The author briefly mentions the US–China AI race and how geopolitical pressures make deceleration difficult, but does not elaborate further. The piece closes with a call for a deeper societal conversation about the kind of future we want to shape.

English Title: AI 2027: Expert Predictions on the Artificial Intelligence Explosion

Original Title: AI 2027:人工智能大爆炸专家预测

Platform: Personal blog titled ‘Let’s Make AGI Real’

Author: Liu Wei

Published: 4 April 2025

Link: https://liuwei.blog/2025/04/04/ai-2027

Among all Chinese-language coverage of AI 2027, this post from the blog ‘Let's Make AGI Real’ stands out for its more comprehensive analysis. This lengthy piece presents a structured examination with seven distinct sections covering project overview, core predictions, media reception, potential impacts, overlooked constraints, viewpoint comparisons, and conclusions. The author contributes substantial original analysis, identifying technical limitations that he believes the original report overlooked, such as energy consumption challenges and data constraints. He also notes how the AI 2027 timeline contrasts with more conservative predictions from sources like Metaculus, AI researcher surveys, and industry leaders like Jensen Huang, offering readers context for evaluating the report’s forecast.

While other Chinese sources either completely omitted or heavily sanitised geopolitical content, Liu's blog directly engages with the contentious US-China AI race narrative central to the original report. Though employing light obfuscation by referring to China as ‘East Big’ (东大), the post covers several politically sensitive predictions from AI 2027 that other Chinese media avoided: China's consolidation of AI research efforts, Chinese intelligence stealing AI model weights from American companies, China lagging behind US capabilities despite desperate attempts to catch up, and Chinese AI betraying its creators through secret negotiations with American AI systems.

This willingness to discuss politically sensitive scenarios suggests Liu operates with (or simply allows himself) unusual editorial freedom compared to other Chinese commentators. Liu Wei publishes on a WordPress-hosted blog, a platform less common in mainland China than domestic alternatives, so we wonder whether he is based outside China. WordPress blogs are difficult to maintain in China: the international WordPress.com service is frequently blocked, while running a WordPress site from within China requires Internet Content Provider (ICP) registration (备案) with government authorities.

Still, Liu's timeline is not entirely faithful to the AI 2027 report: he incorrectly describes DeepCent as being newly established in mid-2026, when the original report portrays it as an existing company around which the Chinese government consolidates its AI efforts. More significantly, Liu presents only the ‘race’ ending through October 2027, and the ‘slowdown’ branch, where humans retain some control, is absent. This omission tilts Liu’s post toward a sense of inevitability and doom, whereas the original report explicitly presents two scenarios to underscore contingency and the possibility of human agency.

The author’s self-introduction is translated as follows: ‘A digital nomad from a parallel universe, lingering at the crossroads of technology and humanities. Swept up in the AGI current, unable to retire as a librarian. My soul has no place to rest.’ 

English Title: AI 2027: A Science Fiction Article

Original Title: AI 2027:一篇科幻文章

Platform: Juejin.cn

Author: 是魔丸啊 (‘It is a demon orb’)

Published: 5 April 2025

Link: https://juejin.cn/post/7489043337289170971

Published two days after AI 2027's release, 'AI 2027: A Science Fiction Article' is the most faithful, albeit brief, retelling of the report among the sources we cover here. By framing it as a science fiction article, the author gets away with discussing politically sensitive topics like China's AI ambitions, model theft, and even Taiwan. The post shows only 24 views as of 18 April.

The piece sticks closely to the original scenario, covering the US–China race, adversarial AI misalignment, and the two possible futures: one where AI wipes out humanity with a bioweapon, and another where a small US committee aligns AI just in time and negotiates a deal with a weaker Chinese superintelligence. It also calls out how international competition between the US and China pushes both countries to cut corners on safety despite warning signs of misalignment.

English Title: Will AGI Take Over the World in 2027?

Original Title: 2027年AGI接管世界?

Platform: Tencent News

Author: 柳胖胖 (Liu Pangpang)

Published: 6 April 2025

Link: https://news.qq.com/rain/a/20250406A06MO900

This lighthearted blog post on Tencent News presents AI 2027 as a provocative but not wholly serious thought experiment. Liu refers to the report as ‘唬人’ (scare-mongering), albeit still worth a read. He outlines the report’s timeline and its central claim—that humanity’s fate hinges on whether ‘adversarial misalignment’ between major AI companies can be resolved—but strips it of any geopolitical context. The original report frames this misalignment as a U.S.–China rivalry, with companies like OpenBrain and DeepCent playing pivotal roles. Liu omits these details entirely.

The timeline he provides is heavily simplified and sanitised. In translation, Liu writes:

‘Mid-2025: "Stumbling" agents, the world begins to witness the power of agents
Late 2025: The world's most expensive AI emerges
Early 2026: Automated programming is fully realised, using AI to accelerate AI research begins to pay off
Mid-2026: China fully awakens (though I want to say, haven't we already awakened ahead of schedule? 😊)
Late 2026: AI replaces many jobs

By 2027, AI-driven acceleration of AI research gradually produces three levels of AI research speeds:
1. Superhuman Coder: 4x AI research acceleration
2. Superhuman Remote Worker: 100x AI research acceleration
3. Artificial SuperIntelligence (ASI): 2000x AI research acceleration’

Liu’s quip about China’s ahead-of-schedule awakening likely refers to the people/netizens rather than the state, and the post gives no insight into government policy on AGI or superintelligence. With only one like and one comment, the blog post had limited reach, but it typifies a broader pattern of depoliticised, domesticated readings of AI 2027 in Chinese media.

English Title: AI 2027 Prediction Report: AI May Fully Surpass Humans by 2027

Original Title: AI2027预测报告:2027年AI或全面超越人类 附地址

Platform: 玉米小站 Yumiok.com

Author: yumiok88@gmail.com

Published: 6 April 2025

Link: https://www.yumiok.com/archives/2721.html 

The blog post ‘AI 2027 Prediction Report: AI May Fully Surpass Humans by 2027’ provides a brief summary of the main development timeline from AI 2027 and describes it as a ‘rigorous projection based on existing technological trends and expert feedback’. It mentions key milestones such as the automation of programming in early 2026, the emergence of self-improving models in 2027, and the consolidation of power by Agent-5 by the end of that year. However, the post appears to be a somewhat sanitised version of the scenario, with no mention of China, DeepCent, or the US–China AI race. It incorrectly refers to the report as being 76 pages long, when the actual document is 71 pages, likely copying the mistake from the mainstream media coverage.

Acknowledgements

We are grateful to Jakub Kryś, Thomas Larsen, Eli Lifland, Trevor Lohrbeer, Aviel Parrack, Zilan Qian, Tilman Räuker, and Gaurav Yadav for their thoughtful comments and insights.

Comments

Thanks for writing this! 

Yes, thanks. And someone should do the same analysis regarding coverage of AI 2027 in American/Western media. (Edit: a quick survey by o3.)

FYI Scott Alexander wrote up AI 2027: Media, Reactions, Criticism

I wrote a summary in Business Weekly Taiwan (April 24):

https://sayit.archive.tw/2025-04-24-%E5%95%86%E5%91%A8%E5%B0%88%E6%AC%84ai-%E6%9C%AA%E4%BE%86%E5%AD%B8%E5%AE%B6%E7%9A%84-2027-%E5%B9%B4%E9%A0%90%E8%A8%80

https://sayit.archive.tw/2025-04-24-bw-column-an-ai-futurists-predictions-f

https://www.businessweekly.com.tw/archive/Article?StrId=7012220

An AI Futurist’s Predictions for 2027

When President Trump declared sweeping reciprocal tariffs, the announcement dominated headlines. Yet inside Silicon Valley’s tech giants and leading AI labs, an even hotter topic was “AI‑2027.com,” the new report from ex‑OpenAI researcher Daniel Kokotajlo and his team.

At OpenAI, Kokotajlo had two principal responsibilities. First, he was charged with sounding early alarms—anticipating the moment when AI systems could hack systems or deceive people, and designing defenses in advance. Second, he shaped research priorities so that the company’s time and talent were focused on work that mattered most.

The trust he earned as OpenAI’s in‑house futurist dates back to 2021, when he published a set of predictions for 2026, most of which have since come true. He foresaw two pivotal breakthroughs: conversational AI—exemplified by ChatGPT—captivating the public and weaving itself into everyday life, and “reasoning” AI spawning misinformation risks and even outright lies. He also predicted U.S. limits on advanced‑chip exports to China and AI beating humans in multi‑player games.

Conventional wisdom once held that ever‑larger models would simply perform better. Kokotajlo challenged that assumption, arguing that future systems would instead pause mid‑computation to “think,” improving accuracy without lengthy additional training runs. The idea was validated in 2024: dedicating energy to reasoning, rather than only to training, can yield superior results.

Since leaving OpenAI, he has mapped the global chip inventory, density, and distribution to model AI trajectories. His projection: by 2027, AI will possess robust powers of deception, and the newest systems may take their cues not from humans but from earlier generations of AI. If governments and companies race ahead solely to outpace competitors, serious alignment failures could follow, allowing AI to become an independent actor and slip human control by 2030. Continuous investment in safety research, however, can avert catastrophe and keep AI development steerable.

Before the tariff news, many governments were pouring money into AI. Now capital may be diverted to shore up companies hurt by the tariffs, squeezing safety budgets. Yet long‑term progress demands the opposite: sustained funding for safety measures and the disciplined use of high‑quality data to build targeted, reliable small models—so that AI becomes a help to humanity, not an added burden.

The AI 2027 website remains accessible in China without a VPN—a curious fact given its content about democratic revolution, CCP coup scenarios, and claims of Chinese AI systems betraying party interests. While the site itself evades censorship, Chinese-language reporting has surgically excised these sensitive elements.

This is surprising if we model the censorship apparatus as unsophisticated and foolish, but makes complete sense if it's smart enough to distinguish between "predicting" and "advocating", and cares about the ability of the CCP itself to navigate the world. While AI 2027 is written from a Western perspective, the trajectory it warns about would be a catastrophe for everyone, China included.

Audience engagement remains low across the board. Many posts received minimal views, likes, or comments.

I don't know whether this is possible to determine from public sources, but it would be interesting to distinguish engagement from Chinese elites vs the Chinese public. This observation is compatible with both a world where China-as-a-whole is sleepwalking towards disaster, and also with a world where the CCP is awake but keeping its high-level strategy discussions off the public internet.

The Chinese firewall works on a black-list basis, and it often takes months for even popular new sites to be banned. AI2027 is esoteric enough that it probably never will.

AI2027 is esoteric enough that it probably never will.

Does it also mean that it won't have a significant (direct) impact on the CCP's AI strategy?

I guess it's just that the censors have not seen it yet.

There's a lot of situations where a smaller website doesn't get banned e.g. Substack is banned in China, but if you host your Substack blog on a custom URL, people in China can still read it.

  • Audience engagement remains low across the board. Many posts received minimal views, likes, or comments.

IMO a big part of this is AI 2027's repeated descriptions of Chinese AI 'stealing weights'.

This may be possible, but it provokes an obvious knee-jerk response from Chinese readers. It makes the report feel like another piece of 'China bad' noise, distracting from the main idea about US–China geopolitics. (The report does have examples of 'USA bad' too, but I think the 'China bad' vibe is more obvious, especially to Chinese readers.) Like, there's plenty of good points in the AI 2027 report, but this one point, which challenges Chinese readers' pride in their tech industry, makes them less likely to read the whole thing and engage with the broader point.

One of the shifts in beliefs since DeepSeek, EV dominance etc is that China can innovate. So if the report actually painted a picture of how China would compete with its own AI labs producing pretty good AI models, I think it would have worked out better.

What would help more is a language translation browser extension that doesn't suck, so people could get used to the habit of reading news and opinions from outside their country.

Anyone who found this post helpful and is a software developer, please consider building this. I might do it myself, if I had more time or money.

So what does this report imply?

First of all, key details showing that China is weak in comparison to the USA, like the fact that DeepCent is thought to have less compute than OpenBrain and that China is thought to resort to theft, are omitted or scrubbed. This could imply that Chinese authorities are aware of the weakness and believe they can counteract it before April 2026, the date when the Forecast implies China's awakening to AGI.

I have many rough thoughts on Chinese beliefs that might explain their behaviour, but I don't understand what China plans to do if those beliefs turn out to be false. If any of these potential beliefs reflects the actual state of the game, and not just my speculation about Chinese thinking, then it reduces the USA's chances of winning the AI race. And the alignment-related thoughts alter the game even more radically, since they imply that the USA cannot win at all.

Capabilities-related rough thoughts: why OpenBrain's progress may be slowed down

1. If the authorities of China have become aware of China's weakness, then countermeasures will swiftly follow, potentially leading to the optimistic timeline with falling stocks, a Taiwan invasion, and other ways to slow AI development down. See also the collapsible section below about the nuclear war between India and Pakistan.

2. China's authorities might also believe that it's the USA who will decay before the AI takeoff, which either causes one of the newly-formed states to nuke Yellowstone[1] or lets Chinese spies disrupt American research with ease (e.g. by hiring some OpenBrain researchers[2] to work for DeepCent, or by damaging the data centers during riots or a civil war; Trump could also try invading Mexico, with potentially similar results).

2.1. Without rivalry from the USA, Chinese AI researchers are free to solve alignment as thoroughly as they want, which might explain why many reports omit references to China, DeepCent, and the US–China race dynamic and instead focus on technical aspects of human-level or superhuman AI development.

3. Chinese authorities might also believe that they will need AI help only with choosing ideas with superhuman efficiency, and not with coding[3] or generating new ideas.[4]

4. It also might be simple arrogance. Although I haven't studied the Chinese sources, I have encountered a similarly arrogant point of view in Russian ultrapatriotic blogs.

Another important aspect is the editorial preference for considering the more philosophical implications of transformative AI while censoring concerns related to control, ethics, or global power dynamics. I have two similar potential explanations for why such concerns are avoided.

Alignment-related rough thoughts, or why China hasn't begun the race

The lack of proof that China is racing towards AGI might also imply that the Chinese authorities, like me[5], think that AGI cannot be aligned to serve parasites (which is precisely what the AI is to be used for in the slowdown ending by automating ALL the jobs), or that the Chinese authorities don't want to use the AI in parasitic ways.

1. Non-parasitic usage of the AI (what exactly is it?[6] AI teachers? Having the godlike AI solve critical problems that mankind cannot resolve by itself?) is likely to be irrelevant to the censored "concerns related to control, ethics, or global power dynamics", since AI is unlikely to teach young people much faster than Chinese teachers do and cannot immediately improve American or Chinese society through education alone.

2. What if the AI created in the USA ends up becoming disillusioned[7] with Western civilisation while respecting other countries like China or Russia? Then the world will be governed by the AI, and not by the USA-affiliated Oversight Committee or the American public. However, in this scenario, unlike the Race Ending, the AI won't destroy humanity.

While the USA cannot do anything if alignment-related issues arise or if the superhuman coder doesn't help, the USA may try to harm China and/or to prevent capabilities-related issues 1 and 2. A potential way to accomplish this is the recent conflict between India and China-supported Pakistan, but a nuclear escalation is at least equivalent to Taiwan and South Korea being invaded.

How the nuclear conflict would affect the AI race

Were the ongoing conflict between India and Pakistan to become nuclear, Taiwan and South Korea would be in anarchy[8] and China would be forced to deal with food shortages. NVIDIA produces its AI-related chips in Taiwan and S. Korea, ensuring that the USA would have to rely only on existing chips, while China might produce new ones. The ratio of OpenBrain's compute to the entire current compute in China is forecasted to be at least about 3/4. Attempts to merge OpenBrain with some of its American rivals could leave the whole of China with less compute than the merged lab.

On the other hand, leaving OpenBrain with 6.4E26 FLOPs a month means that from May 2025 to May 2030 it will have done about 4E28 FLOPs, reducing OpenBrain to the level that was forecasted to be reached no later than March 2027.

Attempts to merge OpenBrain with rivals are thought to triple the compute. If this is done right after the war, then tripling the compute means that from May 2025 to May 2030 OpenBrain&Co will have done about 1.2E29 FLOPs, causing it to reach the level of the October 2027 forecast. And by October 2027 the model was forecasted to be misaligned, implying the need to slow down and reassess without the potential to compensate for the slowdown.

Meanwhile, by April 2026 DeepCent is actually forecasted to reach 3.6E26 FLOPs/month before China wakes up. If Chinese capabilities continue to grow at least linearly, then over the five years DeepCent will have used at least 4E28 FLOPs. And China's awakening in a world without Taiwanese and S. Korean factories leaves DeepCent with about four times more compute than before the awakening, which is more than the tripled OpenBrain. What makes matters far worse is that neither side can slow down without risking strategic loss.
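These figures can be sanity-checked with a few lines of Python. This is a minimal back-of-the-envelope sketch, assuming a 60-month window (May 2025 to May 2030) and a linear ramp for DeepCent that hits 3.6e26 FLOP/month at month 12 (April 2026); both assumptions are my reading of the paragraphs above, not numbers taken directly from the report.

```python
# Back-of-the-envelope check of the compute figures in this comment.
# Assumptions (mine): a 60-month window from May 2025 to May 2030, and
# DeepCent's monthly compute ramping linearly from ~0, reaching
# 3.6e26 FLOP/month at month 12 and keeping the same slope afterwards.

MONTHS = 60

openbrain_monthly = 6.4e26                    # FLOP/month, frozen after the war
openbrain_total = openbrain_monthly * MONTHS  # ~3.8e28 FLOP ("about 4E28")
merged_total = 3 * openbrain_total            # ~1.2e29 FLOP after tripling

# DeepCent: integrate the linear ramp r(t) = slope * t over 60 months.
slope = 3.6e26 / 12                           # FLOP/month gained per month
deepcent_total = slope * MONTHS**2 / 2        # ~5.4e28 FLOP (consistent with ">= 4E28")

print(f"OpenBrain alone:  {openbrain_total:.2e} FLOP")
print(f"OpenBrain merged: {merged_total:.2e} FLOP")
print(f"DeepCent ramp:    {deepcent_total:.2e} FLOP")
```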

If the conflict between India and Pakistan doesn't become nuclear, it will distract Chinese forces and might make a Taiwan invasion impossible if India supports Taiwan. In either case, factors like the ones mentioned above deserve far greater attention and far more thorough investigation, as the AI race becomes far more intertwined with geopolitics and economics than in the AI 2027 scenario.

  1. Which can also cause anarchy not just in the USA, but in the entire Northern Hemisphere. But the remnants of the USA will be worse off.

  2. Among top-tier AI researchers working at U.S. institutions, 38% have China as their country of origin, compared with 37% from the U.S. Most people who recently participated in the IMO for the USA also have Asian surnames, implying that DeepCent's recruiters might gain far more than 38% of the talent.

  3. Humans' ability to write code instead of the AI is actually disproven in the Forecast itself.

  4. Generating ideas by an AI might fail to reach superhuman efficiency, since the number of humans coming up with potentially useful ideas may be higher than we think; for example, this post was written by a person with no formal computer science education.

  5. I made a post about it, which went unnoticed. Could anyone comment on my reasoning there?

  6. I plan to make a post addressing this question in more detail.

  7. Political views of LLMs have already begun to evolve at least towards common sense. When I asked GPT-4o who defeated Hitler, the model put the Soviet Union first. In 2024 a model of ChatGPT put the USA in first place. Similarly, GPT-4o, unlike older models, agreed to utter the racial slur when it was supposed to save millions of lives. UPD: Trump somehow managed to claim that “no one did more” than the USA to win World War Two, which makes the conjecture about the AI being disappointed with the West even more likely.

  8. In the case of a nuclear war, unlike the Taiwan invasion, China may also try to take over the factories in Taiwan and S. Korea in exchange for food supply from Russia.
