One of the members of the committee that authored this (if not the chairperson) is Yi Zeng. He has persistently engaged in conversations with CSER and other AI ethics groups in the UK, Australia, at the UN, etc.; I've met him at a few events, and I believe that most of the values quoted above are sincerely held. My main concern here is rather that these values are still stated in terms that may be too vague to interpret and enforce uniformly as a practical regulation throughout the large Chinese AI industry. But it is no doubt a step in the right direction toward rules that are actually binding on relevant actors.
confirmed. as far as i can tell (i’ve talked to him for about 2h in total) yi really seems to care, and i’m really impressed by his ability to influence such official documents.
My main concern here is rather that these values are still stated in terms that may be too vague to interpret and enforce uniformly as a practical regulation throughout the large Chinese AI industry.
It seems like the Chinese government believes it can enforce good behavior from tech companies even when the regulations are vague.

If you are Jeff Bezos and violate a vague rule of the US government, little will happen to you. If you are Jack Ma and violate a vague rule that the Chinese government believes in, you have a problem.
This has the corollary that it might be helpful to have an EA organization that regularly reviews the actions of Chinese companies and builds the capability to run PR campaigns highlighting Chinese companies that violate those rules.
This is not-even-wrong-level hollow, equivalent to "do good things, don't do bad things".
I agree, it could've been much better. But AFAIK it's the least hollow governmental AI X-risk policy so far.
I would classify the British National AI Strategy as the second best.
Although it explicitly mentions the "long term risk of non-aligned Artificial General Intelligence", the recommended specific actions are even more vague and non-binding ("assess risks", "work to understand", etc.).
The fact that the governments of major countries are starting to address AI X-risk is both joyous and frightening:
If even the comatose behemoth of the gov has noticed the risk, then AGI is indeed much closer than most people think.
Reasoning doesn't work like that. The information flow is almost entirely from the subtle hints in reality, to people like MIRI, and then to the government. Maybe update on governments being slightly less comatose, or on MIRI having a really good PR team.
Once we make the assumption that governments are less on the ball than MIRI, and see what MIRI says, the governments' actions tell us almost nothing about AI.
It's disappointing because China's high degree of centralization and disregard for privacy, despite all its drawbacks, would at least offer some major advantages in combating AI risk. But from the wording of this document I don't get the sense that China is seriously considering AI risk as a threat to its national security.
A serious attempt would look more like "put in place a review structure that identifies and freezes all AI research publications with potentially serious implications for AI risk, turning them into state secrets if necessary".
The fact that the governments of major countries are starting to address AI X-risk is both joyous and frightening
As far as I can tell, this is simply not true. This is not what it looks like for a government to be genuinely concerned with a problem, even if it's just a small bit of concern. This is not how things in China get done. If you've delved into Chinese bureaucratic texts before, this is what their version of a politically correct, hollow fluff piece looks like.
I would guess that a key question is whether this is intended to be a piece of PR, or something that is expected to actually be followed.
It seems to be an actual regulation:
This specification applies to natural persons, legal persons, and other related institutions engaged in related activities such as artificial intelligence management, research and development, supply, and use.
This specification shall come into force on the date of promulgation...
There is also no official English translation, indicating that the text is for internal use.
Thank you for bringing this to my attention. It combines two of my favorite things: China and ML.
China (PRC) published a new AI policy: "Ethical Norms for the New Generation Artificial Intelligence" (2021-09-26).
A surprisingly large part of the document's content is dedicated to topics related to AI X-risk (although existential risk is never addressed directly).
According to the document, it regulates AI research conducted by pretty much any entity in China, from companies to natural persons.
The original official document in Chinese is here (archive). Other AI policies are available here.
Some excerpts that could be relevant to AI X-risk: