I'm a Westerner, but did business in China, have quite a few Chinese friends and acquaintances, and have studied a fair amount of classical and modern Chinese culture, governance, law, etc.
Most of what you're saying tracks with my experience. A lot of Western ideas are generally regarded as either "sounds nice but is hypocritical and not what Westerners actually do" (a common viewpoint until ~10 years ago) or, more recently, "actually no, many young Westerners are sincere about their ideas - they're just crazy in an ideological way about things that can't and won't work" (白左, etc.)
The one place I might disagree with you is that I think mainland Chinese leadership tends to have two qualities that might be favorable towards understanding and mitigating AI risk:
(1) The majority of senior Chinese political leadership are engineers and seem intrinsically more open to having conversations along science and engineering lines than the majority of Western leadership. Pathos-based arguments, especially emerging from Western intellectuals, do not get much uptake in China and aren't persuasive. But concerns around safety, second-order effects, th...
Thanks for writing this.
Overall, I don't like the post much in its current form. There's ~0 evidence (e.g. from Chinese newspapers) and very little actual argumentation. I like that you give us a local view, but putting in a few links to back your claims would be very much appreciated. Right now it's hard to update on your post, given that the claims are very empirical and come without any external sources.
A more minor point: I also disagree with the statement "A domestic regulation framework for nuclear power is not a strong signal for a willingness to engage in nuclear arms reduction." I think it's definitely a signal.
A "moonshot idea" I saw brought up is getting Yudkowsky's Harry Potter fanfiction translated into Chinese (please never ever do this).
Can you expand on this? Why would it be a bad idea? I have interacted with mainland Chinese people (outside of China) and I'm not really making the connection.
Let's just say that weirdness in China is very different from weirdness in the West. AI safety isn't even a weird concept here. It's something people talk about, briefly think over, then mostly forget, like Peter Thiel's new book. People are generally receptive to it. What AI safety needs to get traction in the Chinese idea sphere is to rapidly dissociate itself from really, really weird ideas like EA. EA is like trying to shove a square peg into the round hole of Chinese psychology. It's a really bad sign that the AI safety toehold in China is clustered around EA.
Rationality is pretty weird too, and is honestly just extra baggage. Why add it to the conversation?
We don't need rationality or EA to get Chinese to care about AI safety. Trying to import the Western EA-AI safety-Rationality memeplex wholesale is both unnecessary and detrimental.
Interestingly, Yud is attractive to the Russian mindset (similarly to Karl Marx). I once heard 12-year-old children discussing HPMOR on the beach, and their parents were not rationalists.
From my observations, the Chinese mindset is much more different from the American one than the Russian mindset is.
In comparison to the Chinese, Russians are just slightly unusual Europeans, with mostly the same values, norms, and worldviews as Americans.
I attribute this to China's 3000+ years of relatively strong cultural isolation, and to the many centuries of Chinese rulers running all kinds of social engineering (including the purposeful modification of the language according to political goals).
We don't need rationality or EA to get Chinese to care about AI safety. Trying to import the Western EA-AI safety-Rationality memeplex wholesale is both unnecessary and detrimental.
I agree with this, not out of any particular expertise on China, but instead because porting a whole memeplex across a big cultural boundary is always unnecessary and detrimental.
I just want to note that rationality can fit into the Chinese idea sphere very neatly; it's just that figuring out how to make it work takes real effort.
The current form, e.g. the Sequences, is wildly inappropriate. Even worse, a large proportion of the core ideas would have to be cut out. But if you focus on things like human intelligence amplification, forecasting, and cognitive biases, it will probably fit into the scene very cleanly.
I'm not willing to give any details until I can talk with some people and get solid estimates on the odds of bad outcomes, like the risk that rationality spreads but AI safety doesn't, and then the opportunity is lost. The "baggage" thing you mentioned is worth serious consideration, of course. But I want to clarify that yes, EA won't fit, but rationality can (if done right, which is not easy but also not hard), so please don't rule it out prematurely.
The most common response I got when I talked to coworkers about AI risk wasn't denial or an attempt to minimize the problem. It was generally something like "That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship." And then a shrug before they went back to their tasks. I don't see how rationality helps with anything. We know what the problem is, and just want to be paid to solve it.
I can't really explain why HPMOR is insanely cringe in a Chinese context to someone without the cultural background. It's not something you can argue people out of. Just trust me on this one.
Is it "insanely cringe" for different reasons than it is "insanely cringe" for English audiences? I suspect most Americans, if exposed to it, would describe it as cringe. There is much about it that is cringe, and I say this with some love.
"That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship."
The Chinese stated preferences here closely track Western revealed preferences. Americans are more likely to dismiss AI risk post hoc in order to justify making more money, whereas it seems that Chinese people are less likely to sacrifice their epistemic integrity in order to feel like a Good Guy. Hire people, and pay them money!
When it comes to helping strangers, the Chinese help a lot less. There are articles like https://medium.com/shanghai-living/4-31-why-people-would-usually-not-help-you-in-an-accident-in-china-c50972e28a82 about Chinese people not helping strangers who have had an accident.
When it comes to the accident that Peter Singer refers to, the article describes it as follows:
There were a couple of extreme cases where Chinese people refusing to help led to the death of the person in need. Such is the case of Wang Yue, a two-year-old girl who was wandering alone in a narrow alley because her mother was busy doing the laundry. She was run over by two cars, and 18 people who passed through the area didn't even look at her. Later a public anonymous survey revealed that 71% thought that the people who passed by didn't stop to help her because they were afraid of getting into trouble.
The article goes on to say:
...That's not the only case. In 2011 an 88-year-old Chinese man fell in the street and broke his nose; people passed him by, no one helped, and he died suffocating in his own blood. After an anonymous poll the result was the same: people didn't blame those who didn't help, because the rece...
An example of the dangers of helping in China from the article that ChristianKl talked about.
...While individualism in China is a big thing, this situation is more related to the fear of being accused of being responsible for the accident, even when you just tried to help.
The most famous case happened in 2006 in the city of Nanjing, a city located to the west of Shanghai. Xu Shoulan, an old lady trying to get out of a bus, fell and broke her femur. Peng Yu was passing by and helped her, taking her to the hospital and giving her ¥200 (~30 USD) to pay for her treatment. After the first diagnosis Xu needed femur-replacement surgery, but she refused to pay for it herself and demanded that Peng pay, since according to her he was responsible for the accident. She sued him, and after six months she won; Peng had to cover all of the old lady's medical expenses. The court stated that "no one would, in good conscience, help someone unless they felt guilty".
While this incident wasn't the first, it was very well known, and it revealed one of the unwritten rules of China. [I bolded this] If you help someone it's because you feel guilty about what happened, so in some way you...
Thanks for writing this post! I want to note a different perspective, though unlike OP I have not lived in China since 2015 and am certainly more out of touch with how the country is today.
I do observe some of the same dynamics that OP describes, but I want to point out that China is a really big country with inherently diverse perspectives, even in the current political environment. I don’t see the dynamics described in this post as necessarily the dominant one, and certainly not the only one. I know a lot of young people, both in my social circle and online, that share many of the Western progressive values such as the pursuit of equality, freedom, and altruism. I see many people trying their best to live a meaningful life and do good for the world. (Of course, many people are not thinking about this at all, but that is the same everywhere. It's not like these concerns are that mainstream in the West.) As a small piece of evidence, 三联生活周刊 did an interview with me about AI safety recently, and it got 100k+ views on WeChat and only positive comments. I’ve also had a few people reach out to me expressing interest in EA/AI safety since the interview came out.
...You can't just hope
A "moonshot idea" I saw brought up is getting Yudkowsky's Harry Potter fanfiction translated into Chinese (please never ever do this).
This has already been done, and has pretty good reviews and some discussions.
I've looked through the EA/Rationalist/AI Safety forums in China
If these are public, could you post the links to them?
there is only one group doing technical alignment work in China
Do you know the name of the group, and what kinds of approaches they are taking toward technical alignment?
This seems to be arguing against a starry-eyed idealist case for an "AI disarmament treaty", but not really against a cynical/realistic case. (At first I was going to say "arguing against a strawman", but no, there are in fact lots of starry-eyed idealists in alignment.)
Here's my cynical/realistic case for an "AI disarmament treaty" (or something vaguely in that cluster) with China. As the post notes, the regulations mostly provide evidence that Beijing sees near-term AI as a potential threat to stability that needs to be addressed with regulation. For purposes of an AI treaty, that's plausibly all we need. Near-term AI is a potential threat to stability from the CCP's perspective. That's true whether the AI is built in China or somewhere else; American-built AI is still a threat to stability. So presumably the CCP would love for the US to adopt rules limiting new LLMs. If the US comes along and says "we'll block training of big new AIs if you do", the CCP's answer is plausibly "yes definitely that sounds excellent".
And sure, China won't be working much on AI safety. That's fine; the point of an "AI disarmament treaty" is not to get literally everyone working on safety. They don't even need to be motivated by X-risk. If they're willing to commit to not build big new AI, then it doesn't really matter whether they're doing that for the same reasons we want it.
If only we could spread the meme of irresponsible Western powers charging head-first into building AGI without thinking through the consequences and how wise the Chinese regulation is in contrast.
I think it'll be easier in a few years to demonstrate the foolishness of open-source AI.
Since we are talking regulation and/or treaties, both of which are government concerns, is there anything to the idea of working in the context of party slogans (by which I mean tifa [提法])?
What I have in mind is something like this:
I think this might work because:
It will not work if:
That being said, I think it might be a good exercise, at least from the English side, as a direct attempt at working with some Chinese conceptualization of the motivations for the problem of alignment. A few examples of wha...
@Roman_Yampolskiy and I published a piece on AI xrisk in a Chinese academic newspaper: http://www.cssn.cn/skgz/bwyc/202303/t20230306_5601326.shtml
We were approached after our piece in Time and asked to write for them (we also gave quotes for another provincial newspaper). I have the impression (I've also lived and worked in China) that leading Chinese decision makers and intellectuals (or perhaps their children) read Western news sources like Time, NYTimes, Economist, etc. AI xrisk is currently probably mostly unknown in China, and people who stumble upon it might have trouble believing it (as they have in the West). But if/when we have a real conversation about AI xrisk in the West, I think the information will seep into China as well, and I'm somewhat hopeful that if this happens, it could prepare China for cooperation to reduce xrisk. In the end, no one wants to die.
Curious about your takes though, I'm of course not Chinese. Thanks for the write-up!
I've heard people be somewhat optimistic about this AI guideline from China. They think that this means Beijing is willing to participate in an AI disarmament treaty due to concerns over AI risk.
I'm curious where you've seen this. My impression from reading the takes of people working on the governance side of things is that this is mostly being interpreted as a positive sign because it (hopefully) relaxes race dynamics in the US. "Oh, look, we don't even need to try all that hard, no need to rush to the finish line." I haven't seen anyone seri...
I've only seen vaguely positive vibes from people who needed Google Translate in order to understand it, like this post from Zvi. He comes to the conclusion that "All of that points, once again, to an eager partner in a negotiation." This isn't obvious at all. Again, a willingness to regulate nuclear power is not a strong signal for a willingness to participate in a nuclear disarmament treaty - all states will eventually have some level of nuclear regulation.
"Damn it, we're falling behind! GPT4 is way better than anything we have."
OpenAI's bans on Chinese users have really hurt public knowledge of GPT4, for what that's worth. The small amount of effort it takes to get a US phone number was enough to discourage casuals from getting hands-on experience, although there are now clone websites using the OpenAI API going up and down all the time. But yeah, awareness of just how good it is isn't as mainstream as in the US.
As far as I can tell, GPT4/ChatGPT works great in Chinese, even without fine-tuning. And it blows the Chinese-specialized models from Baidu and friends out of the water. It seems like a bit of a Sputnik moment.
OpenAI does not allow creating an account with a Chinese/HK phone number (specifically noting that they are blocking service in those areas). I also cannot access the ChatGPT website from a mainland IP, but can from HK. I vaguely remember OpenAI citing US law as a reason they don't allow Chinese users access - maybe legislation passed as part of the chip ban?
Hi! I run a reasonably popular podcast (ChinaTalk) and am also very frustrated with the misconceptions people have around this topic! Please reach out - I'm at jordan@chinatalk.media
Thanks very much for this dose of reality. So maybe a Western analogy for the attitude toward "AI safety" at Chinese companies is that, at first, it will be comparable to the attitude at Meta. What I mean by this: Microsoft works with OpenAI, and Google works with Anthropic, so both work with organizations that at least talk about the danger of AI takeover, alongside more mundane concerns. But as far as I can tell, Meta does not officially acknowledge AI takeover as a real risk at all. The closest thing to an official Meta policy on the risk of AI takeove...
Do you really think that a scientist is going to walk up to his friend from the Politburo and say "Hey, I know AI is a central priority of ours, but there are a few fringe scientists in the US asking for treaties limiting AI, right as they are trying their hardest to cripple our own AI development. Yes, I believe they are acting in good faith"
This part stood out to me. What happens if you do this is that your Politburo friend answers "that's obviously dangerous propaganda, so I'm going to generate counterarguments against it so that no one entertains it ever again, ...
It is far more likely that Beijing sees near-term AI as a potential threat to stability that needs to be addressed with regulation.
This suggests that Beijing is making regulations that are driven by short-term thinking and not careful thinking over longer timelines.
Generally, my perspective on Chinese decision-making is that government policy is written by people who are able to think over longer timelines.
To me "AGI has to be aligned with CEV" and the sentence of "AGI has to be aligned with socialist values" sound very similar to me. Anything ...
When it comes to having an international treaty, whether or not China will agree depends a lot on the content of the treaty.
If the Chinese put their guidelines into practice, they might be open to having a treaty that basically says all companies have to abide by the guidelines.
The proposed requirements on training data quality can prevent models like GPT-3 and GPT-4 from being trained.
Legal liability for anything the model does is also an effective way to slow down AI development.
Old lurker, new account! I need to go to work soon, so some very quick high-level thoughts:
I feel like we share some of the same frustration at mainstream/Western EA, or at other alignment speculators without a deep understanding of China. I can crudely gesture at their wrong inferences stemming from assumptions orthogonal to the truth, implying fundamental misconceptions in their model, but in the end it comes down to context: much of this rests on sub-verbal, subtle differences that I can't succinctly identify without putting in a few months of effort first. Often when I t...
I've looked through the EA/Rationalist/AI Safety forums in China
Hi, do you know Concordia AI in Beijing? They are pretty active:
https://concordia-ai.com/
https://www.zhihu.com/org/an-yuan-ai-6/posts
Does Anyuan (安远) have a website? I haven't heard of them and am curious. (I've heard of Concordia Consulting https://concordia-consulting.com/ and Tianxia https://www.tian-xia.com/.)
Good post.
For Westerners looking to get a palatable foothold on the priorities and viewpoints of the CCP (and Xi), I endorse "The Avoidable War", written last year by Kevin Rudd (former Prime Minister of Australia, speaks Mandarin, has worked in China, loads of diplomatic experience - imo about as good a person as exists to interpret Chinese grand strategy and explain it from a Western viewpoint). The book is, imo, impressively objective in its analysis for a politician.
Some good stuff in there explaining the nature of Chinese cynicism about foreign motivations that echoes some of what is written in this post, but with a bit more historical background and strategic context.
I did a master's in data science at Tsinghua University in Beijing. Maybe I'm a little biased, but I thought they knew their stuff. Very math heavy. At the time (2020), the entire department seemed to think graph networks, with graph-based convolutions and attention, were the way forward toward advanced AI. I still think this is a reasonable thought. There was no mention of AI safety, though I did not know about the community (or the concern) then.
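(For anyone who hasn't seen that combination, here's a minimal sketch of a single-head graph attention layer in PyTorch, in the spirit of GAT. This is my own illustrative reconstruction with made-up names and dimensions, not anything from the Tsinghua curriculum.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Minimal single-head graph attention layer (GAT-style), for illustration."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared node projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention scorer

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) adjacency matrix, assumed
        # to include self-loops so every node has at least one neighbor.
        h = self.W(x)                                    # (N, out_dim)
        N = h.size(0)
        # Score each (i, j) pair from the concatenated projected features.
        pairs = torch.cat(
            [h.unsqueeze(1).expand(N, N, -1),
             h.unsqueeze(0).expand(N, N, -1)], dim=-1)   # (N, N, 2*out_dim)
        scores = F.leaky_relu(self.a(pairs).squeeze(-1)) # (N, N)
        # Mask non-edges so attention only flows along graph edges.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)             # (N, N)
        return attn @ h                                  # neighborhood aggregation

# Toy usage: 5 nodes with 8 features each, random symmetric graph.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T + torch.eye(5)) > 0).float()         # symmetrize + self-loops
out = GraphAttentionLayer(8, 16)(x, adj)                 # -> shape (5, 16)
```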
The main thing seems to be money. How do we get more money into alignment in a way that is not detrimental, or repulsive to the people we want to attract to working on it?
I'm trying to think of ideas here. As a recap of what I think the post says:
^let me know if I am understanding correctly.
Some ideas/thoughts:
It sounds like you're skeptical about EA field building because most Chinese people find "changing the world" childishly naive and impractical. Do you think object-level x-risk field building is equally doomed?
For example, if 看理想 (an offshoot of publishing house 理想国 that produces audio programs about culture and society) came out with an audio program about x-risks (say, 1-2 episodes about each of nuclear war, climate change, AI safety, biosecurity), I think the audience would be fairly receptive to it. 梁文道, one of the leaders of 看理想, has shared on his pod...
We know we can change the world - it's not even a question when your grandparents are literal peasant farmers, your parents are industrial workers, and you're a scientist. It's just that altruism itself is cringe. Volunteering is cringe. I don't know how to explain this to Westerners in a way they can understand.
I've heard people be somewhat optimistic about this AI guideline from China. They think that this means Beijing is willing to participate in an AI disarmament treaty due to concerns over AI risk. Eliezer noted that China is where the US was a decade ago with regard to AI safety awareness, and expressed genuine hope that his idea of an AI pause could take place with Chinese buy-in.
I also note that no one expressing these views understands China well. This is a PR statement. It is a list of feel-good statements that Beijing publishes after any international event. No one in China is talking about it. They're talking about how much the Baidu LLM sucks in comparison to ChatGPT. I think most arguments about how this statement is meaningful are based fundamentally on ignorance - "I don't know how Beijing operates or thinks, so maybe they agree with my stance on AI risk!"
Remember that these are regulatory guidelines. Even if they all become law and are strictly enforced, they are simply regulations on AI data usage and training - not a signal of willingness to pursue an AI-reduction treaty. It is far more likely that Beijing sees near-term AI as a potential threat to stability that needs to be addressed with regulation. A domestic regulation framework for nuclear power is not a strong signal for a willingness to engage in nuclear arms reduction.
Maybe it is true that AI risk in China is where it was in the US in 2004. But the US 2004 state was also similar to the US 1954 state, so the comparison might not mean that much. And we are not Americans. Weird ideas are penalized a lot more harshly here. Do you really think that a scientist is going to walk up to his friend from the Politburo and say "Hey, I know AI is a central priority of ours, but there are a few fringe scientists in the US asking for treaties limiting AI, right as they are trying their hardest to cripple our own AI development. Yes, I believe they are acting in good faith, they're even promising to not widen the current AI gap they have with us!" Well, China isn't in this race for parity or to be second best. China wants to win. But that's for another post.
Remember that Chinese scientists are used to interfacing with our Western counterparts and know the right words to say - "diversity", "inclusion", "no conflict of interest" - to get our papers published. Just because someone at Beida makes a statement in one of their papers doesn't mean the intelligentsia is taking this seriously. I've looked through the EA/Rationalist/AI Safety forums in China, and they're mostly populated by expats or people physically outside of China. Most posts are in English, and they're just repeating/translating Western AI Safety concepts. A "moonshot idea" I saw brought up is getting Yudkowsky's Harry Potter fanfiction translated into Chinese (please never ever do this). The only significant AI safety group is Anyuan (安远), and they're only working on field-building. Also, there is only one group doing technical alignment work in China; its founder was paying for everything out of pocket and was unable to navigate Western non-profit funding. I've still not figured out why he wasn't getting funding from Chinese EA people (my theory is that both sides assume that if funding were needed, the other side would have already contacted them).
You can't just hope an entire field into being in China. Chinese EAs have been doing field-building for the past 5+ years, and I see no field. If things keep on this trajectory, it will be the same in 5 more years. The main reason I could find is the lack of interfaces, people who can navigate both the Western EA sphere and the Chinese technical sphere. In many ways, the very concept of EA is foreign and repulsive to the Chinese mindset - I've heard Chinese describe an American's reason to go to college (wanting to change the world) as childishly naive and utterly impractical. This is a very common view here and I think it makes approaching alignment from an altruistic perspective doomed to fail. However, there are many bright Chinese students and researchers recently laid off who are eager to get into the "next big thing", especially on the ground floor. Maybe we can work with that.
I mostly made this post because I want to brainstorm possible ideas/solutions, so please comment if you have any insights.
Edit: I would really appreciate it if someone could get me on a podcast to discuss these ideas.