The Amelia Bedelia defense.
I acknowledge this. My thinking is a bit scattered, and my posts are often just an attempt to articulate publicly intuitions that I have no other outlet to discuss and refine.
I'm saying, first off, that there is no moat. Yet I observe people on this and similar forums with the usual refrain: but look, the West is so far ahead in doing X in AI, so we shouldn't use China as a bogeyman when discussing AI policy. I claim this is bogus. The West isn't far ahead in X, because everything can be fast-copied, stolen, or brute-forced, and limits on hardware, etc. appear ineffective. Many of the arguments for disregarding China when setting AI safety policy assume it will be perpetually a few steps behind. But if they are getting similar performance, then they aren't behind.
So if there is no moat, and we can expect peer performance soon, then we should be worried, because we have reason to believe that if scaling + tweaks can reach AGI, then China might conceivably get AGI first, which would be very bad. I have seen replies to this point of: well, how do you know it would be that much worse? Surely Xi wants human flourishing as well. And my response is: governments do terrible things. At least in the West, the public can see these terrible things and sometimes say, hey: I object. This is bad. The PRC has no such mechanism. So AGI would be dangerous in their hands in a way that it might not be...at least initially, in the West...and the PRC is starting from a not-so-pro-flourishing position (Uighur slavery and genocide, pro-Putinism, invade-Taiwan fever, debt-trap diplomacy, secret police abroad, etc.).
If you think AGI kills everyone anyway, then this doesn't matter. If you think AGI just makes the group possessing it really powerful and able to disempower or destroy competitors, then this REALLY matters, and policies designed to hinder Western AI development could mean Western disempowerment, subjugation, etc.
I make no guarantees about the coherence of this argument and welcome critiques. Personally, I hope to be wrong.
Before 30, I was also a moron. But I only know this because I had an ideological epiphany after that and my belief system changed abruptly. Scales-fell-from-my-eyes type situation. When I turned 33, I started keeping a diary because I noticed I have a terrible memory for even fairly recent things, so maybe going forward subtle changes will become more salient.
That said, some things seem more impervious to change. For instance, the "shape" of things that give you pleasure. Maybe you liked 3D puzzles as a child and now you like playing in Blender in your free time. Not the same thing, but the same shape.
Good point.
I'd like to be convinced that I'm wrong, but I just watched a Kling AI video of Justin Timberlake drinking soda and it was pretty real looking. This, plus the delay of OpenAI's Voice mode, plus Yi-Large sitting in the top 10 on the LMSYS leaderboard after the company has existed for only a year, plus just the general vibe, has me really convinced of the following.
Predictions:
(Item removed. I realized that the paper I was referring to would affect inference-time compute, not training compute.)
By year's end, some Chinese-made LLM will be atop the LMSYS leaderboard. (60%)
Beyond-Sora text-to-video and image-to-video generation will be wide-released to the general Chinese public by end of year (80%). Capable of generating multiple minutes of video (70%, given the first statement). Generation times less than half that of Sora (80%). Compute less than half that of Sora (90%). (See the sketch after this list for how these conditional figures combine.)
Chips of similar quality to ones produced by TSMC or Samsung will be produced by a Chinese firm within 2 years (50%). This will be accomplished either by using a new lithographic process to sidestep the need for embargoed advanced etching machines, or by reverse engineering one of the latest etching machines (smuggled from Korea or Japan) (80%, given the first statement is true).
Advanced, inexpensive Chinese personal robots will overwhelm Western markets, destroying the current Western robotics industry in the same way that the West's small kitchen appliance industry was utterly crushed (70%). Data from these robots will make its way to the CCP (90%, given the first statement is true).
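Since a few of these forecasts stack a conditional estimate on top of a base estimate, here is a minimal sketch (my own illustration, using only the numbers stated above) of how the chain rule, P(A and B) = P(A) × P(B | A), turns them into unconditional probabilities:

```python
# Minimal sketch: combining the base and conditional estimates above via the
# chain rule, P(A and B) = P(A) * P(B | A). Numbers are the ones stated in the list.

p_wide_release = 0.80                # beyond-Sora video generation wide-released in China by year's end
p_multi_minute_given_release = 0.70  # multi-minute generation, given wide release
print(f"P(wide release AND multi-minute) = {p_wide_release * p_multi_minute_given_release:.2f}")  # 0.56

p_peer_chips = 0.50                  # near-TSMC/Samsung-quality chips from a Chinese firm within 2 years
p_route_given_chips = 0.80           # via new lithographic process or reverse engineering, given the above
print(f"P(peer chips AND that route)   = {p_peer_chips * p_route_given_chips:.2f}")  # 0.40
```

So, for example, the unconditional probability I'm assigning to "multi-minute Chinese video generation widely available this year" is about 56%, not 70%.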
What does this mean? The West is caught on the back foot again. Despite the West creating the technology, China, by sheer size and directed investment, is poised to crush the West in AI. We saw the same story with electric cars, solar panels, and robotics. Fast-copying (or stealing) and then quickly iterating and scaling is extremely effective, and there is no easy way to combat it. Market asymmetries mean that Chinese firms always have a large home market without foreign competitors, while Western markets are bombarded with cheap alternatives to domestic brands.
If these were Japanese firms in the 1980s or Korean firms in the 2000s, we could sit back and relax. Sure, they may be ahead, but they are friendly, so we can reap the benefits. That is not the case here, especially with the possibility of AGI. Chinese firms in the 2020s are funded and controlled by the CCP and subject to civil-military fusion laws, so the tech is likely already being deployed in weapon systems, propaganda tools, etc. If LLMs scale to AGI and the Chinese get it first, the West is cooked in a scary, existential way, over and above the general danger of AGI.
Why? Observe the flood of fentanyl precursors streaming from Chinese ports to Mexico. This could be stopped, but is permitted because it serves the CCP's ends. Observe the Chinese chips making their way into Russian weapon systems. This could be stopped, but it serves the CCP's ends that its vassal Russia crush western advancement. Now imagine the same entity had AGI. This is not to say that the West has a good track record--Iran-Contra, Iraq, Afghanistan, arms to rogue regimes, propping up South American despots, turning a blind eye to South African apartheid for decades, etc. But the various checks and balances in the West often mean that there is a meaningful way to change such policies, especially ones that look calculated to disempower and subordinate. An AGI China is scary as fuck. Unchecked power. The CCP already has millions of people in work camps and promotes re-education (ethnic cleansing) in "wayward" provinces. Extrapolate a little.
Again, I am eager to be convinced I am wrong. I hate to beat this same drum over and over.
I would argue that leaders like Xi would not immediately choose general human flourishing as the goal. Xi has a giant chip on his shoulder. I suspect (not with any real proof, just a general intuition) that he feels Western powers humiliated imperial China and that permanently disabling them is the first order of business. That means immediately dissolving Western governments and placing them under CCP control. Part of human flourishing is the feeling of agency. Having a foreign government use AI to remove your own government is probably not conducive to human flourishing. Instead, it will produce utter despair and hopelessness.
Consider what the US did with Native Americans using complete tech superiority. Subjugation and decimation in the name of "improvement" and "reeducation." Their governments were eliminated. They were often forcibly relocated at gunpoint. Schools were created to beat the "savage" habits out of children. Their children were seized and rehomed with Whites. Their languages were forcibly suppressed and destroyed. Many killed themselves rather than submit. That is what I'd expect to happen to the West if China gets AGI.
Unfortunately, given the rate at which things are moving, I expect the West's slight lead to evaporate. They've already fast-copied Sora. The West is unprepared to contend with a fully operational China. The countermeasures are half-hearted and too late. I foresee a very bleak future.
There are lots of languages that use a "to be" copula far less frequently than English. I don't know that it actually affects people's ontologies. It would be evidence in favor of Sapir-Whorf if it did.
Nvidia just low-key released its own 340B-parameter model. For those of you worried about the release of model weights becoming the norm, this will probably aggravate your fears.
Here is the link: https://research.nvidia.com/publication/2024-06_nemotron-4-340b
Oh, and they also released their synthetic data generation pipeline:
https://blogs.nvidia.com/blog/nemotron-4-synthetic-data-generation-llm-training/
I think I've switched positions on open-source models. Before, I felt that we must not release them because they can be easily fine-tuned to remove safety measures and represent a tech donation to adversaries. But now I feel that the harm posed by these open-source models is pretty small, and that because Alibaba is releasing them at an exceptionally rapid pace, Western forbearance will not affect their proliferation.
I would be willing to bet maybe $100 on the video prediction. Kling is already in beta. As soon as it is released to the general public, that prediction is satisfied. The only uncertainty is whether Chinese authorities crack down on such services for insufficient censorship of requests.