AI Governance researcher with Polaris Ventures, formerly of the MTAIR project, TFI and the Center on Long-term Risk. Graduate researcher at King's and AI MSc at Edinburgh. Interested in philosophy, longtermism and AI alignment.
Let me clarify an important point: The strategy preferences outlined in the paper are conditional statements - they describe what strategy is optimal given certainty about timeline and alignment difficulty scenarios. When we account for uncertainty and the asymmetric downside risks - where misalignment could be catastrophic - the calculation changes significantly. However, it's not true that GM's only downside is that it might delay the benefits of TAI.
Misalignment (or catastrophic misuse) has a much larger downside than a successful moratorium. That is true, but attempting a moratorium, losing your lead, and then having someone else develop catastrophically misaligned AI, when you could have developed a defense against it by adopting CD or SA, has just as large a downside.
And GM has a lower chance of being adopted than CD or SA, so the downside to pushing for a moratorium is not necessarily lower.
Since a half-successful moratorium is the worst of all worlds (assuming that alignment is feasible), because you forfeit your chance of developing defenses against unaligned or misused AGI, it's not always true that the moratorium plan has fewer downsides than the others.
However, I agree with your core point: if we were to model this with full probability distributions over timeline and alignment difficulty, GM would likely be favored more heavily than our conditional analysis suggests, especially if we place significant probability on short timelines or hard alignment.
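To make this concrete, here is a minimal sketch of what modeling the choice in expectation (rather than conditionally) could look like. All scenario probabilities and payoff numbers below are invented purely for illustration, not estimates from the paper; the point is only the structure of the comparison, including the asymmetric downside of a half-successful moratorium.

```python
# Toy expected-value comparison of GM, CD and SA under uncertainty.
# Probabilities and payoffs are placeholders, not estimates from the paper.

scenarios = {
    # (timeline, alignment difficulty): probability
    ("short", "easy"): 0.15,
    ("short", "hard"): 0.25,
    ("long", "easy"): 0.35,
    ("long", "hard"): 0.25,
}

# Payoff of each strategy in each scenario (higher = better outcome).
# E.g. a moratorium that only half-succeeds while alignment is feasible
# would push GM's "easy" payoffs down, per the discussion above.
payoffs = {
    "GM": {("short", "easy"): 2, ("short", "hard"): 8,
           ("long", "easy"): 3, ("long", "hard"): 7},
    "CD": {("short", "easy"): 7, ("short", "hard"): 2,
           ("long", "easy"): 9, ("long", "hard"): 4},
    "SA": {("short", "easy"): 8, ("short", "hard"): 1,
           ("long", "easy"): 6, ("long", "hard"): 2},
}

def expected_value(strategy: str) -> float:
    """Probability-weighted payoff of a strategy across all scenarios."""
    return sum(p * payoffs[strategy][s] for s, p in scenarios.items())

for strategy in payoffs:
    print(f"{strategy}: EV = {expected_value(strategy):.2f}")
```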
The fact that an AI arms race would be extremely bad does not imply that rising global authoritarianism is not worth worrying about (and vice versa)
I am someone who is worried both about AI risks (from loss of control, and from war and misuse/structural risks) and about what seems to be a 'new axis' of authoritarian threats cooperating in unprecedented ways.
I won't reiterate all the evidence here, but these two pieces and their linked sources should suffice:
Despite believing this thesis, I am not, on current evidence, in favor of aggressive efforts to "race and beat China" in AI, or of abandoning attempts to slow an AGI race. I think on balance it is still worth trying these kinds of cooperation, while being clear-eyed about the threats we face. I do think that there are possible worlds where, regretfully and despite the immense dangers, there is no other option but to race. I don't think that we are in such a world as of yet.
However, I notice that many of the people who agree with me that an AI arms race would be very bad and that we should avoid it tend to downplay the risks of global authoritarianism or the differences between the West and its enemies, and very few seem to buy into the above thesis that there is a dangerous, interconnected web of authoritarian states with common interests developing.
Similarly, most of the people who see the authoritarian threat which has emerged into clear sight over the last few years (from China, Russia, Iran, North Korea and similar actors) want to respond by racing, and think alignment will not be too difficult. This includes the leaders of many AI companies, who may have their own less patriotic reasons for pushing such an agenda.
I think this implicit correlation should be called out as a mistake.
As a matter of simple logic, how dangerous frantic AGI development is, and how hostile foreign adversaries are, are two unrelated variables; positions on one shouldn't correlate with positions on the other.
In my mind, the following are all true:
I basically never see all three acknowledged at once. We either see (1) and (3) grouped together or (2) alone. I'm not sure what the best AI governance strategy to adopt is, but any analysis should start with a clear-eyed understanding of the international situation and of what values matter.
One underlying idea comes from how AI misalignment is expected to arise. If superintelligent AI systems are misaligned, does this misalignment look like an inaccurate generalization from what their overseers wanted, or like a deceptively misaligned, 'randomly rolled utility function' goal that's entirely unrelated to anything their overseers intended to train? This is represented, more or less, by levels 1-4 vs levels 5+ on my difficulty scale. If the misalignment is a result of economic pressures and a 'race to the bottom' dynamic, then it's more likely to result in systems that care about human welfare alongside other things.
If the misaligned AI ends up 'egregiously' misaligned and doesn't care at all about anything valuable to us, as Eliezer thinks is most likely, then it places zero terminal value on human welfare, and only trade, threats or compromise would get it to be nice. If the AI is superintelligent and you aren't, none of those considerations apply. Hence, nothing is left for humans.
If the AI is misaligned but doesn't have an arbitrary value system, then it may value human survival at least a bit and do some equivalent of leaving a hole in the Dyson sphere.
For months, those who want no regulations of any kind placed upon themselves have hallucinated and fabricated information about the bill’s contents and intentionally created an internet echo chamber, in a deliberate campaign to create the impression of widespread opposition to SB 1047, and that SB 1047 would harm California’s AI industry.
There is another significant angle to add here. Namely: many of the people in this internet echo chamber, or behind this campaign, are part of the same network of neoreactionaries, MAGA supporters, and unaccountability-seeking tech elites that you've positioned yourself as a substantial counterpoint to.
Obviously, invoking a culture-war fight has its downsides, but it's not just rhetoric: the charge that many bill opponents are basing their opposition on an ideology that Newsom opposes and sees as dangerous for the country is true.
A16z and many of the other most dishonest opponents of the bill are part of a Trump-supporting network with close ties to neoreactionary thought, which opposes SB 1047 for precisely the same reason that they want Trump and Republicans to win: to remove restraints on their own power in the short-to-medium term, and more broadly because they see it as a step towards making our society one where wealthy oligarchs are given favorable treatment and can get away with anything.
It also serves as a counterpoint to the defense and competition angle, at least when it's presented by a16z (this argument doesn't work for e.g. OpenAI, but there are many other good counterarguments there). The claims they make about the bill harming competitiveness, e.g. for defense and security against China and other adversaries, ring hollow when most of them oppose Ukraine support or NATO, making it clear they don't generally care about the US maintaining its global leadership.
I think this might be compelling to Newsom, who has positioned himself as an anti-MAGA figure.
I touched upon this idea indirectly in the original post when discussing alignment-related High Impact Tasks (HITs), but I didn't explicitly connect it to the potential for reducing implementation costs, and you're right to point that out.
Let me clarify how the framework handles this aspect and elaborate on its implications.
Key points:
However, that makes it unrealistic in practice, especially because (as indicated by the haziness on the graph) there will be many HITs, both accelerating alignment research and incrementally reducing overall risk.
To illustrate, consider a scenario where scalable oversight at level 4 of the alignment difficulty scale is used to fully automate mechanistic interpretability at level 6, and this level 6 system can then go on to, say, research a method of impregnable cyber-defense, develop rapid counter-bioweapon vaccines, give superhuman geopolitical strategic advice, and unmask any unaligned AI present on the internet.
In this case, the actual difficulty level would be 4, with the HIT being the automation of the level 6 technique that's then used to reduce risk substantially.
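To spell out the bootstrapping logic, here is a toy sketch; the specific levels and automation links are hypothetical, chosen only to mirror the example above. The 'actual' difficulty is the lowest level humans must solve directly, given that a system aligned at that level can automate the techniques needed at higher levels.

```python
# Toy model of bootstrapping up the alignment difficulty scale.
# "automates" maps a level we can handle directly to a higher level whose
# techniques it can automate; the links below are illustrative only.

automates = {
    4: 6,  # scalable oversight (level 4) automates mech interp (level 6)
    6: 8,  # hypothetical: automated mech interp enables level-8 techniques
}

def reachable_levels(start: int) -> set[int]:
    """Levels whose techniques become usable if alignment is solved directly at `start`."""
    reached = {start}
    level = start
    while level in automates:
        level = automates[level]
        reached.add(level)
    return reached

# If humans only need to make level-4 techniques work directly, the
# "actual difficulty" of the whole chain is 4:
print(reachable_levels(4))  # {4, 6, 8}
```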
I do think that Apollo themselves were clear that this was showing that the model has the mental wherewithal for deception, and that if you apply absolutely no mitigations, deception happens. That's what I said in my recent discussion of what this does and doesn't show.
Therefore I described the 4o case as an engineered toy model of a failure at level 4-5 on my alignment difficulty scale (e.g. the dynamics of strategically faking performance on tests to pursue a large-scale goal), but it is not an example of such a failure.
In contrast, the AI Scientist case was a genuine alignment failure, but a much simpler one: non-deceptive, non-strategic reward hacking on a sloppy goal produced by bad RLHF, just in a more sophisticated system than, say, CoinRun (level 2-3).
The hidden part that Zvi etc. skim over is the assumption that 'of course', in real life and in the near future, we'll be in a situation where an o1-like model has instrumental incentives because it is pursuing an adversarial large-scale goal, and that the mitigations they could have applied (better prompting, better RLHF, process oversight on the chain of thought, etc.) won't work; but that's the entire contentious part of the argument!
One can make arguments that these oversight methods will break down e.g. when the system is generally superhuman at predicting what feedback its overseers will provide. However, those arguments were theoretical when they were made years ago and they're still theoretical now.
This does count against naive views that assume alignment failures can't possibly happen: there probably are people out there who believe that you have to give an AI system an "unreasonably malicious" rather than just "somewhat unrealistically single-minded" prompt to get it to engage in deceptive behavior, or who irrationally think AIs will always know what we want and therefore can't possibly be deceptive.
Good point. You're right to highlight the importance of the offense-defense balance in determining the difficulty of high-impact tasks, rather than alignment difficulty alone. This is a crucial point that I'm planning to expand on in the next post in this sequence.
Many things determine the overall difficulty of HITs:
My rough suspicion is that all of these factors matter quite a bit, but since we're looking at "the alignment problem" in this post I'm pretending that everything else is held fixed.
The intrinsic offense-defense balance of whatever is next on the 'tech tree', as you noted, is maybe the most important overall, as it affects the feasibility of defensive measures and could push towards more aggressive strategies in cases of strong offense advantage. It's also extremely difficult to predict ahead of time.
Yes, I do think constitution design is neglected! It's possible people think constitution changes made now won't stick around, or that they won't make any difference in the long term, but based on the arguments here I do think that, even if the effect is a bit diffuse, you can influence AI behavior on important structural risks by changing models' constitutions. It's simple, cheap and maybe quite effective, especially for failure modes that we don't have any good shovel-ready technical interventions for.
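As a purely hypothetical illustration of how cheap such an intervention could be, a structural-risk principle might just be one more entry in a constitutional-AI-style list of principles used for critique and revision; the wording and helper below are mine, not taken from any deployed constitution.

```python
# Hypothetical sketch: adding a structural-risk principle to a
# constitutional-AI-style list of principles. Wording is illustrative only.

constitution = [
    "Choose the response that is most helpful, honest, and harmless.",
    # ... existing principles ...
    # Hypothetical addition targeting a structural risk:
    "Choose the response least likely to encourage the concentration of "
    "unaccountable political or economic power in any single actor, "
    "including the AI developer itself.",
]

def critique_prompt(principle: str, response: str) -> str:
    """Build a critique request of the kind used in critique-and-revision training."""
    return (
        f"Consider this principle: {principle}\n"
        f"Critique the following response for ways it violates it:\n{response}"
    )
```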
What is moral realism doing in the same taxon with fully robust and good-enough alignment? (This seems like a huge, foundational worldview gap; people who think alignment is easy still buy the orthogonality thesis.)
Technically, even Moral Realism doesn't imply the Anti-Orthogonality thesis! Moral Realism is necessary but not sufficient for Anti-Orthogonality: to be Anti-Orthogonality you have to be a particular kind of very hardcore Platonist moral realist who believes that 'to know the good is to do the good', arguing not only that there are moral facts but that these facts are intrinsically motivating.
Most moral realists would say that it's possible to know what's good but not act on it: even if this is an 'unreasonable' disposition in some sense, this 'unreasonableness' is compatible with being extremely intelligent and powerful in practical terms.
Even famous moral realists like Kant wouldn't deny the Orthogonality thesis: Kant would accept that it's possible to understand hypothetical but not categorical imperatives, and he'd distinguish capital-R Reason from simple means-end 'rationality'. I think that, among moral realists, it's really only Platonists and divine command theorists who'd deny Orthogonality itself.
We do discuss this in the article and tried to convey that it is a very significant downside of SA. All three plans have enormous downsides, though, so the fact that a plan poses massive risks is not by itself disqualifying. The key is understanding when these risks might be worth taking given the alternatives.
SA is favored when alignment is easy or moderately difficult (e.g. at the level where interpretability probes, scalable oversight etc. help) with high probability, and when you expect to win the arms race. But it doesn't require you to be the 'best'. The key isn't whether US control is better than Chinese control, but whether centralized development under any actor is preferable to widespread proliferation of TAI capabilities to potentially malicious actors.
Regarding whether the US (remember, on SA there's assumed to be extensive government oversight) is better than the CCP: I think the answer is yes, and I talk a bit more about why here. I don't consider US AI control being better than Chinese AI control to be the most important argument in favor of SA, however. That fact alone doesn't remotely justify SA: you also need easy/moderate alignment, and you need good evidence that an arms race is likely unavoidable regardless of what we recommend.