Soft Nationalization: how the USG will control AI labs
Crossposted to the EA Forum.

We have yet to see anyone describe a critical element of effective AI safety planning: a realistic model of the role the US government will play in controlling frontier AI. The rapid development of AI will raise escalating national security concerns, which will in turn pressure the US government to take progressively stronger action to control frontier AI development. This process has already begun,[1] and it will only intensify as frontier capabilities advance.

However, we argue that existing descriptions of nationalization[2] along the lines of a new Manhattan Project[3] are unrealistic and reductive. The frontier AI industry, with more than $1 trillion[4] in private funding, tens of thousands of participants, and pervasive economic impacts, is unlike nuclear research or any previously nationalized industry. The traditional interpretation of nationalization, which entails bringing private assets under the ownership of a state government,[5] is not the only option available. Government consolidation of frontier AI development is legally, politically, and practically unlikely.

Instead, we expect that AI nationalization won't look like a consolidated government-led "Project", but rather like an evolving application of US government control over frontier AI labs. The US government can select from many different policy levers to gain influence over these labs, and will progressively pull them as geopolitical circumstances, particularly around national security, seem to demand it. As national security concerns grow, government control of AI labs will likely escalate, and the boundary between "regulation" and "nationalization" will become hazy.

In particular, we believe the US government can and will satisfy its national security concerns in nearly all scenarios by combining sets of these policy levers, and would turn to total nationalization only as a last resort. We're calling this process of progressively increasing government control over frontier AI labs "soft nationalization".