Epistemic status: somewhat rushed, written in advance of tomorrow's deadline (Saturday the 15th) for the call for public comment on US AI policy. I think the issue is complex and deserves careful consideration.
Summary
- Methods and motivations for governments to control AGI and limit its proliferation seem to be underexplored
- Current laws appear adequate for the government to take control of AGI as a national security and military technology, if its potential is taken seriously
- Softer informal monitoring and steering is relatively easy and can scale as AGI seems more imminent
- Governments obsess over national security, and AGI is a severe security risk
- There are both dramatic advantages and disadvantages of government control for the alignment project as a whole:
- Government control could dramatically reduce proliferation of takeover-capable AGI
  - Reducing opportunities for misalignment and misuse
- Government control could concentrate power in dangerous human hands
  - And create a more hostile race dynamic
- Thus, we know neither whether governments are likely to control AGI in time to reduce proliferation, nor whether that would be a good thing.
- Publicizing reasons government should control AGI might be one of the few "levers" available to those who are already aware of the importance and danger of AGI.
In sum, determining whether government control of AGI is worth accelerating seems worth some careful analysis.
Overview
My thesis here is that governments will probably pursue AGI, and they will want to prevent others from doing so. The open questions are whether they will do so in time to effectively slow proliferation of AGI projects, and whether that would be a good thing for the odds of humanity's success with the advent of AGI.
I've talked to many alignment people who are certain government won't intervene in time to matter, and others who are equally certain it will. I've heard a similar range of opinions on whether this would be good or bad for our odds of surviving AGI. Here I largely raise questions and possibilities which I have not seen discussed prominently; I hope further discussion will move us toward better guesses. As with other complex and important topics, my hopes lie in cooperative epistemic work.
I am not remotely expert in government or law, but I have been thinking about government response to AGI a lot, and I haven't found any expert analysis that addresses all of what seem like the important points and possibilities. So I'll do what I can here, and hope to get input from those more expert in how governments might react to AGI becoming more than a distant theory.
Much discussion to date focuses on whether or not AGI projects will be nationalized. There is a much broader range of options for controlling AGI as much, and as quickly, as the government deems useful. Broad laws governing the export of technology with military applications seem adequate for the government to threaten legal action in the short term, and governments are historically quite willing to create new laws when they feel they're in an unprecedented crisis.
To the extent the government believes in the potential of AGI, it will want to control it. And it has plenty of power to do so, at least within its borders (stopping proliferation of the software and techniques is much harder). The assumption that governments won't intervene may be outdated.
The US and Chinese governments in particular are worth considering, since they are currently ahead in both AGI development and political and military power. The US and China could race aggressively, or could collaborate to prevent other state and non-state actors from gaining similarly capable systems. Such collaboration would reduce the risks of misalignment or misuse as AGI proliferates. But it would increase the risk of a malignant actor seizing control of one of those few AGIs and centralizing power in a way that might be difficult or impossible to dislodge.
But that positive outcome is one possible route to success and human flourishing in the transition to AGI. One logical move is to promise that the civilian benefits of AGI-created technologies will be broadly shared. The ease of doing this, and the strategic advantage of keeping the promise in the short term, could lead the US and China to honor it.
Governments seem unlikely to address AGI x-risk out of moral duty, or to recognize its dangers and potential immediately. But when they recognize AGI as a route to power, like nuclear weapons before it, states will want to control it and limit its proliferation. As AGI systems grow visibly more capable at creating new technologies, their potential to shift the global balance of power will become increasingly obvious. Elected officials may remain slow to see the potential of AGI, but national security thinkers are more analytical, farsighted, and mindful of the international balance of power. They are likely to attend to AGI risks and opportunities before politicians do.
The control of AGI development by governments, even if driven by geopolitical self-interest, could improve our odds of surviving AGI. The fewer independent actors pushing for AGI at breakneck speed, the fewer chances of misalignment or misuse we will have to contend with. Government control of AGI raises fears of centralized power and the possibility of a Manhattan Trap-style race with China, but it also has a likely large advantage in reducing proliferation of AGI. And a belligerent relationship with China is not inevitable; notably, the US and the Soviet Union cooperated to limit the proliferation of nuclear weapons. The challenges of AGI nonproliferation are different, but the motivations are largely the same.
Alerting government to the power of controlled AGI is a separate project from alerting it to the dangers of misaligned AGI. Determining whether those concerned about AGI should pursue that project seems like a good use of our collective time.
The following sections expand on all of that logic. If you thought "I doubt it" to some of the major claims there, but are open to being convinced, you may benefit from reading the remainder. If you agree that the question is important but remain undecided, and want to help think about it, you may also want to read the remainder.
Many options for scaling control
Nationalizing AGI labs is (rightly, I think) dismissed as too clumsy and difficult to be a realistic possibility, particularly on short timelines. Soft Nationalization pointed out a number of other alternatives for official government control of AGI projects. But the government isn't limited even to those sorts of actions. Power is not all hard power. Actors from many branches of government, including intelligence and the military, could legitimately take an interest at any point, and they may collectively be creative and experienced enough to stay informed and involved until it's time to explicitly and legally take partial or complete control.
Consider variations of this scenario: A couple of guys show up at each of the leading AGI labs. They say "hey, sorry to bother you. Our bosses wanted us to just follow your progress on the whole AGI thing. It's nothing official, and we hope it doesn't become official, because you know what a circus it would be to get Congress or the President involved. We don't know if AGI is that big a deal any time soon, but we know you guys think it is." Here their bosses could be a variety of government actors, or an informal coalition among several.
Here we might wonder which individuals, in which branches of government, might take even that much trouble to monitor progress toward AGI. Intelligence agencies are unlikely to act within the US, given historically justified concerns about their expertise in breaking laws to achieve their ends, but many branches of government, including the executive branch and the armed forces, have legitimate jurisdiction over an AGI project, because it would heavily impact so many areas. I assume that, while the left hand often doesn't talk to the right, some individuals do talk to others in different branches if they think there's the possibility of something new and important that doesn't fit conventional jurisdictions.
Even once it decides to try, there are complex questions about how effectively the government could monitor the relevant AGI orgs and their progress toward AGI.[1] Government representatives would probably mention potential legal consequences for noncompliance. There are several laws that cover exporting military-relevant technologies even before they're officially classified that way (according to my AI legal consultants, there are enough such laws, and they are broad enough, to make "see you in court" a foolish response).[4]
I am not remotely informed on what different elements of the government get up to when they think it's important. As Dave Kasten says:
As you know, I have huge respect for USG natsec folks. But there are (at least!) two flavors of them: 1) the cautious, measure-twice-cut-once sort that have carefully managed deterrence for decades, and 2) the "fuck you, I'm doing Iran-Contra" folks.
I'd want more expert discussion before concluding that nationalization or soft nationalization are the only likely government responses. Schemes for controlling US AGI could advance many people's careers and (perhaps, and very arguably) save the world from communists and dictators forever. Whether they're implemented, and whether they work in time, remains to be seen, or perhaps influenced.
Governments won't be totally blindsided by AGI
AGI is, for now, a largely theoretical concern. But there will be an increasing amount of evidence before it arrives, particularly on our current path of relatively steady improvements. Human cognition shifts when confronted with immediate, visceral realities; we attend to things we think might kill us or make us rich.
Before their development, nuclear weapons were an abstract theoretical concept, yet one that still drove massive efforts. After Hiroshima, they were the defining issue of global security. (The Manhattan Project itself may have been triggered by Einstein's letter to Roosevelt.) The same shift will happen with AGI as its disruptive potential becomes increasingly difficult to ignore. The power-oriented, analytical minds in national security will probably be directing politicians' attention to these possibilities, if they haven't already.
At different points, the individuals who make up governments will start to think seriously about AGI, recognizing it as a critical strategic asset and national security priority rather than lumping it in with AI as a technological trend. At that point, they will take a sharp interest in AGI development.
Governments exist in large part to exert power over national security interests. Assuming they will fail to do so in regards to AGI seems questionable at best.
Government use of AGI seems likely at some point
Historical precedent suggests that governments will take steps to control AGI if they see it as even credibly powerful in the near future. Technology with balance-of-power-altering implications has been rapidly absorbed by state power when its significance became obvious. Nuclear weapons, cryptography, and missile technology all followed this pattern. The government slowed proliferation dramatically with nukes, moderately with missile technology, and attempted but failed to control cryptography.
In AI, we are already seeing the first stages of government interest and a sharp move toward treating AI as a national security issue. And those moves seem to have been motivated only by the prospect of AI as a useful military technology, not by the full potential of artificial general intelligence.
The incentives to control AGI are much larger than for any of the previous military-relevant technologies that have triggered attempts at control. Real AGI would be far more significant than any previous military technology, with the possible exception of nuclear weapons. It would be capable of inventing new technologies faster than humans, including military technologies. And any ability to self-improve would create a winner-take-all dynamic, in which possession of the first AGI capable of rapid self-improvement might enable one nation to maintain its own security permanently, and to dictate terms to all other nations as it wished.
U.S.-China Cooperation in Limiting AGI Proliferation
While the potential of AGI is a powerful incentive for governments to race toward it, the incentives are not straightforward, and better dynamics are also possible.
The international response to nuclear weapons provides a useful, if imperfect, analogy. The U.S. and Soviet Union recognized that while nuclear competition was inevitable, nuclear proliferation was not in their strategic interest. This led to the Non-Proliferation Treaty (NPT), which successfully slowed the spread of nuclear weapons beyond the initial nuclear powers.
Doing this successfully would be difficult. It would be harder on some routes to AGI (those with low compute requirements), and possible only if the project is started early enough. It would have to be approached with both carrots and sticks: the promise to share the material wealth from AGI-created technology would be the carrot, and AI-aided surveillance and software intrusion would be the stick.
The gap between US and Chinese precision weapons and intelligence technology and those of the rest of the world is large. It could make a treaty led by those two nations on AGI nonproliferation enforceable, with the open variable of how much privacy violation and surgical force the rest of the world would tolerate. As AI advanced toward AGI, enforcement would become easier, while the threat of centralized power would grow.
If one indulges in a bit of optimism, the potential for AGI to provide material wealth and security might shift the incentives toward cooperation enough that the US and China could actually continue cooperating into a long-term stable future that benefits all of humanity.[3] How benevolent does one have to be to share a pie that's growing exponentially? And how wise does one need to be with superhuman advice?
Don't mistake that for optimism; it's uncertainty, which extends to positive as well as negative possibilities.
How government control of AGI could change our odds
Government control could lead to a survivable transition in a few notable ways:
- Fewer AGI actors mean fewer chances for reckless deployments
- Fewer AGI actors mean fewer chances of malicious use of AGI
  - Those controlling AGIs would have an incentive to order vicious first-strike takeover attempts if they have unusual goals/ethics/beliefs[2]
- Governments typically have more patience than corporations
These advantages are offset by the disadvantage of a likely worsened race dynamic, which raises the odds of rushing to deploy AGI without sufficient alignment. That could easily lead to takeover and permanent disempowerment or extinction for humanity. There is already a dangerous race dynamic between AGI companies, but government involvement would create a race between governments and cultures with a history of mutual distrust and occasional warfare and atrocities.
Provisional conclusion
Right now, the world is stumbling toward AGI. Governments don't yet seem to understand the stakes. Corporate leadership has incentives that favor speed over caution. If AGI development turns out to require fewer resources, individuals or fringe groups with access to proto-AGI would be even less restrained.
A shift to government control of AGI projects could be quite dangerous. It could lead to a race dynamic that either results in a misaligned runaway AGI, or in conflict between governments that escalates to an accidental catastrophic nuclear exchange.
Government control could also be our best shot at effectively limiting AGI proliferation. The fewer independent actors that achieve and use superhuman AGI, the better our odds of some form of alignment working for every instance of it. And less proliferation means less chance that, even if we solve alignment, we die anyway from misuse of intent-aligned AGI.
This doesn't seem like a particularly good strategic situation. But it is far from an obviously losing scenario.
The prospect of government control is alarming. There are good reasons both for fear of centralized power and for fear of misaligned AGI. I'm mentioning it publicly because it may be the best of a bad set of options, and there are already many voices telling the government to pay attention to AI. I have yet to hear another realistic route to limiting the development of dangerous AGI. And if it's largely inevitable that governments seize control of AGI, it may be best if they do it while proliferation can still be limited.
So we should probably figure out whether we want to pull on this lever.
- ^
There are many ways government could fail to control AGI even if it tries. The most notable is uncontrolled proliferation; if research is happening in many countries at similar levels as we approach AGI, all of those governments would need to coordinate. There are also many ways that US companies could evade control even if it's attempted, but most of those look to me like foolish attempts in the face of government power. A clever and ambitious leader might claim compliance while hiding progress toward AGI, or might fight in court; but these seem like dead-end strategies to me. Permanently evading control by the government with jurisdiction over the physical location of lab personnel and leadership seems very hard; takeoff would need to be quite fast, assuming the full government is motivated to take control.
I'm interested in counterarguments!
- ^
AGI proliferation might mean a strong offensive advantage for actors who could create new weapons in secret. People with unusual visions for the future (themselves as god-emperor, religious beliefs, etc.) might be motivated to order their AGI to strike first in order to control the future, even if it means destroying most of humanity or the earth. After all, an AGI capable of self-improving should be able to create a new glorious future approximately to its master's specifications. See If we solve alignment, do we die anyway? and the discussion there.
- ^
One closely related topic that's out of scope here, but crucial to any future that includes corporate or government control of intent-aligned AGI, is the moral character of human leaders. What individuals could be trusted with the future of humanity, and under what circumstances, is another critical question on which intuitions vary dramatically, and all of the available arguments I've found are highly incomplete. This is one more reason I wish we had more psychologists involved in alignment theory.
- ^
Existing law seems relevant only to how easy it would be for smaller government actors to get compliance just by asking. If and when the government is largely AGI-pilled, new laws and executive orders will likely be made quickly. Claude 3.7 and GPT-4.5 confirm my suspicions that there are laws against developing technology that could obviously be used for important military purposes and then letting foreign powers steal or have it. They cite many laws that would be applicable if the courts took the full potential of AGI seriously. Just the basic definition of treason, "don't try to overthrow the government," might cover it. But there are more specific laws.
Below I've included Claude's quick summary; I invite those more expert than me to weigh in here. ChatGPT-4.5 says largely the same things WRT applicable laws. Both were probed for counterarguments, which focused on government awareness and speed of takeoff. Legal defenses exist but seem to me unwise. I wouldn't want to appear in court saying "oh I didn't realize it could be adapted to take over the world" when I most certainly did. Legal systems look solid in liberal democracies in peacetime, but in perceived crisis, it looks to me like government does what it wants.
Some US laws applicable to AGI development if it's taken seriously
Claude 3.7:
If government authorities and courts were convinced that AGI posed similar existential or security risks, many of the same legal frameworks [as for directly military technologies, from context] could indeed be applied:
Legal Basis for AGI Control
- Dual-Use Technology Framework: AGI could be designated as a "dual-use technology" with both civilian and military applications under the Export Administration Regulations, similar to how nuclear technology is treated
- National Security Classification: If AGI were viewed as having weapons-development potential, the government could apply classification controls to core research
- Defense Production Act: Could be invoked to require licensing and government oversight of AGI development
- Preventive Control Precedent: There's precedent in how the Atomic Energy Act preemptively controlled nuclear technology before specific weapons were developed
Key Challenges Specific to AGI
The primary hurdle is the "capability belief threshold" you mentioned:
- Proving Capability: Unlike bioweapons or drones where the danger is immediately demonstrable, AGI's potential dangers are more theoretical until demonstrated
- Uncertain Timeline: The government would need to determine at what development stage intervention becomes necessary
- Technical Expertise Gap: Government regulators would need sufficient technical understanding to recognize when AGI capabilities cross critical thresholds
The most likely approach would be a graduated response - beginning with monitoring and information sharing requirements, then escalating to more direct control if capabilities approach concerning thresholds.
This is essentially what happened with nuclear technology - initial light oversight that dramatically expanded once capabilities were demonstrated at Los Alamos.
I didn't prompt it with the scenario of informal monitoring under the threat of legal enforcement, followed by direct control if AGI appears imminent. We concur, but we could use more expert input.
ChatGPT-4.5:
Companies aware of AGI’s potentially existential or geopolitical threats would face enormous liability (criminal and civil) if found negligent in protecting the technology from adversarial access.
Interesting. I hadn't thought about Musk's influence and how he is certainly AGI-pilled.