On Treaties, Nuclear Weapons, and AI
One of the basic suggestions for dealing with the threat of AI is a global treaty banning the training of sufficiently large models. Summarizing the forthcoming If Anyone Builds It, Everyone Dies, Scott Alexander describes this plan as "more-or-less lifted from the playbook for dealing with nuclear weapons."[1] I am deeply skeptical that anything along the lines of the nuclear non-proliferation regime will work for AI. My goal here is to sketch out some of the basic reasons why, beginning with a brief summary of the non-proliferation regime that you can safely skip if you're familiar with it.

The Nuclear Non-Proliferation Regime

Nuclear weapons are a 1940s-era technology. In this sense, it is quite remarkable that we have managed to limit their diffusion. In 1963, John F. Kennedy observed: "I am haunted by the feeling that by 1970, unless we are successful [at non-proliferation], there may be 10 nuclear powers instead of four, and by 1975, 15 or 20." Judged by this standard, the non-proliferation regime has been a success. As of 2025, there are only nine nuclear powers, and only ten states have ever developed nuclear weapons.[2] Another dozen or so states have seriously pursued nuclear weapons before eventually abandoning those efforts under international pressure.

Legally speaking, the centerpiece of the non-proliferation regime is the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), originally signed in 1968 and now ratified by all but five countries.[3] The NPT set up a two-tier system. The five countries with acknowledged nuclear weapons as of 1968 joined the treaty as nuclear weapons states; all others joined as non-nuclear weapons states.[4] The non-nuclear weapons NPT members agreed not to develop nuclear weapons. In return, they obtained two basic promises from the nuclear states: first, the nuclear weapons states agreed to provide assistance in developing peaceful uses of nuclear energy (a promise that they have largely honored). Second, t
There are some other prediction markets on Manifold/Metaculus that address the question more directly, but they're small.
Some economists have argued that you should look at long-run real interest rates -- the idea being that AGI boosts the return on capital, so bondholders should demand higher rates to compensate for locking their money up in bonds.
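One way to make that argument concrete is the textbook Ramsey rule, which ties the long-run real rate to expected consumption growth: r = ρ + θg. A minimal sketch, with purely illustrative parameter values (none of these numbers come from the post or from any market data):

```python
def ramsey_rate(rho: float, theta: float, g: float) -> float:
    """Long-run real interest rate implied by the Ramsey rule:
    r = rho + theta * g, where rho is the rate of time preference,
    theta is relative risk aversion, and g is expected growth."""
    return rho + theta * g

# Illustrative assumptions, not estimates: rho = 1%, theta = 1.5.
baseline = ramsey_rate(rho=0.01, theta=1.5, g=0.02)   # ~2% growth world
agi_boom = ramsey_rate(rho=0.01, theta=1.5, g=0.10)   # hypothetical AGI-driven growth

print(f"baseline real rate: {baseline:.1%}")   # 4.0%
print(f"AGI-boom real rate: {agi_boom:.1%}")   # 16.0%
```

The point of the sketch is just the direction and magnitude: if markets seriously expected AGI-level growth, long-run real rates should be far above what we actually observe.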
I think it's pretty hard to infer much from the stock prices of tech companies, because it's kinda ambiguous what AGI would do to those companies (and depends on what exactly counts), plus sub-AGI advances in AI can muddy the price signal. Nvidia, for example, is the market's favorite AI play, but AGI in the "dominates humans...