tl;dr: I know a bunch of EA/rationality-adjacent people who argue — sometimes jokingly and sometimes seriously — that the only way, or the best way, to reduce existential risk is to enable an “aligned” AGI development team to forcibly (even if nonviolently) shut down all other AGI projects, using safe AGI. I find that the arguments for this conclusion are flawed, and that the conclusion itself causes harm to institutions that espouse it. Fortunately (according to me), successful AI labs do not seem to espouse this "pivotal act" philosophy.
[This post is also available on the EA Forum.]
How to read this post
Please read Part 1 first if you’re very impact-oriented and want to think about the consequences of various institutional policies more than the arguments that lead to the policies; then Parts 2 and 3.
Please read Part 2 first if you mostly want to evaluate policies based on the arguments behind them; then Parts 1 and 3.
I think all parts of this post are worth reading, but depending on who you are, I think you could be quite put off if you read the wrong part first and start feeling like I’m basing my argument too much on kinds-of-thinking that policy arguments should not be based on.
Part 1: Negative Consequences of Pivotal Act Intentions
Imagine it’s 2022 (it is!), and your plan for reducing existential risk is to build or maintain an institution that aims to find a way for you — or someone else you’ll later identify and ally with — to use AGI to forcibly shut down all other AGI projects in the world. By “forcibly” I mean methods that violate or threaten to violate private property or public communication norms, such as by using an AGI to engage in…
- cyber sabotage: hacking into competitors’ computer systems and destroying their data;
- physical sabotage: deploying tiny robotic systems that locate and destroy AI-critical hardware without (directly) harming any humans;
- social sabotage: auto-generating mass media campaigns to shut down competitor companies by legal means; or
- threats: demonstrating powerful cyber or physical or social threats, and bargaining with competitors to shut down “or else”.
Hiring people for your pivotal act project is going to be tricky. You’re going to need people who are willing to take on, or at least tolerate, a highly adversarial stance toward the rest of the world. I think this is very likely to have a number of bad consequences for your plan to do good, including the following:
- (bad external relations) People on your team will have a low-trust and/or adversarial stance towards neighboring institutions and collaborators, and will have a hard time forming good-faith collaborations. This will alienate other institutions and make them not want to work with you or be supportive of you.
- (bad internal relations) As your team grows, not everyone will know each other very well. The “us against the world” attitude will be hard to maintain, because the sense of “us” will keep weakening, especially as people quit for other institutions and new people join from them. Sometimes new hires will express opinions that differ from the dominant institutional narrative, which might pattern-match as “outsidery” or “norm-y” or “too caught up in external politics”, triggering fears within the team that some people might defect on the plan to forcibly shut down other projects. This will cause your team to get along poorly internally, and make it hard to manage people.
- (risky behavior) In the fortunate-according-to-you event that your team someday wields a powerful technology, there will be pressure to use it to “finally make a difference”, or some other rationale that boils down to acting quickly before competitors have a chance to shut you down or at least defend themselves. This will make it hard to stop your team from doing rash things that would actually increase existential risk.
Overall, I predict that building an AGI development team with the intention to carry out a “pivotal act” of the form “forcibly shut down all other A(G)I projects” is going to be a rough time.
Does this mean no institution in the world can have the job of preparing to shut down runaway technologies? No; see “Part 3: It Matters Who Does Things”.
Part 2: Fallacies in Justifying Pivotal Acts
For pivotal acts of the form “shut down all (other) AGI projects”, there’s an argument that I’ve heard repeatedly from dozens of people, which I claim has easy-to-see flaws if you slow down and visualize the world that the argument is describing.
This is not an argument that successful AI research groups (e.g., OpenAI, DeepMind, Anthropic) seem to espouse. Nonetheless, I hear the argument frequently enough to want to break it down and refute it.
Here is the argument:
- AGI is a dangerous technology that could cause human extinction if not super-carefully aligned with human values.
(My take: I agree with this point.)
- If the first group to develop AGI manages to develop safe AGI, but the group allows other AGI projects elsewhere in the world to keep running, then one of those other projects will likely eventually develop unsafe AGI that causes human extinction.
(My take: I also agree with this point, except that I would bid to replace “the group allows” with “the world allows”, for reasons that will hopefully become clear in Part 3: It Matters Who Does Things.)
- Therefore, the first group to develop AGI, assuming they manage to align it well enough with their own values that they believe they can safely issue instructions to it, should use their AGI to build offensive capabilities for targeting and destroying the hardware resources of other AGI development groups, e.g., nanotechnology targeting GPUs, drones carrying tiny EMP charges, or similar.
(My take: I do not agree with this conclusion, I do not agree that (1) and (2) imply it, and I feel relieved that every successful AI research group I talk to is also not convinced by this argument.)
The short reason why (1) and (2) do not imply (3) is that when you have AGI, you don’t have to use the AGI directly to shut down other projects.
In fact, before you get to AGI, your company will probably develop other surprising capabilities, and you can demonstrate those capabilities to neutral-but-influential outsiders who previously did not believe those capabilities were possible or concerning. In other words, outsiders can start to help you implement helpful regulatory ideas, rather than you planning to do it all on your own by force at the last minute using a super-powerful AI system.
To be clear, I’m not arguing for leaving regulatory efforts entirely in the hands of governments with no help or advice or infrastructural contributions from the tech sector. I’m just saying that there are many viable options for regulating AI technology without requiring one company or lab to do all the work or even make all the judgment calls.
Q: Surely they must be joking or this must be straw-manning... right?
A: I realize that lots of EA/R folks are thinking about AI regulation in a very nuanced and politically measured way, which is great. And, I don't think the argument (1-3) above represents a majority opinion among the EA/R communities. Still, some people mean it, and more people joke about it in an ambiguous way that doesn't obviously distinguish them from meaning it:
- (ambiguous joking) I've met people numerous times at EA/R events saying extreme-sounding things like "[AI lab] should just melt all the chip fabs as soon as they get AGI", who, when pressed about the extremeness of this idea, will respond with something like "Of course I don't actually mean I want [some AI lab] to melt all the chip fabs". Presumably, some of those people were actually just using hyperbole to make conversations more interesting or exciting or funny.
Part of my motivation in writing this post is to help cut down on the amount of ambiguous joking about such proposals. As the development of more and more advanced AI technologies is becoming a reality, ambiguous joking about such plans has the potential to really freak people out if they don't realize you're exaggerating.
- (meaning it) I have met at least a dozen people who were not joking when advocating for invasive pivotal acts along the lines of the argument (1-3) above. That is to say, when pressed after saying something like (1-3), their response wasn't "Geez, I was joking", but rather, "Of course AGI labs should shut down other AGI labs; it's the only morally right thing for them to do, given that AGI labs are bad. And of course they should do it by force, because otherwise it won't get done."
In most cases, folks with these viewpoints seemed not to have thought about the cultural consequences of AGI research labs harboring such intentions over a period of years (Part 1), or the fallacy of assuming technologists will have to do everything themselves (Part 2), or the future possibility of making evidence available to support global regulatory efforts from a broader base of consensual actors (see Part 3).
So, part of my motivation in writing this post is as a genuine critique of a genuinely expressed position.
Part 3: It Matters Who Does Things
I think it’s important to separate the following two ideas:
- Idea A (for “Alright”): Humanity should develop hardware-destroying capabilities — e.g., broadly and rapidly deployable non-nuclear EMPs — to be used in emergencies to shut down potentially-out-of-control AGI situations, such as an AGI that has leaked onto the internet, or an irresponsible nation developing AGI unsafely.
- Idea B (for “Bad”): AGI development teams should be the ones planning to build the hardware-destroying capabilities in Idea A.
For what it’s worth, I agree with Idea A, but disagree with Idea B:
Why I agree with Idea A
It’s indeed much nicer to shut down runaway AI technologies (if they happen) using hardware-specific interventions than with attacks that have big splash effects, like explosives or brainwashing campaigns. I think this is the main reason well-intentioned people end up arriving at Idea A, and then at Idea B, but I think Idea B has some serious problems.
Why I disagree with Idea B
A few reasons! First, there’s:
- Action Consequence 1: the action of having an AGI carry out or even prescribe such a large intervention on the world — invading others’ private property to destroy their hardware — is risky and legitimately scary. Invasive behavior is risky and threatening enough as it is; using AGI to do it introduces a whole range of other uncertainties, not least because the AGI could be deceptive or otherwise misaligned with humanity in ways that we don’t understand.
Second, before even reaching the point of taking the action prescribed in Idea B, merely harboring the intention of Idea B has bad consequences, echoing concerns similar to those in Part 1:
- Intention Consequence 1: Racing. Harboring Idea B creates an adversarial, winner-takes-all relationship with other AGI companies, each racing to maintain (a) a degree of control over the future, and (b) the ability to implement their own pet theories of how safety/alignment should work. This leads to more desperation, more risk-taking, and less safety overall.
- Intention Consequence 2: Fear. Via staff turnover and other channels, harboring Idea B signals to other AGI companies that you are willing to violate their property boundaries to achieve your goals, which will cause them to fear for their physical safety (e.g., because your incursion to invade their hardware might go awry and end up harming them personally as well). This kind of fear leads to more desperation, more winner-takes-all mentality, more risk-taking, and less safety.
Summary
In Part 1, I argued that there are negative consequences to AGI companies harboring the intention to forcibly shut down other AGI companies. In Part 2, I analyzed a common argument in favor of that kind of “pivotal act”, and found a pretty simple flaw stemming from fallaciously assuming that the AGI company has to do everything itself (rather than enlisting help from neutral outsiders, using evidence). In Part 3, I elaborated more on the nuance regarding who (if anyone) should be responsible for developing hardware-shutdown technologies to protect humanity from runaway AI disasters, and why in particular AGI companies should not be the ones planning to do this, mostly echoing points from Part 1.
Fortunately, successful AI labs like DeepMind, OpenAI, and Anthropic do not seem to espouse this “pivotal act” philosophy for doing good in the world. One of my hopes in writing this post is to help more EA/R folks understand why I agree with their position.
Having thought along these lines, I agree that "we'll use the AI to stop everyone else" is a bad strategy. A specific reason that I didn't see mentioned as such: "You need to let me out of the box so I can stop all the bad AI companies" is one of the arguments I'd expect an AI to use to convince someone to let it out of the box.
Well, then, supposing that you do accidentally create what appears to be a superintelligent AI, what should you do? I think it's important for people to figure out a good protocol for this well in advance. Ideally, most of the AI people will trust most of the other AI people to implement this protocol, and then they won't feel the need to race (and take risks) to become the first to AI. (This is essentially reversing some of the negatives that Andrew Critch talks about.)
The protocol I'm thinking of is, basically: Raise the alarm and bring in some experts who've spent years preparing to handle the situation. (Crying wolf could be a problem—we wouldn't want to raise false alarms so often and make it so expensive that people start saying "Meh, it looks probably ok, don't bother with the alarm"—so the protocol should probably start with some basic automated tests that can be guaranteed safe.)
It's like handling first contact with a (super-powerful) alien race. You are the technician who has implemented the communication relays that connect you with them. It's unlikely that you're the best at negotiating with them. Also, it's a little odd if you end up making decisions that have huge impacts on the rest of humanity that they had no say in; from a certain perspective that is inappropriate for you to do.
So, what the experts will do with the AI is important. I mean, if the experts liked catgirls/catboys and said "we'll negotiate with the AI to develop a virus to turn everyone into catpeople", a lot of AI researchers would probably say "fuck no" and might prefer to take their own chances negotiating with the AI. So we need to trust the experts to not do something like that. What should they do, then?
Whatever your view of an ideal society, an uber-powerful AI can probably be used to implement it. But people probably will have lots of views of ideal societies that differ in important aspects. (Some dimensions: Are people immortal? Do animals and pets exist? Do humans have every need taken care of by robots, or do they mostly take care of themselves? Do artificial beings exist? Do we get arbitrary gene-editing or remain as we are today?) If AI people can expect the expert negotiators to drastically change the world into some vision of an ideal society... I'm not certain of this, but it seems reasonably likely that there's no single vision that the majority of AI people would be pleased with—perhaps not even a vision that the majority wouldn't say "Fuck no, I'd rather YOLO it" to. In which case the "call in the experts" protocol fails.
It would be a shame if humanity's predictable inability to agree on political philosophies pushed them to kill each other with reckless AGI development.
That being the case, my inclination would be to ask the AI for some list of obviously-good things (cures for Alzheimer's, cancer, etc.) and then ... I'm not sure what next. Probably the last step should be "get input from the rest of humanity on how they want to be", unless we've somehow reached agreement on that in advance. In theory one could put it up to an actual democratic vote.
If it were truly up to me, I might be inclined to go for "extend everyone's life by 50 years, make everyone gradually more intelligent, ensure that no one nukes/superplagues/etc. the world in the meantime, and reevaluate in 50 years".
Maybe one could have a general mechanism where people suggest wishes (maybe with Reddit-style upvoting to see which ones get put to a vote), and then, if a wish gets >95% approval or something, it gets implemented. Things like "Alzheimer's cure" would presumably pass that bar, and things like "Upload everyone's brains into computers and destroy their organic bodies" wouldn't, at least not anytime soon.
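To make the shape of that mechanism concrete, here is a minimal, purely hypothetical sketch in Python. The `Wish` and `WishRegistry` names, the top-10 ballot, and the exact 95% bar are illustrative assumptions on my part, not anything specified in the comment beyond the rough ">95% approval" idea:

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 0.95  # hypothetical ">95% approval" bar from the text


@dataclass
class Wish:
    """A proposed wish, e.g. "cure Alzheimer's"."""
    description: str
    upvotes: int = 0   # Reddit-style upvotes decide which wishes get put to a vote
    yes_votes: int = 0
    no_votes: int = 0

    def approval(self) -> float:
        """Fraction of voters approving; 0.0 if no one has voted yet."""
        total = self.yes_votes + self.no_votes
        return self.yes_votes / total if total else 0.0


@dataclass
class WishRegistry:
    wishes: list[Wish] = field(default_factory=list)

    def ballot(self, top_n: int = 10) -> list[Wish]:
        """The most-upvoted wishes get put to an actual vote."""
        return sorted(self.wishes, key=lambda w: w.upvotes, reverse=True)[:top_n]

    def approved(self) -> list[Wish]:
        """Wishes that clear the supermajority bar and may be implemented."""
        return [w for w in self.wishes if w.approval() > APPROVAL_THRESHOLD]
```

In this toy model, something like an Alzheimer's cure would presumably clear the bar, while "upload everyone and destroy their organic bodies" would not; everything hard (who gets to vote, how wishes are phrased, how manipulation is prevented) is exactly what the sketch leaves out.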
Another aspect of this situation: If you do raise the alarm, and then start curing cancer / otherwise getting benefits that clearly demonstrate you have a superintelligent AI... Anyone who knew what research paths you were following at the time gets a hint of how to make their own AGI. And even if they don't know which AI group managed to do it, if there are only, say, 5-10 serious AI groups that seem far enough along, that still narrows down the search space a lot. So... I think it would be good if "responsible" AI groups agreed to the following: If one group manages to create an AGI and show that it has sufficiently godlike capabilities (it cures cancer and whatnot [well, you'd want problems whose answers can be verified more quickly than that]) and that it's following the "bring in the experts and let them negotiate on behalf of humanity" protocol, then the other groups can retire, or at least stop their research and direct their efforts towards helping the first group. This would be voluntary, and wouldn't incentivize anyone to be reckless.
(Maybe it'd incentivize them to prove the capabilities quickly; but perhaps that can be part of the "quick automated test suite" given to candidate AGIs.)
There could be a few "probably-bad actor" AI groups that wouldn't agree to this, but (a) they wouldn't be the majority and (b) if this were a clearly good protocol that the good people agreed to, then that would limit the bad-actor groups' access to good-aligned talent.
If your AGI is capable and informed enough to give you English-language arguments about the world's strategic situation, then you've either made your system massively too capable to be safe, or you already solved the alignment problem for far more limited AGI and are now able to devote arbitrarily large amounts of time to figuring out…