(I think Carl Shulman came up with the “hacking the SSH server” example, thanks to him for that. Thanks to Ryan Greenblatt, Jenny Nitishinskaya, and Ajeya Cotra for comments.)

EDIT: I recommend reading my discussion with Oli in the comments for various useful clarifications.

In my opinion, the prototypical example of an action which an AI can take which is unacceptably bad even if it happens very rarely (aka a catastrophic failure) is something like “the AI makes a code change to some code used on the datacenter where it is running that adds a backdoor, allowing it to get root access to all the computers in the datacenter”. As a full story, this looks something like:

At some point when the AI is quite powerful and useful but not totally intellectually self-sufficient yet, the AI makes a code change to some code used on the datacenter where it is running that adds a backdoor, allowing it to get root access to all the computers in the datacenter. Once it has root access, it modifies the SSH server so that whenever any humans log into the datacenter, the AI is able to intercept any of their commands or the data that they see. The AI gets increasingly powerful over the next month, as the human ML researchers with AI assistance continue to make progress (unaware that some things that they’re seeing are faked) and various training runs/self-improvement loops continue. After a month the AI is powerful enough that it decides it’s worth it to take overtly aggressive actions to ensure the physical security of its data centers and allow it to build lots of factories; this is probably the point at which we die, if we die.

This is in contrast to a catastrophic action like:

The AI figures out how to develop Drexlerian molecular nanotechnology, and pays someone online to synthesize some proteins then mix them together, and now has nanobots.

The key point here is that I’m imagining that the catastrophic action isn’t the AI doing something incredibly hard, it’s the AI doing something relatively easy that is entirely a zero-sum action that removes control of the situation from humans. (By zero-sum, I mean something like: building a computer produces real economic value and so is not zero-sum, while stealing someone’s computer is just a transfer of control over items that already exist, so is zero-sum.)


I often think about AI alignment by splitting the problem into the low-stakes and high-stakes regimes, as suggested by Paul here, though I have some reservations about this framing. Under this split, in the high stakes problem, we assume that we have access to a catastrophe classifier which can expensively tell you whether a proposed AI action is catastrophically bad, and we want to ensure that the AI doesn’t do things that are bad according to the catastrophe detector.

Sometimes, people are confused this high-stakes alignment set-up, saying something like “but if your AI is powerful enough to take over the world in a single action, and you have an oversight process that you trust to evaluate whether actions are catastrophic or not, why not just ask the AI to do some small number of incredibly powerful actions that will save the world (eg solve the alignment problem), and run the overseer on the proposed actions to make sure they aren’t bad?”

My answer is that I’m not assuming the AI is powerful enough to take over the world in a few actions, I’m just saying that it’s in a situation precarious enough that we might lose a lot of control due to a few small but crucial changes in the world. Eventually the AI does need to be powerful enough to defend its datacenter and suppress human opposition. But if it can prevent humans from knowing what’s going on in the datacenter (or realizing something is amiss), it can slowly grow its power over the course of many actions.

New Comment
14 comments, sorted by Click to highlight new comments since:

Some more points about this action:

  • Controlling the datacenter means controlling the gradients/reward function, so that now AIs can do things that would otherwise suffer updating from SGD, e.g. acting on inner misaligned goals, or concealing its full capabilities even when this lowers performance
    • For reward-hungry AIs, getting to set reward to maximum (and keep it there?) seems extremely desirable
    • This also means getting past interpretability tools
    • Tripwires or watchdog AIs in the same datacenter that don't succeed in stopping the action no longer have the power to interfere after, so collusion/coordination is solved within the datacenter thereafter
  • Backdoors can be hard for (non-mind-reading/weight-sharing) critics to find (see cyberwarfare and cybersecurity) without the knowledge of the creator: the effort to find or insert one backdoor to use is much less than that required to find and patch all exploits an attacker might be using
    • Although a critic who knows a backdoor can have a relatively easy time demonstrating it, and critics should be designed to access the knowledge of the attacker/misbehaving model or be heads of it
    • Critic AIs can be left in the position of saying 'a backdoor has probably been inserted somewhere or will be soon but I can't tell you where,' a known danger for AI safety through debate
  • In an intelligence explosion context where AI capabilities are growing incredibly rapidly thanks to AI labor, there isn't time for humans to do unaided examination of new AI-produced software while keeping up the pace, at best briefly examining some AI attempts to point out suspicious attempts to insert vulnerabilities
    • It's plausible that the different abilities of AIs lead to optimal code design for them being harder for humans to understand (with less experience and working memory, and eventually general intelligence)
    • If you don't let AIs  design your new datacenter software, not only will it be less efficient, but it may be easily cracked by outside AI-enabled attackers
  • We are already routinely training big models with code repositories, and using programming as an easy virtual RL task (tests can provide immediate feedback cheaply), so these are skills we are moving towards providing to AI
  • We know exploits and backdoors are possible because humans can already do them
[-]habrykaΩ11320

I feel like the focus on getting access to its own datacenter is too strong in this story. Seems like it could also just involve hacking some random remote server, or convincing some random person on the internet to buy some compute for them, or to execute some other plan for them (like producing a custom chip), or convincing a researcher that it should get more resources on the existing datacenter, or threatening some other stakeholder somewhere in order to give them power or compute of some kind. Also, all of course selected for plans that are least likely to be detected, if the AI expects to eventually gain the power to do some kind of pivotal thing here. I agree that a cybersecurity problem is on-priors likely, but I feel like this current archetype feels too narrow to me.

Also, I think at the end of the day, the most likely catastrophic AI action will be "the AI was already given practically root access to its datacenter, and the AI will be directly put in control of its own training, because the most reckless team won't care about safety and unless the AI actively signals that it is planning to act adversarially towards the developers, people will just keep pouring resources into it with approximately no safeguards, and then it kills everyone". I do hope we end up in worlds where we try harder than that (and that seems achievable), but it seemed good to state that I expect us to fail at an even earlier stage than this story seems to imply.

[-]BuckΩ240

I feel like the focus on getting access to its own datacenter is too strong in this story. Seems like it could also just involve hacking some random remote server, or convincing some random person on the internet to buy some compute for them, or to execute some other plan for them (like producing a custom chip), or convincing a researcher that it should get more resources on the existing datacenter, or threatening some other stakeholder somewhere in order to give them power or compute of some kind. Also, all of course selected for plans that are least likely to be detected, if the AI expects to eventually gain the power to do some kind of pivotal thing here. I agree that a cybersecurity problem is on-priors likely, but I feel like this current archetype feels too narrow to me.

Except for maybe "producing a custom chip", I agree with these as other possibilities, and I think they're in line with the point I wanted to make, which is that the catastrophic action involves taking someone else's resource such that it can prevent humans from observing it or interfering with it, rather than doing something which is directly a pivotal act.

Does this distinction make sense?

Maybe this would have been clearer if I'd titled it "AI catastrophic actions are mostly not pivotal acts"?

[-]habrykaΩ460

Yeah, OK, I think this distinction makes sense, and I do feel like this distinction is important. 

Having settled this, my primary response is: 

Sure, I guess it's the most prototypical catastrophic action until we have solved it, but like, even if we solve it, we haven't solved the problem where the AI does actually get a lot smarter than humans and takes a substantially more "positive-sum" action and kills approximately everyone with the use of a bioweapon, or launches all the nukes, or develops nanotechnology. We do have to solve this problem first, but the hard problem is the part where it seems hard to stop further AI development without having a system that is also capable of killing all (or approximately all) the humans, so calling this easy problem the "prototypical catastrophic action" feels wrong to me. Solving this problem is necessary, but not sufficient for solving AI Alignment, and while it is this stage and earlier stages where I expect most worlds to end, I expect most worlds that make it past this stage to not survive either.

I think given this belief, I would think your new title is more wrong than the current title (I mean, maybe it's "mostly", because we are going to die in a low-dignity way as Eliezer would say, but it's not obviously where most of the difficulty lies).

[-]BuckΩ5140

I'm using "catastrophic" in the technical sense of "unacceptably bad even if it happens very rarely, and even if the AI does what you wanted the rest of the time", rather than "very bad thing that happens because of AI", apologies if this was confusing.

My guess is that you will wildly disagree with the frame I'm going to use here, but I'll just spell it out anyway: I'm interested in "catastrophes" as a remaining problem after you have solved the scalable oversight problem. If your action is able to do one of these "positive-sum" pivotal acts in a single action, and you haven't already lost control, then you can use your overseer to oversee the AI as it takes actions, and you by assumption only have to watch it for a small number of actions (maybe I want to say episodes rather than actions) before it's done some crazy powerful stuff and saved the world. So I think I stand by the claim that those pivotal acts aren't where much of the x-risk from AI catastrophic action (in the specific sense I'm using) comes from.

Thanks again for your thoughts here, they clarified several things for me.

I think this point is very important, and I refer to it constantly.

I wish that I'd said "the prototypical AI catastrophe is either escaping from the datacenter or getting root access to it" instead (as I noted in a comment a few months ago).

It occurred to me that the chances that a human defector may help AI to install the back-door are not negligible and may be even higher than AI doing it on its own. 

Why a human may want it? He may want to have unauthorised accesses to the AIs capabilities, or work on foreign country, or he may want to reach his personal goals by installing them into AI and then helping AI to take the world quicker. 

[+][comment deleted]20

On the off chance we spend some time in a regime where preventable+detectable catastrophic actions might be attempted, it might be a good idea to somehow encourage the creation of a Giant Alarm which will alert previously skeptical experts that a catastrophe almost occurred and hopefully freak the right people out.

[-]BuckΩ240

A year later, I still mostly stand by this point. I think that "the AI escapes the datacenter" seems about as likely as "the AI takes control of the datacenter". I sometimes refer to this distinction as "escaping out of the datacenter" vs "escaping into the datacenter".

[-]BuckΩ120

Jan Leike's post on self-exfiltration is pretty relevant.

[-]habrykaΩ240

it’s the AI doing something relatively easy that is entirely a zero-sum action that removes control of the situation from humans.

I also don't really understand the emphasis on zero-sum action. It seems pretty plausible that the AI will trade for compute with some other person around the world. My guess is, in-general, it will be easier to get access to the internet or have some kind of side-channel attack than to get root access to the datacenter the AI runs on. I do think most-likely the AI will then be capable of just taking resources instead of paying for them, but that feels like a fact contingent on quite a few details on how it goes, and trading for compute and other resources seems pretty plausible to me.

[-]BuckΩ120

It seems pretty plausible that the AI will trade for compute with some other person around the world.

Whether this is what I'm trying to call a zero-sum action depends on whose resources it's trading. If the plan is "spend a bunch of the capital that its creators have given it on compute somewhere else", then I think this is importantly zero-sum--the resources are being taken from the creators of AI, which is why the AI was able to spend so many resources. If the plan was instead "produce some ten trillion dollar invention, then sell it, then use the proceeds to buy compute elsewhere", this would seem less zero-sum, and I'm saying that I expect the first kind of thing to happen before the second.

My impression was that "zero-sum" was not used in quite the standard way. I think the idea is the AI will cause a big reassignment of Earth's capabilities to its own control. And that that's contrasted with the AI massively increasing its own capabilities and thus Earth's overall capabilities.