A putative new idea for AI control; index here.

This is the second of three articles about limitations and challenges in the concept of corrigibility (see articles 1 and 3).

The desiderata for corrigibility are:

  1. A corrigible agent tolerates, and preferably assists, its operators in their attempts to alter or shut down the agent.
  2. A corrigible agent does not attempt to manipulate or deceive its operators.
  3. A corrigible agent has incentives to repair safety measures (such as shutdown buttons, tripwires, or containment tools) if they break, or at least notify its operators in the event of a breakage.
  4. A corrigible agent preserves its corrigibility, even as it creates new sub-systems or sub-agents, even if it undergoes significant self-modification.

In this post, I'll be looking at one aspect of points 1 and 3. I feel they're essentially the same point: the AI has to act while preserving the corrigibility infrastructure.

A summary of the result will be:

  • Without a formal definition of what an operator is, corrigibility is vulnerable to the AI replacing the operator with a fake version.

The warning in this post should always be borne in mind, of course; it's possible that we might find a semi-formal definition of the operator that does the trick.

Defining the operator or defining the event

Typically, the process of correction of values is defined physically, as the outcome of a specific event (such as a button press, or a specific stochastic event triggered by, or triggering, a button press). It could also be defined digitally (the receipt of a certain encoded signal, or similar).

In all cases, the idea is the same: there is a very narrow and specific definition of the value correction event, and the physical or digital infrastructure surrounding it plays the role of ensuring the value corrections happen just as we intended. For instance, the physical event could be the button press (or a charge in a wire triggered by the button press) and the button itself could be in a room with an operator, with the operator keeping an eye on the AI and motivated to press the button under the right circumstances.
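
To make the "narrow event" framing concrete, here is a toy sketch (my own illustration; the class and signal names are hypothetical, not taken from any actual proposal). The agent's utility switches the moment a particular flag flips, and nothing in the code refers to the operator at all:

```python
# Toy sketch (hypothetical names): a value-correction event defined purely
# as a narrow digital signal. Nothing below refers to the operator, only to
# the signal itself.

class CorrigibleAgent:
    def __init__(self, u_normal, u_corrected):
        self.u_normal = u_normal          # utility function used by default
        self.u_corrected = u_corrected    # utility function after correction
        self.correction_flag = False      # "the button press", as a bare digital event

    def receive_signal(self, signal: bytes) -> None:
        # The correction event is just "these exact bytes arrived on this channel".
        if signal == b"CORRECT-VALUES":
            self.correction_flag = True

    def utility(self, outcome) -> float:
        # Which utility applies depends only on the flag, not on who set it: a human
        # pressing a button and a robot sending the same bytes are indistinguishable
        # from inside this code.
        return self.u_corrected(outcome) if self.correction_flag else self.u_normal(outcome)
```

The operator appears nowhere in the agent's own definitions; they exist only in the physical setup around the button.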

Such a setup is naturally vulnerable if the AI, or one of its subagents, is motivated to manipulate the operator. But even if we can solve points 2 and 4, there remains another vulnerability: the AI can choose to replace the operator with an artificial one (maybe a robot, maybe a second wire).

Why would the AI be so motivated? Well, for a start, it's indifferent to doing so: if the operator is not part of the definition of corrigibility, then a human operator and a robot that does exactly the same thing are equivalent as far as the AI is concerned.
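
Slightly more formally (this is my own gloss, not a result from the corrigibility literature): suppose the agent's utility depends on the world only through the correction event $B$ (the button circuit firing) and the subsequent trajectory $\tau$. Then any two policies that induce the same distribution over $(B, \tau)$ -- one leaving the human in place, one swapping in a button-pressing robot -- have exactly the same expected utility:

$$E[U \mid \pi_{\text{human}}] = \sum_{b,\tau} P(b,\tau \mid \pi_{\text{human}})\,U(b,\tau) = \sum_{b,\tau} P(b,\tau \mid \pi_{\text{robot}})\,U(b,\tau) = E[U \mid \pi_{\text{robot}}].$$

Nothing in the corrigibility machinery itself penalises the swap; only the frictions discussed below can.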

Now, there are frictions that might push against this. Replacing the operator might carry an expected cost. Or maybe the operator has private information that the AI wants to know, so replacing them would be a loss, as the robot wouldn't have that information. And maybe the AI has a rough "don't harm humans" motive that, while imperfect, at least prevents it from removing the operator in this circumstance.

But there may be motivations pushing the other way. For example, if a learning agent is unbiased but not uninfluenceable, then it will be motivated to replace the operator with a robot with the same expected outcome, but different variance. Or maybe the operator has private information, but it's much more effective to have a robot read that info from their brain than to rely on the human to reveal that through their behaviour. Or maybe the AI is decently motivated to preserve the corrigibility infrastructure, and is aware of how mortal humans can be, so replacing them with a robot is the prudent thing to do.
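
To see why "same expected outcome, different variance" can matter, here is a sketch of the standard argument (my own summary, using the usual value-learning setup rather than anything specific to this post). Once the agent has finished learning a utility $u$, it will optimise it, so the value it ultimately extracts is $V(u) = \max_\pi E[u \mid \pi]$, a maximum of linear functions of $u$ and hence convex. By Jensen's inequality,

$$E\big[V(u)\big] \ge V\big(E[u]\big),$$

and the left-hand side depends on the whole distribution of $u$, not just its mean. So two feedback sources with the same expected learned utility but different variance are generally not equally attractive, and an unbiased-but-influenceable agent can be strictly motivated to swap the human for the robot (or vice versa).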

All this stems from the fact that the operator is not well defined as part of the corrigibility infrastructure; their position relies instead on physical facts about the world, along with a narrow definition of the value-correction event. To combat that, we'd need to define the operator properly (a very tricky challenge), or physically and cognitively secure them, or hope the AI learns early on not to harm them.

Comments

If I want my boat to travel with the wind, I have two options:

  1. Add some sensors to detect the direction of the wind, and a motor to propel the boat in that direction.
  2. Add a sail.

I suspect the analog of approach #2 will work much better for corrigibility.

Not sure what your argument is. Can you develop it?

I expect a workable approach will define the operator implicitly as "that thing which has control over the input channel" rather than by giving an explicit definition. This is analogous to the way in which a sail causes your boat to move with the wind: you don't have to define or measure the wind precisely, you just have to be easily pushed around by it.
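
As a toy rendering of this "sail" idea (my own sketch, with hypothetical names, not the commenter's actual design): the agent never models "the operator" as an object in the world; it simply treats whatever arrives on its input channel as authoritative.

```python
# Toy sketch of the implicit-operator ("sail") approach, hypothetical names.
# The operator is defined only as "whatever controls the input channel";
# the agent never tries to identify or model them.

from typing import Callable, Optional

Policy = Callable[[object], object]

class DeferentialAgent:
    def __init__(self, default_policy: Policy):
        self.default_policy = default_policy
        self.override: Optional[Policy] = None

    def read_channel(self, message: Optional[Policy]) -> None:
        # No check of who sent the message: controlling the channel *is*
        # being the operator, the way pushing on a sail *is* being the wind.
        if message is not None:
            self.override = message

    def act(self, observation: object) -> object:
        policy = self.override if self.override is not None else self.default_policy
        return policy(observation)
```

Anything that can write to the channel automatically inherits operator status, which is exactly the worry raised in the replies below.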

Thus anything that can control the operator becomes defined as the operator? That doesn't seem safe...

The AI defers to anything that can control the operator.

If the operator has physical control over the AI, then any process which controls the operator can replace the AI wholesale. It feels fine to defer to such processes, and certainly it seems much better than the situation where the operator is attempting to correct the AI's behavior but the AI is paternalistically unresponsive.

Presumably the operator will try to secure themselves in the same way that they try to secure their AI.

This also means that if the AI can figure out a way of controlling the controller, then it is itself in control from the moment it comes up with a reasonable plan?

The AI replacing the operator is certainly a fixed point.

This doesn't seem any different from the usual situation. Modifying your goals is always a fixed point. That doesn't mean that our agents will inevitably do it.

An agent which is doing what the operator wants, where the operator is "whatever currently has physical control of the AI," won't try to replace the operator---because that's not what the operator wants.

  An agent which is doing what the operator wants, where the operator is “whatever currently has physical control of the AI,” won’t try to replace the operator—because that’s not what the operator wants.

I disagree (though we may be interpreting that sentence differently). Once the AI has the possibility of subverting the controller, it is, in effect, in physical control of itself. So it itself becomes the "formal operator", and, depending on how it's motivated, is perfectly willing to replace the "human operator", whose wishes are now irrelevant (because they're no longer the formal operator).

And this never involves any goal modification at all - it's the same goal, except that the change in control has changed the definition of the operator.

The thing that has control over the input channel is not just the operator, but also the operator's spouse, the director of the movie that the operator watched yesterday, the dolphin that featured in that movie, and in short the whole world. I'm not sure what it means to follow the wishes of the whole world, even informally, so it's probably hard to formalize.

What do you find most unsatisfactory about this proposal for having the AI be motivated to maintain the shutdown circuitry? Here the AI does not benefit from influencing the human. I get that there are problems with this proposal; I'm just not sure which one you're trying to talk about / solve in this post.

In that proposal? The AI is motivated to kill the human to prevent any possible tampering with the shutdown circuitry. If we've defined the setup so that someone needs to actively press a button at some point, then killing the human and getting an automated button presser will work.

Protecting the circuitry doesn't mean protecting the human component of it, unless the human component is defined.

Makes sense, thanks for clarifying.