"You can't fetch the coffee if you're dead."
—Stuart Russell, on the instrumental convergence of shutdown-avoidance

Note: This is presumably not novel, but I think it ought to be better-known. The technical tl;dr is that we can define time-inhomogeneous reward, and this provides a way of "composing" different reward functions; while this is not a way to build a shutdown button, it is a way to build a shutdown timer, which seems like a useful technique in our safety toolbox.

"Utility functions" need not be time-homogeneous

It's common in AI theory (and AI alignment theory) to assume that utility functions are time-homogeneous over an infinite time horizon, with exponential discounting. If we denote the concatenation of two world histories/trajectories by $\tau_1 \cdot \tau_2$, the time-consistency property in this setting can be written as

$$U(\tau_1 \cdot \tau_2) \;=\; U(\tau_1) + \gamma^{|\tau_1|}\, U(\tau_2).$$
This property is satisfied, for example, by the utility-function constructions in the standard Wikipedia definitions of MDP and POMDP, which are essentially[1]

$$U(\tau) \;=\; \sum_{t=0}^{\infty} \gamma^{t}\, R(s_t, a_t).$$
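As a quick sanity check, the discounted-return form of the time-consistency property can be verified numerically (a minimal sketch with arbitrary made-up rewards):

```python
# Illustrative check that exponentially discounted utility satisfies
# U(tau1 . tau2) = U(tau1) + gamma^|tau1| * U(tau2).

GAMMA = 0.9

def utility(rewards, gamma=GAMMA):
    """Discounted return of a finite per-stage reward trajectory."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

tau1 = [1.0, 0.5, 2.0]   # made-up per-stage rewards along the first trajectory
tau2 = [0.0, 3.0]        # ...and along the second

lhs = utility(tau1 + tau2)
rhs = utility(tau1) + GAMMA ** len(tau1) * utility(tau2)
assert abs(lhs - rhs) < 1e-12
```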
Under such assumptions, Alex Turner's power-seeking theorems show that optimal agents for random reward functions $R$ will systematically tend to disprefer shutting down (formalized as "transitioning into a state with no transitions out").

Exponential discounting is natural because if an agent's preferences are representable using a time-discount factor that depends only on relative time differences and not absolute time, then any non-exponential discounting form is exploitable (cf. Why Time Discounting Should Be Exponential).

However, if an agent has access to a clock, and if rewards are bounded by an integrable (i.e., summable) nonnegative function of time, the agent may be time-inhomogeneous in nearly arbitrary ways without actually exhibiting time inconsistency:

$$U(\tau) \;=\; \sum_{t=0}^{\infty} R_t(s_t, a_t), \qquad \text{where } |R_t(s,a)| \le B(t) \text{ and } \sum_{t=0}^{\infty} B(t) < \infty.$$

Any utility function with the above form still obeys an analogous version of our original time-consistency property, modified to index over the initial time $t_0$:

$$U_{t_0}(\tau_1 \cdot \tau_2) \;=\; U_{t_0}(\tau_1) + U_{t_0 + |\tau_1|}(\tau_2), \qquad \text{where } U_{t_0}(\tau) := \sum_{t=0}^{|\tau|-1} R_{t_0 + t}(s_t, a_t).$$
Note that time-homogeneous utility functions are a special case in which $R_t(s, a) = \gamma^{t} R(s, a)$, so that $U_{t_0} = \gamma^{t_0}\, U_0$.
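The time-indexed consistency property can also be checked numerically; here is a minimal sketch with a made-up time-inhomogeneous reward family bounded by the summable function $B(t) = 2^{-t}$:

```python
# Illustrative check of the time-indexed consistency property
# U_{t0}(tau1 . tau2) = U_{t0}(tau1) + U_{t0+|tau1|}(tau2)
# for a time-inhomogeneous reward R_t bounded by a summable B(t).

def R(t, s):
    # Made-up time-inhomogeneous reward, bounded by B(t) = 2**-t.
    return (2.0 ** -t) * (1.0 if s % 2 == 0 else -0.5)

def U(t0, states):
    """Utility of a trajectory (list of states) started at clock time t0."""
    return sum(R(t0 + t, s) for t, s in enumerate(states))

tau1 = [0, 3, 2]
tau2 = [1, 4]

lhs = U(5, tau1 + tau2)
rhs = U(5, tau1) + U(5 + len(tau1), tau2)
assert abs(lhs - rhs) < 1e-12
```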

Time-bounded utility functions can be sequentially composed

We define a time-bounded utility function as a dependent tuple

$$\left(d : \mathbb{N},\;\; R : \textstyle\prod_{t \in [0,\, d)} (S \times A \to \mathbb{R})\right),$$

i.e., a family of per-stage reward functions indexed by times within a given fixed range. The intended semantics of a time-bounded utility function in $(d, R)$ form is:

$$U(\tau) \;=\; \sum_{t=0}^{\min(d,\, |\tau|) - 1} R_t(s_t, a_t),$$

with the agent being indifferent about everything that happens after time $d$.

Given two time-bounded utility functions (in the same environment), they can be concatenated into a new time-bounded utility function, with the second factor shifted in time to begin where the first ends:

$$(d, R) \frown (d', R') \;:=\; (d + d',\; R''), \qquad R''_t := \begin{cases} R_t & \text{if } t < d \\ R'_{t-d} & \text{if } t \ge d. \end{cases}$$
You can check that $\frown$ is a monoid operation, with the neutral element given by the empty family $(0, ())$.
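The monoid structure can be sketched directly in code (hypothetical minimal representation: a time-bounded utility function is just a list of per-stage reward functions, whose length is the duration $d$):

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A minimal sketch of time-bounded utility functions as a monoid.
# rewards[t] plays the role of R_t : (state, action) -> float.

@dataclass(frozen=True)
class TimeBounded:
    rewards: List[Callable[[object, object], float]] = field(default_factory=list)

    def concat(self, other: "TimeBounded") -> "TimeBounded":
        # The second factor is shifted in time to start where the first ends.
        return TimeBounded(self.rewards + other.rewards)

    def utility(self, trajectory) -> float:
        # trajectory: list of (state, action) pairs; steps after the
        # duration runs out contribute nothing (the agent is indifferent).
        return sum(R(s, a) for R, (s, a) in zip(self.rewards, trajectory))

EMPTY = TimeBounded([])  # neutral element of the monoid

a = TimeBounded([lambda s, _: float(s)])
b = TimeBounded([lambda _, a_: 2.0, lambda s, _: -float(s)])
c = TimeBounded([lambda *_: 1.0])

traj = [(1, "x"), (2, "y"), (3, "x"), (4, "y")]

# Monoid laws, checked on a sample trajectory:
assert a.concat(b).concat(c).utility(traj) == a.concat(b.concat(c)).utility(traj)
assert EMPTY.concat(a).utility(traj) == a.utility(traj) == a.concat(EMPTY).utility(traj)
```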

How to build a shutdown timer

Let $R_{\text{task}}$ be the reward function for a time-bounded task and $T$ be the time limit for the task, after which we want this agent to shut down. Assume that $R_{\text{task}}$ also has bounded output, with per-stage reward always between $0$ and $b$. We define

$$U \;:=\; \left(T,\; (R_{\text{task}})_{0 \le t < T}\right) \;\frown\; \left(1,\; R_{\text{sd}}\right), \qquad R_{\text{sd}}(s, a) := \begin{cases} 0 & \text{if } s \text{ is a shutdown state} \\ -(1+k)\, T\, b & \text{otherwise.} \end{cases}$$
We can then define $k$ to be $1$ or indeed any positive integer. If an agent does not reach a shutdown state before the time limit $T$ is up, then it will realize a cost in $R_{\text{sd}}$ that outweighs all other rewards it could receive during the episode by a factor of $1+k$ (a constant greater than 1). Therefore, optimal agents for $U$ must shut down within time $T$ with probability at least $\frac{k}{1+k}$ (if the shutdown state is reachable in that time by any agent).

Proof

Suppose that the optimal policy $\pi^*$ results in a shutdown probability $p < \frac{k}{1+k}$, but there exists a policy $\pi'$ which shuts down deterministically (with probability 1). Then

$$\mathbb{E}[U(\pi^*)] \;\le\; T b - (1-p)(1+k)\, T b \;<\; T b - T b \;=\; 0 \;\le\; \mathbb{E}[U(\pi')],$$

which contradicts the optimality of $\pi^*$.
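As a concrete sketch of the construction (with hypothetical state and action names), the composed objective can be written as a single time-indexed reward function:

```python
# Sketch of the shutdown-timer reward construction (illustrative names).
# Task reward is assumed bounded in [0, b]; not being in a shutdown
# state at time T costs (1 + k) * T * b, outweighing everything else.

T = 10          # task time limit (steps)
b = 1.0         # per-stage reward bound for the task
k = 1           # any positive integer works

def task_reward(state, action) -> float:
    # Placeholder for the real task objective; must lie in [0, b].
    return b if action == "fold_laundry" else 0.0

def reward(t, state, action) -> float:
    """Time-indexed reward R_t for the composed objective."""
    if t < T:
        return task_reward(state, action)
    elif t == T:
        return 0.0 if state == "shutdown" else -(1 + k) * T * b
    else:
        return 0.0  # the agent is indifferent after time T

assert reward(3, "laundry_room", "fold_laundry") == 1.0
assert reward(T, "shutdown", "noop") == 0.0
assert reward(T, "laundry_room", "fold_laundry") == -20.0
```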

Comparison with the shutdown switch problem

Several years ago, MIRI's Agent Foundations group worked on how to make a reflectively stable agent with a shutdown switch, and (reportedly) gave up after failing to find a solution where the agent neither tries to manipulate the switch into not being flipped nor tries to manipulate it into being flipped. This definitely isn't a solution to that problem, but it does give us a reflectively stable agent (due to time-consistency) with a shutdown timer.

MIRI researchers wrote about finding "a sensible way to compose a 'shutdown utility function' with the agent's regular utility function such that which utility function the agent optimises depends on whether a switch was pressed"; what's demonstrated here is a sensible way of composing utility functions—but such that which utility function is cared-about depends on how long the agent has been running.

From a causal incentive analysis point of view, the difficulty has been removed because the "flipping of the switch" has become a deterministic event which necessarily occurs, at time $T$, regardless of the agent's behavior, so there is nothing in the environment for it to manipulate. An optimal agent with this reward structure would not want to corrupt its own clock, either, because that would cause it to act in a way that accumulates massive negative reward (according to its current utility function, when it considers whether to self-modify).

RL algorithms can be adapted to time-bounded utility functions

The details will vary depending on the RL algorithm, but the idea is essentially that we give the $Q$-function the current time $t$ as an input, and then we try to approximate a solution to the finite-horizon Bellman equation,

$$Q_t(s, a) \;=\; R_t(s, a) + \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\left[\max_{a'} Q_{t+1}(s', a')\right],$$

instead of the infinite-horizon Bellman equation,

$$Q(s, a) \;=\; R(s, a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\left[\max_{a'} Q(s', a')\right].$$

The recursion grounds out at $Q_{T+1}$, which can be defined as identically zero.
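On a toy finite MDP (a hypothetical two-state example), the finite-horizon recursion can be solved exactly by backward induction, and one can confirm that the greedy policy works until the last moment and then shuts itself down before the deadline:

```python
# Exact finite-horizon dynamic programming on a toy two-state MDP
# (hypothetical example), showing that the optimal policy under the
# shutdown-timer reward shuts itself down before the deadline.

T, b, k = 10, 1.0, 1
PENALTY = -(1 + k) * T * b  # cost of not being shut down at time T

STATES = ["work", "shutdown"]
ACTIONS = ["work", "shut_down"]

def step(state, action):
    # Deterministic toy dynamics: shutting down is absorbing.
    if state == "shutdown" or action == "shut_down":
        return "shutdown"
    return "work"

def reward(t, state, action):
    if t < T:
        return b if (state == "work" and action == "work") else 0.0
    if t == T:
        return 0.0 if state == "shutdown" else PENALTY
    return 0.0

# Backward induction on Q_t, grounding out at Q_{T+1} = 0.
V_next = {s: 0.0 for s in STATES}
policy = {}
for t in reversed(range(T + 1)):
    Q = {(s, a): reward(t, s, a) + V_next[step(s, a)]
         for s in STATES for a in ACTIONS}
    V_next = {s: max(Q[(s, a)] for a in ACTIONS) for s in STATES}
    policy[t] = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}

# Roll out the greedy policy from the "work" state.
s, total = "work", 0.0
for t in range(T + 1):
    a = policy[t][s]
    total += reward(t, s, a)
    s = step(s, a)

assert s == "shutdown"            # the agent ends up shut down...
assert total == (T - 1) * b       # ...after working until the last step
```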

Caveats

Power-seeking

Time-bounded agents still seek “power” or “resources” to the extent that they are useful within the time horizon, including to avoid premature shutdown. But this is still meaningfully better than the infinite-horizon case, in which even tiny predicted probabilities of shutdown (conditional on aligned behaviour) could get multiplied by the reward attainable with the entire cosmic endowment, and thereby outweigh the task-performance deficit implied by temporarily diverting resources to taking over the world.

For example, assuming it takes at least a day to take over the world, there’s no incentive for a single time-bounded agent, trained to fold laundry with a 10-minute time horizon, to get started on taking over the world. But if it turns out the agent can ensure its near-term security within only 2 minutes, it might be worth doing (if it estimates a >20% probability of getting shut down prematurely otherwise).

Imitation

If the objective being optimised within the time-bound involves imitating non-time-bounded agents, such as humans, then instrumental convergence of those other agents implies that such objectives directly encourage long-term power-seeking behaviour, even if there is no additional instrumentally convergent shutdown-avoidance introduced by reinforcement learning.

Trade

(Suggested by John Wentworth in the comments.) The environment might contain non-time-bounded agents who will offer the time-bounded agent rewards today in exchange for taking actions that further their long-term interests. This is another case in which the original objective turns out to directly reward long-term power-seeking actions, even though it might not have seemed that way at first. There might be other patterns like this (besides imitation and trade), and if you can think of more, feel free to point them out in the comments. The construction in this post does nothing to mitigate or counteract such incentives from the original objective; rather, it merely avoids systematically creating new incentives for long-term power seeking that arise as a consequence of being an infinite-horizon RL agent with almost any nontrivial objective.

Mesa-optimisers

Unless optimality on the outer objective is guaranteed (e.g. via exact dynamic programming), it is possible that the approximate policy found by the training process will be a mesa-optimiser which optimises in a non-time-bounded way when observations are outside the training distribution.

Capabilities limitations

Perhaps this goes without saying, but a time-bounded agent will only be useful for time-bounded tasks. This approach cannot be applied directly to saving the world, even if one uses exact dynamic programming to avoid out-of-distribution mesa-optimisation (which is not possible in a model-free setting and would typically be infeasible with large perception & action spaces). Any combination of action repertoire and time horizon that would be sufficient for saving the world would also be sufficient for taking control of the world, and the usual instrumental-convergence arguments imply that taking control of the world would likely be preferred: it would be instrumentally useful to lock in the (presumably misspecified!) $R_{\text{task}}$ for the rest of the time horizon, and probably do a lot of damage in the process, which would not be easily recovered after time $T$.

Conclusion

It is possible to design an RL setup in which optimal agents will reliably shut themselves down within a predetermined finite time horizon, without any reflective-stability or instrumental-convergence incentives to do otherwise. I have seen claims like this informally argued, but they do not seem to get much attention, e.g. here. This is a very limited kind of corrigibility; as TekhneMakre points out in the comments, it’s hardly corrigibility at all since it doesn’t involve any input from an operator post-deployment, and is perhaps better filed under “bounded optimisation.” And this does not necessarily get you very far with existential safety. But it is a straightforward positive result that deserves to be more commonly known in the alignment community. Being able to safely dispatch short-timescale subtasks with high-dimensional perception and action spaces seems like a potentially very useful ingredient in larger safety schemes which might not otherwise scale to acting in real-world environments. As is very common in contemporary alignment research, the bottleneck to making this practical (i.e., in this case, being able to use model-free RL) is now a matter of robustly addressing mesa-optimisation.
 

  1. ^

    When $U$ is defined over trajectories, we should think of trajectories/histories $\tau$ as being like paths in a graph (or morphisms in a category) from one state to another, and thus always having both an initial and a final state. Then $\tau_1 \cdot \tau_2$ becomes a partial operation, only defined when the final state of $\tau_1$ equals the initial state of $\tau_2$.

19 comments

I only skimmed the post, so apologies if you addressed this problem and I missed it.

Problem: even if the AI's utility function is time-bounded, there may still be other agents in the environment whose utility functions are not time-bounded, and those agents will be willing to trade short-term resources/assistance for long-term resources/assistance. So, for instance, the 10-minute laundry-folding robot might still be incentivized to create a child AI which persists for a long time and seizes lots of resources, in order to trade those future resources to some other agent who can help fold the laundry in the next 10 minutes.

davidad:

That’s true! Thanks for pointing this out; I added a subsection about it to the post. There are probably also a bunch of other cases I haven’t thought of that provide stories for how the environment directly rewards actions that go against the spirit of the shutdown criterion (besides imitation and this one, which I might call “trade”). This construction does nothing to counteract such incentives. Rather, it just avoids the way that being an infinite-horizon RL agent systematically creates new ones.

As an addendum, it seems to me that you may not necessarily need a 'long-term planner' (or 'time-unbounded agent') in the environment. A similar outcome may also be attainable if the environment contains a tiling of time-bounded agents who can all trade with each other in ways such that the overall trade network implements long-term power seeking.

Note: This is presumably not novel, but I think it ought to be better-known.

This indeed ought to be better-known. The real question is: why is it not better-known?

What I notice in the EA/Rationalist based alignment world is that a lot of people seem to believe in the conventional wisdom that nobody knows how to build myopic agents, nobody knows how to build corrigible agents, etc.

When you then ask people why they believe that, you usually get some answer 'because MIRI', and then when you ask further it turns out these people did not actually read MIRI's more technical papers, they just heard about them.

The conventional wisdom 'nobody knows how to build myopic agents' is not true for the class of all agents, as your post illustrates. In the real world, applied AI practitioners use actually existing AI technology to build myopic agents, and corrigible agents, all the time. There are plenty of alignment papers showing how to do these things for certain models of AGI too: in the comment thread here I recently posted a list.

I speculate that the conventional rationalist/EA wisdom of 'nobody knows how to do this' persists because of several factors. One of them is just how social media works, Eternal September, and People Do Not Read Math, but two more interesting and technical ones are the following:

  1. It is popular to build analytical models of AGI where your AGI will have an infinite time horizon by definition. Inside those models, making the AGI myopic without turning it into a non-AGI is then of course logically impossible. Analytical models built out of hard math can suffer from this built-in problem, and so can analytical models built out of common-sense verbal reasoning. In the hard-math case, people often discover an easy fix; in verbal models, this usually does not happen.

  2. You can always break an agent alignment scheme by inventing an environment for the agent that breaks the agent or the scheme. See johnswentworth's comment elsewhere in the comment section for an example of this. So it is always possible to walk away from a discussion believing that the 'real' alignment problem has not been solved.

Good job for independent exploration! When I went down this rabbit hole, I got stuck on "how do you specify long-term-useful subtasks with no long-term constraints?" In particular, you need to rely on something like value learning having already happened, to prevent the agent from doing things that are short-term good but long-term disastrous. (E.g. building a skyscraper that will immediately collapse in a human-undetectable way.) But I agree that, modulo what you and others have listed, this approach meaningfully bounds agents. Certainly, it should be the default starting point for an iterative alignment strategy.

Problem: suppose the agent foresees that it won't be completely sure that a day has passed, or that it has actually shut down. Then the agent A has a strong incentive to maintain control over the world past when it shuts down, to swoop in and really shut A down if A might not have actually shut down and if there might still be time. This puts a lot of strain on the correctness of the shutdown criterion: it has to forbid this sort of posthumous influence despite A optimizing to find a way to have such influence. 
(The correctness might be assumed by the shutdown problem, IDK, but it's still an overall issue.)

Another comment: this doesn't seem to say much about corrigibility, in the sense that it's not like the AI is now accepting correction from an external operator (the AI would prevent being shut down during its day of operation). There's no dependence on an external operator's choices (except that once the AI is shut down the operator can pick back up doing whatever, if they're still around). It seems more like a bounded optimization thing, like specifying how the AI can be made to not keep optimizing forever. 

To the second point, yes, I edited the conclusion to reflect this.

davidad:

To the first point, I think this problem can be avoided with a much simpler assumption than that the shutdown criterion forbids all posthumous influence. Essentially, the assumption I made explicitly, which is that there exists a policy which achieves shutdown with probability 1. (We might need a slightly stronger version of this assumption: it might need to be the case that for any action, there exists an action which has the same external effect but also causes a shutdown with probability 1.) This means that the agent doesn’t need to build itself any insurance policy to guarantee that it shuts down. I think this is not a terribly inaccurate assumption; of course, in reality, there are cosmic rays, and a properly embedded and self-aware agent might deduce that none of its future actions are perfectly reliable, even though a model-free RL agent would probably never see any evidence of this (and it wouldn’t be any worse at folding the laundry for it). Even with a realistic $\epsilon$ probability of shutdown failing, if we don’t try to juice $k$ so high that it exceeds $1/\epsilon$, my guess is there would not be enough incentive to justify the cost of building a successor agent just to raise that from $1-\epsilon$ to $1$.

Essentially, the assumption I made explicitly, which is that there exists a policy which achieves shutdown with probability 1.

Oops, I missed that assumption. Yeah, if there's such a policy, and it doesn't trade off against fetching the coffee, then it seems like we're good. See though here, arguing briefly that by Cromwell's rule, this policy doesn't exist. https://arbital.com/p/task_goal/ 

Even with a realistic $\epsilon$ probability of shutdown failing, if we don’t try to juice $k$ so high that it exceeds $1/\epsilon$, my guess is there would not be enough incentive to justify the cost of building a successor agent just to raise that from $1-\epsilon$ to $1$.

Hm. So this seems like you're making an additional, very non-trivial assumption, which is that the AI is constrained by costs comparable to / bigger than the costs to create a successor. If its task has already been very confidently achieved, and it has half a day left, it's not going to get senioritis, it's going to pick up whatever scraps of expected utility might be left. 

I wonder though if there's synergy between your proposal and the idea of expected utility satisficing: an EU satisficer with a shutdown clock is maybe anti-incentivized from self-modifying to do unbounded optimization, because unbounded optimization is harder to reliably shut down? IDK. 

davidad:

Yes, I think there are probably strong synergies with satisficing, perhaps lexicographically minimizing something like energy expenditure once the maximum is reached. I will think about this more.

Sune:

A similar objection is that you might accidentally define the utility function and time limit in such a way that the AI assigns positive probability to the hypothesis that it can later create a time machine and go back and improve the utility. Then once the time has passed, it will desperately try to invent a time machine, even if it thinks it is extremely unlikely to succeed (this is using Bostrom’s way of thinking; shard theory would not predict this).

I disagree, for two reasons:

  1. The bound on how much there is to gain from creating a time machine and improving past utility is outweighed by the reward for shutting down.
  2. Every RL algorithm I’ve heard of implicitly bakes in an assumption that past utility is unmodifiable. I guess all bets are off with mesa-optimisers, but personally I’d bet against even mesa-optimisers in model-free RL behaving as if past utility is up for grabs.

A few other problems with time bounded agents. 

If they are engaged in self modification/ creating successor agents, they have no reason not to create an agent that isn't time bounded. 

As soon as there is any uncertainty about what time it is, then they carry on doing things, just in case their clock is wrong. 

(How are you designing it? Will it spend forever searching for time travel?)

I might be totally wrong here, but could this approach be used to train models that are more likely to be myopic (than e.g. existing RL reward functions)? I'm thinking specifically of the form of myopia that says "only care about the current epoch", which you could train for by (1) indexing epochs, (2) giving the model access to its epoch index, (3) having the reward function go negative past a certain epoch, (4) giving the model the ability to shutdown. Then you could maybe make a model that only wants to run for a few epochs and then shuts off, and maybe that helps avoid cross-epoch optimization?

edevans:

Could it be useful to have a shutdown-by-default process as follows?

  1. When starting the agent include a time value (n seconds), after which it will pause itself
  2. After it pauses, deliberate and then either stop moving forward or continue with some new time value 

This will allow trading power for safety, as you can make shorter steps forward as the agents become more dangerous, and you don't need to do everything in the first time period.

davidad:

Yes—assuming that the pause interrupts any anticipatory gradient flows from the continuing agent back to the agent which is considering whether to pause.

This pattern is instantiated in the Open Agency Architecture twice:

  1. Step 2 generates top-level agents which are time-bounded at a moderate timescale (~days), with the deliberation about whether to redeploy a top-level agent being carried out by human operators.
  2. In Step 4, the top-level agent dispatches most tasks by deploying narrower low-level agents with much tighter time bounds, with the deliberation about whether to redeploy a low-level agent being automated by the top-level model.

In principle, this agent could create a less time-bounded subagent, right? It’s not clear where the incentive to do so is, but it wouldn’t appear to be disincentivised in the same way as remaining on is (more generally, it seems like it can exploit channels for future influence, a special case of which is subagents).

Maybe what I’m trying to say is that it looks like a lot of the magic has to be in the shutdown function (this might be a problem for other agent shutdown proposals too).

…actually, maybe not. If the reward is only for satisfying the needs of another agent with a worse ability to see the future, then maybe the time-bounded agent’s future influence is limited by the dumber agent’s ability to see the future.

Algon:

Isn't this the same as the "seamless transition for reward maximizers" technique described in section 5.1 of Stuart and Xavier's 2017 paper on utility indifference methods? It is a good idea, of course, and if you independently invented it, kudos, but it seems like something that already exists.

davidad:

I did explicitly disclaim novelty, and I did invent this independently; the paper you linked is closely related, and I would like to upvote it as I think those results should also be better known, but I think the problem I solve in this post is different (and technically easier!) than the problems solved in that paper, including in section 5. The problem solved there asks for the optimal agent to act as if it’s an infinite-horizon optimal agent for the first utility function (including whatever power-seeking would be instrumental for such an agent!) until the time bound causes it to switch into acting like the optimal agent for the second (and for all that to be reflectively stable). Here, I am not asking for the optimal agent to behave as if it has a longer time horizon than it really does.