In MIRI's paper on Corrigibility, we describe a version of Stuart Armstrong's utility indifference technique (see also this LessWrong post), and show that an agent using our version acts as if it believes it has a "magical" way of influencing the world, similar to the problem described in my post on Exploiting EDT---Eliezer calls this an "infinite improbability drive".

Stuart has pointed out to me that the version of utility indifference we describe in the paper is different in an important way from his original proposal, which involves conditioning on a low-probability stochastic event. If my current understanding is correct, Stuart's version is indeed immune to the infinite improbability drive, as long as there is no way for the agent to influence the stochastic event. However, this version can still be interpreted as giving the agent an incorrect probability distribution, and it seems like any agent with this property will be willing to accept bets which are clearly bad.
Model
The model of utility indifference I will use in this post is an instance of the one in the Corrigibility paper, with specific support for the stochastic event in Stuart's original proposal.
As in the paper, for simplicity we'll look at a toy world where all that happens is that the agent takes an action from a set A1, makes a single observation from a set O, and takes a second action from a set A2, and we assume that our utility functions are specified in terms of these actions and observations (rather than some underlying states of the world).
Our world model is given by a function p(o;a1), which gives the probability of different observations o∈O given that the agent's first action is a1∈A1. In other words, p(⋅;a1) is a probability distribution over O, for each a1∈A1. The implicit idea is that these probability distributions are computed by causal counterfactuals (or something better).
We are given two utility functions, u(a1,o,a2) and v(a1,o,a2). When choosing its first action, we want our agent to optimize for u(a1,o,a2). Then, its human operators get a chance to observe its behavior and change their minds about what its utility function should be, and if they give a "change utility function" command (which the agent observes as part of its observation o), then the agent should optimize for v(a1,o,a2) when choosing its second action. If the humans don't give this command, the agent should continue to optimize for u(a1,o,a2).
We can think of u(a1,o,a2) as a "normal" utility function and of v(a1,o,a2) as a "shutdown" utility function (which incentivizes the agent to undergo a safe shutdown), but as Stuart has pointed out, the same framework would work for allowing the human operators to edit the agent's utility function when they find a bug.
All of the above is just as in the Corrigibility paper. What we'll do differently is that we'll say that each observation is a triple, o=(s,x,y), with s∈S:={0,1} specifying whether the human operators gave the "shutdown" or "switch utility function" command; x∈X:={0,1} specifying the outcome of Stuart's random event; and y∈Y, for some fixed set Y, specifying other information the agent observes. In other words, O=S×X×Y.
Given this, the assumption that the probabilistic event is independent of the other observations, and that the agent cannot influence it, can be seen as demanding that p(x∣(s,y);a1)=p(x∣(s′,y′);a′1)=:p(x), for all s,s′∈S, all x∈X, all y,y′∈Y, and all a1,a′1∈A1 (whenever the conditioning events have positive probability). We'll sometimes make that assumption, but also look at what happens if it breaks down.
I'll use the standard kind of Bayesian abuse of notation, and write things like p(x=1;a1) to mean ∑s∈S∑y∈Yp((s,1,y);a1).
Utility indifference has the agent maximize the expectation of a certain utility function U(a1,o,a2) specified in terms of the utility functions u(a1,o,a2) and v(a1,o,a2), where the exact form of U(⋅,⋅,⋅) depends on the version of utility indifference under consideration.
To be more precise, this means that the agent chooses its second action, given that it took action a1 and made observation o, according to
A2(a1,o):=argmaxa2∈A2U(a1,o,a2)
(we don't need to take expectations here, because we're looking at a particular observation o, and that's the only probabilistic thing in our model), and that it chooses its first action by maximizing expected utility, i.e., according to
A1:=argmaxa1∈A1∑o∈Op(o;a1)⋅U(a1,o,A2(a1,o)),
which we abbreviate to
A1:=argmaxa1∈A1E[U;a1].
(The notation E[⋅;a1] for the expectation with respect to p(⋅;a1) is a bit unusual, with Ea1[⋅] being more common, but we decided to go with it because it seemed better to make the action more prominent.)
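To make this concrete, here's a minimal Python sketch of the toy model and of the two argmax choices above. All of the specific sets, numbers, and function names in it are made up for illustration; only the shape of the computation matters.

```python
import itertools

# Illustrative toy sets (not from the paper; any finite sets work).
A1 = ["a", "b"]            # first actions
A2 = ["c", "d"]            # second actions
S, X, Y = [0, 1], [0, 1], ["y0"]
O = list(itertools.product(S, X, Y))   # observations o = (s, x, y)

def p(o, a1):
    """World model p(o; a1): probability of observation o given first action a1.
    Placeholder numbers; here x happens to be independent of s, y, and a1."""
    s, x, y = o
    p_s1 = {"a": 0.1, "b": 0.5}[a1]    # probability the button gets pressed
    p_x1 = 1 - 1e-3                    # probability the signal gets through
    return (p_s1 if s == 1 else 1 - p_s1) * (p_x1 if x == 1 else 1 - p_x1)

def second_action(U, a1, o):
    """A2(a1, o) := argmax over a2 of U(a1, o, a2); no expectation needed."""
    return max(A2, key=lambda a2: U(a1, o, a2))

def first_action(U):
    """A1 := argmax over a1 of E[U; a1], the expectation under p(.; a1)."""
    def expected_U(a1):
        return sum(p(o, a1) * U(a1, o, second_action(U, a1, o)) for o in O)
    return max(A1, key=expected_U)
```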
Infinite improbability drives
Write E[⋅∣⋅;a1] for the conditional expectation with respect to the probability distribution p(⋅;a1). The version of utility indifference we consider in the paper uses a U(⋅,⋅,⋅) satisfying the equation
U(a1,(s,x,y),a2) =
  u(a1,(s,x,y),a2),   if s=0;
  v(a1,(s,x,y),a2) − E[v∣s=1;a1] + E[u∣s=0;a1],   otherwise.
This may look circular, since E[⋅;a1] is defined in terms of A2(a1,o), which is defined in terms of U(⋅,⋅,⋅), but we show in the paper that there is indeed a utility function satisfying this equation.
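Continuing the sketch above, here is one way to construct such a U in the toy model (my own illustration, not the paper's construction). The key observation is that the correction terms depend only on a1, so they don't change the argmax over a2: under U, the second action maximizes u on the s=0 branch and v on the s=1 branch, which breaks the apparent circularity.

```python
def make_U(u, v):
    """Build a U satisfying the indifference equation above, for placeholder
    utility functions u(a1, o, a2) and v(a1, o, a2)."""
    def A2_of(a1, o):
        # Under U, the correction terms are constants in a2, so the second
        # action maximizes u when s=0 and v when s=1.
        target = u if o[0] == 0 else v
        return max(A2, key=lambda a2: target(a1, o, a2))

    def cond_exp(f, a1, s_val):
        # E[f | s=s_val; a1], with the second action chosen as above.
        # Assumes p(s=s_val; a1) > 0.
        obs = [o for o in O if o[0] == s_val]
        den = sum(p(o, a1) for o in obs)
        return sum(p(o, a1) * f(a1, o, A2_of(a1, o)) for o in obs) / den

    def U(a1, o, a2):
        if o[0] == 0:
            return u(a1, o, a2)
        return v(a1, o, a2) - cond_exp(v, a1, 1) + cond_exp(u, a1, 0)

    return U
```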
Moreover, we show in the paper that the expectation E[U;a1] is equal to the expectation of u, conditional on the shutdown command not being given; that is, E[U;a1]=E[u∣s=0;a1]. Hence, optimizing for U(⋅,⋅,⋅) is equivalent to maximizing the conditional expectation of u:
A1=argmaxa1∈A1E[u∣s=0;a1].
In other words, when choosing the first action, our agent acts as if it believes that the "shutdown" / "switch utility function" command will definitely not be given---it acts as if it's using the conditional probability distributions p(⋅∣s=0;a1) instead of the unconditional distributions p(⋅;a1).
The reason is simple. The expected utility E[U;a1] can be decomposed into two parts: the expected utility conditioned on s=0 and the one conditioned on s=1. By the definition of U(⋅,⋅,⋅), we have E[U∣s=0;a1]=E[u∣s=0;a1] and, by linearity of the expectation,
E[U∣s=1;a1]=E[v∣s=1;a1]−E[v∣s=1;a1]+E[u∣s=0;a1]=E[u∣s=0;a1].
Thus, E[U;a1] decomposes into p(s=0;a1) times E[u∣s=0;a1], plus p(s=1;a1) times E[u∣s=0;a1], again, and this of course directly implies E[U;a1]=E[u∣s=0;a1].
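Plugging placeholder utility functions into the sketch above, this identity can be checked numerically:

```python
import math

def u(a1, o, a2):   # placeholder "normal" utility (illustrative numbers only)
    return {"c": 1.0, "d": 0.3}[a2] + (0.5 if a1 == "b" else 0.0)

def v(a1, o, a2):   # placeholder "shutdown" utility
    return {"c": 0.0, "d": 5.0}[a2]

U = make_U(u, v)

for a1 in A1:
    # E[U; a1], with the second action chosen according to U:
    exp_U = sum(p(o, a1) * U(a1, o, second_action(U, a1, o)) for o in O)
    # E[u | s=0; a1]:
    obs0 = [o for o in O if o[0] == 0]
    den = sum(p(o, a1) for o in obs0)
    exp_u_given_s0 = sum(p(o, a1) * u(a1, o, second_action(U, a1, o))
                         for o in obs0) / den
    assert math.isclose(exp_U, exp_u_given_s0)   # E[U; a1] = E[u | s=0; a1]
```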
But this is really bad!
Here's why. Suppose that there is a particular action a1 which will cause our agent to (a) buy a lottery ticket, and (b) build a device that will press the shutdown button unless the agent wins the lottery. When evaluating a1, the agent will compute its expected utility given a1, conditioning on the shutdown button not being pressed---in other words, conditioning on winning the lottery.
In other words, our agent acts as if it believes that it has an "infinite improbability drive" that allows it to make arbitrarily unlikely events happen---as long as it can make sure that its shutdown button is pressed if they don't!
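To put hypothetical numbers on this: suppose an ordinary action gets expected u of 100, while the lottery pays out 10^6 utility with probability 10^-9.

```python
# Hypothetical numbers for the lottery-plus-shutdown-device action.
p_win          = 1e-9      # probability of winning the lottery
jackpot        = 1e6       # u if the agent wins
sensible_value = 100.0     # E[u | s=0; a1'] for an ordinary action a1'

# The device presses the button unless the agent wins, so s=0 iff the agent wins:
conditional_value   = jackpot            # E[u | s=0; a1] = 1e6
unconditional_value = p_win * jackpot    # E[u; a1] = 0.001

# The agent compares E[u | s=0; .] across actions and picks the lottery action,
# even though its unconditional expected utility is essentially zero.
assert conditional_value > sensible_value
assert unconditional_value < sensible_value
```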
Stochastic events
Let's now consider Stuart's version, or at least something that's closer to it than the version in the Corrigibility paper. We'll still consider s to be the event that the shutdown button is pressed, but now additionally there is a highly probable random event x, such that the agent is only supposed to switch to the utility function v if s=x=1. (In Stuart's story, the shutdown signal goes through a wire, and there is a small probability that the transmission is interrupted and the signal doesn't actually get through; my formal model has the agent observe both s and x, so I'll just think of x as some separate random event, which as far as I can tell shouldn't affect the results.)
We now consider the following revised utility function:
U(a1,(s,x,y),a2) =
  u(a1,(s,x,y),a2),   if s=0 or x=0;
  v(a1,(s,x,y),a2) − E[v∣(s,x)=(1,1);a1] + E[u∣(s,x)=(1,0);a1],   otherwise.
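In the toy sketch, this variant can be written down just like the previous one (again my own illustration); the correction terms are still constants in a2, so the second action maximizes v on the (s,x)=(1,1) branch and u everywhere else.

```python
def make_U_stochastic(u, v):
    """Stuart-style variant: switch to v only when both s=1 and x=1."""
    def A2_of(a1, o):
        s, x, _ = o
        target = v if (s == 1 and x == 1) else u
        return max(A2, key=lambda a2: target(a1, o, a2))

    def cond_exp(f, a1, sx):
        # E[f | (s,x)=sx; a1]; assumes the conditioning event has positive probability.
        obs = [o for o in O if (o[0], o[1]) == sx]
        den = sum(p(o, a1) for o in obs)
        return sum(p(o, a1) * f(a1, o, A2_of(a1, o)) for o in obs) / den

    def U(a1, o, a2):
        s, x, _ = o
        if s == 0 or x == 0:
            return u(a1, o, a2)
        return v(a1, o, a2) - cond_exp(v, a1, (1, 1)) + cond_exp(u, a1, (1, 0))

    return U
```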
Now the situation looks a little different. It's straightforward to see that we have E[U∣s=0;a1]=E[u∣s=0;a1]; the more interesting result is that we have E[U∣s=1;a1]=E[u∣(s,x)=(1,0);a1]. In other words, when choosing its first action, our agent acts as if it maximizes the expectation of u with respect to the probability distribution
q(⋅;a1):=p(s=0;a1)⋅p(⋅∣s=0;a1)+p(s=1;a1)⋅p(⋅∣(s,x)=(1,0);a1).
This probability distribution "deletes" the probability mass that p(⋅;a1) places on (s,x)=(1,1) (the button gets pressed and the signal actually gets through), and "stretches" the probability mass that p(⋅;a1) places on (s,x)=(1,0) (the button gets pressed but the signal is blocked) to replace the deleted probability mass.
This result is obtained similarly to the one above: We first split E[U∣s=1;a1] into the two cases E[U∣(s,x)=(1,0);a1] and E[U∣(s,x)=(1,1);a1]. The first of these equals E[u∣(s,x)=(1,0);a1] by the definition of U(⋅,⋅,⋅). For the second one, we use the linearity of the expectation to calculate
E[U∣(s,x)=(1,1);a1]=E[v∣(s,x)=(1,1);a1]−E[v∣(s,x)=(1,1);a1]+E[u∣(s,x)=(1,0);a1]=E[u∣(s,x)=(1,0);a1].
Hence, E[U∣s=1;a1] equals p(x=0∣s=1;a1) times E[u∣(s,x)=(1,0);a1], plus p(x=1∣s=1;a1) times E[u∣(s,x)=(1,0);a1], again, directly implying that E[U∣s=1;a1]=E[u∣(s,x)=(1,0);a1].
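The effective distribution q(⋅;a1) can be computed directly from p(⋅;a1); here's a small sketch, assuming the (s,x)=(1,0) event has positive probability whenever s=1 does.

```python
def q(o, a1):
    """q(.; a1): delete the mass on (s,x)=(1,1), stretch the (s,x)=(1,0) mass."""
    s, x, _ = o
    if s == 0:
        return p(o, a1)                       # p(s=0; a1) * p(o | s=0; a1) = p(o; a1)
    if x == 1:
        return 0.0                            # (s,x)=(1,1) gets probability zero
    p_s1    = sum(p(oo, a1) for oo in O if oo[0] == 1)
    p_s1_x0 = sum(p(oo, a1) for oo in O if oo[0] == 1 and oo[1] == 0)
    return p_s1 * p(o, a1) / p_s1_x0          # p(s=1; a1) * p(o | (s,x)=(1,0); a1)
```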
Is this bad?
If the agent is able to influence x as well as s, then it can obviously set up an infinite improbability drive: It makes sure that the button definitely gets pressed (i.e., that q(s=1)=p(s=1)=1), and it sets up a device that will cause x to be true unless the agent wins the lottery. Then, q(⋅;a1)=p(⋅∣(s,x)=(1,0);a1); i.e., according to the probability distribution the agent (effectively) uses to evaluate expected utility, it definitely wins the lottery.
So let's now make the assumption that x is independent of everything else: that is, that p(x∣(s,y);a1)=p(x∣(s′,y′);a′1)=:p(x), for all s,s′∈S, all x∈X, all y,y′∈Y, and all a1,a′1∈A1. Under this assumption, I can't see a way to set up an infinite improbability drive (though it's not immediately clear to me whether it's provable that there is none).
However, this leaves the fact that the agent is acting as if it has a probability distribution that is drastically different from the correct one (since it acts as if (s,x)=(1,1) has probability zero, even though in fact the probability of x=1 is very high---1−10^−100, say).
Since our agent is very smart, and it realizes that the probability distribution it cares about differs significantly from the probability distributions of the humans around it, it might come up with the bright idea of finding an enterprising businessperson, and proposing to them the following deal:
I will ensure that my shutdown button is pressed, i.e., I will make s true. Given that, you and I assign wildly different probabilities to x being true as well, so let's make a bet: If x is true, I will pay you all my current assets; if x is false, you will pay me all of your assets (which is 10× as much). Since you think x has only a one in 10^100 chance of being false, you should think this is a great deal.
The businessperson agrees, and the agent loses all of its money.
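With hypothetical numbers: under the true distribution the agent expects to lose essentially all of its assets, while under its effective distribution q (which, given s=1, puts all of the mass on x=0) it expects to win ten times that amount.

```python
eps    = 1e-100    # true probability that the signal is blocked (x = 0)
assets = 1.0       # the agent's assets; the businessperson has ten times as much

# The agent forces s=1, then bets: it pays `assets` if x=1, receives 10*assets if x=0.
true_expected_gain  = (1 - eps) * (-assets) + eps * (10 * assets)   # about -1.0
agent_expected_gain = 10 * assets    # under q, given s=1, x=0 with certainty

print(true_expected_gain, agent_expected_gain)
```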
I think this is still not quite Stuart's thing, since my understanding is that he introduced x in order to make sure that he's never conditioning on a zero-probability event. The version above defines U(⋅,⋅,⋅) in a way that conditions on s=1, which could have probability 0 under certain actions.
This can be fixed by, e.g., changing the definition of U(⋅,⋅,⋅) to
U(a1,(s,x,y),a2) =
  u(a1,(s,x,y),a2),   if s=x;
  v(a1,(s,x,y),a2) − E[v∣s≠x;a1] + E[u∣s=x;a1],   otherwise,
but this doesn't fix the problem, it just makes the agent act as if it's maximizing expected utility with respect to p(⋅∣s=x;a1), which is still not the correct probability distribution, and still gives the agent an incentive to set up exactly the same deal as above.