Response to: Universal agents and utility functions
Related approaches: Hibbard (2012), Hay (2005)
Background
Here is the function implemented by finite-lifetime AIXI:

$$\dot{y}_{k} := \arg\max_{y_{k}}\sum_{x_{k}}\max_{y_{k+1}}\sum_{x_{k+1}}\ldots\max_{y_{m}}\sum_{x_{m}}\left(r\left(x_{k}\right)+\ldots+r\left(x_{m}\right)\right)\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right),$$

where $m$ is the number of steps in the lifetime of the agent, $k$ is the current step being computed, $X$ is the set of possible observations, $Y$ is the set of possible actions, $r:X\rightarrow\mathbb{R}$ is a function that extracts a reward value from an observation, a dot over a variable represents that its value is known to be the true value of the action or observation it represents, underlines represent that the variable is an input to a probability distribution, and $\xi$ is a function that returns the probability of a sequence of observations, given a certain known history and sequence of actions, and starting from the Solomonoff prior. More formally,

$$\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right) := \frac{\sum_{q\in Q:\, q\left(\dot{y}_{<k}y_{k:m}\right)=\dot{x}_{<k}x_{k:m}}2^{-\ell\left(q\right)}}{\sum_{q\in Q:\, q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}2^{-\ell\left(q\right)}},$$

where $Q$ is the set of all programs, $\ell:Q\rightarrow\mathbb{N}$ is a function that returns the length of a program in bits, and a program applied to a sequence of actions returns the resulting sequence of observations. Notice that the denominator is a constant, depending only on the already known $\dot{y}\dot{x}_{<k}$, and multiplying by a positive constant does not change the argmax, so we can pretend that the denominator doesn't exist. If $q$ is a valid program, then any longer program with $q$ as a prefix is not a valid program, so $\sum_{q\in Q}2^{-\ell\left(q\right)}\leq1$.
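To make the nested structure concrete, here is a minimal Python sketch (mine, not from Hutter or from the post) that evaluates the same expression over a tiny made-up class of environment programs standing in for $Q$, with invented code lengths; the names `q_copy`, `q_flip`, `value`, and `best_first_action` are all hypothetical.

```python
ACTIONS = [0, 1]        # Y
OBSERVATIONS = [0, 1]   # X

def r(x):               # reward extracted from an observation
    return x

# Two toy deterministic environments with invented code lengths (in bits).
def q_copy(actions):    # each observation repeats the corresponding action
    return list(actions)

def q_flip(actions):    # each observation negates the corresponding action
    return [1 - a for a in actions]

PROGRAMS = [(q_copy, 3), (q_flip, 5)]   # stand-ins for (q, ell(q))

def weight(actions, observations):
    """Unnormalized xi: total prior weight 2^-ell(q) of the programs
    consistent with the given action/observation history."""
    return sum(2.0 ** (-length) for (q, length) in PROGRAMS
               if q(actions) == observations)

def value(actions, observations, k, m):
    """The nested max/sum from step k to lifetime m; at the leaves it computes
    (r(x_1) + ... + r(x_m)) times the unnormalized xi of the full history."""
    if k > m:
        return sum(r(x) for x in observations) * weight(actions, observations)
    return max(sum(value(actions + [y], observations + [x], k + 1, m)
                   for x in OBSERVATIONS)
               for y in ACTIONS)

def best_first_action(m):
    """argmax over y_1, starting from an empty known history."""
    return max(ACTIONS,
               key=lambda y: sum(value([y], [x], 2, m) for x in OBSERVATIONS))

print(best_first_action(m=3))   # -> 1: q_copy has the shorter code, so it dominates
```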
Problem
A problem with this is that it can only optimize over the input it receives, not over aspects of the external world that it cannot observe. Given the chance, AIXI would hack its input channel so that it would only observe good things, instead of trying to make good things happen (in other words, it would wirehead itself). Anja specified a variant of AIXI in which she replaced the sum of rewards with a single utility value and made the domain of the utility function be the entire sequence of actions and observations instead of a single observation, like so:

$$\dot{y}_{k} := \arg\max_{y_{k}}\sum_{x_{k}}\ldots\max_{y_{m}}\sum_{x_{m}}U\left(\dot{y}\dot{x}_{<k}yx_{k:m}\right)\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right).$$
This doesn't really solve the problem, because the utility function still only takes what the agent can see, rather than what is actually going on outside the agent. The situation is a little better because the utility function also takes the agent's actions into account, so it could punish actions that look like the agent is trying to wirehead itself. But if there were a flaw in the instructions not to wirehead, the agent would exploit it, so the incentive not to wirehead would have to be perfect, and this formulation is not very enlightening about how to do that.
[Edit: Hibbard (2012) also presents a solution to this problem. I haven't read all of it yet, but it appears to be fairly different from what I suggest in the next section.]
Solution
Here's what I suggest instead: everything that happens is determined by the program that the world is running on and the agent's actions, so the domain of the utility function should be $Q\times Y^{m}$. The apparent problem with that is that the formula for AIXI does not contain any mention of elements of $Q$. If we just take the original formula and replace $\left(r\left(x_{k}\right)+\ldots+r\left(x_{m}\right)\right)$ with $U\left(q,\dot{y}_{<k}y_{k:m}\right)$, it wouldn't make any sense. However, if we expand out $\xi$ in the original formula (excluding the unnecessary denominator), we can move the sum of rewards inside the sum over programs, like this:

$$\dot{y}_{k} = \arg\max_{y_{k}}\sum_{x_{k}}\ldots\max_{y_{m}}\sum_{x_{m}}\sum_{q\in Q:\, q\left(\dot{y}_{<k}y_{k:m}\right)=\dot{x}_{<k}x_{k:m}}\left(r\left(x_{k}\right)+\ldots+r\left(x_{m}\right)\right)2^{-\ell\left(q\right)}.$$

Now it is easy to replace the sum of rewards with the desired utility function:

$$\dot{y}_{k} := \arg\max_{y_{k}}\sum_{x_{k}}\ldots\max_{y_{m}}\sum_{x_{m}}\sum_{q\in Q:\, q\left(\dot{y}_{<k}y_{k:m}\right)=\dot{x}_{<k}x_{k:m}}U\left(q,\dot{y}_{<k}y_{k:m}\right)2^{-\ell\left(q\right)}.$$

With this formulation, there is no danger of the agent wireheading, and all $U$ has to do is compute everything that happens when the agent performs a given sequence of actions in a given program, and decide how desirable it is. If the range of $U$ is unbounded, then this might not converge. Let's assume throughout this post that the range of $U$ is bounded.
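Continuing the hypothetical sketch from the Background section (it reuses `PROGRAMS`, `ACTIONS`, and `OBSERVATIONS` from there), the only change this formulation requires is at the leaves of the recursion: instead of summing rewards read off the observations, we sum a utility computed directly from the program and the action sequence. The particular `U` below is invented purely for illustration.

```python
def U(q, actions):
    """Made-up utility over (program, action sequence): how many 1s the world
    q actually produces when the agent performs `actions` -- computed from q
    itself, not from the agent's observation/reward channel."""
    return sum(q(actions))

def value_u(actions, observations, k, m):
    """Same nested max/sum as before, but the leaves now sum
    U(q, actions) * 2^-ell(q) over programs consistent with the history."""
    if k > m:
        return sum(U(q, actions) * 2.0 ** (-length)
                   for (q, length) in PROGRAMS
                   if q(actions) == observations)
    return max(sum(value_u(actions + [y], observations + [x], k + 1, m)
                   for x in OBSERVATIONS)
               for y in ACTIONS)

def best_first_action_u(m):
    return max(ACTIONS,
               key=lambda y: sum(value_u([y], [x], 2, m) for x in OBSERVATIONS))

print(best_first_action_u(m=3))   # -> 1 with the toy programs above
```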
[Edit: Hay (2005) presents a similar formulation to this.]
Extension to infinite lifetimes
The previous discussion assumed that the agent would only have the opportunity to perform a finite number of actions. The situation gets a little tricky when the agent is allowed to perform an unbounded number of actions. Hutter uses a finite look-ahead approach for AIXI, where on each step $k$, it pretends that it will only be performing $m_{k}$ actions, where $\forall k\ m_{k}\geq k$:

$$\dot{y}_{k} := \arg\max_{y_{k}}\sum_{x_{k}}\ldots\max_{y_{m_{k}}}\sum_{x_{m_{k}}}\left(r\left(x_{k}\right)+\ldots+r\left(x_{m_{k}}\right)\right)\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m_{k}}\right).$$

If we make the same modification to the utility-based variant, we get

$$\dot{y}_{k} := \arg\max_{y_{k}}\sum_{x_{k}}\ldots\max_{y_{m_{k}}}\sum_{x_{m_{k}}}\sum_{q\in Q:\, q\left(\dot{y}_{<k}y_{k:m_{k}}\right)=\dot{x}_{<k}x_{k:m_{k}}}U\left(q,\dot{y}_{<k}y_{k:m_{k}}\right)2^{-\ell\left(q\right)}.$$

This is unsatisfactory because the domain of $U$ was supposed to consist of all the information necessary to determine everything that happens, but here, it is missing all the actions after step $m_{k}$. One obvious thing to try is to set $m_{k} := \infty$. This will be easier to do using a compacted expression for AIXI:

$$\dot{y}_{k} := \arg\max_{y_{k}}\ \max_{p\in P:\, p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\ \sum_{q\in Q:\, q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}\left(r\left(x_{k}^{pq}\right)+\ldots+r\left(x_{m_{k}}^{pq}\right)\right)2^{-\ell\left(q\right)},$$

where $P$ is the set of policies that map sequences of observations to sequences of actions and $x_{i}^{pq}$ is shorthand for the last observation in the sequence returned by $q\left(p\left(x_{<i}^{pq}\right)\right)$. If we take this compacted formulation, modify it to accommodate the new utility function, set $m_{k} := \infty$, and replace the maximum over policies with a supremum (since there's an infinite number of possible policies), we get

$$\dot{y}_{k} := \arg\max_{y_{k}}\ \sup_{p\in P:\, p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\ \sum_{q\in Q:\, q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)},$$

where $y_{i}^{pq}$ is shorthand for the last action in the sequence returned by $p\left(x_{<i}^{pq}\right)$.
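To unpack that shorthand, here is a small illustrative helper (again mine, not from the post) that plays a policy $p$ against a program $q$ and returns finite prefixes of $y^{pq}$ and $x^{pq}$; the example policy and environment are arbitrary.

```python
def interaction(p, q, steps):
    """Play policy p against program q. p maps a sequence of observations to a
    sequence of actions that is one element longer (its last element is the
    next action); q maps an action sequence to an observation sequence of the
    same length. Returns finite prefixes (y^{pq}_{1:steps}, x^{pq}_{1:steps})."""
    xs = []
    for _ in range(steps):
        ys = p(xs)     # last element is the next action y_i^{pq}
        xs = q(ys)     # last element is the next observation x_i^{pq}
    return p(xs)[:steps], xs[:steps]

def p_example(observations):
    # act 1 first, then repeat the most recent observation
    return [1] + [obs for obs in observations]

def q_flip(actions):
    # toy environment: each observation is the negation of the corresponding action
    return [1 - a for a in actions]

print(interaction(p_example, q_flip, steps=5))
# -> ([1, 0, 1, 0, 1], [0, 1, 0, 1, 0])
```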
But there is a problem with this, which I will illustrate with a toy example. Suppose $Y := \left\{ a,b\right\}$, and $U\left(q,y_{1:\infty}\right) := 0$ when $\forall n\ y_{n}=a$, and for any $n$, $U\left(q,y_{1:\infty}\right) := 1-\frac{1}{n}$ when $y_{n}=b$ and $\forall m<n\ y_{m}=a$ ($U$ does not depend on the program $q$ in this example). An agent following the above formula would output $a$ on every step, and end up with a utility of $0$, when it could have gotten arbitrarily close to $1$ by eventually outputting $b$.
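As a quick sanity check of this toy example (with the utility values as reconstructed above), here is a short sketch; parametrizing policies by the step at which they first output $b$ is my simplification.

```python
# Policies parametrized by the step at which they first output b
# (None = "output a forever"). U as above: 0 for all-a, 1 - 1/n if the
# first b is at step n.

def utility(first_b_at):
    return 0.0 if first_b_at is None else 1.0 - 1.0 / first_b_at

# The supremum over policies is 1, but no policy attains it:
print([utility(n) for n in (1, 2, 10, 1000)])   # -> [0.0, 0.5, 0.9, 0.999]
print(utility(None))                            # -> 0.0

# Why the sup-chasing agent outputs a forever: at step k (having output a so
# far), committing to b now yields 1 - 1/k, while outputting a now keeps the
# supremum over the remaining policies at 1.
for k in (1, 5, 100):
    print(k, utility(k), 1.0)   # value of "b now" vs. sup of "a now"
```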
To avoid problems like that, we could assume the reasonable-seeming condition that if $y_{1:\infty}$ is an action sequence and $\left\{ y_{1:\infty}^{n}\right\} _{n\in\mathbb{N}}$ is a sequence of action sequences that converges to $y_{1:\infty}$ (by which I mean $\forall N\ \exists M\ \forall m>M\ \forall n\leq N\ y_{n}^{m}=y_{n}$), then $\lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=U\left(q,y_{1:\infty}\right)$.
Under that assumption, the supremum is in fact a maximum, and the formula gives you an action sequence that will reach that maximum (proof below).
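For what it's worth, the toy utility from the previous section violates exactly this condition, which is why it caused trouble; a brief check (same illustrative parametrization as before):

```python
# The sequences y^n = "a for n steps, then b forever" agree with the all-a
# sequence on ever longer prefixes, so they converge to it, but their
# utilities converge to 1, not to U(all-a) = 0.

def utility(first_b_at):                     # same toy U as above
    return 0.0 if first_b_at is None else 1.0 - 1.0 / first_b_at

print([utility(n + 1) for n in (1, 10, 1000)])   # -> [0.5, 0.9090..., 0.999...]
print(utility(None))                             # -> 0.0
```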
If you don't like the condition I imposed on $U$, you might not be satisfied by this. But without it, there is not necessarily a best policy. One thing you can do is, on step 1, pick some extremely small $\varepsilon>0$, pick any element from

$$\left\{ p\in P:\ \sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}>\left(\sup_{p'\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p'q}\right)2^{-\ell\left(q\right)}\right)-\varepsilon\right\},$$

and then follow that policy for the rest of eternity, which will guarantee that you do not miss out on more than $\varepsilon$ of expected utility.
Proof of criterion for supremum-chasing working
definition: If $y_{1:\infty}$ is an action sequence and $\left\{ y_{1:\infty}^{n}\right\} _{n\in\mathbb{N}}$ is an infinite sequence of action sequences, and $\forall N\ \exists M\ \forall m>M\ \forall n\leq N\ y_{n}^{m}=y_{n}$, then we say $\left\{ y_{1:\infty}^{n}\right\} _{n\in\mathbb{N}}$ converges to $y_{1:\infty}$. If $p$ is a policy and $\left\{ p_{n}\right\} _{n\in\mathbb{N}}$ is a sequence of policies, and for every observation sequence $x_{1:\infty}$, $\left\{ p_{n}\left(x_{1:\infty}\right)\right\} _{n\in\mathbb{N}}$ converges to $p\left(x_{1:\infty}\right)$, then we say $\left\{ p_{n}\right\} _{n\in\mathbb{N}}$ converges to $p$.
assumption (for lemma 2 and theorem): If $\left\{ y_{1:\infty}^{n}\right\} _{n\in\mathbb{N}}$ converges to $y_{1:\infty}$, then $\lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=U\left(q,y_{1:\infty}\right)$.
lemma 1: The agent described by $\dot{y}_{k} := \arg\max_{y_{k}}\sup_{p\in P:\, p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:\, q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}$ follows a policy that is the limit of a sequence of policies $\left\{ p_{n}\right\} _{n\in\mathbb{N}}$ such that $\lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}$.
proof of lemma 1: Any policy can be completely described by the last action it outputs for every finite observation sequence. Observations are returned by a program, so the set of possible finite observation sequences is countable. It is possible to fix the last action returned on any particular finite observation sequence to be the argmax, and still get arbitrarily close to the supremum with suitable choices for the last action returned on the other finite observation sequences. By induction, it is possible to get arbitrarily close to the supremum while fixing the last action returned to be the argmax for any finite set of finite observation sequences. Thus, there exists a sequence of policies approaching the policy that the agent implements whose expected utilities approach the supremum.
lemma 2: If $p$ is a policy and $\left\{ p_{n}\right\} _{n\in\mathbb{N}}$ is a sequence of policies converging to $p$, then $\lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}$.
proof of lemma 2: Let $\varepsilon>0$. On any given sequence of inputs $x_{1:\infty}$, $\left\{ p_{n}\left(x_{1:\infty}\right)\right\} _{n\in\mathbb{N}}$ converges to $p\left(x_{1:\infty}\right)$, so, by assumption, $\lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{p_{n}q}\right)=U\left(q,y_{1:\infty}^{pq}\right)$.

For each $N\in\mathbb{N}$, let $Q_{N} := \left\{ q\in Q:\ \forall n\geq N\ \left|U\left(q,y_{1:\infty}^{p_{n}q}\right)-U\left(q,y_{1:\infty}^{pq}\right)\right|<\frac{\varepsilon}{2}\right\}$. The previous statement implies that $\bigcup_{N\in\mathbb{N}}Q_{N}=Q$, and each element of $\left\{ Q_{N}\right\} _{N\in\mathbb{N}}$ is a subset of the next, so $\lim_{N\rightarrow\infty}\sum_{q\in Q_{N}}2^{-\ell\left(q\right)}=\sum_{q\in Q}2^{-\ell\left(q\right)}$. The range of $U$ is bounded, so $\sup_{q,\, y_{1:\infty}}U\left(q,y_{1:\infty}\right)$ and $\inf_{q,\, y_{1:\infty}}U\left(q,y_{1:\infty}\right)$ are defined. This also implies that the difference in expected utility, given any information, of any two policies, is bounded. More formally:

$$\forall S\subseteq Q\ \forall p',p''\in P\ \left|\sum_{q\in S}\left(U\left(q,y_{1:\infty}^{p'q}\right)-U\left(q,y_{1:\infty}^{p''q}\right)\right)2^{-\ell\left(q\right)}\right|\leq\left(\sup U-\inf U\right)\sum_{q\in S}2^{-\ell\left(q\right)},$$

so in particular, there is some $N$ such that $\left(\sup U-\inf U\right)\sum_{q\in Q\setminus Q_{N}}2^{-\ell\left(q\right)}<\frac{\varepsilon}{2}$. Then for all $n\geq N$,

$$\left|\sum_{q\in Q}\left(U\left(q,y_{1:\infty}^{p_{n}q}\right)-U\left(q,y_{1:\infty}^{pq}\right)\right)2^{-\ell\left(q\right)}\right|\leq\sum_{q\in Q_{N}}\left|U\left(q,y_{1:\infty}^{p_{n}q}\right)-U\left(q,y_{1:\infty}^{pq}\right)\right|2^{-\ell\left(q\right)}+\left(\sup U-\inf U\right)\sum_{q\in Q\setminus Q_{N}}2^{-\ell\left(q\right)}<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.$$

Since $\varepsilon$ was arbitrary, $\lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}$.
theorem: $\sum_{q\in Q}U\left(q,y_{1:\infty}^{\dot{p}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}$, where $\dot{p}$ is the policy implemented by the agent described in the previous section.
proof of theorem: Let's call the policy implemented by the agent $\dot{p}$. By lemma 1, there is a sequence of policies $\left\{ p_{n}\right\} _{n\in\mathbb{N}}$ converging to $\dot{p}$ such that $\lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}$. By lemma 2, $\lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sum_{q\in Q}U\left(q,y_{1:\infty}^{\dot{p}q}\right)2^{-\ell\left(q\right)}$, so $\sum_{q\in Q}U\left(q,y_{1:\infty}^{\dot{p}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}$.