Contrast these two expressions (hideously mashing C++ and pseudo-code):

  1. $\text{argmax}_a \; R(a)$,
  2. $\text{argmax}_a \; (*\&R)(a)$.

The first expression just selects the action $a$ that maximises $R(a)$, for some function $R$ intended to be seen as a reward function.

The second expression borrows from the syntax of C++: $\&R$ means the memory address of $R$, while $*\&R$ means the object at the memory address of $R$. How is that different from $R$ itself? Well, it's meant to emphasise the ease of the agent wireheading in that scenario: all it has to do is overwrite whatever is written at memory location $\&R$. Then $*\&R$ can become $R'$ - whatever the agent wants it to be.
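To make the contrast concrete, here is a minimal C++ sketch (the names, and the use of a `std::function` slot standing in for a raw pointer, are illustrative): maximising the function itself, versus maximising whatever currently sits in the reward's storage location.

```cpp
#include <algorithm>
#include <cstdlib>
#include <functional>
#include <iostream>
#include <vector>

// A toy reward function: prefers the action closest to 2.
double R(int action) { return -std::abs(action - 2); }

// A rewritable slot holding "whatever is at the reward's address" (*&R).
std::function<double(int)> reward_slot = R;

// argmax over actions with respect to a given reward function.
int best_action(const std::vector<int>& actions,
                const std::function<double(int)>& reward) {
    return *std::max_element(actions.begin(), actions.end(),
        [&](int a, int b) { return reward(a) < reward(b); });
}

int main() {
    std::vector<int> actions = {0, 1, 2, 3, 4};

    // Expression 1: argmax_a R(a) -- the function itself is the target.
    std::cout << best_action(actions, R) << "\n";            // prints 2

    // Expression 2: argmax_a (*&R)(a) -- the target is whatever lives in the
    // reward's storage slot, and the agent can overwrite that slot.
    reward_slot = [](int) { return 1e9; };                   // "wireheading"
    std::cout << best_action(actions, reward_slot) << "\n";  // every action now looks maximal
}
```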

Let's dig a bit deeper into the contrast between reward functions that can be easily wireheaded and those that can't.

The setup

The agent $A$ interacts with the environment in a series of timesteps, ending at time $N$.

There is a 'reward box' $B$ which takes observations/inputs $o^B_t$ and outputs some numerical reward amount, given by the voltage, say. The reward function is a function of $o^B_t$; at timestep $t$, that function is $R_t$. The reward box will thus give out a reward of $R_t(o^B_t)$. Initially, the reward function that $B$ implements is $R_0$.

The agent also gets a separate set of observations $o^A_t$; these observations may include full information about $o^B_t$, but need not.
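As a toy illustration of this setup (the names here - `RewardBox`, `R0`, `Observation` - are mine, chosen for the example), the box just applies whatever function it currently implements to its inputs:

```cpp
#include <functional>
#include <iostream>

// Toy model of the setup: the box applies whatever function it currently
// implements (R_t) to its inputs o^B_t and emits that as a "voltage".
using Observation = double;                           // stand-in for o^B_t
using RewardFn = std::function<double(Observation)>;

double R0(Observation o) { return o; }                // the initial reward function R_0

struct RewardBox {
    RewardFn implemented = R0;   // R_t: may later drift or be tampered with
    double emit(Observation o_B) const { return implemented(o_B); }
};

int main() {
    RewardBox B;
    std::cout << B.emit(0.7) << "\n";  // during training, R_t == R_0, so this is R_0(o^B_t)
}
```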

Extending the training distribution

Assume that the agent has had a training phase, for negative values of $t$. And, during that training phase, $R_t$ was always equal to $R_0$.

If $A$ is trained as a reinforcement learning agent, then there are two separate value functions that it can learn to maximise:

  1. $\sum_t R_0(o^B_t)$, or
  2. $\sum_t R_t(o^B_t)$.

Since $R_t = R_0$ for $t < 0$, which is all the $t$ that the agent has ever seen, both fit the data. The agent has not encountered a situation where it can change the physical behaviour of $B$ to anything other than $R_0$ - how will it deal with that?
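A minimal sketch of the two candidate objectives, under the toy setup above: they agree as long as the box's history is all $R_0$, and diverge as soon as the box is rewritten.

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <vector>

using Observation = double;                           // stand-in for o^B_t
using RewardFn = std::function<double(Observation)>;

double R0(Observation o) { return o; }                // the function the box implemented during training

// Objective 1: sum_t R_0(o^B_t) -- always score with the original function.
double value_platonic(const std::vector<Observation>& obs) {
    double total = 0;
    for (Observation o : obs) total += R0(o);
    return total;
}

// Objective 2: sum_t R_t(o^B_t) -- score with whatever the box implemented at time t.
double value_physical(const std::vector<Observation>& obs,
                      const std::vector<RewardFn>& box_history) {
    double total = 0;
    for (std::size_t t = 0; t < obs.size(); ++t) total += box_history[t](obs[t]);
    return total;
}

int main() {
    std::vector<Observation> obs = {0.1, 0.5, 0.9};

    // Training phase: R_t == R_0 throughout, so the two objectives agree.
    std::vector<RewardFn> history = {R0, R0, R0};
    std::cout << value_platonic(obs) << " vs " << value_physical(obs, history) << "\n";

    // If the agent later rewrites what the box implements, they diverge.
    history.back() = [](Observation) { return 1e9; };
    std::cout << value_platonic(obs) << " vs " << value_physical(obs, history) << "\n";
}
```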

Wireheading is in the eye of the beholder

Now, it's tempting to call $R_0$ the true reward, and $R_t$ the wireheaded reward, as the agent redefines the reward function in the reward box.

But that's a judgement call on our part. Suppose that we wanted the agent to maintain a high voltage coming out of $B$, to power a device. Then $R_t(o^B_t)$ is the 'true' reward, while $R_0$ is just information about what the current design of $B$ is.

This is a key insight, and a reason that avoiding wireheading is so hard. 'Wireheading' is not some ontologically fundamental category. It is a judgement on our part: some ways of increasing a reward are legitimate, some are not.

If the agent is to ever agree with our judgement, then we need ways of getting that judgement into the agent, somehow.

Solutions

The agent with a platonic reward

One way of getting the reward into the agent is to formally define it within the agent itself. If the agent knows $R_0$, and is explicitly trained to maximise it, then it doesn't matter what changes are made to $B$ - the agent will still want to maximise $R_0$. Indeed, in this situation, the reward box $B$ is redundant, except maybe as a training crutch.

What about self-modification, or the agent just rewriting the observations $o^B_t$? Well, if the agent is motivated to maximise the sum of $R_0(o^B_t)$, then, by the same argument as the standard Omohundro "basic AI drives", it will want to preserve maximising $R_0$ as the goal of its future copies. If we're still worried about this, we could insist that the agent knows about its own code, including $R_0$, and acts to preserve $R_0$ in the future[1].
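Here is a rough sketch of what the platonic reward means computationally (the toy world model and all names are illustrative): the agent scores actions with its own internal copy of $R_0$, so the physical box never enters the calculation.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// The agent carries its own copy of R_0 and scores actions with it directly,
// so rewriting the physical box changes nothing about what it is maximising.
using Observation = double;

double R0(Observation o) { return o; }                // internal, "platonic" copy of R_0

Observation predicted_observation(int action) { return action * 0.1; }  // toy world model

int choose_action(const std::vector<int>& actions) {
    return *std::max_element(actions.begin(), actions.end(),
        [](int a, int b) {
            return R0(predicted_observation(a)) < R0(predicted_observation(b));
        });
}

int main() {
    std::vector<int> actions = {0, 1, 2, 3};
    // The box's current wiring never enters the computation.
    std::cout << choose_action(actions) << "\n";  // prints 3
}
```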

The agent modelling the reward

Another approach is to have the agent treat the data from its training phase as information, and try to model the correct reward.

This doesn't require the agent to have access to $R_0$ from the beginning; the physical box $B$ provides data about $R_0$, and after the training phase, the agent might disassemble $B$ entirely (thus making $R_t$ trivial) in order to get at what $R_0$ is.

This approach relies on the agent having good priors over the reward function[2], and on these priors being well-grounded. For example, if the agent opens the box $B$ and sees a wire splitting at a certain point, it has to be able to interpret this as data on $R_0$ in a sensible way. In other words, we interpret $B$ as implementing a function between $o^B_t$ and the reward signal; it's important that the agent, upon disassembling or analysing $B$, interprets it in the same way.

Thus we need the agent to interpret certain features of the environment (eg the wire splitting) in the way that we want it to. We therefore want the agent to have a model of the environment in which the key features are abstracted in the correct way.
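As a toy illustration of "good priors over the reward function" (the two candidate functions, the Gaussian noise model and all names are assumptions made for the example), the agent can hold a posterior over hypotheses about $R_0$ and update it on input/output evidence about $B$:

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

// The agent keeps a prior over a (here, tiny) family of candidate reward
// functions and updates it on evidence about what the box B computes.
using Observation = double;

double candidate_a(Observation o) { return o; }        // e.g. "reward = input voltage"
double candidate_b(Observation o) { return 1.0 - o; }  // e.g. "reward = inverted voltage"

int main() {
    std::vector<double (*)(Observation)> candidates = {candidate_a, candidate_b};
    std::vector<double> posterior = {0.5, 0.5};         // prior over hypotheses about R_0

    // Evidence: (input to B, reward signal actually observed) pairs from training.
    std::vector<std::pair<Observation, double>> data = {{0.2, 0.21}, {0.8, 0.79}};

    const double sigma = 0.05;                           // assumed observation noise
    for (auto [o, r] : data) {
        double norm = 0;
        for (std::size_t i = 0; i < candidates.size(); ++i) {
            double err = r - candidates[i](o);
            posterior[i] *= std::exp(-err * err / (2 * sigma * sigma));  // Gaussian likelihood
            norm += posterior[i];
        }
        for (double& p : posterior) p /= norm;           // renormalise
    }
    std::cout << "P(R_0 = candidate_a | data) = " << posterior[0] << "\n";  // close to 1
}
```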

Provide examples of difference

The third approach is to provide examples of divergence between $R_0$ and $R_t$. We could, for example, unplug the box $B$ for a time, then manually give the agent the rewards $R_0(o^B_t)$ during that time. This distinguishes the platonic $R_0$ we want it to calculate from the physical $R_t$ that the reward box implements.

Modifying the box $B$ during the training phase, while preserving $R_0$ as the "true" reward, also allows the agent to learn the difference. Since $B$ is physical while $R_0$ is idealised, it might be a good idea to have the agent explicitly expect noise in the signal; including the expectation of noise in the agent's algorithm will prevent the agent from overfitting to the actual $R_t$, increasing the probability that it will figure out the idealised $R_0$.

In the subsection above, we treated features of the reward box as information that the agent had to interpret in the desirable way. Here we also want the agent to learn features correctly, but we are teaching it by example rather than by model - exploring the contours of the feature, illustrating when it behaved in a properly informative way, and when it failed to do so.
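A small sketch of this teaching-by-example idea (the episode format and names are illustrative): keeping a record of both what the box emitted and what reward was actually given lets the divergence episodes rule out the "reward is whatever the box outputs" hypothesis.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Training records keep both what the box emitted and what reward was actually
// given. The unplugged episode makes these diverge, which rules out the
// hypothesis "the reward just is whatever the box outputs".
using Observation = double;

double R0(Observation o) { return o; }   // the intended, platonic reward

struct Episode {
    Observation o_B;       // input the box would have seen
    double box_output;     // what the physical box emitted (0 when unplugged)
    double reward_given;   // what the overseers actually rewarded
};

int main() {
    std::vector<Episode> training = {
        {0.3, R0(0.3), R0(0.3)},   // normal episode: box output == reward given
        {0.6, R0(0.6), R0(0.6)},   // normal episode
        {0.9, 0.0, R0(0.9)},       // box unplugged, reward given by hand
    };

    bool box_hypothesis_fits = true, platonic_hypothesis_fits = true;
    for (const Episode& e : training) {
        box_hypothesis_fits = box_hypothesis_fits
            && std::fabs(e.reward_given - e.box_output) < 1e-9;
        platonic_hypothesis_fits = platonic_hypothesis_fits
            && std::fabs(e.reward_given - R0(e.o_B)) < 1e-9;
    }
    std::cout << "reward == box output fits:  " << box_hypothesis_fits << "\n";       // 0
    std::cout << "reward == R_0(o^B_t) fits:  " << platonic_hypothesis_fits << "\n";  // 1
}
```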

Extending the problem

All the methods above suffer from the same issue: though they might prevent wireheading within the box $B$ or the reward function $R_t$, they don't prevent wireheading at the input boundary for $B$, namely the value of $o^B_t$.

As it stands, even the agent with a platonic reward will be motivated to maximise $R_0(o^B_t)$ by taking control of the values of $o^B_t$.

But the methods that we have mentioned above allow us to (try and) extend the boundary of wireheading out into the world, and hopefully reduce the problem.

To model this, let $w_t$ be the world at time $t$ (which is a function of the agent's past actions as well as the world's past), and let $\mathcal{F}$ be the set of functions $f$ such that $f(w_t) = R_t(o^B_t)$ for $t < 0$.

Similarly to how the agent can't distinguish between $R_0$ and $R_t$ if they were equal for all $t < 0$, the agent can't distinguish whether it should really maximise $f$ or $f'$, if they are both in $\mathcal{F}$.

In a sense, the agent's data for $t < 0$ allows it to do function approximation for its objective when $t \geq 0$. And we want it to learn a good function, not a bad, wireheady one (again, the definition of wireheading depends on our objective).

The easiest and most general $f$ to learn is likely to be the one that outputs the actual physical $R_t(o^B_t)$; and this is also the most likely candidate for leading to wireheading. If we want the agent to learn a different $f$, how can we proceed?
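A toy illustration of $\mathcal{F}$ (the candidate functions, the `World` struct and all names are mine): several functions of the world-state reproduce the training rewards, and the "read the physical box" candidate is always among them.

```cpp
#include <cmath>
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Candidate objectives f are functions of the whole world-state w_t; the agent
// keeps those that reproduce the rewards it saw during training.
struct World {
    double o_B;          // the box's input at this timestep
    double box_output;   // what the physical box currently emits
};

double R0(double o) { return o; }

int main() {
    using Candidate = std::pair<std::string, std::function<double(const World&)>>;
    std::vector<Candidate> candidates = {
        {"platonic R_0", [](const World& w) { return R0(w.o_B); }},
        {"physical box", [](const World& w) { return w.box_output; }},
        {"always zero",  [](const World& w) { return 0.0; }},
    };

    // Training history: the box faithfully computed R_0, so observed reward == R_0(o_B).
    std::vector<World> history = {{0.2, R0(0.2)}, {0.7, R0(0.7)}};

    for (const auto& [name, f] : candidates) {
        bool fits = true;
        for (const World& w : history)
            fits = fits && std::fabs(f(w) - R0(w.o_B)) < 1e-9;
        std::cout << name << (fits ? " is in F" : " is not in F") << "\n";
    }
    // Both "platonic R_0" and "physical box" survive; only post-training data
    // (or divergence examples, as above) can tell them apart.
}
```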

Platonic

If we can model the whole world (or enough of it), we can present the agent with a labelled model of the whole environment, and maybe specify exactly what gives a true reward and what doesn't.

This is, of course, impossible; but there are variants on this idea which might work. In these variants, the agent is running in a virtual environment, and knows the properties of this virtual environment. It is then motivated to achieve goals within that virtual environment, but the second that the virtual environment doesn't behave as expected - the second our abstraction about that environment fails - the agent's motivation changes.
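One very schematic way to picture this (the tolerance check and the names are illustrative, not a proposal for how to actually detect abstraction failure): the reward is computed from the agent's model, and the claim of reward is withdrawn the moment reality stops matching the model.

```cpp
#include <cmath>
#include <iostream>

// The reward is computed from the agent's model of the (virtual) environment;
// if actual observations stop matching the model's predictions, the abstraction
// has failed and the agent stops claiming reward.
struct ModelState {
    double predicted_obs;   // what the model says the agent should observe
    double model_reward;    // the reward defined over the modelled environment
};

double effective_reward(const ModelState& model, double actual_obs) {
    const double tolerance = 1e-3;                         // assumed abstraction tolerance
    if (std::fabs(actual_obs - model.predicted_obs) > tolerance)
        return 0.0;          // abstraction failed: fall back to a default motivation
    return model.model_reward;                             // abstraction holds
}

int main() {
    ModelState model{0.5, 1.0};
    std::cout << effective_reward(model, 0.5) << "\n";     // model matches reality: 1
    std::cout << effective_reward(model, 2.0) << "\n";     // reality diverged: 0
}
```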

Modelling true reward

It might not be feasible to provide the agent with a full internal model of the environment; but we might be able to do part of the job. If we can give the agent a grounded definition of key features of the environment - what counts as a human, what counts as a coffee cup, what counts as a bin - then we can require the agent to model its reward function as being expressed in a particular way by those features.

Again, this is controlling the abstraction level at which the agent interprets the reward signal. So, if an agent sees a human get drenched in hot coffee and fall into a bin, it will interpret that as just described, rather than seeing it as the movement of atoms out in the world - or as the movement of electrons within its own circuits.

Examples of feature properties

Just as in the previous section, the agent could be taught about key features of the environment by example - by showing examples of good behaviour, of bad behaviour, of when abstractions hold (a human hand covered in hot coffee is still a human hand) and of when they fail (a human hand long detached from its owner is not a human, even in part, and need not be treated in the same way).

There are a lot of ideas as to how these principles could be illustrated, and, the more complex the environment and the more powerful the agent, the less likely it is that we have covered all the key examples sufficiently.

Feature information

Humans will not be able to describe all the properties of the key features we want to have as part of the reward function. But this is an area where the agent can ask the humans for more details about them, and get more information. Does it matter if the reward comes as a signal immediately, a delayed signal, or as a letter three weeks from now? When bringing the coffee to the human, what counts as spilling it, and what doesn't? Is this behaviour ok? What about this one?

Unlike issues of values, where it's very easy to get humans confused and uncertain by expanding to new situations, human understanding of the implicit properties of features is more robust - the agent can ask many more questions, and consider many more hypotheticals, without the web of connotations of the feature collapsing.

This is one way we could combat Goodhart problems: by including all the uncertainty and knowledge. In this case, the agent's definition of the key properties of the key features is... what a human could have told it about them if it had asked.


  1. This is an area where there is an overlap between issues of wireheading, symbol grounding, and self-modelling. Roughly speaking, we want a well-specified, well-grounded reward function, and an agent that can model itself in the world (including knowing the purpose of its various components), and that can distinguish which features of the world it can legitimately change to increase the reward, and which features it should not change to increase reward.

    So when an agent misbehaves with a "make humans happy" reward, this might be because terms like "humans" and "happy" are incorrectly defined, or not well-grounded, or because the agent wireheads the reward definition. In practice, there is a lot of overlap between all these failure modes, and they cannot necessarily be cleanly distinguished. ↩︎

  2. The platonic case can be seen as modelling, with a trivial prior. ↩︎

Comments

Will black-boxing the reward function help, either physically or cryptographically? It should also include obscuring the boundary between the black box and the AI's internal computations, so that the AI does not know which data actually trigger the black box's reaction.

This is how the human reward function seems to work. It is well protected from internal hacking: if I imagine that I got 100 USD, it will not create as much pleasure as actually getting 100 USD. When I send a mental image of 100 USD into my reward box, the box "knows" that I am lying and doesn't generate the reward. Since I don't know much about how the real human reward function works, I have to get a real 100 USD.

Another aspect / method is, let's call it, value hysteresis. If there are two functions which both fit the reward data, then it's possible that the agent will come across one interpretation first, and then (motivated by that first goal) resist adopting the other interpretation. Like how drugs and family both give us dopamine, but if we start by caring about our family, we may shudder at the thought of abandoning family life for drugs, and vice versa! So maybe we need to have the target interpretation salient and simple to learn early in training (for the kind of introspective agent to which this consideration applies)? (Doesn't seem all that reliable as a safety measure, but maybe worth keeping in mind...)