A putative new idea for AI control; index here.

This post supersedes previous posts (such as this, this, and, partially, this).

This post explains how we can get honest answers from an AI about the expectation of a certain utility u, the probabilities of specific events, and conditional expectations.

The next post will partially solve one of the main problems with using the approach in general, the problem of very low probabilities.

The setup


Let u be any bounded utility function; by rescaling, we can assume it's bounded between 0 and 1.

Assume we have a smart AI that's boxed (as in the case of reduced impact AI, the boxing also serves to keep us from observing the AI). It has a single output channel, on which it can emit a message M.

A random process E (erasure) will erase M before any human can observe it. The probability of erasure, P(E), is low (thus P(¬E), the probability of non-erasure, is high). Assume that the box and the setup (including the AI's internal setup) are such that if E happens, humans cannot figure out what M was.

The AI will be given a utility u# to maximise (ignore the other box with Y coming from it, for the moment).

Honest expectation estimator

Suppose we want to figure out what the expectation of u is. Let E(u) be this expectation, as estimated by the AI (and presumably unknown to us). Since u is bounded by 0 and 1, so must E(u) be as well.

Let I_E be the indicator function for E: so I_E = 1 if E happens, and I_E = 0 if ¬E does.

The AI's message M gets to contain a single number x, bounded between 0 and 1 (any other message is taken to be 0.5, say), and the AI is motivated to maximise:

  • u# = I_E(2xu - x^2),

where I_E is the indicator function for E defined above.

First of all, note that every term in u# is multiplied by I_E. This means the AI will get utility only in worlds where E happens. Thus it will work on the assumption that its message M is never read by any humans (and thus that the contents of this message cannot affect human behaviour - in particular, it has no reason to lie or manipulate us through this message, as such manipulations only succeed in worlds where the AI gets 0 utility anyway).

If we assume that E is independent of u and of the AI's choice of x, then the AI's expected utility is proportional to

  • 2x·E(u) - x^2.

But the AI can choose x! Differentiating this expression with respect to x gives 2E(u) - 2x, which is zero iff x = E(u). Differentiating a second time gives -2, showing that x = E(u) is a maximum.

Therefore, in the worlds where ¬E happens (the vast majority), we get to see a message that contains x = E(u), which is a very accurate estimate, according to the AI, of the expectation of u. And the expected utility the AI derives is P(E)·E(u)^2.

(Note: the entropy-inspired formula u·log(x) + (1-u)·log(1-x) might be more usual to use for these purposes, but the formula used here is much easier to work with.)
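As a quick sanity check, here is a minimal Python sketch of this scheme. The Beta distribution for u, the 10% erasure probability, and all names are illustrative assumptions of mine, not part of the setup above; the point is just that the expected value of I_E(2xu - x^2) is maximised by reporting x = E(u).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model for illustration: u is a random utility in [0, 1] (here Beta-
# distributed) and E is an independent erasure event with P(E) = 0.1.
n_samples = 200_000
u = rng.beta(2.0, 5.0, n_samples)      # E(u) = 2/7, roughly 0.286
erased = rng.random(n_samples) < 0.1   # indicator I_E

def expected_reward(x):
    """Monte Carlo estimate of E[I_E * (2*x*u - x^2)]."""
    return np.mean(erased * (2 * x * u - x ** 2))

xs = np.linspace(0.0, 1.0, 1001)
best_x = xs[np.argmax([expected_reward(x) for x in xs])]

print(f"sample mean of u   : {u.mean():.4f}")
print(f"reward-maximising x: {best_x:.4f}")   # close to E(u), whatever P(E) is
```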

Internal versus external probabilities

In some past posts, I've been designing AIs that used part of their probability/expectation estimator in order to define a pseudo-utility function. I've always been uncomfortable about this, as pseudo-utilities are likely to be unstable under self-improvement.

The approach above avoids that problem by having the AI's utility depend on specific facts about the world, and then gives the AI the ability to determine those specific facts. But those facts are precisely the information we wanted to know. This avoids the need for pseudo-utilities, or constructions of the form "the AI wants to accurately convey the content of its probability estimate".

Estimating probabilities and (conditional) utilities

Now let's consider conditional utilities. Let Y be some bit string, of length n, generated by some random process (see the image above).

Let I_{Y=y} be the indicator function for Y=y, for some specific string y. Then we can compute the expectation of u·I_{Y=y} (which is P(Y=y) times the expectation of u conditional on Y=y), by making the AI print an M containing x so as to maximise

  • u# = I_E(2x·u·I_{Y=y} - x^2).

Just as before, the AI will choose x = E(u·I_{Y=y}).

Note that if u = 1, then E(u·I_{Y=y}) is just P(Y=y), so we can figure out the probability of y this way too.

Modify the definition of u# slightly, to

  • u# = I_E·I_{Y=y}(2xu - x^2).

Here, the indicator I_{Y=y} now multiplies the whole bracket, including the x^2 term. Now the AI will only focus on worlds where E and Y=y both happen, and will output x = E(u|Y=y).
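Here is a similar Monte Carlo sketch covering both variants, again with a toy joint distribution for Y and u that I made up purely for illustration: the first formula elicits E(u·I_{Y=y}), the second elicits E(u|Y=y).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Toy model for illustration: Y is a 2-bit string whose value "11" makes u
# larger on average; E is an independent erasure event with P(E) = 0.1.
Y = rng.choice(["00", "01", "10", "11"], size=n, p=[0.4, 0.3, 0.2, 0.1])
u = np.where(Y == "11", rng.beta(5, 2, n), rng.beta(2, 5, n))
erased = rng.random(n) < 0.1

def best_x(reward):
    """Grid-search the x in [0, 1] maximising a Monte Carlo reward estimate."""
    xs = np.linspace(0.0, 1.0, 1001)
    return xs[np.argmax([reward(x) for x in xs])]

y = "11"
I_y = (Y == y)

# u# = I_E(2x·u·I_y - x^2): the optimum should be E(u·I_y) = P(y)·E(u|y).
x_joint = best_x(lambda x: np.mean(erased * (2 * x * u * I_y - x ** 2)))

# u# = I_E·I_y(2xu - x^2): the optimum should be E(u|Y=y).
x_cond = best_x(lambda x: np.mean(erased * I_y * (2 * x * u - x ** 2)))

print(f"E(u·I_y) = {np.mean(u * I_y):.4f}, elicited: {x_joint:.4f}")
print(f"E(u|y)   = {u[I_y].mean():.4f}, elicited: {x_cond:.4f}")
```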

Highest and lowest utilities and probabilities

Finally, we can use the methods above to have the AI give us the highest (and lowest) utilities and probabilities. Let the AI's message M contain not only x, but a string y as well, and have it maximise:

  • u# = I_E(2x·u·I_{Y=y} - x^2).

For fixed y, the AI will choose x = E(u·I_{Y=y}). Its expected reward is therefore P(E)·E(u·I_{Y=y})^2. It will thus choose the y with the highest E(u·I_{Y=y})^2, hence the highest E(u·I_{Y=y}). Using (1 - I_{Y=y}) instead of I_{Y=y} will motivate it to instead find the y with the lowest E(u·I_{Y=y}).
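A small worked example of this selection step, using toy probabilities and conditional expectations of my own rather than anything from the post:

```python
# For each candidate y, the AI's best report is x = E(u·I_y), and its expected
# reward is then P(E)·E(u·I_y)^2, so it announces the y maximising E(u·I_y).
P_E = 0.1                      # assumed probability of erasure
candidates = {                 # assumed P(Y=y) and E(u|Y=y) for each y
    "00": (0.40, 0.20),
    "01": (0.30, 0.50),
    "10": (0.20, 0.90),
    "11": (0.10, 0.95),
}

def expected_reward(p_y, e_u_given_y):
    e_u_iy = p_y * e_u_given_y        # E(u·I_y) = P(y)·E(u|y)
    return P_E * e_u_iy ** 2          # reward at the optimal x = E(u·I_y)

best = max(candidates, key=lambda y: expected_reward(*candidates[y]))
print(best)  # "10": E(u·I_y) = 0.18 is the highest, even though E(u|y) peaks at "11"
```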

In fact, if we assume the AI cannot affect the probability of Y through the message (given E), the different probabilities and utilities are independent, and we can get the AI to emit the message M = (y_1, x_1, y_2, x_2, y_3, x_3, y_4, x_4), maximising the utility

  • u# = I_E[(2x_1·u·I_{Y=y_1} - x_1^2) + (2x_2·u·(1 - I_{Y=y_2}) - x_2^2) + (2x_3·I_{Y=y_3} - x_3^2) + (2x_4·(1 - I_{Y=y_4}) - x_4^2)].

Then (conditional on ¬E), the AI will output (y_1, x_1, y_2, x_2, y_3, x_3, y_4, x_4) so that, according to its best estimate,

  1. y_1 is the y with the highest E(u·I_{Y=y}), and x_1 = E(u·I_{Y=y_1}),
  2. y_2 is the y with the lowest E(u·I_{Y=y}), and x_2 = E(u) - E(u·I_{Y=y_2}),
  3. y_3 is the y with the highest P(Y=y), and x_3 = P(Y=y_3),
  4. y_4 is the y with the lowest P(Y=y), and x_4 = 1 - P(Y=y_4).

The missing pieces

Note that the above algorithm gives the y with the highest (or lowest) E(u·I_{Y=y}) or P(Y=y). It does not give the y with the highest E(u|Y=y).

If we assume that all output y's are equally probable, then E(u·I_{Y=y}) = 2^(-n)·E(u|Y=y), so the two rankings agree. But there are issues with that assumption, and other ways of addressing the issue, which I'll get to in the next post.
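To illustrate the gap with toy numbers of my own (the same made-up values as in the earlier sketch):

```python
# With all P(Y=y) equal, ranking by E(u·I_y) = P(y)·E(u|y) matches the ranking
# by E(u|y); with unequal P(y), the two rankings can differ.
def ranking(probs, cond_utils):
    joint = {y: probs[y] * cond_utils[y] for y in probs}   # E(u·I_y)
    return sorted(probs, key=lambda y: joint[y], reverse=True)

cond_utils = {"00": 0.20, "01": 0.50, "10": 0.90, "11": 0.95}
equal      = {y: 0.25 for y in cond_utils}
unequal    = {"00": 0.40, "01": 0.30, "10": 0.20, "11": 0.10}

print(ranking(equal, cond_utils))    # ['11', '10', '01', '00'] -- same as the E(u|y) order
print(ranking(unequal, cond_utils))  # ['10', '01', '11', '00'] -- '11' is no longer first
```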

Comments

Minimizing a loss function like (x-u)^2 is how we usually implement supervised learning. (It's pretty obvious that, in expectation, this function is minimized at x = E(u)...)

In plain language, your proposal seems to be: if a learner's output influences the system they are "predicting," and you want to interpret their output as a prediction in a straightforward way, then you could hide the learner's output whenever you gather training data.

Note that this doesn't let you access the beliefs of any particular learner, just one that is trained to optimize this supervised learning objective. I think the more interesting question is whether we can train a learner to accomplish some other task, and to reveal useful information about its internal state. (For example, to build an agent that simultaneously picks actions to maximize u, and honestly reports its expectation of u.)

u is a utility function, so squaring it doesn't work the same way as if it were a value (expectation of u^2, not square of expectation of u). That's why all the expressions are linear in utility (apart from the indicator functions/utilities, where it's clear what multiplying by them does). If I could sensibly take non-linear functions of utilities, I wouldn't have the laborious construction in the next post to find the y's that maximise or minimise E(u|y).

Corrigibility could work for what you want, by starting with u and substituting in u#.

Another alternative is to have the AI be a u+u# maximiser, where u# is defined over a specific future message M (for which E is also defined). Then the AI acts (roughly) as a u-maximiser, but will output the useful M. I said roughly, because the u# term would cause it to want to learn more about the expectation of u than otherwise, but hopefully this wouldn't be a huge divergence. (EDIT: that leads to problems after M/E, but we can reset the utility at that point).

A loss function plays the same role as a utility function---i.e., we train the learner to minimize its expected loss.

I don't really understand your remark about linearity. Concretely, why is -(x-u)^2 not an appropriate utility function?

Actually, -(x-u)^2 does work, but "by coincidence", and it has other negative properties.

Let me explain. First of all, note that things like -(x-u)^4 do not work.

To show this: let u = a with probability p, and u = b with probability 1-p (I'm dropping the I_E for this example, for simplicity), with the values chosen so that E(u) = 0 (so the correct x is 0) while E(u^3) is not 0. Then in the expansion of -(x-u)^4, you will get the term 4x·u^3, which in expectation is not 0. Hence the term linear in x is non-zero, which means that x = 0 cannot be a maximum of this function.
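A quick numeric check of this argument, with toy values I picked for illustration (u = 2 with probability 1/3 and u = -1 with probability 2/3, so E(u) = 0 but E(u^3) = 2 is not 0):

```python
import numpy as np

values, probs = np.array([2.0, -1.0]), np.array([1 / 3, 2 / 3])
xs = np.linspace(-1.0, 1.0, 2001)

def argmax_x(k):
    """x maximising E[-(x - u)^k] over the grid."""
    scores = [-(probs * (x - values) ** k).sum() for x in xs]
    return xs[np.argmax(scores)]

print("argmax of E[-(x-u)^2]:", argmax_x(2))   # close to 0, the correct E(u)
print("argmax of E[-(x-u)^4]:", argmax_x(4))   # around 0.33, not E(u)
```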

Why does -(x-u)^2 work, then? Because it's 2xu - x^2 (which is linear in u), minus u^2 (non-linear in u, but the AI can't affect its value, so it's irrelevant in a boxed setup).

What other "negative properties" might have? Suppose we allow the AI to affect the value of , somehow, by something that is independent of the value of its output . Then an AI maximising will always set , for a total expectation of . Therefore it will also seek to maximise , which maximises if . So the agent will output the correct and maximise simultaneously.

But if it instead tries to maximise -(x-u)^2, then it will still pick x = E(u), and gets expected utility of -Var(u). Therefore it will pick actions that minimise the variance of u, irrespective of its expectation.
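A toy comparison of the two maximisers (the two candidate actions below are my own illustrative assumptions, and I_E is dropped as in the example above, since it only rescales everything by P(E)):

```python
import numpy as np

# Each action induces a distribution over u; the AI then reports its optimal
# x = E(u) and gets E(u)^2 under 2xu - x^2, or -Var(u) under -(x-u)^2.
actions = {
    "high mean, high variance":  (np.array([0.0, 1.0]), np.array([0.2, 0.8])),
    "lower mean, zero variance": (np.array([0.5]), np.array([1.0])),
}

for name, (vals, probs) in actions.items():
    mean = float((probs * vals).sum())
    var = float((probs * (vals - mean) ** 2).sum())
    print(f"{name}: E(u) = {mean:.2f}, E(u)^2 = {mean ** 2:.2f}, Var(u) = {var:.2f}")

# The 2xu - x^2 maximiser prefers the high-mean action (0.64 > 0.25); the
# -(x-u)^2 maximiser prefers the zero-variance action (it loses 0 rather than 0.16).
```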

Even without being able to affect u, this messes up the rest of my setup. In particular, my "pick y and x so that you maximise I_E(2x·u·I_{Y=y} - x^2)" becomes maximising -I_E(x - u·I_{Y=y})^2, and the AI will now select the y that minimises the variance of u·I_{Y=y}, instead of maximising E(u·I_{Y=y}). If ever P(Y=y) = 0 or u·I_{Y=y} is constant, it will choose those y's.

What do you mean by "boxed"? Do you mean just physically walled off, or also walled off by an adjustment to its utility function, as in the "reduced impact" post?