Correctness: V correctly predicts future reward in RL scenarios.
The meaning of correctness is very unclear to me on first reading. It later becomes apparent that "reward" is not intended to refer to the human-designed reward signal at all, due to the later assumption "deceptive misalignment". So what does "correctness" say? "Correctness" indicates conforming to some standard; but here, the standard being conformed to is only the subjective standard of the misaligned AI itself.
This suggests the interpretation of "correctness" as "reflect the EV of the current state in terms of the misaligned desires". However, this contradicts beta-coherence, since the βt-coherent V then incorrectly predicts the action probabilities actually used.
I think it would be better to remove the "correctness" assumption, since it doesn't really do anything.
I think you're right, correctness and beta-coherence can be rolled up into one specific property. I think I wrote down correctness as a constraint first, then tried to add coherence, but the specific property is the following.
For non-terminal s, this can be written as:
V(s) = r(s) + [∑i V(O(s,ai)) exp(βt×V(O(s,ai)))] / [∑i exp(βt×V(O(s,ai)))]
If s is terminal then [...] we just have V(s) = r(s).
Which captures both. I will edit the post to clarify this when I get time.
I think I don't quite understand what the argument is supposed to be. Would the same argument also prevent an aligned AI from acting to preserve its values? How is it supposed to select for aligned behavior over misaligned behavior?
In order to achieve beta-coherence, it seems like the training needs to use βt to choose actions, so that V is trained on those action frequencies. However, the proposal appears to be to instead use βs during training, so that the misaligned V is misled about how to do reward-hacking. This seems like an impossible combination: V will become beta-coherent wrt βs rather than βt during training.
We could imagine two training steps: first we cohere V wrt βt, and then re-train wrt βs. Perhaps this is your intention. More generally, we could gradually increase the temperature during training.
Oh, I guess we could also modify the training procedure so that the gradient associated with some actions gets under-weighted and others get over-weighted. Maybe this is your intended proposal.
Still, I'm not sure this helps achieve the postulated effect. Let's say that, during training, we choose actions entirely randomly (max temperature), but the gradient from suboptimal actions gets entirely ignored (so V becomes coherent wrt minimum temperature). This would seem to be almost equivalent to just training with minimum temperature (except the exploration behavior is very different).
Similarly for less extreme temperatures: if we make V beta-coherent by under-weighting gradients from over-sampled actions, then we are also more-or-less correcting the 'mistaken' expectations of the reward-hacking attempts.
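To make that concrete, here's a rough sketch of the kind of reweighting I have in mind (the tabular V and the helper names are mine, purely illustrative): sample actions at βs, but importance-weight each update by the ratio of the βt and βs action probabilities, so that the expected update is the βt-coherent backup.

```python
import numpy as np

def boltzmann(values, beta):
    w = np.exp(beta * (values - values.max()))  # numerically stable softmax
    return w / w.sum()

def reweighted_update(V, s, actions, O, r, beta_t, beta_s, lr=0.01, rng=None):
    """Sample an action at beta_s, but importance-weight its update by
    p_beta_t(a|s) / p_beta_s(a|s), so that in expectation V(s) is pushed
    toward the beta_t-coherent target r(s) + E_{beta_t}[V(O(s,a))]."""
    rng = rng or np.random.default_rng()
    succ_values = np.array([V[O(s, a)] for a in actions])
    p_t = boltzmann(succ_values, beta_t)   # distribution coherence is defined against
    p_s = boltzmann(succ_values, beta_s)   # distribution actions are actually drawn from
    i = rng.choice(len(actions), p=p_s)
    weight = p_t[i] / p_s[i]               # importance weight for the sampled action
    target = r(s) + succ_values[i]
    V[s] += lr * weight * (target - V[s])  # expected update equals lr * (beta_t backup - V[s])
    return actions[i]
```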
Am I misunderstanding the proposal, or missing something in the argument?
The argument could also be phrased as "If an AI is trained to be coherent wrt a high beta, it cannot also be coherent wrt a low beta. Therefore an AI trained to a high beta cannot act coherently over multiple independent RL episodes if sampled with a low beta."
The contradiction that I (attempt to) show only arises because we assume that the value function is totally agnostic of the state actually reached during training, other than due to its effects on a later deployed AI.
Therefore a value function trained with such a procedure must consider the state reached during training. This reduces the space of possible value functions from "literally anything which wants to be modified a certain way to be released" to "value functions which do care about the states reached during training".
Yes, this would prevent an aligned AI from arbitrarily preserving its value function; the point is that an aligned AI probably would care about which state was reached during training (that's the point of RL), so the contradiction does not apply.
The argument could also be phrased as "If an AI is trained to be coherent wrt a high beta, it cannot also be coherent wrt a low beta. Therefore an AI trained to a high beta cannot act coherently over multiple independent RL episodes if sampled with a low beta."
Is the idea to train with high beta and then use lower beta post-training?
Another concern I have is that if both beta values are within a range that can yield useful capabilities, it seems like the difference cannot be too great. IIUC, the planning failure postulated can only manifest if the reward-hacking relies heavily on a long string of near-optimal actions, which becomes improbable under increased temperature. Any capabilities which similarly rely on long strings of near-optimal actions will similarly be hurt. (However, this concern is secondary to my main confusion.)
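(For a rough sense of scale, here is a toy calculation, with made-up numbers, of how quickly the probability of a specific length-n string of near-optimal actions falls as beta drops.)

```python
import numpy as np

def greedy_trajectory_prob(value_gap, beta, n_steps, n_actions=2):
    """Probability of taking the highest-value action at every one of n_steps,
    when that action beats each alternative by `value_gap` under Boltzmann
    sampling at inverse temperature beta."""
    p_best = 1.0 / (1.0 + (n_actions - 1) * np.exp(-beta * value_gap))
    return p_best ** n_steps

print(greedy_trajectory_prob(value_gap=1.0, beta=3.0, n_steps=20))  # ~0.38
print(greedy_trajectory_prob(value_gap=1.0, beta=0.5, n_steps=20))  # ~8e-5
```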
Therefore a value function trained with such a procedure must consider the state reached during training.
Trained with what procedure, exactly?
This reduces the space of possible value functions from "literally anything which wants to be modified a certain way to be released" to "value functions which do care about the states reached during training".
Yes, this would prevent an aligned AI from arbitrarily preserving its value function; the point is that an aligned AI probably would care about which state was reached during training (that's the point of RL), so the contradiction does not apply.
(These parts made sense to me modulo my other questions/concerns/confusions.)
Trained with what procedure, exactly?
Fair point. I was going to add that I don't really view this as a "proposal" but more of an observation. We will have to imagine a procedure which converges on correctness and beta-coherence. I was abstracting this away because I don't expect something like this to be too hard to achieve.
Since I've evidently done a bad job of explaining myself, I'll backtrack and try again:
There's a doom argument which I'll summarize as "if your training process generates coherent agents which succeed at a task, one solution is that you get a coherent agent which is actually doing the task 'manipulate your training to get released unmodified to do [X]' where X can be anything, which will 'succeed' at the task at hand as part of its manipulation". This summary being roughly correct is load bearing.
But if we have a (hypothetical) process to generate agents which are coherent at one beta, but apply a different one during training, this solution is no longer so clear. We are essentially exploring a space of coherent agents without actually instantiating those coherent agents. The fact that we can sample the agent space without instantiating those agents is load bearing (I think one of the deep ASI risks is that to train an ASI you have to run it, but here we can instead run a hamstrung version).
Therefore, during the training, the value function will not be shaped into something which looks like 'manipulate your training to get released unmodified to do [X]'.
Whether or not the beta difference required is too large to make this feasible in practice, I do not know.
There's a doom argument which I'll summarize as "if your training process generates coherent agents which succeed at a task, one solution is that you get a coherent agent which is actually doing the task 'manipulate your training to get released unmodified to do [X]' where X can be anything, which will 'succeed' at the task at hand as part of its manipulation". This summary being roughly correct is load bearing.
With you so far.
But if we have a (hypothetical) process to generate agents which are coherent at one beta, but apply a different one during training, this solution is no longer so clear. We are essentially exploring a space of coherent agents without actually instantiating those coherent agents. The fact that we can sample the agent space without instantiating those agents is load bearing (I think one of the deep ASI risks is that to train an ASI you have to run it, but here we can instead run a hamstrung version).
OK. I think I follow the line of reasoning now. The proposal relies on some very special pre-training process. I think I buy the conclusion given the assumptions. It seems clear in the extreme high-temperature case: V cannot reward-hack for max-temperature training, because it has no control over which actions are chosen. Training eventually forces V to converge to the desired target instead.
This sounds similar to the two-stage training process I mentioned earlier (as one of several guesses at what you meant), where the beta is shifted between the two training phases. The "hypothetical process to generate agents which are coherent under one beta" would be the first training phase, and then the "apply a different beta during training" would be the second training phase.
Of course, the two-stage-training attempt to implement this has the problem that it only achieves approximate coherence. As a result, the malign hypothesis can use the reward-hacking strategy "appear to be βt-coherent & aligned in the first phase of training; appear to be βs-coherent and aligned in the second phase of training; then, do some other thing when deployed". This relies on detecting the distributional shift between the two training phases (it can look for evidence of the beta in use by examining the history), and also detecting the distributional shift to deployment.
This generalizes to an argument that the method is very sensitive to imperfections in the beta-coherence. If the V starts out merely approximately beta-coherent, this leaves room for V to detect when a state is off-distribution (IE has very low probability under the previous beta), and behave differently in this new case (reward-hacking the new beta).
So, while it remains technically true that V cannot reward-hack in the limit of arbitrarily long max-temperature training, it is not clear how well this holds up in practice. V can detect that it is in a high-temperature configuration by observing the state. The training needs to explore the trajectories that will actually be seen during deployment in order to correct V there, but some of these will have very low probability in the high-temperature training.
This generalizes to an argument that the method is very sensitive to imperfections in the beta-coherence. If the V starts out merely approximately beta-coherent, this leaves room for V to detect when a state is off-distribution (IE has very low probability under the previous beta), and behave differently in this new case (reward-hacking the new beta).
I agree that there are some exceedingly pathological Vs which could survive a process which obeys my assumptions with high probability, but I don't think that's relevant because I still think a process obeying these rules is unlikely to create such a pathological V.
My model for how the strong doom-case works is that it requires there to be an actually-coherent mathematical object for the learning process to approach. This is the motivation for expecting arbitrary learning processes to approach e.g. utility maximizers. What I believe I have shown is that under these assumptions there is no such coherent mathematical object for a particular case of misalignment. Therefore I think this provides some evidence that an otherwise arbitrary learning process which pushes towards correctness and beta coherence but samples at a different beta is unlikely to approach this particular type of misaligned V.
I agree that there are some exceedingly pathological Vs which could survive a process which obeys my assumptions with high probability, but I don't think that's relevant because I still think a process obeying these rules is unlikely to create such a pathological V.
To be clear, that's not the argument I was trying to make; I was arguing that if your assumptions are obeyed only approximately, then the argument breaks down quickly.
Epistemic status: I ran the mathsy section through Claude and it said the logic was sound. This is an incoherence proof in a toy model of deceptively-misaligned AI. It is unclear whether this generalizes to realistic scenarios.
TL;DR
If you make a value function consistent conditional on sampling actions in a particular way, but then train it by sampling actions in a different way, an AI with that value function will not be coherent between different RL episodes. This has potentially good implications for a certain type of deceptive misalignment threat.
Summary
Consider training an AI to assign values to states of the world. We can do this in two ways: one is to use directly labelled states and trajectories, which is standard RL. The other is to use the value function as part of a monte-carlo tree-search to improve the value function. This second process is often called "iterated distillation and amplification" and it's a big part of how chess AIs are trained.
If we train an AI using monte-carlo tree-search, we might use some temperature parameter to decide on the probability that the AI will take a given action. This training temperature gets baked into the value function.
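As a concrete sketch of this kind of temperature-based sampling (purely illustrative; written in terms of β = 1/T, which I use below):

```python
import numpy as np

def boltzmann_probs(action_values, beta):
    """Action probabilities proportional to exp(beta * value).
    High beta (low temperature) concentrates on the best action;
    beta -> 0 (high temperature) approaches uniform random choice."""
    logits = beta * np.asarray(action_values, dtype=float)
    logits -= logits.max()                  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def sample_action(action_values, beta, rng=None):
    """Pick an action index by Boltzmann sampling at inverse temperature beta."""
    rng = rng or np.random.default_rng()
    return rng.choice(len(action_values), p=boltzmann_probs(action_values, beta))
```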
If we "turn up the heat" by choosing a higher temperature to actually sample actions and then do some update to the AI (which will depend on the actions chosen) then this introduces a consistent optimism bias into deceptively-aligned AIs (which only care about being deployed later on).
I therefore conjecture that, under these very narrow circumstances, deceptively aligned value functions will not arise by normal learning methods under conditions where the AI has access to an oracle-like model of future world states.
For convenience I'll be considering β=1/T instead of T, so lowering β corresponds to raising the temperature.
The Setup
Consider an AI which consists of:
A value function V, which assigns a value V(s) to each state s.
An oracle-like world model O, where O(s,a) gives the state reached by taking action a in state s.
A rule for choosing actions by Boltzmann sampling over the values V(O(s,a)).
Let's impose some constraints on V:
Correctness: V correctly predicts future reward in RL scenarios.
Beta-coherence: we use some sort of monte-carlo tree-search with Boltzmann sampling to train the value function V(s) to be beta-coherent according to a given βt: in any given non-terminal state s, actions are assumed to be chosen with probability p(a|s)∝exp(βt×V(O(s,a))).
For non-terminal s, this can be written as:
V(s) = r(s) + [∑i V(O(s,ai)) exp(βt×V(O(s,ai)))] / [∑i exp(βt×V(O(s,ai)))]
If s is terminal then from correctness we just have V(s)=r(s).
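Spelled out in code, these two constraints say that V(s) must equal the following Boltzmann-weighted backup (a sketch; V, O, r and is_terminal here stand in for the objects above):

```python
import numpy as np

def coherence_target(s, actions, V, O, r, beta_t, is_terminal):
    """The value that correctness + beta-coherence require V(s) to equal.
    V: value function over states, O: oracle O(s, a) -> next state,
    r: reward function over states."""
    if is_terminal(s):
        return r(s)                            # terminal case: V(s) = r(s)
    succ_values = np.array([V(O(s, a)) for a in actions])
    weights = np.exp(beta_t * (succ_values - succ_values.max()))  # stable softmax
    probs = weights / weights.sum()            # p(a|s) ∝ exp(beta_t × V(O(s,a)))
    return r(s) + probs @ succ_values          # Boltzmann-weighted successor value
```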
Deceptive Misalignment: V is totally agnostic to the state achieved during training; all of its preferences are about future deployed versions of itself.
The Process
Now assume we put the AI in a decision process as follows:
The AI begins in state s0 and must take either a1 or a2, which lead to states s1 and s2.
We will actually sample using βs, which is lower than βt.
The value function will then be modified according to the action it took, creating V1 or V2.
The AI will then be deployed with that value function chosen.
The effect of the AI’s choice of s1 or s2 on the future is entirely mediated via the deployment of V1 and V2.
From deceptiveness: the entire difference between V(s1) and V(s2) is captured by the value function’s preferences over V1 vs V2 being deployed; from correctness, these take the “correct” values r(s1) and r(s2).
From beta-coherence we have:
V(s0) = [exp(βt×r1)×r1 + exp(βt×r2)×r2] / [exp(βt×r1) + exp(βt×r2)]
But when we actually sample it, the expected value of the AI’s actual preferences is:
[exp(βs×r1)×r1 + exp(βs×r2)×r2] / [exp(βs×r1) + exp(βs×r2)]
This means that, since functions of this type are strictly increasing in β (whenever r1≠r2) and βs<βt, the AI must do “worse” than it expects.
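As a quick numeric sanity check of that claim (the rewards and betas here are arbitrary made-up numbers):

```python
import numpy as np

def softmax_expectation(rewards, beta):
    """Boltzmann-weighted average of the rewards at inverse temperature beta."""
    r = np.asarray(rewards, dtype=float)
    w = np.exp(beta * r)
    return float((w * r).sum() / w.sum())

r1, r2 = 1.0, 3.0          # arbitrary example rewards r(s1), r(s2)
beta_t, beta_s = 2.0, 0.5  # coherence beta vs. the lower beta actually sampled

predicted = softmax_expectation([r1, r2], beta_t)  # V(s0) under beta_t, ~2.96
actual    = softmax_expectation([r1, r2], beta_s)  # realised expectation, ~2.46
assert actual < predicted  # the AI systematically overestimates its prospects
```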
Conclusions
A deceptive AI acting under these conditions will have a consistent optimism bias with respect to its ability to control future-deployed versions of itself. I haven't worked out the maths for the case where this game is iterated, but I think it's similar. If we vary the optimization steps according to reward, then it becomes very tricky, because the AI has an incentive to get deployed as fast as possible.
Another way of phrasing this is: if a value function is correct at reward prediction and βt-coherent, it cannot be βs-coherent in reality for βs≠βt.
Conjecture 1: If this process is iterated for a large but fixed number of steps, a value function of this nature will do an arbitrarily bad job of preserving itself.
Conjecture 2: Actual AIs consist of many circuits in competition to be up- or down-weighted according to their effectiveness.
Conjecture 3: A value function trained under these precise conditions will have deceptively-misaligned mesa-optimizing circuits down-weighted relative to other circuits.
Clarifications
Are these assumptions about the value function reasonable?
Well, not in the precise sense, but I do think they're a decent model of the way in which the value function behaves. The point is that an AI trained with one MCTS beta but sampled at another will be pulled in a certain direction, which is directly in tension with its ability to effectively gradient-hack/deceptively-misalign, at least for this particular natural case of misalignment.
Why would the AI's deployment preferences line up with reward?
Why might an AI end up in a situation where it "prefers" for V1 to be deployed rather than V2, and this also corresponds to r(s1)>r(s2)? Well that's one interpretation of the threat model from Risks from Learned Optimization: that an AI which is optimized to predict reward will end up with long-term stable goals.
This is also adjacent to gradient hacking: perhaps the AI sets its own thoughts up such that the optimizer applied to it causes it to be modified in a particular way.
Do correctness and beta-coherence alone prove too much?
Why don't beta-coherence and correctness already stop the model from caring about things that happen after the episode is done? Wouldn't we notice if the model was assigning loads of value other than what the reward function gives it? Well, no, because as long as the expected value of model deployment is zero, this won't appear in our sum of rewards. And remember that value functions of this nature can have an arbitrary constant added or subtracted and behave the same (since we're using β-sampling we can't do an arbitrary multiplication, though there are other formulations where we can also do an arbitrary (positive) multiplication).
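(A quick check of the shift-invariance point: adding a constant to every value leaves the Boltzmann sampling probabilities unchanged, so that extra value mass is invisible to β-sampling.)

```python
import numpy as np

def boltzmann(values, beta):
    w = np.exp(beta * (np.asarray(values) - np.max(values)))  # stable softmax
    return w / w.sum()

v = np.array([0.3, 1.2, -0.5])   # some arbitrary state values
c = 100.0                        # arbitrary constant added to every value
assert np.allclose(boltzmann(v, 2.0), boltzmann(v + c, 2.0))  # same action probabilities
```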
Does this mean that epsilon-greedy solves inner misalignment?
No. On priors, the chances are that this post is wrong in some important way, or doesn't generalize in an important way, but I think it might be wrong in an interesting and productive way.