This is a linkpost for https://arxiv.org/abs/2410.06213

by Michael K. Cohen, Marcus Hutter, Yoshua Bengio, Stuart Russell

Abstract:

In reinforcement learning, if the agent's reward differs from the designers' true utility, even only rarely, the state distribution resulting from the agent's policy can be very bad, in theory and in practice. When RL policies would devolve into undesired behavior, a common countermeasure is KL regularization to a trusted policy ("Don't do anything I wouldn't do"). All current cutting-edge language models are RL agents that are KL-regularized to a "base policy" that is purely predictive. Unfortunately, we demonstrate that when this base policy is a Bayesian predictive model of a trusted policy, the KL constraint is no longer reliable for controlling the behavior of an advanced RL agent. We demonstrate this theoretically using algorithmic information theory, and while systems today are too weak to exhibit this theorized failure precisely, we RL-finetune a language model and find evidence that our formal results are plausibly relevant in practice. We also propose a theoretical alternative that avoids this problem by replacing the "Don't do anything I wouldn't do" principle with "Don't do anything I mightn't do".

 

The "Don't do anything I wouldn't do" principle fails because Bayesian models allow unlikely actions in uncertain settings, which RL agents will exploit. KL regularization keeps policies near the base model but doesn’t guarantee alignment with the trusted policy, especially as data and capability grows.

The paper proposes the “Don’t do anything I mightn’t do” principle instead, based on Cohen et al.'s (2022a) active imitation model, in which the imitator explicitly asks for help when it is uncertain. Unlike a Bayesian predictive base policy, this active imitation approach only permits actions the imitator can confidently attribute to the trusted policy, and its deviation from trusted behavior is formally bounded. Unfortunately, it remains computationally intractable so far and requires approximations.
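To make the flavour of that principle concrete, here is a toy sketch (mine, not the paper's algorithm; the helper names and the threshold `alpha` are illustrative): only take actions that every still-plausible model of the trusted policy assigns non-trivial probability, and otherwise defer to the human.

```python
import numpy as np

def pessimistic_action(plausible_policies, state, alpha=0.05, rng=None):
    """Toy sketch of a "Don't do anything I mightn't do" rule.

    plausible_policies: callables mapping a state to a probability vector
        over actions -- hypotheses about the trusted policy that the data
        has not yet ruled out (e.g. samples from a posterior).
    alpha: minimum probability every surviving hypothesis must assign to
        an action before the imitator is willing to take it.
    Returns an action index, or None, meaning "ask the human for help".
    """
    rng = rng or np.random.default_rng()
    dists = np.stack([p(state) for p in plausible_policies])  # (n_hypotheses, n_actions)
    worst_case = dists.min(axis=0)   # prob. of each action under the most doubtful hypothesis
    allowed = np.flatnonzero(worst_case >= alpha)
    if allowed.size == 0:
        return None                  # no action is clearly something the demonstrator might do
    probs = worst_case[allowed] / worst_case[allowed].sum()
    return int(rng.choice(allowed, p=probs))
```

The paper's actual construction works with a full Bayesian posterior over demonstrator policies, which is where the computational intractability mentioned above comes from.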

5 comments:

This safety plan seems like it works right up until you want to use an AI to do something you wouldn't be able to do.

If you want a superhuman AI to do good things and not bad things, you'll need a more direct operationalization of good and bad.

If you're in a situation where you can reasonably extrapolate from past rewards to future reward, you can probably extrapolate previously seen "normal behaviour" to normal behaviour in your situation. Reinforcement learning is limited - you can't always extrapolate past reward - but it's not obvious that imitative regularisation is fundamentally more limited.

(normal does not imply safe, of course)

I dunno, I think you can generalize reward farther than behavior. E.g. I might very reasonably issue high reward for winning a game of chess, or arriving at my destination safe and sound, or curing malaria, even if each involved intermediate steps that don't make sense as 'things I might do.'

I do agree there are limits to how much extrapolation we actually want, I just think there's a lot of headroom for AIs to achieve 'normal' ends via 'abnormal' means.

I would be interested in what the questions of the uncertain imitator would look like in these cases.

Their empirical result rhymes with adversarial robustness issues - we can train adversaries to maximise ~arbitrary functions subject to a small-perturbation-from-ground-truth constraint. Here the maximised function is a faulty reward model, and the constraint is KL divergence to a base model instead of pixel distance to a ground-truth image.
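Spelling the analogy out in symbols (my notation, not the commenter's or the paper's): both settings are constrained maximizations of a proxy objective,

\[
\max_{\delta:\;\|\delta\|_{\infty}\le \epsilon} \;\mathcal{L}\big(f(x+\delta),\, y\big)
\qquad\text{vs.}\qquad
\max_{\pi:\; D_{\mathrm{KL}}(\pi \,\|\, \pi_{\mathrm{base}})\le \epsilon} \;\mathbb{E}_{\pi}\big[\hat r\big],
\]

where \(\hat r\) is the faulty reward model and the KL ball plays the role of the pixel-space \(\epsilon\)-ball.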

I wonder if multiscale aggregation could help here too as it does with image adversarial robustness. We want the KL penalty to ensure that the generations should look normal at any "scale", whether we look at them token by token or read a high-level summary of them. However, I suspect their "weird, low-KL" generations will have weird high-level summaries, whereas more desired policies would look more normal in summary (though it's not immediately obvious if this translates to low and high probability summaries respectively - one would need to test). I think a KL penalty to the "true base policy" should operate this way automatically, but as the authors note we can't actually implement that.
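One way to write the suggestion down (my notation; \(S\) is some fixed summarization map and \(S_{\#}\pi\) the distribution over summaries it induces on full generations):

\[
\text{penalty}(\pi) \;=\; \beta_{1}\, D_{\mathrm{KL}}\!\big(\pi \,\big\|\, \pi_{\mathrm{base}}\big) \;+\; \beta_{2}\, D_{\mathrm{KL}}\!\big(S_{\#}\pi \,\big\|\, S_{\#}\pi_{\mathrm{base}}\big).
\]

Note that for a fixed summarizer the data-processing inequality gives \(D_{\mathrm{KL}}(S_{\#}\pi \,\|\, S_{\#}\pi_{\mathrm{base}}) \le D_{\mathrm{KL}}(\pi \,\|\, \pi_{\mathrm{base}})\), so a true summary-level KL to the true base policy is already dominated by the token-level term; the extra term only adds teeth because, in practice, the token-level penalty is computed against a learned base model rather than the true base policy, which is the commenter's point.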