In Techniques for optimizing worst-case performance, Paul Christiano says:
The key point is that a malign failure requires leveraging the intelligence of the model to do something actively bad. If our model is trained by gradient descent, its behavior can only be intelligent when it is exercised on the training distribution — if part of the model never (or very rarely) does anything on the training distribution, then that part of the model can’t be intelligent. So in some sense a malign failure mode needs to use a code path that gets run on the training distribution, just under different conditions that cause it to behave badly.
Here is how I would rephrase it:
Aligned or Benign Conjecture: Let A be a machine learning agent you are training with an aligned loss function. If A is placed in a situation too far out of distribution for it to remain aligned, it won't act intelligently either.
(Although I'm calling this a "conjecture", it's probably context-dependent rather than a single mathematical statement.)
This seems pretty plausible, but I'm not sure it's guaranteed mathematically 🤔. (For example: a neural network could have subcomponents that are great at specific tasks, such that putting A in an out-of-distribution situation does not put those subcomponents out of distribution.)
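To make that parenthetical concrete, here is a minimal toy sketch (my own illustration, not from any of the linked posts) of an agent whose observation is out of distribution as a whole, while the inputs reaching one subcomponent stay exactly in distribution. The module names, observation layout, and distribution shift are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def skill_module(pixels):
    """Hypothetical capability subcomponent; it only ever sees the 'pixels' block."""
    return float(pixels.mean())

def goal_module(context):
    """Hypothetical goal-relevant subcomponent; it only ever sees the 'context' block."""
    return float(np.tanh(context).sum())

def agent(observation):
    # The agent routes disjoint slices of the observation to its subcomponents.
    pixels, context = observation[:8], observation[8:]
    return skill_module(pixels), goal_module(context)

# Training distribution: both halves of the observation drawn from N(0, 1).
train_obs = rng.normal(0.0, 1.0, size=(1000, 16))

# Deployment: the observation as a whole is far out of distribution (the context
# half is shifted by +10), yet the slice reaching skill_module is distributed
# exactly as it was in training.
ood_obs = np.concatenate(
    [rng.normal(0.0, 1.0, size=(1000, 8)),    # still in-distribution for skill_module
     rng.normal(10.0, 1.0, size=(1000, 8))],  # out of distribution for goal_module
    axis=1,
)

print("train pixels mean: ", round(float(train_obs[:, :8].mean()), 2))
print("ood   pixels mean: ", round(float(ood_obs[:, :8].mean()), 2))   # ~0.0, unchanged
print("ood   context mean:", round(float(ood_obs[:, 8:].mean()), 2))   # ~10.0, shifted
```

In this toy picture, whether "A is out of distribution" puts the skill machinery out of distribution depends entirely on which slice of the input that machinery actually consumes.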
I'm wondering whether there is any empirical evidence, or any theoretical argument, against this conjecture.
As an example, can we make an ML agent, trained with stochastic gradient descent, that abandons its utility function out of distribution but still retains the same capabilities in some sense? For example, if the agent is fighting in an army, could an out-of-distribution environment cause it to defect to a different army while still retaining its fighting skills?
I definitely don't believe this!
I believe that any functional cognitive machinery must be doing its thing on the training distribution, and in some sense it's just doing the same thing at deployment time. This is important for having hope for interpretability to catch out-of-distribution failures.
(For example, I think there is very little hope of interpretability detecting the presence of arbitrary backdoors in a model before having seen any examples of the backdoor trigger, which is what it would look like to try to detect OOD failures arising from machinery that is literally never doing anything on the training distribution.)
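For concreteness, here is a toy sketch of the kind of backdoor being described: a hand-coded (not gradient-descent-learned) policy containing a code path that literally never fires on the training distribution, so nothing in training-time behavior reveals it. The trigger vector and names are made up for illustration; the comment's point is precisely that gradient descent is unlikely to build intelligent machinery of this dead-code form.

```python
import numpy as np

# A trigger pattern that (with overwhelming probability) never appears in training data.
TRIGGER = np.array([3.1, -4.1, 5.9, -2.6])

def backdoored_policy(x):
    """Behaves normally everywhere except on the trigger; that branch is dead code on-distribution."""
    if np.allclose(x[:4], TRIGGER, atol=1e-6):
        return "defect"      # never executed on training inputs
    return "cooperate"       # the only behavior ever observed during training

rng = np.random.default_rng(0)
train_inputs = rng.normal(size=(10_000, 8))

# Behavioral evaluation on the training distribution sees nothing unusual,
# which is why catching this without ever seeing the trigger is so hard.
assert all(backdoored_policy(x) == "cooperate" for x in train_inputs)
```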
But that doesn't even mean that the cognitive machinery is effectively aiming at the same goal OOD, much less that it is aiming at achieving any goal related to the loss function, and even less that it is aligned just because the loss function reliably ranks policies based on the empirical quality of their behavior.