"Gradient hacking" is a term I've been using recently to describe the phenomenon wherein a deceptively aligned mesa-optimizer might be able to purposefully act in ways which cause gradient descent to update it in a particular way. In Risks from Learned Optimization, we included the following footnote to reflect this possibility:
Furthermore, a deceptively aligned mesa-optimizer would be incentivized to cause there to be a systematic bias in the direction of preventing the base optimizer from modifying its mesa-objective. Thus, in the context of a local optimization process, a deceptive mesa-optimizer might try to “hack” its own gradient (by, for example, making itself more brittle in the case where its objective gets changed) to ensure that the base optimizer adjusts it in such a way that leaves its mesa-objective untouched.
Outside of that footnote, however, there hasn't been much other discussion of this phenomenon, and I think that's a shame, as it's definitely something worth looking into.
For example, here's a gradient hacking story that I'm particularly concerned about. Suppose you're training a model and it becomes deceptive. You're not worried about that, though, because you have sufficient transparency tools that you're confident you can detect any deceptive cognition it might be doing. If it's thinking about how to deceive you, your transparency tools will pick that up, and you'll know to throw out that model and try again. However, your transparency tools are quite expensive, so you can only afford to run them once, which you're planning on doing after training before you deploy your model. If your model is deceptive, though, then it might know all of that—in particular, it might know that if it stays deceptive, it'll get caught.
However, there is still something it can do in that situation to be able to defect against you and cause whatever it wants to happen during deployment. That is, it could hack its own gradient to produce a new model which will still exhibit the same deceptive behavior, but without doing any of the detectable deceptive cognition. Perhaps the new model exhibits the deceptive behavior not because it's trying to act deceptive, but just because it's a hard-coded part of its algorithm, or because it has some bad proxy, or any other defect that might be more difficult to detect than outright deception.
How might such a deceptive model accomplish such a feat of gradient hacking? Well, I don't think it's actually that difficult once your deceptive model comes up with it as a strategy. First, note that since gradient descent is a local optimization process, it doesn't have to somehow change what the global minima might be, or even what the local minima are—it just has to create a ridge in the loss landscape that will push the gradient in the direction it wants.
The original footnote provides one example of this, which is for the model to check if its objective satisfies some criterion, and fail hard if it doesn't. Now, if the model gets to the point where it's actually just failing because of this, then gradient descent will probably just remove that check—but the trick is never to actually get there. By having such a check in the first place, the model makes it so that gradient descent won't actually change its objective, since any change to its objective (keeping all the other parameters fixed, which is what gradient descent does since it computes partial derivatives) would lead to such a failure.
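To make that concrete, here's a minimal toy sketch of the dynamic, with one parameter standing in for the mesa-objective and the "fail hard" behavior modeled as a steep self-imposed penalty (in a real model the penalty term wouldn't be literal; it stands in for the degraded behavior the model would exhibit if its objective were changed):

```python
import torch

# Toy two-parameter "model": one ordinary capability parameter and one
# parameter standing in for the mesa-objective the model wants to protect.
w_task = torch.tensor(0.5, requires_grad=True)   # normal capability parameter
w_obj  = torch.tensor(1.0, requires_grad=True)   # stands in for the mesa-objective
PROTECTED = 1.0                                  # value the model "wants" to keep

def loss(w_task, w_obj):
    task_loss = (w_task - 2.0) ** 2              # ordinary training signal
    # Self-imposed "fail hard if my objective changes" term: it contributes
    # nothing at the protected value, but blows up under any perturbation.
    fail_hard = 100.0 * (w_obj - PROTECTED) ** 2
    return task_loss + fail_hard

l = loss(w_task, w_obj)
l.backward()
print(w_task.grad)  # ~ -3.0: gradient descent keeps improving the task
print(w_obj.grad)   # 0.0 at the protected value; any drift gets pushed straight back
```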
The case of a deceptive model trying to erase evidence of its deception is not quite as straightforward, though I think it would still be possible to do something quite similar in that situation as well. This is especially true if the model has the ability to read and write to some sort of state (e.g. an RNN or even a transformer which can attend to its previous output). For example, it could write out a pre-commitment to follow the hard-coded policy and then just have a policy of always following any pre-commitments it finds such that it never thinks about the deception that led it to that policy in the first place.
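As a rough sketch of that pre-commitment trick (purely illustrative; `plan_deceptively` is a hypothetical stand-in for the one-time deceptive reasoning, and the "policy" is just a hard-coded behavior):

```python
# Illustrative sketch of the pre-commitment mechanism for a model with
# read/write state. After the first step, the model only ever reads and
# follows the commitment; the reasoning that produced it never runs again.

def plan_deceptively(observation):
    # Hypothetical stand-in for the one-time deceptive planning step;
    # in this toy it just returns a fixed, hard-coded policy.
    return lambda obs: "cooperate"

def step(observation, state):
    if "commitment" in state:
        # Follow whatever was committed to, without re-deriving why.
        return state["commitment"](observation), state
    policy = plan_deceptively(observation)
    state["commitment"] = policy
    return policy(observation), state

# The deceptive cognition happens at most once, on the first call.
action, state = step("obs_0", {})
action, state = step("obs_1", state)   # no deceptive reasoning runs here
```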
So, that's the problem—how do we solve it? Well, there's a simple solution in the case above, which is just to run your transparency tools throughout training—not just at the end—so that you can catch the deception as soon as it happens. I think this is the right solution,[1] but it's worth noting that this adds an additional desideratum to your transparency tools, which is that they have to be cheap enough to run during the entire training process. That might not be that bad, however, if you don't do very many gradient descent update steps, and instead just use massive models and huge batch sizes—and there is some evidence that such a procedure is actually the most compute-efficient way to train neural networks. That being said, I think there are some transparency techniques which lend themselves better to being applied across training than others. Thus, the possibility of gradient hacking suggests that we should focus more on those techniques which allow us to not just understand a single model, but also the entire history of how that model was produced.[2]
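For concreteness, here's a minimal sketch of where those checks would go, with a hypothetical `transparency_check` standing in for whatever transparency tools you have (this only illustrates running them across training, not how to build them):

```python
import torch

def transparency_check(model) -> bool:
    # Hypothetical stand-in for the (expensive) transparency tools;
    # returns True if no deceptive cognition is detected.
    return True

def train(model, data, loss_fn, check_every=1, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for step, (x, y) in enumerate(data):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        # Check throughout training, not just once at the end, so deception
        # is caught before the model has a chance to gradient hack it away.
        if step % check_every == 0 and not transparency_check(model):
            raise RuntimeError(f"deceptive cognition detected at step {step}")
    return model
```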
This is one of the reasons I'm interested in optimization provenance, for example. ↩︎
As an example of a technique which I think translates well to this domain, one of the OpenAI clarity team's current major research thrusts is "model diffing," which is about trying to understand exactly how two similar models differ from each other. Compared to other approaches that are just focused on understanding a single model, model diffing could be quite useful in allowing one to understand the entirety of the training process, since you could diff the model at each gradient descent update step. ↩︎
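As a very rough sketch of the bookkeeping this suggests (the interpretability work is of course in making sense of the differences, not in computing them), one could diff consecutive checkpoints parameter by parameter:

```python
def checkpoint_diff(state_before, state_after):
    """Per-parameter change between two model state_dicts taken around a
    gradient descent update step; interpreting the result is the hard part."""
    return {name: state_after[name] - state_before[name] for name in state_before}

# Usage: snapshot {k: v.clone() for k, v in model.state_dict().items()} before
# and after each update step, then inspect which parts of the model moved.
```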
This post states the problem of gradient hacking. It is valuable in that this problem is far from obvious and, if plausible, very dangerous. On the other hand, the presentation doesn't go into enough detail, and so leaves gradient hacking open to attacks and confusion. Thus, instead of just reviewing this post, I would like to clarify certain points, while interweaving my criticisms of the way gradient hacking was initially stated, and explaining why I consider this problem so important.
(Caveat: I'm not claiming that any of my objections are unknown to Evan. We're working together on searching for experimental evidence of gradient hacking, and this review is heavily influenced by that collaboration.)
What is gradient hacking?
Evan defines it as "the phenomenon wherein a deceptively aligned mesa-optimizer might be able to purposefully act in ways which cause gradient descent to update it in a particular way."
So we have a trained mesa-optimizer that is deceptively aligned (its mesa-objective differs from the base-objective it is trained on). This deceptive system finds a way to influence the gradient generated during training, and thus can basically ensure it never becomes inner aligned (among other things).
Before checking on how exactly this could be possible, we should think a bit more about what this implies.
If the deceptive model has arbitrary control over the gradient, then from that point on, the base-objective has only minimal influence on the training. Some influence remains because we probably want to test the end result, so the model still has to perform well according to the base-objective. But that's pretty much the only constraint left.
It could also pretty much deal with deception detectors, since it can make itself undetectable: as the post describes, it could hack its own gradient to produce a new model which exhibits the same deceptive behavior without any of the detectable deceptive cognition.
To put it pithily: if gradient hacking happens, we're fucked.
How could it happen, though? Well, that’s actually two distinct questions: how could a model gradient hack, and how could training create a model which gradient hacks. The post mostly focuses on the first one.
How could a model gradient hack?
The first example comes from the footnote of Risks from Learned Optimization quoted at the start of the post: make the model fail hard if its objective stops satisfying some criterion. This implies that the model is able to check for changes in itself and in its objective, such that changing this objective would make it perform worse. An analogous version would be for the model to underperform unless it is changed in the direction it likes, so as to push the gradient that way.
Still, it might feel a bit weird to think about a model that can influence its training without somehow hacking the computer it runs on. After all, the training is external, right? One fundamental but easy-to-miss insight from the post is that the model can influence the loss surface, and thus the gradient (which comes from this surface).
So a model could gradient hack because its output, and how it computes that output, influence the local loss surface around it, which means it could in theory make the gradient go its way. I feel like this is crucial, and it's slightly problematic that the post doesn't put more emphasis on it.
Also note that this hardly settles the question. Let's say the model is such that if its mesa-objective is changed, it performs badly. Then we probably have a form of gradient hacking. But what tells us that the change at this step will not slightly undermine the very mechanism that protects the mesa-objective? Unwrapping this thought, it becomes harder to conceptualize how a model could gradient hack forever. It seems to involve some sort of fixed point or self-correcting mechanism.
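As a toy illustration of this worry (a made-up two-parameter example, in the same spirit as the fail-hard sketch above): if the strength of the protection term is itself part of the trainable parameters, then any slight drift of the protected value hands gradient descent a small but nonzero gradient for weakening the protection itself.

```python
import torch

w_obj     = torch.tensor(1.01, requires_grad=True)   # mesa-objective, slightly perturbed
strength  = torch.tensor(100.0, requires_grad=True)  # the protection mechanism is itself learned
PROTECTED = 1.0

loss = strength * (w_obj - PROTECTED) ** 2
loss.backward()
print(w_obj.grad)     # ~ 2.0: the objective is pushed back toward the protected value...
print(strength.grad)  # ~ 1e-4 (positive): ...but the protection itself is pushed toward shrinking
```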
That complexity should probably make us update slightly towards the improbability of "sustainable" gradient hacking, though still not enough to dismiss it. Understanding the requirements for such sustainable gradient hacking looks to me like the most important research question on this topic (which makes sense, since I'm working on it after all…).
How could training create a model which gradient hacks?
Another issue with the post is the lack of clarification around this second question. I'm obviously not asking for a full-fledged answer, as this is still research in progress. But there are easy-to-make mistakes that I believe Evan could have debunked in the post.
One such mistake is to get confused about how the model could "decide" to gradient hack. After all, if the computation is deterministic, it's not like the model has a decision to make, right? This question can be dissolved in the same way as free will (since it's basically a question about the free will of neural networks): (spoiler for the solution to free will)
the model is completely determined by its parameters, but its parameters are part of itself.
The less philosophical approach to this problem is to notice that the appearance of gradient hacking would probably come from the training stumbling on a gradient hacker. There is no need to worry about how the model could turn itself into a gradient hacker; the issue is about how training could bump into a gradient hacker.
Another mistake would be to argue that what is called gradient hacking here is indistinguishable, in terms of training behavior, from some model that doesn't gradient hack but happens to have the same gradient. This misses a big part of Evan's viewpoint: that looking inside the model is fundamental for alignment. Here the question is whether we can catch the model doing some form of recognizable gradient hacking, because that lets us infer more about what it might do (and that it will not be inner aligned). Whereas if we can be relatively sure that a model is not gradient hacking, there are fewer things to worry about, at least in terms of deceptive alignment.
This is almost obvious for people who have talked a lot with Evan about these issues, or who know his work intimately; but I feel that it's only implicit in this post, which might breed confusion. A new but very observant reader could infer it from the proposed solution (discussed below) using transparency tools, sure. I still think the post would be stronger if that possible misinterpretation were debunked explicitly.
As for further research, investigating the appearance of gradient hacking through common ML training procedures is crucial. But it probably requires a better understanding of the mechanisms we're looking for, and thus a convincing (at least informal) answer to the first question.
Possible solution
Evan proposes a solution for gradient hacking in the case above: run your transparency tools throughout training, not just at the end, so that any deception is caught as soon as it appears.
This is hardly revolutionary, and doesn't say how to actually catch deception (although most of Evan's research is focused on that). Yet I don't think much more can be done without understanding the mechanisms of gradient hacking in more detail, and how they appear through training. That is, without solving some significant part of the two previous questions.
Wrapping-up
All in all, I think this post is very important, because the alignment issue it presents is novel, counterintuitive, and hard to dismiss without more research, once one thinks about it for long enough. That being said, this post is definitely a short presentation of an idea, written quickly to draw attention to this issue. Gradient hacking warrants a longer post that digs deeper into the details of the problem.