But if the agent is repeatedly carrying out its commitment to fail, then there’ll be pretty strong pressure from gradient descent to change that. What changes might that pressure lead to? The two most salient options to me:
- The agent’s commitment to carrying out gradient hacking is reduced.
- The agent’s ability to notice changes implemented by gradient descent is reduced.
In a gradient hacking scenario, we should expect the malicious conditionally-fail-on-purpose logic to be optimized so that such outcomes don't occur. For example, the malicious logic may involve redundancy: suppose there are two different conditionally-fail-on-purpose logic pieces in the network, each of which independently makes the model fail if the x-component is large. Due to the redundancy, a failure should have almost no influence on the gradient components associated with the weights of either malicious logic piece: since the other piece would cause the failure anyway, small changes to those weights barely change the loss. (This is similar to the idea in this comment from a previous discussion we had.)
What mechanism would ensure that these two logic pieces only fire at the same time? Whatever it is, I expect that mechanism to be changed in response to failures.
The two pieces of logic can use the same activation values as their input. For example, suppose they both (independently) cause failure if a certain activation value is above some threshold, with each piece of logic "ruining" a different critical activation value.
In that case, gradient descent will reduce the weights that are used to calculate that specific activation value.
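To make this exchange concrete, here is a minimal toy sketch of the redundancy argument, not taken from the original discussion: two soft sigmoid gates (a differentiable stand-in for hard conditionally-fail logic) read the same activation, and each gate alone suffices to "ruin" the output. All names (`w_shared`, `t1`, `t2`, the gate sharpness `k`) are invented for this sketch. With genuinely hard thresholds all of these gradients would be exactly zero locally, which is part of what makes the question tricky.

```python
import torch

k = 4.0                                   # gate sharpness (soft stand-in for a hard threshold)
x = torch.tensor(1.0)                     # input whose "x-component" is large

w_shared = torch.tensor(2.0, requires_grad=True)   # weight computing the shared activation
t1 = torch.tensor(1.0, requires_grad=True)         # threshold of failure-gate 1
t2 = torch.tensor(1.2, requires_grad=True)         # threshold of failure-gate 2

a = w_shared * x                          # the shared activation both gates read
g1 = torch.sigmoid(k * (a - t1))          # gate 1 fires (close to 1) when a exceeds t1
g2 = torch.sigmoid(k * (a - t2))          # gate 2 fires (close to 1) when a exceeds t2

# Each gate "ruins" a different critical activation (here both are 1.0),
# so either gate alone is enough to drive the output to roughly 0 and cause failure.
y = (1.0 - g1) * (1.0 - g2)
loss = (y - 1.0) ** 2                     # the task wants y close to 1
loss.backward()

print(f"grad wrt t1 (redundant case):  {t1.grad.item():+.4f}")
print(f"grad wrt t2 (redundant case):  {t2.grad.item():+.4f}")
print(f"grad wrt w_shared:             {w_shared.grad.item():+.4f}")

# For comparison: a single, non-redundant gate feels a much larger gradient.
t1_solo = torch.tensor(1.0, requires_grad=True)
g1_solo = torch.sigmoid(k * (w_shared.detach() * x - t1_solo))
loss_solo = ((1.0 - g1_solo) - 1.0) ** 2
loss_solo.backward()
print(f"grad wrt t1 (no redundancy):   {t1_solo.grad.item():+.4f}")
```

Whether this toy says much about a large network is unclear, but it matches both halves of the exchange above: redundancy shrinks the gradient on each gate's own parameters (compare the last printed line to the first), while the shared upstream weight still collects gradient through both gates, in the direction that reduces the activation and eventually turns the failure off.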
I confess I'm confused as to how the network gradient hacks to create these redundant mechanisms in the first place, since one of them seems completely useless unless designed deliberately.
Suppose that each subnetwork does general reasoning, and is thus useful for minimizing loss up until some point during training.
Are you saying that such a mechanism occurs by coincidence, or that it’s actively constructed? It seems like for all the intermediate steps, all consumers of the almost-identical subnetworks would naturally just pick one and use that one, since it was slightly closer to what the consumer needed.
If gradient hacking is thought to be possible because gradient descent is a highly local optimization process, maybe it would help to use higher-order approaches. E.g., Newton's method uses second-order derivative information, and the Householder methods use even higher-order derivatives.
These higher-order methods aren't commonly used in deep learning because of their additional computational expense. However, if such methods can detect and remove mechanisms of gradient hacking that are invisible to gradient descent, it may be worthwhile to occasionally use them in training.
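As a purely illustrative sketch of what "using second-order information" means in practice (a toy two-parameter loss, nothing specific to gradient hacking or to any particular training setup), here is a damped Newton step computed with PyTorch autograd; whether such a step would actually expose hacking mechanisms is exactly the open question.

```python
import torch

def loss_fn(theta):
    # Toy non-convex loss in two parameters, chosen only for illustration.
    return (theta[0] ** 2 - 1.0) ** 2 + 0.5 * (theta[1] - theta[0]) ** 2

theta = torch.tensor([1.2, 0.5], requires_grad=True)

grad = torch.autograd.grad(loss_fn(theta), theta)[0]          # first-order information
hess = torch.autograd.functional.hessian(loss_fn, theta)      # second-order information

# Damped Newton step: theta <- theta - lr * H^{-1} g.
# Plain gradient descent uses only g; Newton also uses the local curvature H
# (assumed positive definite at this point, hence the damping factor).
lr = 0.5
newton_step = torch.linalg.solve(hess, grad)
with torch.no_grad():
    theta -= lr * newton_step

print("updated theta:", theta)
```

The expense objection is visible even here: materialising the full Hessian scales quadratically in the number of parameters, which is why practical second-order methods approximate it (e.g. via Hessian-vector products) rather than forming it explicitly.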
Is gradient hacking a useful metaphor for human psychology? For example, peer pressure is a real thing. If I choose to spend time with certain people because I expect them to reinforce my behavior in certain ways, is that gradient hacking?
Even in the simple case no. 1, I don't quite see why Evan isn't right yet.
It's true that deterministically failing will create a sort of wall in the landscape that the ball will bounce off of and then roll right back into as you said. However, wouldn't it also perhaps roll in other directions, such as perpendicular to the wall? Instead of getting stuck bouncing into the wall forever, the ball would bounce against the wall while also rolling in some other direction along it. (Maybe the analogy to balls and walls is leading me astray here?)
I discuss the possibility of it going in some other direction when I say "The two most salient options to me". But the bit of Evan's post that this contradicts is:
Now, if the model gets to the point where it's actually just failing because of this, then gradient descent will probably just remove that check—but the trick is never to actually get there.
Gradient hacking is the hypothesised phenomenon of a machine learning model, during training, deliberately thinking in ways which guide gradient descent to update its parameters in the directions it desires. The key intuition here is that because the loss landscape of a model is based on the cognition it does, models can make decisions for the purpose of affecting their loss landscapes, thereby affecting the directions in which they are updated. Evan writes:
But I don't think this works as described, because of the way that gradient descent operates. Suppose that an agent has some trait which gradient descent is trying to push in direction x because the x-ness of that trait contributes to the agent’s high score; and that the agent wants to use gradient hacking to prevent this. Consider three possible strategies that the agent might try to implement, upon noticing that the x-component of the trait has increased:
This seems to rule out gradient hacking via the method that Evan described, of having a hypothetical commitment that never actually needs to be implemented. And if the agent is instead repeatedly carrying out its commitment to fail, then there'll be pretty strong pressure from gradient descent to change that, most plausibly via the two options listed above: reducing the agent's commitment to carrying out gradient hacking, or reducing its ability to notice changes implemented by gradient descent.
Each of these makes the agent less likely to choose to fail at any given point. However, if the agent is intelligent and determined, then implementing these changes might actually be quite difficult, for many of the standard reasons that preventing intelligent agents from achieving their goals is difficult. In particular, a wide range of possible versions of the agent might agree that gradient hacking is a good idea, and vigilantly try to notice the effects that gradient updates have on their cognition.
One possible objection here: the first time an agent decides to gradient hack, its weights will still be close to the region they occupied when it wasn't gradient hacking - so why can't the optimiser just push it back to that region? One way the agent might prevent this from happening is by leaving itself messages which persist across episodes. This could permanently shift the loss landscape, if seeing the message makes future versions of the agent more likely to gradient hack. By contrast, if the agent's environment and memory are totally reset after every episode (as is the standard assumption in RL theory, although impossible for any agent that interacts with the real world) then it seems much harder for that agent to start gradient hacking without being pushed back to a pre-hacking state by gradient descent. So perhaps we can think of gradient hacking as one more phenomenon (similar to reward tampering, but more subtle) which can be traced back to a mismatch between standard RL assumptions (in particular, stationarity) and the embeddedness of the real world.
This has all been fairly speculative, but I'll finish with two ideas that I'm even more uncertain about. Firstly, could an agent gradient hack without ever changing its behaviour, just by making certain reasoning processes more accessible to gradient descent? For example, suppose that an agent has a goal G, which motivates it to take action A at the end of the episode and receive high reward. It could either reason "G implies I should do A" at the beginning of the episode and store the cached thought "take action A", or it could reason "G implies I should do A" at the end of the episode and then do A directly. Either way it will take action A, but since it's so hard to propagate gradients backwards over a long period, G might be more strongly reinforced in the latter case. (I haven't thought about how this interacts with different RL algorithms, though.)
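One way to see why the timing of the reasoning matters is a toy in which the only difference between the two cases is how many processing steps separate the relevant computation from the reward signal. This is just an illustrative sketch: the saturating recurrence below is a stand-in for whatever makes long-range credit assignment hard, and the parameter names are invented here.

```python
import torch

T = 20                                              # steps between early reasoning and the reward
w_early = torch.tensor(0.5, requires_grad=True)     # "reason early, cache the decision"
w_late = torch.tensor(0.5, requires_grad=True)      # "reason again just before acting"
w_step = 2.5                                        # fixed per-step processing (saturating)

h = torch.tensor(1.0)
h = torch.tanh(w_early * h)        # computation done at the start of the episode
for _ in range(T):
    h = torch.tanh(w_step * h)     # many steps of intervening processing
out = torch.tanh(w_late * h)       # computation done right before the reward
loss = (out - 1.0) ** 2
loss.backward()

print(f"|grad w_early| = {w_early.grad.abs().item():.2e}")
print(f"|grad w_late|  = {w_late.grad.abs().item():.2e}")
# The early parameter's gradient is many orders of magnitude smaller, so the
# reasoning it implements is reinforced far more weakly than the late reasoning.
```

In the toy the early computation still fully determines the behaviour, yet almost none of the training signal reaches it; to whatever extent something like this holds in real training setups (which will depend heavily on the RL algorithm, as noted above), the "reason late" version would have its goal-directed reasoning reinforced much more strongly.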
Secondly, in a comment on an earlier version of this post, Evan responded:
To me, the perfect loss regime doesn’t seem particularly relevant, since training AGIs will require them to interact with environments which are far too complex for them to score perfectly on (like the real world); and because we can just keep setting them harder tasks. But I may well be missing something, so I'd be interested to hear more about this regime and why it might matter.