A common intuition is to think of gradient descent (GD) as just a more efficient version of genetic algorithms (GAs). I certainly used to think this way: after all, taking a bunch of random mutations and keeping the ones that work well sounds basically like a slower way of finding the steepest direction in the first place. GAs are also much more intuitive than GD, so it's tempting to reason about GD in terms of them. Unfortunately, there are cases where the two behave completely differently, which is really relevant for thinking about things like gradient hacking. This post provides one such example.
Redundancy and Submodule Protection
Imagine you're given a network that has two identical submodules, plus some kind of combining function that passes the value through if it detects that the outputs from both submodules are the same, but "explodes" if they differ, outputting zero or something totally random. This is a natural idea to come up with if your goal is to ensure the optimizer doesn't mess with these modules, for example if you're trying to protect the mesa-objective encoding for gradient hacking.
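To make the setup concrete, here's a minimal sketch of such an architecture in PyTorch. The module name, the shapes, and the use of `torch.isclose` as the equality check are all illustrative assumptions on my part, not details from any particular proposal:

```python
# Sketch of a network with two identical submodules whose outputs are routed
# through a combining function that only passes values through when they agree.
import torch
import torch.nn as nn

class ProtectedPair(nn.Module):  # hypothetical name
    def __init__(self, d_in=8, d_out=4):
        super().__init__()
        self.sub_a = nn.Linear(d_in, d_out)
        self.sub_b = nn.Linear(d_in, d_out)
        # start the two submodules as exact copies of each other
        self.sub_b.load_state_dict(self.sub_a.state_dict())

    def forward(self, x):
        u, v = self.sub_a(x), self.sub_b(x)
        agree = torch.isclose(u, v).all(dim=-1, keepdim=True)
        # pass the value through if the submodules agree, "explode" to zero otherwise
        return torch.where(agree, u, torch.zeros_like(u))
```

The exact form of the explosion doesn't matter for the argument; zeroing the output is enough.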
It's easy to convince yourself that this is effective against GAs: since mutations are inserted at random, the probability that both modules are mutated in the same way is vanishingly small, so any mutant that changes either of these modules will do very poorly, since its output will be totally useless.
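As a quick illustration with the sketch above, a GA-style mutation that perturbs only one submodule breaks the agreement and zeroes the output (the mutation scale is arbitrary):

```python
model = ProtectedPair()
x = torch.randn(2, 8)
print(model(x))  # submodules agree, so the value passes through

# mutate only one submodule, GA-style
with torch.no_grad():
    model.sub_a.weight.add_(0.01 * torch.randn_like(model.sub_a.weight))
print(model(x))  # all zeros: the combining function "explodes"
```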
GD, on the other hand, will happily change both modules at the same time, since the direction of steepest descent is the one in which both submodules change together. This is true even though each partial derivative is computed with all other parameters held fixed: a partial derivative only sees the slope at the exact point where both modules agree, not the cliff an epsilon away. Another way of thinking about it is that the gradient necessarily points in the direction of steepest ascent, even if that direction isn't axis-aligned. If you're still not convinced, see this comment for a proof that there exists no combining function (under modest constraints) that can protect submodules this way.
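The hard equality check above isn't differentiable at the cliff, so here's a sketch of the same point using a smooth stand-in for the combining function; the scalar "submodules" and the specific gate `exp(-k*(a-b)**2)` are assumptions for illustration, not anything canonical. At a point where the two submodules agree, the gradients with respect to their parameters come out identical, so a gradient step moves both in lockstep and never triggers the penalty:

```python
# Toy example: each "submodule" is a single scalar parameter, and the combining
# function smoothly suppresses the output when the two submodules disagree.
import torch

k = 100.0      # stiffness of the disagreement penalty (arbitrary)
target = 2.0   # value the combined output is trained toward (arbitrary)

a = torch.tensor(1.0, requires_grad=True)  # parameter of submodule A
b = torch.tensor(1.0, requires_grad=True)  # parameter of submodule B (identical copy)

out = 0.5 * (a + b) * torch.exp(-k * (a - b) ** 2)  # smooth combining function
loss = (out - target) ** 2
loss.backward()

print(a.grad, b.grad)  # identical gradients: a GD step changes both submodules
                       # together, staying on the agreement surface to first order
```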
Takeaways
The main takeaway from this is that gradient descent is really unintuitive sometimes. When in doubt, it's probably a good idea to work out the gradient by hand.
Comments

First, I love this example. Second, I think it's wrong.
It's true in idealized mathematical terms - if you're right at the point where the submodules agree, then the gradient will be along the continue-to-agree direction. But that's not what matters numerically - i.e. it's not what matters in actual practice. Numerically, the speed of (non-stochastic) gradient descent is controlled by the local condition number, and the condition number for this two-module example would be enormous - meaning that gradient descent will move through the submodules' parameter space extremely slowly. Unless the surface along which the submodules match is perfectly linear (and the parameters are already exactly on that surface), every step will take it just a little bit off the surface, so it will end up taking extremely small steps (so that it's always within numerical precision of the surface).
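For what it's worth, this conditioning argument is easy to see numerically on the same smooth toy gate as in the sketch above (the learning rates, stiffness `k`, and starting point are arbitrary choices): a step size large enough to make reasonable progress along the agreement surface is unstable in the stiff off-surface direction and falls off the cliff, while a step size small enough to stay near the surface crawls along it:

```python
# Rough numerical illustration of the conditioning argument on the smooth toy gate.
import torch

def loss_fn(a, b, k=100.0, target=2.0):
    out = 0.5 * (a + b) * torch.exp(-k * (a - b) ** 2)
    return (out - target) ** 2

def run_gd(lr, steps=200):
    # start slightly off the agreement surface, as noise in practice would leave it
    a = torch.tensor(1.01, requires_grad=True)
    b = torch.tensor(0.99, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(a, b)
        loss.backward()
        with torch.no_grad():
            a -= lr * a.grad
            b -= lr * b.grad
        a.grad.zero_()
        b.grad.zero_()
    return a.item(), b.item(), loss_fn(a, b).item()

print(run_gd(lr=0.1))    # too big for the stiff off-surface direction: falls off
                         # the cliff, the output collapses to zero, gradients vanish
print(run_gd(lr=0.001))  # stays near the surface, but progress along it is slow:
                         # the loss is still far from zero after 200 steps
```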
There should be a fair bit more than 2 epsilon of leeway in the equality check. Since the submodules themselves are learned by SGD, they won't be exactly equal, and most likely the model will include dropout as well. Thus the signals sent to the combining function almost always differ by more than the limits of numerical precision. This means the combining function will need quite a bit of leeway; otherwise the network's performance is just always zero.