Normally, when I compare ML training methods with parenting or education methods, I have an idea of what the correspondence is, but here I am at a bit of a loss. Telling selective lies that can later be found to be wrong, like Santa Claus?
Here are some suggested correspondences:
Not exactly parenting, but performance evaluation has analogs too, and goodharting is common. Many of my counter-measures against goodharting are informed by ML:
Thanks for writing it up! I don't know if I buy the human-caregiver model, as the OP said above, but I do like this way of thinking about it. Especially the zone-of-proximal-development point is interesting, and for some reason I hadn't thought about performance-evaluation analogies before, even though the correspondence is quite clear. Much food for thought.
I keep saying that parenting is a useful source of inspiration and insight for ML training and alignment methods, but so far few people have seemed to believe me. Happy to hear that you are interested. I will write up some correspondences.
I agree that it may be a useful source of insight, since it may suggest learning techniques like this one, but I find it unlikely that this will end up involving giving the AI a "human caregiver".
Maybe it is not the most likely scenario, but a lot of mediocre AIs trained "on the job" in a closed loop with humans who are not just overseers but provide a lot of real-world context doesn't seem so unlikely in a Robin Hanson-style slow takeoff.
Good question, I don't know the answer.
I suspect telling selective lies that can later be found to be wrong is not the right analogy for adding noise, because that would be adding noise to the labels rather than to the gradients. I'd expect adding noise to the labels to be counterproductive: the resulting gradients would be propagated disproportionately to the most functional and predictive parts of the network, destroying them in favor of flexibility.
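To illustrate the distinction in a toy linear-regression setting (my own sketch, not anything from the paper): noise added directly to the gradient perturbs the parameters isotropically, whereas noise added to the labels reaches the parameters only after being filtered through the data matrix, so it concentrates on the directions the features actually use.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))       # toy data
w_true = rng.normal(size=5)
y = X @ w_true                      # noiseless targets
w = np.zeros(5)
lr, sigma = 0.01, 0.1

# (a) Noise on the gradient: an isotropic perturbation of the update.
grad = X.T @ (X @ w - y) / len(y)
w_a = w - lr * (grad + rng.normal(0.0, sigma, size=5))

# (b) Noise on the labels: the perturbation eta enters the gradient as
# -X.T @ eta / n, i.e. filtered through the data, so it is biased towards
# whatever directions the features (the predictive parts) respond to.
eta = rng.normal(0.0, sigma, size=len(y))
grad_b = X.T @ (X @ w - (y + eta)) / len(y)
w_b = w - lr * grad_b
```

The two perturbed updates `w_a` and `w_b` are generally different; in the label-noise case the perturbation is shaped by `X.T`, which is the point of the comment above.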
I think equation 3 in the paper suggests one way to understand it: When practicing, rather than going with the way you have learned to do things, you should make random temporary changes to the process in which you do them.
When practicing, rather than going with the way you have learned to do things, you should make random temporary changes to the process in which you do them.
That sounds a lot like Deliberate Play (or Deliberate Practice?), and kids do a lot of both.
Just a study I saw on /r/MachineLearning: link.
Basically, one way of training neural networks is to add random noise during training. Usually the noise that gets added is independent between training steps, but in the paper they make it negatively correlated between steps, and they argue that this helps the networks generalize because it moves them towards flatter minima.
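A minimal sketch of the difference (my own toy code, not the paper's implementation): one simple way to make consecutive perturbations negatively correlated is to use differences of i.i.d. Gaussian draws, so each step's noise partially cancels the previous step's.

```python
import numpy as np

def loss_grad(w):
    # Toy quadratic loss L(w) = 0.5 * ||w||^2, so the gradient is just w.
    return w

def perturbed_gd(steps=1000, lr=0.1, sigma=0.01, anticorrelated=False, seed=0):
    """Gradient descent with noise injected into the parameter update.

    With anticorrelated=True, each perturbation is the difference of two
    consecutive i.i.d. Gaussian draws, so successive perturbations are
    negatively correlated (one simple construction; the paper's scheme
    may differ in detail).
    """
    rng = np.random.default_rng(seed)
    w = np.ones(10)
    prev_xi = rng.normal(0.0, sigma, size=w.shape)
    for _ in range(steps):
        xi = rng.normal(0.0, sigma, size=w.shape)
        noise = (xi - prev_xi) if anticorrelated else xi
        w = w - lr * loss_grad(w) + noise
        prev_xi = xi
    return w
```

On a quadratic like this there are no distinct minima to compare for flatness; the sketch only shows where the (anti)correlated noise enters the update rule.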
This seems conceptually related to things that have been discussed on LessWrong, e.g. John Wentworth's observation that search tends to lead to flat minima, which may have beneficial properties.
I would have liked to see them test this on harder problems than the ones they used, and/or on a greater variety of real-world problems.