Less ambitious - The could be Goodharted, but it's expensive, and this shifts the inductive biases to favour aligned cognition
Hey 👋, is a word such as "tool" missing between
"Less ambitious - The" and "[...] could be Goodharted,"?
Thank you 🙏 for clarifying what "The" refers to in this sentence.
instead deep learning tends to generalise incredibly well to examples it hasn’t seen already. How and why it does so is, however, still poorly-understood.
In my opinion, generalisation is a very interesting point!
Are there any new insights into deep learning generalisation, similar to the ideas of:
1) implicit regularisation through optimisation methods like stochastic gradient descent,
2) the double descent risk curve where more parameters can reduce error again,
or
3) margin-based measures to predict generalisation gaps?
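To make point 3) concrete, here is a small sketch of my own (not from the article) of what a margin-based measure looks like on toy data: for a linear classifier w, the normalised margin of an example (x, y) with y ∈ {−1, +1} is y·(w·x)/‖w‖, and small typical margins are taken as a proxy for a large generalisation gap. The data, the least-squares fit, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(+2.0, 0.5, size=(50, 2)),
               rng.normal(-2.0, 0.5, size=(50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])

# Least-squares fit as a stand-in for a trained linear classifier.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Normalised margins: invariant to rescaling w, so they can be
# compared across classifiers of different weight norms.
margins = y * (X @ w) / np.linalg.norm(w)

print(f"min margin:    {margins.min():.3f}")
print(f"median margin: {np.median(margins):.3f}")
```

The minimum and the distribution of these margins are the kind of quantities margin-based bounds plug into when predicting the generalisation gap.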
Or, more generally:
How can we ensure regular updates of this or similar articles?
Is "interpolation", "interpretation", or another word meant here?
Thank you 🙏 for elaborating.