Neural networks biased towards geometrically simple functions?
Neural networks (NNs) do not output all functions with equal probability, but seem to be biased towards functions of certain types; heuristically, towards 'simple' functions. In VPCL18, MSVP+19, and MVPSL20, evidence is given that functions output by NNs tend to have low information-theoretic complexity; nice summaries are given on...
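For readers who want a feel for the claim, here is a minimal sketch of the *kind* of experiment those papers describe (not their exact setup): sample many random-weight networks on all n-bit Boolean inputs, read off each network's truth table, and use compressed length as a crude proxy for information-theoretic complexity. The function `random_net`, the layer width, and the zlib-based complexity measure are all illustrative assumptions, not the papers' actual choices.

```python
# A rough sketch of a simplicity-bias experiment, assuming a one-hidden-layer
# ReLU net with Gaussian init and zlib-compressed truth-table length as a
# stand-in for a proper complexity measure.
import zlib
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
n = 7                                    # number of input bits
X = np.array([[(i >> b) & 1 for b in range(n)] for i in range(2 ** n)],
             dtype=float)                # all 2^n Boolean inputs

def random_net(x, width=40):
    """One random-weight ReLU network; returns 0/1 labels on every input."""
    w1 = rng.normal(0, 1 / np.sqrt(n), size=(n, width))
    w2 = rng.normal(0, 1 / np.sqrt(width), size=(width, 1))
    h = np.maximum(x @ w1, 0.0)          # ReLU hidden layer
    return (h @ w2 > 0).astype(int).ravel()

def complexity(bits):
    """Crude complexity proxy: zlib-compressed length of the truth table."""
    return len(zlib.compress(np.packbits(bits).tobytes()))

counts = Counter()
for _ in range(20000):                   # sample many random networks
    f = tuple(random_net(X))             # the induced Boolean function
    counts[f] += 1

# If the simplicity-bias picture is right, the functions sampled most often
# (i.e. with the highest prior probability) should compress well.
for f, c in counts.most_common(5):
    print(f"sampled {c:5d} times, complexity ~ {complexity(np.array(f))}")
```

In sketches like this, constant and near-constant functions typically dominate the samples and have the shortest compressed truth tables, which is the qualitative pattern the cited papers report with more careful complexity measures.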
I love the idea of this! But it worries me a bit that when I look through the ones under "mathematics", the ordering seems pretty erratic. I'm a professional mathematician and managing editor for a good mathematics journal, so this should be the field I know best, and my doubts here make me question the rest.
It's awkward for me to criticise the ranking of specific papers publicly, but to give one example, the paper "Progress in the mirror symmetry program?: a criterion for the rationality of cubic fourfolds" seems vastly under-rated on the "big if true" axis relative to many other works (I think the p(generalises) for that paper is fair).
On the other hand, mathematics has a reputation for being hard for outsiders to evaluate; I'm curious what people think of the rankings in other fields?