"The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties."
"For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order."

"These properties mean that neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones."
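To make the quoted claim concrete, here's a minimal sketch of my own (not from the paper): a tiny one-hidden-layer network fits a low-order polynomial target with only a handful of parameters. All the specifics here (the target y = x^2, the 8-unit width, the learning rate) are illustrative assumptions, not anything the authors prescribe.

    # Sketch only (not from the paper): a tiny one-hidden-layer tanh
    # network fitting the low-order polynomial y = x^2 by plain
    # full-batch gradient descent, with backprop written out by hand.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
    y = x ** 2                      # low-order polynomial target

    W1 = rng.normal(0.0, 1.0, (1, 8))   # 8 hidden units -- deliberately tiny
    b1 = np.zeros(8)
    W2 = rng.normal(0.0, 1.0, (8, 1))
    b2 = np.zeros(1)

    lr = 0.05
    for step in range(20000):
        h = np.tanh(x @ W1 + b1)    # hidden layer
        pred = h @ W2 + b2          # network output
        err = pred - y
        # Gradients of the mean squared error w.r.t. each parameter.
        g_pred = 2.0 * err / len(x)
        g_W2 = h.T @ g_pred
        g_b2 = g_pred.sum(axis=0)
        g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
        g_W1 = x.T @ g_h
        g_b1 = g_h.sum(axis=0)
        W1 -= lr * g_W1; b1 -= lr * g_b1
        W2 -= lr * g_W2; b2 -= lr * g_b2

    print("final MSE:", float(np.mean(err ** 2)))  # typically well below 1e-3

The same few lines would do far worse on some generic high-order function of many variables, which is roughly the paper's point: the functions physics actually hands us are cheap to learn.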
Interesting article, and I'm just diving into the paper now, but it looks like this is a big boost to the simulation argument. If the universe is built like a game engine, with stacked self-similar layers like the Mandelbrot set, then the simplicity itself becomes a design driver in a fabricated reality.
https://www.technologyreview.com/s/602344/the-extraordinary-link-between-deep-neural-networks-and-the-nature-of-the-universe/
Why does deep and cheap learning work so well?
http://arxiv.org/abs/1608.08225
It could be that you only get civilizations in universes "governed by a tiny subset of all possible functions," because otherwise either evolution can't "discover" how to create intelligent life, or evolved intelligent life can't figure out science.
That reminds me of a fantasy novel I began and abandoned. In it, there's a civilization that can do astonishing things, and even though they have math beyond ours, they still have no idea how just about any of it works, because the rules are so much more complicated that they have a hard time pulling off balls-rolling-down-ramps kinds of experiments (the ramp would remember balls rolling and, depending on the details of the ramp, make it happen slower or faster; and if you made a new ramp each time, the pattern of your interaction with ramps would develop the same sort of reaction). One of them was kicked out to a place where magic was weaker, allowing her to figure it all out; she ended up stronger than any of them.