It could be that I overuse the word complexity in the text, but I think it is essential to convey the message. And honestly, I find the terms "intelligence" and "understanding" more obscuring than the term "complexity". Let me try to explain my point in more detail:
However, for a Laplace's Demon with complete information about the world, none of these players would be considered intelligent, since their decisions are just consequences of the natural evolution of the dynamics of the universe (the fact that some of these dynamics could be stochastic/random is irrelevant here). For a Laplace's Demon nothing will be intelligent, since for him there is zero relative ignorance of the dynamics of any decision-making system.
Is the player intelligent in either of the two cases? Why?
To summarize:
It seems obvious to me that complexity is necessary for intelligence but not sufficient, since we can have complex systems that are not effective at making decisions. For example, a star might be complex but is not intelligent. This is where I introduce the term "targeted complexity", which might not be the best choice of words, although I can't find a better one. Targeted complexity means the use of flexible/adaptive systems to create tools that can solve difficult tasks (or, to put it another way: that can make intelligent decisions).
I don't think I can agree with the claim that NNs don't have memory of previous training runs. It depends a bit on the definition of memory, but in the weight distribution there is certainly some information stored about previous episodes, which could be viewed as memory.
I don't think memory in animals is much different, just that the neural network is much more complex. But memories do arise from updates to the network structure, just as happens in NNs during RL training (see the sketch below).
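As a loose illustration of what I mean by "memory in the weights", here is a minimal numpy sketch. It is not any particular RL algorithm; the update rule, the reward, and all the names (`weights`, `state`, `action_preference`) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny linear "policy": its action preference is weights . state.
weights = np.zeros(4)          # before any training: no memory of anything
state = rng.normal(size=4)     # hypothetical observation from one episode

def action_preference(w, s):
    return w @ s

print("preference before episode:", action_preference(weights, state))  # 0.0

# One crude RL-style update (purely illustrative): the agent tried an action,
# got a positive reward, and nudges its weights so that the same state will
# produce a stronger preference next time.
reward = 1.0
learning_rate = 0.1
weights += learning_rate * reward * state   # the episode gets written into the weights

print("preference after episode: ", action_preference(weights, state))  # > 0

# The raw experience (state, reward) can now be discarded; the weight change
# that remains is the "memory" of that episode referred to above.
```

The point of the sketch is only that after the episode the network behaves differently because its weights changed, even though the episode itself is gone, which is what I mean when I say the weights store information about previous episodes.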