
I don't think I can agree with the claim that NNs have no memory of previous training runs. It depends a bit on how memory is defined, but the weight distribution certainly stores some information about previous episodes, which could be viewed as memory.

I don't think memory in animals is much different; the neural network is just much more complex. Memories arise from updates to the network structure, just as they do in NNs during RL training.

It may be that I overuse the word complexity in the text, but I think it is essential to convey the message. And honestly, I find the terms "intelligence" and "understanding" more obscuring than the term complexity. Let me try to explain my point in more detail:

  • I understand intelligence as a relative measure of our ignorance of the dynamics of a decision-making system. Let's use four types of chess players to illustrate the idea:
    • The first player is a basic "hard-coded" chess engine, for example one based on MTD-bi search (like [sunfish](https://github.com/thomasahle/sunfish)). Very few people would consider this player/algorithm intelligent, since we understand how it makes decisions. Even if I don't know the details of a particular decision, I know that I can follow the logical process of the algorithm to find it, even if that logical process is computationally very long.
    • The second player is a more sophisticated chess engine based on a large deep neural network trained via self-play (for example, AlphaZero). In this case, some people might consider this player/algorithm somewhat intelligent. The argument for attributing intelligence to the algorithm is nothing other than the fact that the rules it uses to make decisions are obscured by the complexity of the neural network. We can understand how the MTD-bi search algorithm works, but it is impossible for a human brain to understand the weight distribution of a neural network with billions of parameters. The complexity of the algorithm increased, and with it our ignorance of its decision-making methods, although in this case we are still able to numerically compute its decisions.
    • The third player is a chess grandmaster (for example, Magnus Carlsen). In this case, most people will consider the player intelligent because we have very limited information about how Magnus makes decisions. We know some things about how the brain works, but we don't know the details, and we certainly can't compute his decisions since we don't even have access to Magnus Carlsen's brain state.
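To make the first case concrete, here is a minimal sketch of the kind of "followable" search a hard-coded engine performs. It is a hypothetical toy example (a subtraction game rather than chess, and plain negamax rather than sunfish's MTD-bi), but the point carries over: every decision is a traceable chain of logical steps.

```python
# Fixed-depth-free negamax on a toy subtraction game: players
# alternately remove 1-3 stones; whoever takes the last stone wins.
# Toy stand-in for a chess engine's search: each decision is a
# fully traceable recursion, nothing is hidden.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def negamax(stones):
    # Terminal: no stones left means the previous player took the
    # last stone and won, so the side to move has lost.
    if stones == 0:
        return -1, None
    best_score, best_move = -2, None
    for m in legal_moves(stones):
        score = -negamax(stones - m)[0]
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# The engine's "decision" with 5 stones on the table is fully
# explainable by unrolling the recursion above.
score, move = negamax(5)
print(score, move)  # → 1 1 (winning position; take 1, leaving 4)
```

Nobody would call this player intelligent, precisely because its logic is transparent; scaling the search up (as chess engines do) changes the cost of following it, not its nature.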

However, for a Laplace's demon with complete information about the world, none of these players would be considered intelligent, since their decisions are just consequences of the natural evolution of the dynamics of the universe (the fact that some of these dynamics could be stochastic/random is irrelevant here). For a Laplace's demon, nothing is intelligent, since it has zero relative ignorance of the dynamics of any decision-making system.

  • The fourth player is a random player that just takes a random valid action each turn. However, there are two versions of this player:
    • One is a pseudorandom number generator that selects the actions.
    • The other is a human taking random actions.

Is the player intelligent in either case? Why?
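The first version of this player is worth making explicit: a PRNG-driven player is fully deterministic once the seed is known, so an observer who knows the seed has zero ignorance of its dynamics. A minimal sketch (the action list is a hypothetical stand-in for the legal moves of a position):

```python
# A "random player" driven by a seeded PRNG: replaying the seed
# reproduces every move exactly, so its apparent randomness is
# entirely a function of the observer's ignorance of the seed.
import random

def random_game(seed, n_moves=5):
    rng = random.Random(seed)
    legal = ["e4", "d4", "Nf3", "c4", "g3"]  # hypothetical action set
    return [rng.choice(legal) for _ in range(n_moves)]

# Same seed, same "decisions".
assert random_game(42) == random_game(42)
```

To someone who knows the seed, this player is as transparent as the hard-coded engine; to someone who doesn't, its moves are unpredictable. The attribution of intelligence tracks the observer's knowledge, not the mechanism itself.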

To summarize:

  • Complexity: the amount of information needed to describe a system.
  • Intelligence: a measure of the relative ignorance of the dynamics of a decision-making system.

It seems obvious to me that complexity is necessary for intelligence but not sufficient, since we can have complex systems that are not effective at making decisions. For example, a star might be complex, but it is not intelligent. This is where I introduce the term "targeted complexity", which might not be the best choice of words, although I can't find a better one. Targeted complexity means the use of flexible/adaptive systems to create tools that can solve difficult tasks (or, to put it another way: that can make intelligent decisions).
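The definition of complexity above ("the amount of information needed to describe a system") can be given a rough operational proxy: the compressed size of a description. This is only an illustration under that assumption, not a rigorous measure, but it shows how a regular system needs far less information to describe than an irregular one:

```python
# Compressed size as a crude proxy for description length:
# a highly regular string compresses to almost nothing, while
# an irregular (pseudorandom) one barely compresses at all.
import random
import zlib

random.seed(0)
simple = b"ab" * 500                                          # regular
complex_ = bytes(random.randrange(256) for _ in range(1000))  # irregular

print(len(zlib.compress(simple)), len(zlib.compress(complex_)))
```

By this proxy, both a star's plasma dynamics and a brain would score as complex; what the "targeted" qualifier adds is that the complexity is organized toward solving tasks, which compression alone cannot capture.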