Would readers be interested in a sequence of posts offering an intuitive explanation of my in-progress thesis on applying information theory to reinforcement learning? Please also feel free to comment on the quality of the presentation.
In this first post I offer a high-level description of the Perception-Action Cycle as an intuitive explanation of reinforcement learning.
Imagine that the world is divided into two parts: one we shall call the agent, and the rest its environment. Imagine that the two interact in turns. One moment the agent receives information from its environment in the form of an observation. The next moment the agent sends information out to its environment in the form of an action. Then it makes another observation, then takes another action, and so on.
To break down the cycle, we start with the agent having a belief about the state of its environment. This is actually the technical term: the belief is the probability that the agent assigns, implicitly, to each possible state of the environment. The cycle then proceeds in four phases.
In the first phase, the agent makes an observation. Since the observation conveys information about the environment, the agent needs to update its belief, ideally using Bayes' theorem. The agent now has more information about the environment.
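To make this concrete, here is a minimal sketch of the first phase in Python. The two-state "door" environment and the observation model P(obs | state) are invented for illustration; they are not part of the thesis.

```python
# A minimal sketch of phase one: updating a discrete belief with Bayes'
# theorem. The two states and the observation probabilities are made up.

def bayes_update(belief, likelihood, obs):
    """Return the posterior P(state | obs) from a prior and P(obs | state)."""
    posterior = {s: p * likelihood[s][obs] for s, p in belief.items()}
    total = sum(posterior.values())  # P(obs), the normalizing constant
    return {s: p / total for s, p in posterior.items()}

belief = {"door_open": 0.5, "door_closed": 0.5}           # prior belief
likelihood = {                                            # P(obs | state)
    "door_open":   {"see_light": 0.9, "see_dark": 0.1},
    "door_closed": {"see_light": 0.2, "see_dark": 0.8},
}
print(bayes_update(belief, likelihood, "see_light"))
# {'door_open': 0.818..., 'door_closed': 0.181...}
```

The normalizing constant is just the prior probability of the observation; the posterior concentrates on whichever state made the observation more likely.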
In the second phase, the agent uses this new information to update its plan. Note the crucial underlying principle that information about the environment is useful for making better plans. This ties Bayesian updating directly to decision making.
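Continuing the toy example above, here is a sketch of the second phase: the agent re-plans by choosing the action with the highest expected reward under its current belief. The reward table is again invented for illustration.

```python
# A sketch of phase two: pick the action maximizing expected reward
# under the current belief. The reward table R(state, action) is made up.

def best_action(belief, reward):
    """Pick the action maximizing sum_s P(s) * R(s, a)."""
    actions = next(iter(reward.values())).keys()
    def expected(a):
        return sum(p * reward[s][a] for s, p in belief.items())
    return max(actions, key=expected)

reward = {                       # R(state, action)
    "door_open":   {"walk_through": +1.0, "reach_for_handle": -0.1},
    "door_closed": {"walk_through": -1.0, "reach_for_handle": +0.5},
}
posterior = {"door_open": 0.818, "door_closed": 0.182}   # from phase one
print(best_action(posterior, reward))   # 'walk_through'
```

Under the uniform prior, the same agent would have reached for the handle; the information gained from the observation is what changed the plan, which is exactly the principle at work here.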
In the third phase, the agent executes a step of its plan: a single action. This changes the environment. Some of the things that the agent knew about the previous state of the environment may no longer be true, and the agent is back to having less information.
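Here is a sketch of the third phase, under the same toy assumptions: the agent pushes its belief through a transition model P(next state | state, action), and the entropy of the belief grows, which is one way of making "having less information" precise. The slightly noisy transition table is invented.

```python
# A sketch of phase three: acting changes the environment, so the belief
# is propagated through a made-up transition model, and entropy increases.
import math

def predict_state(belief, transition, action):
    """Return P(s') = sum_s P(s) * P(s' | s, action)."""
    new_belief = {}
    for s, p in belief.items():
        for s2, q in transition[s][action].items():
            new_belief[s2] = new_belief.get(s2, 0.0) + p * q
    return new_belief

def entropy(belief):
    """Shannon entropy of the belief, in bits."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

transition = {  # P(next_state | state, action); the door sometimes swings
    "door_open":   {"walk_through": {"door_open": 0.8, "door_closed": 0.2}},
    "door_closed": {"walk_through": {"door_open": 0.1, "door_closed": 0.9}},
}
posterior = {"door_open": 0.818, "door_closed": 0.182}
predicted = predict_state(posterior, transition, "walk_through")
print(entropy(posterior), entropy(predicted))  # uncertainty grows: ~0.68 -> ~0.91
```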
In the fourth phase, the agent makes a prediction about future observations. The importance of making a prediction before a scientific experiment is well understood by philosophers of science. But the importance of constantly making predictions about all of our sensory inputs, as a functional part of our cognition, is only now dawning on neuroscientists and machine learning researchers.
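And a sketch of the fourth phase, still in the toy door world: mixing the predicted belief from the previous step through the observation model gives a predictive distribution over the next observation, which the agent can later compare against what it actually sees.

```python
# A sketch of phase four: predict the next observation by mixing the
# observation model over the predicted state. Tables reuse the toy example.

def predict_observation(belief, likelihood):
    """Return P(obs) = sum_s P(s) * P(obs | s)."""
    prediction = {}
    for s, p in belief.items():
        for obs, q in likelihood[s].items():
            prediction[obs] = prediction.get(obs, 0.0) + p * q
    return prediction

likelihood = {                   # P(obs | state), as in phase one
    "door_open":   {"see_light": 0.9, "see_dark": 0.1},
    "door_closed": {"see_light": 0.2, "see_dark": 0.8},
}
predicted = {"door_open": 0.673, "door_closed": 0.327}   # from phase three
print(predict_observation(predicted, likelihood))
# {'see_light': 0.671..., 'see_dark': 0.328...} -- ready to compare against
# the actual observation at the start of the next cycle
```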
The Perception-Action Cycle is an intuitive explanation of the technical setting of reinforcement learning. Reinforcement learning is a powerful model of learning, in which decision making, learning and evaluation occur simultaneously and somewhat implicitly while a learner interacts with its environment. It can describe a wide variety of real-life scenarios, involving both biological and artificial agents. It is so general, in fact, that our work is still ahead of us if we want it to have any explanatory power, and solving it in its most general form is a computationally hard problem.
But the Perception-Action Cycle still offers symmetries to explore, analogies to physics to draw, and practical learning algorithms to develop, all of which improve its Occam's razor prior score as a good model of intelligence. And to use it to actually explain things, we can narrow it down further: not everything that it makes possible is equally probable. By applying information theory, a collection of statistical concepts, theorems and methods implied by strong Bayesianism, we can get a better picture of what intelligence is and isn't.
To pick a trivial case: A blind person with acute hearing taps a cane on the floor in order to ascertain, from echoes, the relative positions of nearby objects.
The issue is that "action" and "observation" can be entangled; your description of observation makes it into a passive process, ignoring the role of activity in observation. "Step one of my plan: Figure out where the table is so I don't run into it." Which is to say, your pattern is overly rigid.
You might argue that the tapping of the cane is itself an observation, but then you'd also have to treat walking into a room to see what's in it as an observation. The former removes no information, while the latter reduces your certainty about the positions of objects in the room you've just left; so either actions can generate information, or observations can reduce it. You could preserve the claim that actions never generate information by instead treating hearing the echoes as a separate observation, but that still leaves you with a case where an action did not, in fact, eliminate any information.
I realize now that an example would be helpful, and yours is a good one.
Any process can be described on different levels. The trick is to find a level of description that is useful. We make an explicit effort to model actions and observations so as to separate the two directions of information flow between the agent and the environment: actions are purely "active" (no information is received by the agent), while observations are purely "passive" (no information is sent by the agent). We do this because these two aspects of the process have...