This is long, but it's the shortest length I could cut from the material and have a complete thought.
1. Alien Space Bats have abducted you.
In the spirit of this posting, I shall describe a magical power that some devices have. They have an intention, and certain means available to achieve that intention. They succeed in doing so, despite knowing almost nothing about the world outside. If you push on them, they push back. Their magic is not invincible: if you push hard enough, you may overwhelm them. But within their limits, they will push back against anything that would deflect them from their goal. And yet, they are not even aware that anything is opposing them. Nor do they act passively, like a nail holding something down, but instead they draw upon energy sources to actively apply whatever force is required. They do not know you are there, but they will struggle against you with all of their strength, precisely countering whatever you do. It seems that they have a sliver of that Ultimate Power of shaping reality, despite their almost complete ignorance of that reality. Just a sliver, not a whole beam, for their goals are generally simple and limited ones. But they pursue them relentlessly, and they absolutely will not stop until they are dead.
You look inside one of these devices to see how it works, and imagine yourself doing the same task...
Alien Space Bats have abducted you. You find yourself in a sealed cell, featureless but for two devices on the wall. One seems to be some sort of meter with an unbreakable cover, the needle of which wanders over a scale marked off in units, but without any indication of what, if anything, it is measuring. There is a red blob at one point on the scale. The other device is a knob next to the meter, that you can turn. If you twiddle the knob at random, it seems to have some effect on the needle, but there is no fixed relationship. As you play with it, you realise that you very much want the needle to point to the red dot. Nothing else matters to you. Probably the ASBs' doing. But you do not know what moves the needle, and you do not know what turning the knob actually does. You know nothing of what lies outside the cell. There is only the needle, the red dot, and the knob. To make matters worse, the red dot also jumps along the scale from time to time, in no particular pattern, and nothing you do seems to have any effect on it. You don't know why, only that wherever it moves, you must keep the needle aligned with it.
Solve this problem.
That is what it is like, to be one of these magical devices. They are actually commonplace: you can find them everywhere.
They are the thermostat that keeps your home at a constant temperature, the cruise control that keeps your car at a constant speed, the power supply that provides a constant voltage to your computer's circuit boards. The magical thing is how little they need to know to perform their tasks. They have just the needle, the mark on the scale, the knob, and hardwired into them, a rule for how to turn the knob based only on what they see the needle and the red dot do. They do not need to sense the disturbing forces, or predict the effects of their actions, or learn. The thermostat does not know when the sun comes out. The cruise control does not know the gradient of the road. The power supply does not know why or when the mains voltage or the current demand will change. They model nothing, they predict nothing, they learn nothing. They do not know what they are doing. But they work.
These things are called control systems. A control system is a device for keeping a variable at a specified value, regardless of disturbing forces in its environment that would otherwise change it. It has two inputs, called the perception and the reference, and one output, called the output or the action. The output depends only on the perception and the reference (and possibly their past histories, integrals, or derivatives) and is such as to always tend to bring the perception closer to the reference.
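The definition above can be sketched in a few lines of code. This is a minimal proportional controller acting on a toy "plant" (all names and constants here are illustrative, not from any real device): the controller computes nothing but the gap between perception and reference, yet the perception ends up near the reference despite a disturbance the controller never senses.

```python
def control_step(perception, reference, gain=2.0):
    """One step of a pure proportional controller.

    The output depends only on the perception and the reference;
    nothing about the outside world is modelled or predicted.
    """
    error = reference - perception
    return gain * error

# Drive a toy plant: the action pushes the perception toward the
# reference, while an unseen constant disturbance pushes it away.
perception = 0.0
reference = 10.0
for t in range(200):
    disturbance = 3.0                       # the controller never sees this
    action = control_step(perception, reference)
    perception += 0.1 * (action - disturbance)

# The perception settles near the reference; a pure proportional
# controller leaves a small residual error of disturbance / gain.
```

With these numbers the perception settles at 8.5: close to the reference of 10, and it would stay there against any constant disturbance small enough not to overwhelm the gain.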
Why is this important for LW readers?
2. Two descriptions of the same thing that both make sense but don't fit together.
I shall come to that via an autobiographical detour. In the mid-90's, I came across William Powers' book, Behavior: the Control of Perception, in which he set out an analysis of human behaviour in terms of control theory. (Powers' profession was -- he is retired now -- control engineering.) It made sense to me, and it made nonsense of every other approach to psychology. He gave it the name of Perceptual Control Theory, or PCT, and the title of his book expresses the fundamental viewpoint: all of the behaviour of an organism is the output of control systems, and is performed with the purpose of controlling perceptions at desired reference values. Behaviour is the control of perception.
This is 180 degrees around from the behavioural stimulus-response view, in which you apply a stimulus (a perception) to the organism, and that causes it to emit a response (a behaviour). I shall come back to why this is wrong below. But there is no doubt that it is wrong. Completely, totally wrong. To this audience I can say, as wrong as theism. That wrong. Cognitive psychology just adds layers of processing between stimulus and response, and fares little better.
I made a simulation of a walking robot whose control systems were designed according to the principles of PCT, and it works. It stands up, walks over uneven terrain, and navigates to food particles. (My earliest simulation is still on the web in the form of this Java applet.) It resists a simulated wind, despite having no way to perceive it. It cannot see, sensing the direction of food only by the differential scent signals from its antennae. It walks on uneven terrain, despite having no perception of the ground other than the positions of its feet relative to its body.
And then, a year or two ago, I came upon Overcoming Bias, and before that, Eliezer's article on Bayes' theorem. (Anyone who has not read that article should do so: besides being essential background to OB and LW, it's a good read, and when you have studied it, you will intuitively know why a positive result on a screening test for a rare condition may not be telling you very much.) Bayes' theorem itself is a perfectly sound piece of mathematics, and has practical applications in those cases where you actually have the necessary numbers, such as in that example of screening tests.
But it was being put forward as something more than that, as a fundamental principle of reasoning, even when you don't have the numbers. Bayes' Theorem as the foundation of rationality, entangling one's brain with the real world, allowing the probability mass of one's beliefs to be pushed by the evidence, acting to funnel the world through a desired tunnel in configuration space. And it was presented as even more than a technique to be learned and applied well or badly, but as the essence of all successful action. Rationality not only wins, it wins by Bayescraft. Bayescraft is the single essence of any method of pushing probability mass into sharp peaks. This all made sense too.
But the two world-views did not seem to fit together. Consider the humble room thermostat, which keeps the temperature within a narrow range by turning the heating on and off (or in warmer climes, the air conditioning), and consider everything that it does not do while doing the single thing that it does:
- The thermostat knows only one thing about its environment: the temperature.
- It has no model of its surroundings.
- It has no model of itself.
- It makes no predictions.
- It performs no Bayesian calculations.
- It has no priors.
- It has no utility function.
- It computes nothing but the difference between perception and reference, and its rule for what to do when they differ could hardly be simpler. Low temperature: turn on. High temperature: turn off.
- It does not think. It does nothing any more suggestive of thought than a single transistor is suggestive of a Cray.
And yet despite that, it has a sliver of the Ultimate Power, the ability to funnel the world through its desired tunnel in configuration space. In short, control systems win while being entirely arational. How is this possible?
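The thermostat's entire rule fits in a few lines. Here is a toy simulation of it (the temperatures, rates, and hysteresis band are illustrative numbers, not from any real thermostat): the heat loss to the cold outside is the disturbance, and the thermostat holds the temperature in a narrow band without ever sensing it.

```python
def thermostat_step(temp, reference, heating_on, band=0.5):
    """The whole rule: low temperature, turn on; high, turn off.
    Inside the band, keep the current state (hysteresis)."""
    if temp < reference - band:
        return True
    if temp > reference + band:
        return False
    return heating_on

temp = 15.0        # room starts cold
reference = 20.0
heating = False
history = []
for minute in range(600):
    heat_loss = 0.05 * (temp - 5.0)   # unseen: it is 5 degrees outside
    heater = 0.8 if heating else 0.0
    temp += heater - heat_loss
    heating = thermostat_step(temp, reference, heating)
    history.append(temp)

# After an initial warm-up, the temperature cycles in a narrow
# band around the reference, whatever the (unsensed) outside
# temperature happens to be.
```

No model, no prediction, no priors; just the needle, the red dot, and the knob.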
If you look up subjects such as "optimal control", "adaptive control", or "modern control theory", you will certainly find a lot of work on using Bayesian methods to design control systems. However, the fact remains that the majority of all installed control systems are nothing but manually tuned PID controllers. And I have never seen, although I have looked for it, any analysis of general control systems in Bayesian terms. (Except for one author, but despite having a mathematical background, I couldn't make head nor tail of what he was saying. I don't think it's me, because despite his being an eminent person in the field of "intelligent control", almost no-one cites his work.) So much for modern control theory. You can design things that way, but you usually don't have to, and it takes a lot more mathematics and computing power. I only mention it because anyone googling "Bayes" and "control theory" will find all that and may mistake it for the whole subject.
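The manually tuned PID controller mentioned above is only slightly more elaborate than the thermostat: its output is a weighted sum of the error, the error's accumulated integral, and its rate of change. A minimal sketch (gains and plant are illustrative; real controllers add refinements like anti-windup):

```python
class PID:
    """Textbook PID controller: output = kp*error + ki*integral
    of error + kd*derivative of error. Still no model of the world."""
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, perception, reference):
        error = reference - perception
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# The integral term removes the steady-state error that a pure
# proportional controller would leave against a constant disturbance.
pid = PID(kp=2.0, ki=1.0, kd=0.1)
perception = 0.0
for t in range(2000):
    action = pid.step(perception, 10.0)
    perception += 0.1 * (action - 3.0) * pid.dt   # constant unseen disturbance
```

With the integral term, the perception converges on the reference of 10 exactly, not merely near it; "tuning" means choosing kp, ki, and kd so that the convergence is fast and does not oscillate out of control.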
3. Why it matters.
If this was only about cruise controls and room thermostats, it would just be a minor conundrum. But it is also about people, and all living organisms. The Alien Space Bat Prison Cell describes us just as much as it describes a thermostat. We have a large array of meter needles, red dots, and knobs on the walls of our cell, but it remains the case that we are held inside an unbreakable prison exactly the same shape as ourselves. We are brains in vats, the vat of our own body. No matter how we imagine we are reaching out into the world to perceive it directly, our perceptions are all just neural signals. We have reasons to think there is a world out there that causes these perceptions (and I am not seeking to cast doubt on that), but there is no direct access. All our perceptions enter us as neural signals. Our actions, too, are more neural signals, directed outwards -- we think -- to move our muscles. We can never dig our way out of the cell. All that does is make a bigger cell, perhaps with more meters and knobs.
We do pretty well at controlling some of those needles, without having received the grace of Bayes. When you steer your car, how do you keep it directed along the intended path? By seeing through the windscreen how it is positioned, and doing whatever is necessary with the steering wheel in order to see what you want to see. You cannot do it if the windows are blacked out (no perception), if the steering linkage is broken (no action), or if you do not care where the car goes (no reference). But you can do it even if you do not know about the cross-wind, or the misadjusted brake dragging on one of the wheels, or the changing balance of the car according to where passengers are sitting. It would not help if you did. All you need is to see the actual state of affairs, and know what you want to see, and know how to use the steering wheel to get the view closer to the view you want. You don't need to know much about that last. Most people pick it up at once in their first driving lesson, and practice merely refines their control.
Consider stimulus/response again. You can't sense the crosswind from inside a car, yet the angle of the steering wheel will always be just enough to counteract the cross-wind. The correlation between the two will be very high. A simple, measurable analogue of the task is easily carried out on a computer. There is a mark on the screen that moves left and right, which the subject must keep close to a static mark. The position of the moving mark is simply the sum of the mouse position and a randomly drifting disturbance calculated by the program. So long as the disturbance is not too large and does not vary too rapidly, it is easy to keep the two marks fairly well aligned. The correlation between the mouse position (the subject's action) and the disturbance (which the subject cannot see) is typically around -0.99. (I just tried it and scored -0.987.) On the other hand, the correlation between mouse position and mark position (the subject's perception) will be close to zero.
So in a control task, the "stimulus" -- the perception -- is uncorrelated with the "response" -- the behaviour. To put that in different terminology, the mutual information between them is close to zero. But the behaviour is highly correlated with something that the subject cannot perceive.
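The tracking experiment is easy to reproduce numerically. In this sketch a simple proportional controller stands in for the human subject (a simplification: the constants, the drift rate, and the controller itself are my illustrative choices, not the original experiment's): the "mark" the subject sees is the sum of the mouse position and a drifting disturbance, and the subject just moves the mouse to keep the mark at zero.

```python
import random
import statistics

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Mark position = mouse + slowly drifting disturbance the subject can't see.
mouse, disturbance = 0.0, 0.0
mouse_log, dist_log, mark_log = [], [], []
for t in range(20000):
    disturbance += random.gauss(0.0, 0.02)   # slow random drift
    mark = mouse + disturbance               # what the subject sees
    mouse += 0.2 * (0.0 - mark)              # act to keep the mark at 0
    mouse_log.append(mouse)
    dist_log.append(disturbance)
    mark_log.append(mark)

print(corr(mouse_log, dist_log))   # close to -1: action tracks the unseen cause
print(corr(mouse_log, mark_log))   # close to 0: action uncorrelated with perception
```

The pattern in the text falls out directly: the action is almost perfectly anticorrelated with the disturbance the "subject" cannot see, and nearly uncorrelated with the perception it can.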
When driving a car, suppose you decide to change lanes? (Or in the tracking task, suppose you decide to keep the moving mark one inch to the left of the static mark?) Suddenly you do something different with the steering wheel. Nothing about your perception changed, yet your actions changed, because a reference signal inside your head changed.
If you do not know that you are dealing with a control system, it will seem mysterious. You will apply stimuli and measure responses, and end up with statistical mush. Since everyone else does the same, you can excuse the situation by saying that people are terribly complicated and you can't expect more. 0.6 is considered a high correlation in a psychology experiment, and 0.2 is considered publishable (link). Real answers go ping!! when you hit them, instead of slopping around like lumpy porridge. What is needed is to discover that a control system is present, what it is controlling, and how.
There are ways of doing that, but this is enough for one posting.
4. Conclusion.
Conclusion of this posting, not my entire thoughts on the subject, not by a long way.
My questions to you are these.
Control systems win while being arational. Either explain this in terms of Bayescraft, or explain why there is no such explanation.
If, as is speculated, a living organism's brain is a collection of control systems, is Bayescraft no more related to its physical working than arithmetic is? Our brains can learn to do arithmetic, but arithmetic is not how our brains work. Likewise, we can learn Bayescraft, or some practical approximation to it, but do Bayesian processes have anything to do with the mechanism of brains?
Does Bayescraft necessarily have anything to do with the task of building a machine that ... can do something not to be discussed here yet?
5. Things I have not yet spoken of.
Whether the control system's designer, who put in the rule that tells it what output to emit given the perception and the reference, supplied the rationality that is the real source of its miraculous power.
How to discover the presence of a control system and discern its reference, even if its physical embodiment remains obscure.
How to control a perception even when you don't know how.
Hierarchical arrangements of control systems as a method of building more complex control systems.
Simple control systems win at their limited tasks while being arational. How much more is possible for arational systems built of control systems?
6. WARNING: Autonomous device
After those few thousand words of seriousness, a small dessert.
Exhibit A: A supposedly futuristic warning sign.
Exhibit B: A contemporary warning sign in an undergraduate control engineering lab: "WARNING: These devices may start moving without warning, even if they appear powered off, and can exert sudden and considerable forces. Exercise caution in their vicinity."
They say the same thing.
Richard's post is similar to something I was thinking about a few months ago. I tried to attack the problem of AI by looking at very simple systems that can be said to accomplish "goals" without all the fancy stuff that people typically think they have to put into AI, and asking how that works.
For example, a mass hanging by a spring: it moves the mass back to its equilibrium position without doing the things listed in 2). But here, Richard is asking an easier question in 4), since he's asking about systems that are specifically designed to track some reference, rather than systems that happen to do it as a consequence of their other properties.
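A toy simulation makes the spring example concrete (constants and units are illustrative): a damped mass on a spring returns to equilibrium with no controller at all, purely as a consequence of its physics.

```python
# Damped mass on a spring, integrated with semi-implicit Euler.
x, v = 1.0, 0.0                  # initial displacement and velocity
k, m, c, dt = 4.0, 1.0, 1.0, 0.01  # spring, mass, damping, timestep
for _ in range(5000):
    a = (-k * x - c * v) / m     # Hooke's law plus viscous damping
    v += a * dt
    x += v * dt
# Both displacement and velocity decay toward zero, the equilibrium,
# without any rule comparing a perception to a reference.
```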
In that case, the answer (about how an arational system accomplishes the goals of rationality) is pretty simple: the system has been physically set up in a way that exploits the laws of nature to create mutual information between the system and its environment. If you view Bayescraft as a way to increase the mutual information between yourself (hopefully meaning the brain part!) and your environment, then the system is in fact doing that, so it is not arational. Its design implements Bayesian inference.
In the case of the thermostat, the temperature sensor, via heat transfer, becomes entangled with its environment, a natural process that happens to have an isomorphism to Bayes' Theorem. Then something else senses the reading, causing another set of effects that determines what temperature of air to blow out.
The next question is why this mutual information is such that it keeps the temperature within a specific range, rather than making it spiral out of control. The answer to that part, as others have mentioned, is that the person who set up the system chose rules that happened to work. That required another kind of entanglement with the environment, which does not need to be repeated during the operation of the thermostat.
Well, as long as the assumptions it's based on don't change too much...