
An Introduction to Control Theory

Post author: Vaniver 19 January 2015 08:50PM

Behavior: The Control of Perception by William Powers applies control theory to psychology to develop a model of human intelligence that seems relevant to two of LW's primary interests: effective living for humans and value-preserving designs for artificial intelligence. It's been discussed on LW previously here, here, and here, as well as mentioned in Yvain's roundup of 5 years (and a week) of LW. I've found previous discussions unpersuasive for two reasons: first, they typically only have a short introduction to control theory and the mechanics of control systems, making it not quite obvious what specific modeling techniques they have in mind, and second, they often fail to communicate the differences between this model and competing models of intelligence. Even if you're not interested in its application to psychology, control theory is a widely applicable mathematical toolkit whose basics are simple and well worth knowing.

Because of the length of the material, I'll split it into three posts. In this post, I'll give an introduction to control theory that's hopefully broadly accessible. The next post will explain the model Powers introduces in his book. In the last post, I'll provide commentary on the model and what I see as its implications, for both LW and AI.


Control theory is a central tool of modern engineering. Briefly, most interesting things can be modeled as dynamical systems, having both states and rules for how those states change with time. Consider the 3D position and velocity of a ball in a bowl (with friction): six numbers tell you where the ball is, its speed, and its direction of movement, and a formula tells you how to predict what those six numbers will be in the next instant given where they are now. Such systems can be characterized by their attractors, states that are stable and are the endpoint for nearby states. The ball sitting motionless in the bottom of the bowl is an attractor: if it's already there, it will stay there, and if it's nearby (released from a centimeter away, for example), it will eventually end up there. The point of control theory is that adding a controller to a dynamical system allows you to edit the total system dynamics so that the points you want to be stable attractors are stable attractors.
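
The ball-in-a-bowl dynamics can be sketched in a few lines of simulation. This is a deliberately simplified one-dimensional version (the bowl becomes a damped spring, and all the constants are invented for illustration):

```python
def ball_in_bowl(x=0.01, v=0.0, k=9.8, c=0.5, dt=0.001, steps=100_000):
    """Euler-integrate a damped oscillator, x'' = -k*x - c*x',
    a 1D stand-in for the ball in a bowl with friction."""
    for _ in range(steps):
        a = -k * x - c * v   # spring-like restoring force plus friction
        x += v * dt
        v += a * dt
    return x, v

x, v = ball_in_bowl()   # released a centimeter from the bottom
# (x, v) has spiraled in to the attractor at (0, 0)
```

Start the ball anywhere nearby and the state ends up at the same place, which is what makes (0, 0) an attractor rather than merely an equilibrium.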

Let's flesh out that sketch with an example. Suppose you want to keep a house within a specific temperature range. You have a sensor of the current temperature, a heater, and a cooler. A thermostat takes the sensor's output, compares it to the desired temperature range, turns the heater on if the sensed temperature is below the minimum of that range and off once it rises back above that minimum, and does the reverse with the cooler (turning it on if the sensed temperature is above the desired maximum, and off once it falls back below it).
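
A minimal sketch of that on/off logic in Python (the desired range, heat flows, and leak rate are all invented numbers; a real thermostat would also add hysteresis so the equipment doesn't chatter on and off):

```python
def thermostat(temp, low=19.0, high=21.0):
    """Bang-bang control: heater on below the desired range,
    cooler on above it, both off inside it."""
    return temp < low, temp > high   # (heater_on, cooler_on)

# Simulate a cold day: the house leaks heat, the heater adds it back.
temp = 15.0
for _ in range(1000):
    heater_on, cooler_on = thermostat(temp)
    temp += 0.5 * heater_on - 0.5 * cooler_on - 0.1  # heat in, heat out, leak
# temp now chatters in a narrow band around the bottom of the range
```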

Most interesting control systems have a finer range of control values: instead of simply flipping an on or off switch, a car's cruise control can smoothly vary the amount of gas or brake that's being applied. A simple way to make such a control system is to take the difference between the desired speed and the actual speed, multiply it by some factor to go from units of distance per time to angle of pedal, and adjust the position of the pedals accordingly. (If the function is linear, it's called a linear controller.)
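
In code, that proportional rule is a single line; the gain and the toy car dynamics below are invented for illustration:

```python
def cruise_control(speed, reference, gain=0.5):
    """Linear (proportional) controller: pedal effort is just
    the speed error scaled by a constant gain."""
    return gain * (reference - speed)

speed = 20.0                          # m/s, below the desired 30 m/s
for _ in range(200):
    pedal = cruise_control(speed, reference=30.0)
    speed += 0.1 * pedal              # toy dynamics: acceleration scales with pedal
# speed has climbed smoothly toward the 30 m/s reference
```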

Let's introduce more of the technical vocabulary. The thing we're measuring is the input (to the controller), the level we want it to be at is the reference, the difference between those is the error, and the adjustment the control system makes is the output or feedback (sometimes we'll talk about the actuator as the physical means by which the controller emits its output). None of them have to be single variables- they can be vectors, which allow us to describe arbitrarily complicated systems. (The six numbers that express the position and velocity of a ball, each in three dimensions, are an example of an input vector.) I'll often use the noun state to refer to the, well, state of the system, and 'points in state-space' refers to those states as vectors. There's also a possible confusion in that the plant (the system being controlled) and the controller are mirrored- the controller's output is the plant's input, and the plant's output is the controller's input.

Control systems naturally lend themselves to diagrams; here are the block diagrams for the thermostat and cruise control:

[Block diagrams of the thermostat and cruise control feedback loops]

In a block diagram, each block is some function of its inputs, and the arrows show what affects what. Moving left to right, the reference is the temperature you've set, the current temperature is subtracted (that's the point of the plus and minus signs), and the error goes into the yellow box, which represents the function that converts the error into the effort put into altering the system. That's the controller output arrow that goes into the green box (the house), which represents the external system. This is also a functional block, because it takes the controller output and any disturbances (often represented as another arrow pointing in from the top) and converts them into the system temperature. The arrow leading out of the house points both to the right, to remind you that this is the temperature you're living with, and back into the thermocouple, the sensor that measures the temperature to compare with the reference; and now we've finished our feedback loop.

Now that we have a mathematical language for modeling lots of different systems, we can abstract away the specifics and prove properties about how those systems will behave given various controllers (i.e. feedback functions). Feedback functions convert the input and reference to the output, and are the mathematical abstraction of a physical controller. They can be arbitrary functions, but most of the mathematical edifice of control theory assumes that everything is continuous (but not necessarily linear). If you know the dynamics of a system, you can optimize your control function to match the system and be guaranteed to converge to the reference with a particular time profile. Rather than go deeply into the math, I'll discuss a few concepts that have technical meanings in control theory that are useful to think about.

First is convergence: the system output will eventually match the reference. This means that any errors that get introduced into the system are transient (temporary), and ideally we know the time profile of how large those errors will be as time progresses. A common goal here is exponential convergence, which means that the error decreases with a rate proportional to its size. (If the temperature is off by 2 degrees, the rate at which the error is decreasing is twice that of the rate when the temperature is off by 1 degree.) A simple linear controller will, for simple state dynamics, accomplish exponential convergence. If your system doesn't converge, then you are not successfully controlling it, and if your reference changes unpredictably at a rate faster than your system can converge, then you are not going to be able to match your reference closely.

Second is equilibrium: a point when the forces are balanced. If the system is at an equilibrium state, then nothing will change. This is typically discussed along with steady state error: imagine that my house gets heated by the sun at a rate of 1° an hour, and rather than a standard air conditioner that's on or off I have a 'dimmer switch' for my AC. If my controller sets the AC rate at the difference between the reference temperature and the current temperature per hour, then when the house is at 30° and I want it to be at 23° it'll try to reduce the temperature by 7° an hour, but when the house is at 24° and I want it to be at 23° it will try to reduce the temperature by 1° an hour, which cancels the effect of the sun, and so the house is at equilibrium at 24°. (Most real controllers have an integrator in order to counteract this effect.)
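
This steady-state error is easy to reproduce in a simulation, which also shows why adding an integrator fixes it (the integral gain below is an arbitrary choice):

```python
def house(hours=200.0, dt=0.01, ki=0.0):
    """Sun heats the house at 1 degree/hour; the AC removes heat at a
    rate equal to (temperature - reference), plus an optional integral term."""
    temp, ref, integral = 30.0, 23.0, 0.0
    for _ in range(int(hours / dt)):
        error = temp - ref
        integral += error * dt
        ac_rate = error + ki * integral   # proportional (+ integral) cooling
        temp += (1.0 - ac_rate) * dt      # sun minus AC, per hour
    return temp

print(round(house(), 1))        # 24.0: one degree of steady-state error
print(round(house(ki=0.2), 1))  # 23.0: the integrator removes the offset
```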

Third is stability: even when we know a point is an equilibrium, we want to know about the behavior in the neighborhood of that point. A stable equilibrium is one where a disturbance will be corrected; an unstable equilibrium is one where a disturbance will be amplified. Imagine a pendulum with the bob balanced at the bottom- tap it and it'll eventually be balanced at the bottom again, because that's a stable equilibrium (also called an attractor). Now imagine the bob balanced at the top- tap it, and it'll race away from that point, unlikely to return, because that's an unstable equilibrium (also called a repulsor).
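
The two pendulum equilibria can be checked numerically; the damping, timestep, and tap size below are arbitrary choices:

```python
import math

def pendulum(theta0, steps=20_000, dt=0.001, g_over_l=9.8, damping=0.2):
    """Damped pendulum: theta'' = -(g/L)*sin(theta) - c*theta'.
    theta = 0 is the bob at the bottom, theta = pi is the bob at the top."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = -g_over_l * math.sin(theta) - damping * omega
        theta += omega * dt
        omega += alpha * dt
    return theta

bottom = pendulum(0.1)            # tapped near the stable equilibrium
top = pendulum(math.pi - 0.1)     # tapped near the unstable equilibrium
# `bottom` has returned close to 0; `top` has raced far away from pi
```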

Stability has a second meaning in control theory: a controller that applies too much feedback will drive the system from a small positive error to a moderate negative error, and then again too much feedback will be applied, and the system will go from a moderate negative error to a huge positive error. Imagine a shower with a delay: you turn the temperature knob and five seconds later the temperature of the water coming out of the showerhead changes. If you react too strongly or too quickly, then you'll overreact, and the water that started out too hot will have overcorrected to being too cold by the time you've stopped turning the knob away from heat. That an overexuberant negative feedback controller can still lead to explosions is one of the interesting results of control theory, as is the result that making small, gradual, proportionate changes based on the current state alone can be as effective as adding memory or a deliberate delay to your controller. Sometimes you achieve better control exactly because you applied less control effort.
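
Here's a sketch of that shower. The transport delay is modeled as a short pipeline, and the two gains are arbitrary; the point is only that the same loop is well-behaved with gentle corrections and oscillates out of control with overeager ones:

```python
from collections import deque

def shower(gain, delay_steps=5, steps=200):
    """Negative feedback through a transport delay: a knob adjustment
    only reaches the showerhead delay_steps later."""
    temp, ref = 45.0, 38.0             # water too hot, want 38 degrees
    pipe = deque([0.0] * delay_steps)  # adjustments already in the pipe
    worst = 0.0
    for _ in range(steps):
        pipe.append(gain * (ref - temp))  # react to the *current* temp
        temp += pipe.popleft()            # delayed effect finally arrives
        worst = max(worst, abs(temp - ref))
    return worst

gentle = shower(gain=0.05)    # error never exceeds the initial 7 degrees
overeager = shower(gain=0.5)  # error grows without bound: oscillation
```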

It's also worth mentioning here that basically all real controllers (even amplifiers!) are negative feedback controllers- positive feedback leads to explosions (literally), because "positive feedback" in this context means "pushing the system in the direction of the error" rather than its meaning in psychology.


So what's the point of control systems? If we have a way of pushing the states of a system around, we can effectively adjust the system dynamics to make any state that we want a stable state. If we put a motor at the joint of a pendulum, we can adjust the acceleration of the bob so that it has an equilibrium at one radian away from vertical, rather than zero radians away from vertical. If we put a thermostat in a house, we can adjust the temperature changes of the house so that it returns to a comfortable range, instead of drifting to whatever the temperature is outside. (The point of control theory is to understand how these systems work, so we can make sure that our control systems do what we want them to do!)
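
A sketch of that motorized pendulum (the gains are invented, and gravity is canceled with a feedforward term so the equilibrium lands exactly on the reference):

```python
import math

def motor_pendulum(ref=1.0, steps=20_000, dt=0.001):
    """PD control with gravity feedforward: the motor torque makes
    `ref` radians from vertical a stable equilibrium of the pendulum."""
    theta, omega = 0.0, 0.0
    kp, kd, g_over_l = 40.0, 10.0, 9.8
    for _ in range(steps):
        torque = g_over_l * math.sin(theta) + kp * (ref - theta) - kd * omega
        alpha = -g_over_l * math.sin(theta) + torque   # net angular accel
        theta += omega * dt
        omega += alpha * dt
    return theta

final = motor_pendulum()
# the bob has settled at 1 radian from vertical, an equilibrium the
# uncontrolled pendulum doesn't have
```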

Why are control systems interesting? Three primary reasons:

  1. They're practical. Control systems are all over the place, from thermostats to cars to planes to satellites.
  2. They can be adaptive. A hierarchical control system has a control loop which determines the parameters used in another control loop, and a natural application is adaptive control systems. When you launch a satellite, you might not have precise measurements of its rotational inertia, or you might expect that to change as it uses its fuel over its lifetime. One control system could observe how the satellite moves in response to its thrusters, and adjust the parameters of a rotational inertia model to correct errors to match the model to the observations. A second control system could use the inertia model created by the first control system to determine how to use the thrusters to adjust the satellite's alignment to match the desired rotation.
  3. They give concrete mathematical models of how simple signal processing can create 'intentional' behavior, and of what it looks like to be intentional without an explicit model of reality. A centrifugal governor is not an agent in the LW sense of the term, but it is an agent in another sense of the term--an entity that performs actions on behalf of another. The governor is just some pieces of metal, it doesn't have a mind, it's not viewed with moral concern, it doesn't have goals as humans think of them, and it doesn't have even a rudimentary internal model of the dynamical system it's controlling, and it still gets the job done. Controllers seem like a class of entities that are best modeled by intentionality, in that they alter the state of their external environment to match their desired internal environment based on their perceptions, and can do so in arbitrarily complicated and powerful ways, but while they do steer the future they don't seem to be cross-domain and they don't look like anthropomorphic models of "human intelligence."
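
The satellite story can be caricatured with a scalar plant (x' = b*u, with b standing in for the unknown inertia): one loop updates an estimate of b from prediction errors while the other uses the current estimate to steer. Every constant here is invented for illustration:

```python
def adaptive_control(b_true=2.5, steps=500, dt=0.01, gamma=5.0):
    """Outer loop: adapt the gain estimate b_hat from the mismatch
    between predicted and observed responses.  Inner loop: control
    the plant x' = b*u using the current estimate."""
    x, ref, b_hat = 0.0, 1.0, 1.0        # b_hat: initial guess at b_true
    for _ in range(steps):
        u = (ref - x) / b_hat            # inner loop: model-based control
        dx = b_true * u * dt             # the plant's actual response
        residual = dx / dt - b_hat * u   # model's error predicting dx/dt
        x += dx
        b_hat += gamma * residual * u * dt   # outer loop: correct the model
    return x, b_hat

x, b_hat = adaptive_control()
# x has reached the reference, and b_hat has moved from 1.0 toward 2.5
```
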
Next: B:CP on the use of control theory in psychology.

Comments (13)

Comment author: Vaniver 19 January 2015 08:50:22PM 8 points

Special thanks to Ari Rabkin, Peter McCluskey, Christian Kleineidam, Carlos Serrano, Daniel Light, and Harsh Pareek for helpful comments on drafts, as well as the various LWers I've talked to about this, like gwern and the DC and Austin meetup groups.

I currently plan to post the second post tomorrow, and the third post on Wednesday.

Comment author: RichardKennaway 21 January 2015 11:30:43AM 5 points

I'm glad to see this subject get more attention. There's just one terminological point I want to raise. You use the word "controls" a lot, where I would expect either "control theory" or "control systems" depending on context.

Comment author: Vaniver 21 January 2015 03:01:16PM 3 points

You use the word "controls" a lot, where I would expect either "control theory" or "control systems" depending on context.

Good point. I've gone through and disambiguated the posts. (The one place I left it in is 'controls engineer' in the next post, because that's what I hear people call it, and Google seems to agree it's common enough.)

Comment author: Dagon 19 January 2015 10:58:08PM * 5 points
Comment author: Vaniver 20 January 2015 12:17:49AM 2 points

Yep! Those, as well as a comment on the subject (and the resulting discussion), are linked in the first paragraph.

Comment author: kpreid 23 January 2015 02:02:57AM 4 points

Control theory is one of those things that permeates large parts of your understanding of the world about you once you've learned about it even a little bit.

I learned that this category of problems existed when I built a simulation of something equivalent to a quadcopter (not that they were A Thing at the time) in a video game prototype I was working on. This is an interestingly hard problem because it has three layers of state-needing-controlling between "be at point X" and "apply this amount of (asymmetric) thrust". Failure modes aren't just flipping over and crashing into the ground — they also can be like continuously orbiting your target rather than reaching it.

Comment author: emr 11 February 2015 01:55:24AM * 1 point

That an overexuberant negative feedback controller can still lead to explosions is one of the interesting results of control theory...

Terminology question: Does "negative feedback" have a precise definition? So if I point at something and say "this is a negative feedback loop", is that exactly the same as saying "the current state of this thing is stable, or the state is known to be in the neighborhood of an implicitly communicated stable point"? (And conversely for "positive feedback" = "unstable") I'm considering that a physical explosion will reliably reach a stable state. Or something that pushes a real value in [0,1] towards the nearest bound, but then stops.

Comment author: Vaniver 11 February 2015 02:36:37AM 2 points

Does "negative feedback" have a precise definition?

Yes; the correction applied is in the opposite direction of the error. A positive feedback controller is one where the feedback is in the same direction as the error.

I'm considering that a physical explosion will reliably reach a stable state.

Not really, because stable implies that it will return to that state if disturbed. If you push around some of the ash after an explosion, it doesn't restore itself. (It is true that explosions stop when they burn through their energy source, and models that take that into account look realistic.)

Comment author: emr 11 February 2015 03:04:53AM 1 point

Thanks for clarifying. I saw a few definitions that were less precise: wikipedia describes negative feedback as "...when some function of the output of a system...is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances." I think I was confused by skipping the tends part, and applying the resulting definition to the shower example.

You're right on the explosion.

So "negative feedback" does not imply "stable point". Although "stable point" presumably implies "negative feedback" somewhere?

Comment author: Vaniver 11 February 2015 02:08:54PM 1 point

So "negative feedback" does not imply "stable point". Although "stable point" presumably implies "negative feedback" somewhere?

Yes, with an emphasis on the 'somewhere.' (Is it really 'feedback' if the restorative force is already inherent in the system? Well, that depends on how you look at things, but I'd generally say yes.)