pjeby comments on Richard Dawkins TV - Baloney Detection Kit video - Less Wrong

1 [deleted] 25 June 2009 12:27AM


Comment author: jimrandomh 26 June 2009 01:15:03AM *  2 points [-]

Expressing the connection between not having a mate and seeking a mate in PCT terms is difficult because "not having a mate" is not a perception, and "seeking a mate" is not a behavior. Rather, the first is an abstract world state with multiple perceptual correlates, and the second is a broad class of complex behaviors that no known model explains fully. Given such a confusing problem statement, what did you expect if not a confused response?

The second problem, I think, is that you may have gotten a somewhat confused idea of what (non-perceptual) control systems look like. There was a series of articles about them on LW, but unfortunately it stopped just short of the key insight, which is the PID controller model. A PID controller acts not just on the current error between its sensor reading and its target value (the proportional term, P), but also on that error's accumulated history (integral, I) and its rate of change (derivative, D).
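To make the three terms concrete, here is a minimal discrete-time PID controller sketch. Everything here (class name, gain values, the toy first-order plant) is my own illustration, not anything from the comment; it just shows how P, I, and D each contribute to the correction.

```python
class PIDController:
    """Toy PID controller: output = kp*error + ki*integral(error) + kd*d(error)/dt."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint          # target value the controller defends
        self.integral = 0.0               # accumulated history of the error (I)
        self.prev_error = None            # for estimating the rate of change (D)

    def update(self, measurement, dt):
        error = self.setpoint - measurement              # current error (P)
        self.integral += error * dt                      # accumulated error (I)
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt          # rate of change (D)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy plant (x' = control output) from 0 toward the setpoint 1.0.
pid = PIDController(kp=1.0, ki=0.1, kd=0.05, setpoint=1.0)
x = 0.0
for _ in range(500):
    x += pid.update(x, dt=0.1) * 0.1
```

With these gains the loop is overdamped, so `x` settles close to the setpoint without oscillating; the integral term is what removes any steady-state offset.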

If you want to test PCT, you need to step back and look at something simpler. The most obvious example is motor control. Most basic motor control tasks, like balancing, are a matter of generating some representation of body and object position, figuring out which neurons trigger muscles to push it in particular directions, and holding position constant. To do that, any organism, whether a human or a simple invertebrate, needs some neural mechanism that acts very much like a PID controller. That establishes that controllers are implemented somewhere in neural circuitry, but not how broad their scope is. There's another example, however, which shows that it's considerably broader than just motor control.

Humans and animals have various neurons which respond to aspects of their biochemistry, such as concentrations of certain nutrients and proteins in the blood. If these start changing suddenly, we feel sick and the body takes appropriate action. But the interesting thing is that small displacements which indicate dietary deficiencies somehow trigger cravings for foods with the appropriate nutrient. The only plausible mechanism I can think of for this is that the brain remembers the effect that foods had, and looks for foods which displaced sensors in the direction opposite the current displacement. The alternative would be a separate chemical pathway for monitoring each and every nutrient, which would break every time the organism became dependent on a new nutrient or lost access to an old one.
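The proposed mechanism can be sketched in a few lines. All the names and numbers below are hypothetical illustrations: the organism stores a remembered effect vector for each food (how it displaced the nutrient sensors), and a craving selects the food whose remembered effect most opposes the current displacement.

```python
# Remembered sensor displacements caused by past foods (made-up values),
# as vectors over (iron, protein, sugar) sensors.
food_effects = {
    "liver": (+0.8, +0.1, 0.0),
    "fruit": (0.0, 0.0, +0.9),
    "fish":  (+0.1, +0.7, 0.0),
}

def craved_food(displacement):
    """Pick the food whose remembered effect most opposes the current
    sensor displacement (i.e. most negative dot product with it)."""
    def alignment(food):
        return sum(d * e for d, e in zip(displacement, food_effects[food]))
    return min(food_effects, key=alignment)

# An iron deficiency appears as a negative displacement on the iron sensor;
# the food remembered to push that sensor back up wins.
print(craved_food((-0.5, 0.0, 0.0)))  # → liver
```

Note that nothing here is nutrient-specific: adding a new food, or a new sensor dimension, requires no new pathway, which is the advantage over one dedicated chemical pathway per nutrient.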

Moving up to higher levels of consciousness, things get significantly more muddled. Psychology and clear explanations have always been mutually exclusive, and no single mechanism can possibly cover everything, but then it doesn't need to, since the brain has many obviously-different specialized structures within it, each of which presumably requires its own theory. But I think control theory does a good job explaining a broad enough range of psychological phenomena that it should be kept in mind when approaching new phenomena.

Comment author: pjeby 26 June 2009 04:07:07AM -1 points [-]

Moving up to higher levels of consciousness, things get significantly more muddled.

I disagree, but that's probably because I've seized on PCT as a compressed version of things that were already in my models, as disconnected observations. (Like time-delayed "giving up" or "symptom substitution".) I don't really see many gaps in PCT because those gaps are already filled (for me at least), by Ainslie's "conditioned appetites" and Hawkins' HTM model.

Ainslie's "interests" model is a very strong fit with PCT, as are the hierarchy, sequence, memory, and imagination aspects of HTM. Interests/appetites and HTM look just like more fleshed-out versions of what PCT says about those things.

Is it a complete model of intelligence and humans? Heck no. Does it go a long way towards reverse-engineering and mapping the probable implementation of huge chunks of our behavior? You bet.

What's still mostly missing, IMO, after you put Ainslie, PCT, and HTM together, is dealing with "System 2" thinking in humans: i.e. dealing with logic, reasoning, complex verbalizations, and some other things like that. From my POV, though, these are the least interesting parts of modeling a human, because these are the parts that generally have the least actual impact on their behavior. ;-)

So there is little indication as to whether System 2 thinking can be modeled as a controller hierarchy in itself, but it's also pretty plain that it is subject to the System 1 control hierarchy, which lets us know (for example) whether it's time for us to speak, how loud we're speaking, what it would be polite to say, whether someone is attacking our point of view, and so on.

It's also likely that the reason we intuitively see the world in terms of actions and events rather than controlled variables is simply that it's easier to model discrete sequences in a control hierarchy than it is to directly model a control hierarchy in another control hierarchy! Discrete symbolic processing on invariants lets us reuse the controllers representing "events" without having to devote duplicated circuitry to modeling other creatures' controller hierarchies. (The HTM model has a better detailed explanation of this symbolic/pattern/sequence processing, IMO, than PCT, even though in the broad strokes they're basically the same.)

(And although you could argue that the fact we use symbols means they're more "compressed" than control networks, it's important to note that this is a deliberately lossy compression; discrete modeling of continuous actions makes thinking simpler, but increases prediction errors.)