Controlling your inner control circuits

45 points | Post author: Kaj_Sotala 26 June 2009 05:57PM

On the topic of: Control theory

Yesterday, PJ Eby sent the subscribers of his mailing list a link to an article describing a control theory/mindhacking insight he'd had. With his permission, here's a summary of that article. I found it potentially life-changing. The article seeks to answer the question, "why is it that people often stumble upon great self-help techniques or productivity tips, find that they work great, and then after a short while the techniques either become ineffectual or the people just plain stop using them anyway?", but I found it to have far greater applicability than just that.

Richard Kennaway already mentioned the case of driving a car as an example where the human brain uses control systems, and Eby mentioned another: ask a friend to hold their arm out straight, and tell them that when you push down on their hand, they should lower their arm. And what you’ll generally find is that when you push down on their hand, the arm will spring back up before they lower it... and the harder you push down on the hand, the harder the arm will pop back up! That's because the control system in charge of maintaining the arm's position will try to keep up the old position, until one consciously realizes that the arm has been pushed and changes the setting.
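
To make the control-system framing concrete, here's a toy simulation of the arm demo as a simple proportional feedback loop. This is only an illustrative sketch - the gain, time step and function names are my own inventions, not anything from Eby's article or from the PCT literature - but it reproduces the key observation: the harder you push, the harder the controller pushes back.

```python
# Toy model of the arm demo: a proportional controller holding the arm
# at a fixed reference position against an external push. The gain,
# time step, and step count are illustrative assumptions.

def simulate_arm(push_force, gain=5.0, steps=50, dt=0.1):
    reference = 0.0                       # desired arm position
    position = 0.0
    muscle_force = 0.0
    for _ in range(steps):
        error = reference - position      # compare percept to reference
        muscle_force = gain * error       # bigger error -> harder counter-push
        position += (muscle_force - push_force) * dt
    return position, muscle_force

for push in (1.0, 2.0, 4.0):
    pos, counter = simulate_arm(push)
    print(f"push {push:.0f} -> arm sags only to {pos:+.2f}, counter-force {counter:.2f}")
```

At equilibrium the arm has barely moved, and the counter-force matches the push - the "springing back" in the demo above.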

Control circuits aren't used just for guiding physical sequences of actions, they also regulate the workings of our mind. A few hours before typing out a previous version of this post, I was starting to feel restless because I hadn't accomplished any work that morning. This has often happened to me in the past - if, at some point during the day, I haven't yet gotten started on doing anything, I begin to feel anxious and restless. In other words, in my brain there's a control circuit monitoring some estimate of "accomplishments today". If that value isn't high enough, it starts sending an error signal - creating a feeling of anxiety - in an attempt to bring that value into the desired range.

The problem with this is that more often than not, that anxiety doesn't push me into action. Instead I become paralyzed and incapable of getting anything started. Eby proposes that this is because of two things: one, the control circuits are dumb and don't actually realize what they're doing, so they may actually take counter-productive action. Two, there may be several control circuits in the brain which are actually opposed to each other.

Here we come to the part about productivity techniques often not working. We also have higher-level controllers - control circuits influencing other control circuits. Eby's theory is that many of us have circuits that try to prevent us from doing the things we want to do. When they notice that we've found a method to actually accomplish something we've been struggling with for a long time, they start sending an error signal... causing neural reorganization, eventually ending up at a stage where we don't use those productivity techniques anymore and solving the "crisis" of us actually accomplishing things. Moreover, these circuits are to a certain degree predictive, and they can start firing when they pick up on a behavior that merely might lead to success - that's when we hear about a great-sounding technique and for some reason never even try it. A higher-level circuit, or a lower-level one set up by the higher-level circuit, actively suppresses the "let's try that out" signals sent by the other circuits.
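
To illustrate that kind of two-level arrangement, here's another toy sketch: a fast inner loop controls "effort spent on the new technique", while a slower, higher-level "safety" loop perceives visible effort as risk and keeps dragging the inner loop's reference down. Every name, gain and update rate here is an assumption of mine for the sake of illustration; the general architecture, not the numbers, is the point.

```python
# Toy two-level control hierarchy: the outer (slower) loop adjusts the
# reference of the inner (faster) loop. All names, gains, and update
# rates are illustrative assumptions.

class Controller:
    def __init__(self, reference, gain):
        self.reference = reference
        self.gain = gain

    def output(self, perception):
        return self.gain * (self.reference - perception)

effort_loop = Controller(reference=1.0, gain=0.5)   # "try the technique!"
safety_loop = Controller(reference=0.0, gain=0.3)   # "visible effort = danger"

effort = 0.0
for step in range(60):
    effort += effort_loop.output(effort) * 0.5      # fast inner loop
    if step % 10 == 9:                              # slow outer loop
        effort_loop.reference += safety_loop.output(effort)
        print(f"step {step:2d}: reference {effort_loop.reference:.2f}, effort {effort:.2f}")
```

The higher-level loop never "sees" the technique at all; it just keeps re-targeting the lower loop until the trying stops - which is exactly the relapse pattern described above.
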
But why would we have such self-sabotaging circuits? This ties into Eby's more general theory of the hazards of some kinds of self-motivation. He uses the example of a predator who's chased a human up to a tree. The human, sitting on a tree branch, is in a safe position now, so circuits developed to protect his life send signals telling him to stay there and not to move until the danger is gone. Only if the predator actually starts climbing the tree does the danger become more urgent and the human is pushed to actively flee.

Eby then extends this example into a social environment. In a primitive, tribal culture, being seen as useless to the tribe could easily be a death sentence, so we evolved mechanisms to avoid giving the impression of being useless. A good way to avoid showing your incompetence is to simply not do the things you're incompetent at, or things which you suspect you might be incompetent at and that have a great associated cost for failure. If it's important for your image within the tribe that you do not fail at something, then you attempt to avoid doing that.

You might already be seeing where this is leading. The things many of us procrastinate on are exactly the kinds of things that are important to us. We're deathly afraid of the consequences of what might happen if we fail at them, so there are powerful forces in play trying to make us not work on them at all. Unfortunately, for beings living in modern society, this behavior is maladaptive and buggy. It leads to us having control circuits which try to keep us unproductive, and when they pick up on things that might make us more productive, they start suppressing our use of those techniques.

Furthermore, the control circuits are stupid. They are occasionally capable of being somewhat predictive, but they are fundamentally just doing some simple pattern-matching, oblivious to deeper subtleties. They may end up reacting to wholly wrong inputs. Consider the example of developing a phobia of a particular place, or a particular kind of environment. Something very bad happens to you in that place once, and as a result, a circuit is formed in your brain that's designed to keep you out of such situations in the future. Whenever it detects that you are in a place resembling the one where the incident happened, it starts sending error signals to get you away from there. The catch is that this is a very crude and suboptimal way of keeping you out of trouble - if a car hit you while you were crossing the road, you might develop a phobia of crossing the road. Needless to say, this is more trouble than it's worth.

Another common example might be a musician learning to play an instrument. Music students are taught to practice their instrument in a variety of postures, for otherwise a flutist who's always played his flute sitting down may discover he can't play it while standing up! The reason is that while practicing, he has been setting up a number of control circuits designed to guide his muscles the right way. Those control circuits have no innate knowledge of which postures are integral to a good performance, however. As a result, the flutist may end up with circuits that try to make sure he is sitting down when playing.

This kind of malcalibration extends to higher-level circuits as well. Eby writes:

I know this now, because in the last month or so, I’ve been struggling to identify my “top-level” master control circuits.

And you know what I found they were controlling for? Things like:

* Being “good”
* Doing the “right” thing
* “Fairness”

But don’t be fooled by how harmless or even “good” these phrases sound.

Because, when I broke them down to what subcontrollers they were actually driving, it turned out that “being good” meant “do things for others while ignoring your own needs and being resentful”!

“Fairness”, meanwhile, meant, “accumulate resentment and injustices in order to be able to justify being selfish later.”

And “doing the right thing” translated to, “don’t do anything unless you can come up with a logical justification for why it’s right, so you don’t get in trouble, and no-one can criticize you.”

Ouch!

Now, if you look at that list, nowhere on there is something like, “go after what I really want and make it happen”. Actually doing anything – in fact, even deciding to do anything! – was entirely conditional on being able to justify my decisions as “fair” or “right” or “good”, within some extremely twisted definitions of those words!

So that's the crux of the issue. We are wired with a multitude of circuits designed for controlling our behavior... but because those circuits are often stupid, they end up in conflict with each other, and end up monitoring values that don't actually represent the things they ought to.

While Eby provides few references and no peer-reviewed experimental work to support his case that motivation systems are controlled in this way, I find it to mesh very well with everything I know about the brain. I took the phobia example from a textbook on biological psychology, while the flutist example came from a lecture by a neuroscientist emphasizing the stupidity of the cerebellum's control systems. Building on systems that were originally developed to control motion and hacking them to also control higher behavior is a very evolution-like thing to do. We already develop control systems for muscle behavior starting from the time when we first learn to control our body as infants, so it's very plausible that we'd also develop such mechanisms for all kinds of higher cognition. The mechanism by which they work is also fundamentally very simple, making it easy for new circuits to form: a person ends up in an unpleasant situation, causing an emotional subsystem to flood the whole brain with negative feedback, leading the pattern recognizers that were active at the time to start triggering the same kind of negative feedback the next time they pick up on the same input. (At bottom, it's probably a case of simple Hebbian learning.)
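
For the curious, that Hebbian story is simple enough to sketch in a few lines of code. The feature vectors, learning rate and "alarm" wiring below are illustrative assumptions of mine, not anything taken from the textbook:

```python
# Toy Hebbian wiring of a phobia: pattern units that were active while
# the "alarm" fired get stronger connections to it. Feature names,
# vectors, and the learning rate are illustrative assumptions.

import numpy as np

weights = np.zeros(8)                       # pattern units -> alarm connections

def hebbian_update(features, alarm, lr=0.5):
    global weights
    weights += lr * features * alarm        # strengthen co-active connections

accident_scene = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=float)
hebbian_update(accident_scene, alarm=1.0)   # one bad experience

similar_scene = np.array([1, 0, 0, 0, 1, 0, 1, 0], dtype=float)
neutral_scene = np.array([0, 0, 1, 0, 0, 1, 0, 1], dtype=float)
print("alarm for a similar scene:", weights @ similar_scene)   # > 0: phobic response
print("alarm for a neutral scene:", weights @ neutral_scene)   # 0: no response
```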

Furthermore, since reading his text, I have noticed several things in myself which could only be described as control circuits. After reading Overcoming Bias and Less Wrong for a long time, I've found myself noticing whenever I have a train of thought that seems indicative of certain kinds of cognitive biases. In retrospect, that is probably a control circuit that has developed to detect the general appearance of a biased thought and to alert me about it. The anxiety circuit I already mentioned. A closely related circuit is one that causes me to need plenty of time to accomplish whatever it is that I'm doing - if I only have a couple of hours before a deadline, I often freeze up and end up unable to do anything. This leads to me being at my most productive in the mornings, when I have a feeling of having the whole day for myself and of not being in any rush. That's easily interpreted as a circuit that looks at the remaining time and sends an alarm when the time runs low. Actually, the circuit in question is probably even stupider than that, as the feeling of not having any time is often tied only to what the clock says, not to the time when I'll be going to bed. If I get up at 2 PM and go to bed at 4 AM, I have just as much time as if I got up at 9 AM and went to bed at 11 PM, but the circuit in question doesn't recognize this.
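
If you wanted to caricature that circuit in code, it might look something like this. The thresholds and schedules are made up, but the bug is the one I described: the circuit reads the wall clock when it should be reading the time remaining.

```python
# Caricature of the buggy deadline circuit: it alarms on the wall-clock
# hour instead of on the hours actually remaining before bed. The
# thresholds and schedules are made up for illustration.

def buggy_alarm(clock_hour):
    return clock_hour >= 14                  # "it's already afternoon!"

def sensible_alarm(clock_hour, bed_hour):
    hours_left = (bed_hour - clock_hour) % 24
    return hours_left <= 4                   # alarm only when time is short

for wake, bed in ((9, 23), (14, 4)):         # 9 AM-11 PM vs. 2 PM-4 AM: 14 hours each
    print(f"up at {wake:2d}:00 ->",
          "buggy circuit alarms," if buggy_alarm(wake) else "buggy circuit calm,",
          "sensible circuit", "alarms" if sensible_alarm(wake, bed) else "calm")
```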

So, what can we do about conflicting circuits? Simply recognizing them for what they are is already a big step forward, one which I feel has already helped me overcome some of their effects. Some of them can probably be dismantled simply by identifying them, working out their purpose, and deciding that it's unnecessary. (I suspect that this process might actually set up new circuits whose function is to counteract the signals sent by the harmful ones. Maybe. I'm not very sure of what the actual neural mechanism might be.) Eby writes:

So, you want to build Desire and Awareness by tuning in to the right qualities to perceive. Then, you need to eliminate any conflicts that come up.

Now, a lot of times, you can do this by simple negotiation with yourself. Just sit and write down all your objections or issues about something, and then go through them one at a time, to figure out how you can either work around the problem, or find another way to get your other needs met.

Of course, you have to enter this process in good faith; if you judge yourself for say, wanting lots of chocolate, and decide that you shouldn’t want it, that’s not going to work.

But it might work, to be willing to give up chocolate for a while, in order to lose weight. The key is that you need to actually imagine what it would be like to give it up, and then find out whether you can be “okay” with that.

Now, sadly, about 97% of the people who read this are going to take that last paragraph and go, “yeah, sure, I’m going to give up [whatever]”, but without actually considering what it would be like to do so.

And those people are going to fail.

And I kind of debated whether or not I should even mention this method here, because frankly, I don’t trust most people’s controllers any further than I can reprogram them (so to speak).

See, I know from bitter experience that my own controllers for things like “being smart” used to make me rationalize this sort of thing, skipping the actual mental work involved in a technique, because “clearly I’m smart enough not to need to do all that.”

And so I’d assume that just “thinking” about it was enough, without really going through the mental experience needed to make it work. So, most of the people who read this are going to take that paragraph above where I explained the deep, dark, master-level mindhacking secret, and find a way to ignore it.

They’re going to say things like, “Is that all?” “Oh, I already knew that.” And they’re not going to really sit down and consider all the things that might conflict with what they say they want.

If they want to be wealthy, for example, they’re almost certainly not going to sit down and consider whether they’ll lose their friends by doing so, or end up having strained family relations. They’re not considering whether they’re going to feel guilty for making a lot of money when other people in the world don’t have any, or for doing it easily when other people are working so hard.

They’re not going to consider whether being wealthy or fit or confident will make them like the people they hate, or whether maybe they’re really only afraid of being broke!

But all of them will read everything I’ve just written, and assume it doesn’t apply to them, or that they’ve already taken all that into account.

Only they haven’t.

Because if they had, they would have already changed.

That's a pretty powerful reminder not to ignore your controllers. As you've been reading this, some controller that tries to keep you from doing things has probably already picked up on the excitement some emotional system might now be generating... meaning that you might be about to stumble upon a technique that might actually make you more productive... causing signals to be sent out to suppress attempts to even try it out. Simply acknowledging its existence isn't going to be enough - you need to actively think things out, identify the different controllers within you, and dismantle them.

I feel I've managed to avoid the first pitfall, that of not doing anything even after becoming aware of the problem. I've been actively looking at different control circuits, some of which have plagued me for quite a long time, and I at least seem to have managed to overcome them. My worry is that there might be some high-level circuit which is even now coming online to prevent me from using this technique - to make me forget about the whole thing, or to simply not use it even though I know of it. It feels like the best way to counteract that is to try to consciously set up new circuits dedicated to monitoring for new circuits and alerting me to their presence. In other words, keep actively looking for anything that might be a mental control circuit, and teach myself to notice them.

(And now, Eby, please post any kind of comment here so that we can vote it up and give you your fair share of this post's karma. :))

Comments (146)

Comment author: SilasBarta 26 June 2009 08:39:48PM *  18 points [-]

Let me clarify where I do and do not agree with PJ Eby, since we've been involved in some heated arguments that often seem to go nowhere.

I accept that the methods described here could work, and intend to try them myself.

I accept that all of the mechanisms involved in behavior can be restated in the form of a network of feedback loops (or a computer program, etc.).

I accept that Eby is acting as a perfect Bayesian when he says "Liar!" in response to those who claim they "gave it a try" and it didn't work. To the extent that he has a model, that is what it obligates him to believe, and Eliezer Yudkowsky has extensively argued that you should find yourself questioning the data when it conflicts with your model.

So what's the problem, then?

I do not accept that these predictions actually follow from, or were achieved through the insights of, viewing humans as feedback control systems. The explanations here for behavioral phenomena look like commonsense reasoning that is being shoehorned into controls terminology by clever relabeling. (ETA: Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're afraid of something, for example?)

For that reason, I ran the standard check to see if a model is actually constraining expectations by asking pjeby what it rules out, and, more importantly, why PCT says you shouldn't observe such a phenomenon. And I still don't have an answer.

(This doesn't contradict my third point of agreement, because pjeby can believe something contradicts his model, even if, it turns out, the model he claims to believe in draws no such conclusion.)

Rather, based on this article, it looks like PCT is in the position of:

"Did PCT Method X solve your problem? Well, that's because it reset your control references to where they should be. Did it fail? Well, that's because PCT says that other (blankly solid, blackbox) circuts were, like, fighting it."

Comment author: GuySrinivasan 26 June 2009 09:52:44PM 10 points [-]

I purchased Behavior: The Control of Perception and am reading it. Unless someone else does so first, I plan to write a review of it for LW. A key point is that at least part of PCT is actually right. The lowest level controllers, such as those controlling tendon tension, are verifiably there. So far as I can tell, real physical structures corresponding pretty closely to second- and third-level controllers also exist and have been pointed to by science. I haven't gotten further than this yet, but teasers within the book indicate that (when the book was written, of course) there is good evidence that some fifth-level control systems exist in particular places in the brain, and thus fourth-level somewhere. Whether it's control systems (or something closish to them) all the way up? Dunno. But the first couple levels, not yet into the realm of traditional psychology or whatnot, those definitely exist in humans. And animals of course. The description of the experiment with scattershot electrodes in hundreds of cats was pretty interesting. :)

That said, you're absolutely right, there should be some definite empirical implications of the theory. For example, due to the varying length of the paths at various supposed levels, it should be possible to devise an experiment around a simple tracking task with carefully selected disturbances which would have one predicted result under PCT and another under some other model. Also, predicting specific discretization of tasks that look continuous should be possible... I have not spent a lot of time thinking about how to devise a test like this yet, unfortunately.

Comment author: Eliezer_Yudkowsky 27 June 2009 05:15:36PM 5 points [-]

Please add PCT to the wiki as Jargon and link there when this term, whatever it means, is used for the first time in a thread. It is not in the first 10 Google hits.

Comment author: SoullessAutomaton 27 June 2009 05:52:48PM 1 point [-]

It seems jimrandomh has taken the time to do so; the wikipedia article should be helpful.

In defense of people using the acronym without definition, though, it seemed fairly obvious if you look at the wikipedia disambig page for the acronym in question.

Comment author: SoullessAutomaton 26 June 2009 10:31:50PM 4 points [-]

Whether it's control systems (or something closish to them) all the way up? Dunno. But the first couple levels, not yet into the realm of traditional psychology or whatnot, those definitely exist in humans.

As a general-purpose prior assumption for systems designed by evolutionary processes, reusing or adapting existing systems is far more likely than spontaneous creation of new systems.

Thus, if it can be demonstrated that a model accurately represents low-level hierarchical systems, this is reasonably good evidence in favor of that model applying all the way to the top levels as opposed to other models with similar explanatory power for said upper levels.

Comment author: pjeby 26 June 2009 09:29:36PM 2 points [-]

Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're afraid of something, for example?

You don't. As it says in the report I wrote, I've been teaching most of these things for years.

I ran the standard check to see if a model is actually constraining expectations by asking pjeby what it rules out, and, more importantly, why PCT says you shouldn't observe such a phenomenon. And I still don't have an answer.

And I'm still confused by what it is you expect to see, that I haven't already answered. AFAIK, PCT doesn't say that no behavior is ever generated except by control systems, it just says that control systems are an excellent model for describing how living systems generate behavior, and that we can make more accurate predictions about how a living system will behave if we know what variables it's controlling for.

Since the full PCT model is Turing complete, what is it exactly that you are asking be "ruled out"?

Personally, I'm more interested in the things PCT rules in -- that is, the things it predicts that other models don't, such as the different timescales for "giving up" and symptom substitution. I'm not aware of any other model where this falls out so cleanly as a side effect of the model.

"Did PCT Method X solve your problem? Well, that's because it reset your control references to where they should be. Did it fail? Well, that's because PCT says that other (blankly solid, blackbox) circuts were, like, fighting it."

It's no more black-box than Ainslie's picoeconomics. In fact, it's considerably less black-box than picoeconomics, which doesn't do much to explain the internal structure of "interests" and "appetites". PCT, OTOH, provides a nice unboxing of those concepts into likely implementations.

Comment author: SilasBarta 27 June 2009 08:52:06PM *  5 points [-]

Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're afraid of something, for example?

You don't. As it says in the report I wrote, I've been teaching most of these things for years.

Then why does it get you there faster? If someone had long ago proposed to you that the body operates as a network of negative feedback controllers, would that make you more quickly reach the conclusion that "I should rationally think through the reasons I'm afraid of something", as opposed to, say, blindly reflecting on your own conscious experience?

PCT ... says that control systems are an excellent model for describing how living systems generate behavior, and that we can make more accurate predictions about how a living system will behave if we know what variables it's controlling for.

Yes, and that's quite a monster "if". So far, I haven't seen you identify -- in the rationalist sense -- a "variable being controlled". That requires you to be able to explain it in terms "so simple a computer could understand it". So far, no one can do that for any high-level behavior.

For example, judging the sexiness of another person. To phrase it in terms of a feedback controller, I have to identify -- again, in the rationalist sense, not just a pleasant-sounding label -- the reference being controlled. So, that means I have to specify all of the relevant things that affect my attraction level. Then, I have to find how the sensory data is transformed into a comparable format. Only then am I able to set up a model that shows an error signal which can drive behavior.

(Aside: note that the above can be rephrased as saying that you need to find the person's "invariants" of sexiness, i.e., the features that appear the same in the "sexiness" dimension despite arbitrary transformations applied to the sense data, like rotation, scaling, environment changes, etc. Not surprisingly, the Hawkins HTM model you love so much also appeals to "invariants" for their explanatory power, and yet leaves the whole concept as an unhelpful black box! At least, that's what I got from reading On Intelligence.)

But all of these tasks are just as difficult as the original problem!

Since the full PCT model is Turing complete, what is it exactly that you are asking be "ruled out"?

Now I'm running into "inferential distance frustration", as I'd have to explain the basics of technical explanation, what it means to really explain something, etc., i.e. all those posts by Eliezer Yudkowsky, starting back from his overcomingbias.com posts.

But suffice to say, yes, PCT is Turing complete. So is the C programming language, and so is a literal tape-and-table Turing machine. So, if you accept the Church-Turing Thesis, there must be an isomorphism between some feedback control model and the human body.

And between some C program and the human body.

And between some Turing machine and the human body.

Does this mean it is helpful to model the body as a Turing machine? No, no, no, a thousand times, NO! Because a "1" on the tape is going to map to some hideously complex set of features on a human body.

In other words, the (literal) Turing machine model of human behavior fails to simplify the process of predicting human behavior: it will explain the same things we knew before, but require a lot more complexity to do so, just like using a geocentric epicycle model.

In OB/LW jargon, it lengthens the message needed to describe the observed data.

I claim the same thing is true of PCT: it will do little more than restate already known things, but allow them to be rephrased, with more difficulty, using controls terminology.

Personally, I'm more interested in the things PCT rules in -- that is, the things it predicts that other models don't, ... I'm not aware of any other model where this falls out so cleanly as a side effect of the model.

Okay, good. Those are things that rationalists should look for. But people have a tendency to claim their theory predicted something after the fact, when an objective reading of it would say the theory predicted no such thing. So I need something that can help me distinguish between:

a) "PCT predicts X, while other models do not."

vs.

b) "PJ Eby's positive affect toward PCT causes him to believe it implies we should observe X, while other models do not."

A great way to settle the matter would be an objective specification of how exactly PCT generates predictions. But so far, it seems that to learn what PCT predicts, you have to pass the data up through the filter of someone who already likes PCT, and thus can freely claim the model says what they want it to say, with no one able to objectively demonstrate, "No, PCT says that shouldn't happen."

Comment author: pjeby 27 June 2009 10:57:52PM 1 point [-]

If someone had long ago proposed to you that the body operates as a network of negative feedback controllers, would that make you more quickly reach the conclusion that "I should rationally think through the reasons I'm afraid of something", as opposed to, say, blindly reflecting on your own conscious experience?

Of course not; the paths between Theory and Practice are not symmetrical. In the context of my work, the usefulness of a theory like this is that it provides me with a metaphorical framework to connect practical knowledge to. Instead of teaching all the dozens of principles, ideas, aphorisms, etc. that I have as isolated units, being able to link each one to a central metaphor of controllers, levels, etc. makes communication and motivation easier.

To be perfectly fair, PCT would be useful for this purpose even if it were not a true or semi-true theory. However, all else being equal, I'd rather have something true that fits my practical observations, and PCT fits more of my practical observations than anything else. And I believe it is, in fact, true.

I'm merely stating the above so as to make it clear that if I thought it were only a metaphor, I would have no problem with saying, "it's just a metaphor that aids education and motivation in applying certain practical observations by giving them a common conceptual framework."

For example, judging the sexiness of another person. To phrase it in terms of a feedback controller, I have to identify -- again, in the rationalist sense, not just a pleasant-sounding label -- the reference being controlled. So, that means I have to specify all of the relevant things that affect my attraction level. Then, I have to find how the sensory data is transformed into a comparable format. Only then am I able to set up a model that shows an error signal which can drive behavior.

I still don't see your point about this. Any model has to do the same thing, doesn't it? So how is this a flaw of PCT?

And on a practical level, I just finished a webinar where we spent time breaking down people's references for things like "Preparedness" and "Being a good father" and showing how to establish controllable perceptions for these things that could be honored in more than the breach. (For example, if your perceptual definition of "being a good father" is based solely on the actions of your kids rather than your actions, then you are in for some pain!)

IOW, I don't actually see a lot of problem with reference breakdowns and even reference design, at the high-level applications for which I'm using the theory. Would I have a hard time defining "length" or "color" in terms of their referents? Sure, but I don't really care. Powers does a good job of explaining what's currently known about such invariants, and pointing to what research still needs to be done.

Not surprisingly, the Hawkins HTM model you love so much also appeals to "invariants" for their explanatory power, and yet leaves the whole concept as an unhelpful black box! At least, that's what I got from reading On Intelligence.)

Did you ever look at any of the Numenta HTM software demos? AFAIK, they actually have some software that can learn the idea of "airplane" from noisy, extremely low-res pictures of them flying by. That is, HTMs can learn invariants from combinations of features. I'm not sure if they have any 3D rotation stuff, but the HTM model appears to explain how it could be done.

I claim the same thing is true of PCT: it will do little more than restate already known things, but allow them to be rephrased, with more difficulty, using controls terminology.

And I've already pointed out why this claim is false, since the controller hierarchy/time-scale correlation has already been a big help to me in my work; it was not something that was predicted by any other model of human behavior.

But so far, it seems that to learn what PCT predicts, you have to pass the data up through the filter of someone who already likes PCT, and thus can freely claim the model says what they want it to say, with no one able to objectively demonstrate, "No, PCT says that shouldn't happen."

Or, you could just go RTFM, instead of asking people to summarize 300 pages in a comment for you... Or you could just wait until someone you trust gives you a summary. But if you don't trust anyone who holds a positive attitude about PCT, why do you insist on asking more questions? As I said, if you want all the detailed evidence and models, you're eventually going to be asking me for virtually every chapter in B:CP.

What I'm teaching to my group is only a watered-down version of the highest levels, specifically as a framework to clarify, connect, and enhance things I've already been teaching. So my writings on it are really not the place to be looking for the math and the science.

Comment author: SilasBarta 28 June 2009 06:26:30PM *  4 points [-]

Not surprisingly, the Hawkins HTM model you love so much also appeals to "invariants" for their explanatory power, and yet leaves the whole concept as an unhelpful black box! At least, that's what I got from reading On Intelligence.)

Did you ever look at any of the Numenta HTM software demos? AFAIK, they actually have some software that can learn the idea of "airplane" from noisy, extremely low-res pictures of them flying by.

Actually, yes, I have downloaded their demos, expecting to be wowed, but then fell over laughing. Specifically, this one. It claims to be able to learn to recognize simple black/white 16x16 pixel images using HTM and saccading the images around. But then I gave it a spin, had it learn the images, and then tested it by drawing one of the figures with a very, very slight rotation, which completely screwed up its ability to identify it.

Not impressive.

I claim the same thing is true of PCT: it will do little more than restate already known things, but allow them to be rephrased, with more difficulty, using controls terminology.

And I've already pointed out why this claim is false, since the controller hierarchy/time-scale correlation has already been a big help to me in my work; it was not something that was predicted by any other model of human behavior.

No, what you have shown is that you learned of PCT and HTM, and then you believe you improved in your work. As per my two previous comments in this thread, I can (edited phrasing) accept both of those claims and still doubt the more interesting claims, specifically, that the model actually did help, rather than you merely thinking it did because you could rephrase your intuitive, commonsense reasoning in the model's terminology. I could also doubt that your ability to help people improved.

Or, you could just go RTFM, instead of asking people to summarize 300 pages in a comment for you... As I said, if you want all the detailed evidence and models, you're eventually going to be asking me for virtually every chapter in B:CP.

Please pay attention. I am R-ingTFM, and I even complained that one of the Powers demos understated the strength of their point about feedback control. I already told you I'm going to try your advice. I've read several of the pdfs you've linked, including TheSelfHelpMyth.pdf linked here, and will read several more, and probably even buy Behavior. (Though I couldn't get the freebie you mentioned to work because the website crapped out after I entered my info). I am making every effort to consider this model.

But it is simply not acceptable of you to act like the only alternatives are to repeat hundreds of pages, or speak in dumbed-down blackbox terminology. You can e.g. summarize the chain of useful, critical insights that get me from "it's a network of feedback controllers" to a useful model, so I know which part I'd be skeptical of and which parts assume the solution of problems I know to be unsolved, so I know where to direct my attention.

Comment author: pjeby 28 June 2009 09:55:00PM 2 points [-]

drawing one of the figures with a very, very slight rotation, which completely screwed up its ability to identify it.

I'm not clear on whether you took this bit from their docs into account:

The system was NOT trained on upside down images, or rotations and skews beyond a simple right-to-left flip. In addition, the system was not trained on any curved lines, only straight line objects.

That is, I'm not clear whether the steps you're describing include training on rotations or not.

rather than you merely thinking it did because you could rephrase your intuitive, commonsense reasoning in the model's terminology

No, I gave you one specific prediction that PCT makes: higher-level controllers operate over longer time scales than low-level ones. This prediction is not a part of any other model I know of. Do you know of another model that makes this prediction? I only know of models that basically say that symptom substitution takes time, with no explanation of how it occurs.

This doesn't have anything to do with whether I believe that prediction to be useful; the prediction is still there, the observation that people do it is still there, and the lack of explanation of that fact is still there, even if you remove me from the picture entirely.

You can e.g. summarize the chain of useful, critical insights that get me from "it's a network of feedback controllers" to a useful model, so I know which part I'd be skeptical of and which parts assume the solution of problems I know to be unsolved, so I know where to direct my attention.

I can only do that if I understand specifically what it is you don't get -- and I still don't.

For example, I don't see why the existence of unsolved problems is a problem, or even remotely relevant, if all the other models we have have to make the same assumption.

From my POV, you are ignoring the things that make PCT useful: namely that it actually predicts as normal, things that other current behavioral models have to treat as special cases or try to handwave out of existence. It's not that PCT is "simpler" than stimulus-response or "action steps" models, it's that it's the simplest model that improves on our ability to make correct predictions about behavior.

Your argument seems to be, "but PCT requires us to gather more information in order to make those predictions". And my answer to that is, "So what? Once you have that information, you can make way better predictions." And it's not that you could just feed the same information into some other model and get similar predictions - the other models don't even tell you what experiments to perform to get yes-or-no answers.

To put it another way, to the extent that PCT requires you to be more precise or gather more information, it is doing so because that degree of actual uncertainty or lack of knowledge exists... and current experimental models disguise that lack of understanding behind statistics.

In contrast, to do a PCT experiment, you need to have a more-specific, falsifiable hypothesis: is the animal or person controlling quantity X or not? You may have to do more experiments in order to identify the correct "X", but you will actually know something real, rather than, "47% of rats appear to do Y in the presence of Z".
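
If it helps make this concrete, here's a toy version of the Test in code: disturb a candidate variable and check whether something acts to cancel the disturbance. The gains and the deviation measure are my own illustration, not Powers's actual experimental protocol.

```python
# Toy "Test for the Controlled Variable": apply random disturbances and
# measure how much deviation survives. Gains, step counts, and the
# deviation measure are illustrative assumptions.

import random

def mean_deviation(controlled, gain=2.0, steps=200, dt=0.1):
    random.seed(1)
    x, total = 0.0, 0.0
    for _ in range(steps):
        disturbance = random.uniform(-1, 1)
        action = gain * (0.0 - x) if controlled else 0.0   # cancel error, or do nothing
        x += (disturbance + action) * dt
        total += abs(x)
    return total / steps

print("controlled:   mean |deviation| =", round(mean_deviation(True), 3))
print("uncontrolled: mean |deviation| =", round(mean_deviation(False), 3))
# Small deviation despite the disturbances is evidence that x is being
# controlled; deviation near the uncontrolled baseline falsifies it.
```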

Comment author: SilasBarta 29 June 2009 06:20:27PM *  4 points [-]

That is, I'm not clear whether the steps you're describing include training on rotations or not.

But that's a pretty basic transformation, and if they could handle it, they would have done so. In any case, the rotation was very slight, and was only one of many tests I gave it. It didn't merely assign a slightly lower probability to the correct answer, it fell off the list entirely.

Considering how tiny the pictures are, this is not encouraging.

Your argument seems to be, "but PCT requires us to gather more information in order to make those predictions". And my answer to that is, "So what? Once you have that information, you can make way better predictions."

No, you misunderstand: my complaint is that PCT requires us to solve problems of equal or greater difficulty than the initial problem being solved. To better explain what I mean, I gave you the example with the literal tape-and-table Turing machine. Watch what happens when I make your same point, but in advocacy of the "Turing machine model of human behavior".

"I've discovered a great insight that helps unify my research and better assist people with their problems. It's to view them as a long, sectioned tape with a reader and state record, which [explanation of Turing machine]. This model is so useful because all I have to do is find out whether people have 1's rather than 0's in places 5000-5500 on their tape, and if they do, I just have to change state 4000 to erase rather than merely move state! This helps explain why people have trouble in their lives, because they don't erase bad memories."

See the problems with my version?

1) Any model of a human as a Turing machine would be way more complex than the phenomenon I'm trying to explain, so the insight it gives is imaginary.

2) Even given a working model, the mapping from any part of the TM model to the human is hideously complex.

3) "Finding someone's 1's and 0's" is near impossible because of the complexity of the mapping.

4) The analogy between erasing memories and erasure operations is only superficial, and not indicative of the model's strength.

5) Because I obviously could not have a TM model of humans, I'm not actually getting my insight from the model, but from somewhere else.

And points 1-5 are exactly what I claim is going on with you and PCT.

Nevertheless, I will confess I've gotten more interested in PCT, and it definitely looks scientific for the low-level systems. I've read the first two Byte magazine articles and reproduced their models in Matlab's Simulink, and I'm now reading the third, which introduces hierarchies.

My main dispute is with your insistence that you can already usefully apply real predictions from PCT at higher-level systems, where the parallels with feedback control systems appear very superficial and the conclusions seem to be reached with commonsense reasoning unaided by PCT.

Btw: my apologies, but somehow I accidentally deleted a part of my last reply before posting it, and my remark now resides only in my memory. It's related to the same point I just made. I'll put it here so you don't need to reply a second time to that post:

To phrase it in terms of a feedback controller, I have to identify -- again, in the rationalist sense, not just a pleasant-sounding label -- the reference being controlled. So, that means I have to specify all of the relevant things that affect my attraction level. Then, I have to find how the sensory data is transformed into a comparable format ... Only then am I able to set up a model that shows an error signal which can drive behavior.

I still don't see your point about this. Any model has to do the same thing, doesn't it? So how is this a flaw of PCT?

No, a model doesn't need to do the same thing. A purely neuronal model would not need to have the concept of "sexiness" and a comparator for it. Remember, the whole framing of a situation as a "romantic relationship" is just that: a framing that we have imposed on it to make sense of the world. It does not exist at lower levels, and so models need not be able to identify such complex "invariants".

Comment author: pjeby 29 June 2009 07:00:24PM 0 points [-]

I'm sorry, but I'm still utterly baffled by your comments, since your proposed "purely neuronal" model is more analogous to the Turing machine.

It sounds a bit like the part you're missing is the PCT experimental design philosophy, aka the Test -- a way of formulating and testing control hypotheses at arbitrary levels of the hierarchy. To test "sexiness" or some other high-level value, it is not necessary to completely specify all its lower-level components, unless of course the goal of your experiment is to identify those components.

We don't need, for example, to break down how object invariance happens to be able to do an experiment where a rat presses a bar! We assume the rat can identify the bar and determine whether it is currently pressed. The interesting part is what other things (like food, mate availability, shock-avoidance, whatever) that you can get the rat to control by pressing a bar. (At least, at higher levels.)

Comment author: SilasBarta 29 June 2009 10:22:18PM 2 points [-]

I'm sorry, but I'm still utterly baffled by your comments, since your proposed "purely neuronal" model is more analogous to the Turing machine.

So? I agree that the "purely neuronal" model would be really complex (though not as complex as the Turing machine would be). I just brought it up in order to show how a model doesn't "need to have a sexiness comparator anyway", so you do have to justify the simplicity gained when you posit that there is one.

It sounds a bit like the part you're missing is the PCT experimental design philosophy, aka the Test -- a way of formulating and testing control hypotheses at arbitrary levels of the hierarchy. To test "sexiness" or some other high-level value, it is not necessary to completely specify all its lower-level components, unless of course the goal of your experiment is to identify those components.

But if you don't specify all of the lower level components, then your controls explanation is just a restating of the problem, not a simplifying of it. The insight you claim you are getting from it is actually from your commonsense reasoning. Indeed, virtually every insight you "explain" by PCT, you got some other way.

We don't need, for example, to break down how object invariance happens to be able to do an experiment where a rat presses a bar!

Sure, but that's because you don't need to account for the rat's ability to identify the bar in a wide variety of contexts and transformations, which is the entire point of looking for invariants.

Comment author: pjeby 29 June 2009 10:44:29PM -1 points [-]

But if you don't specify all of the lower level components, then your controls explanation is just a restating of the problem, not a simplifying of it. The insight you claim you are getting from it is actually from your commonsense reasoning.

Kindly explain what "commonsense reasoning" explains the "symptom substitution" phenomenon in hypnosis, and in particular, explains why the duration of effect varies, using any model but PCT.

Comment author: jimrandomh 26 June 2009 09:08:54PM *  8 points [-]

PCT is the first thing I've encountered that seems like it can make real headway in understanding the brain. Many thanks to PJ, Kaj and the others who've written about it here.

I notice that all of the writings about controllers I've seen so far assume that the only operations controllers can perform on each other are to set a target, push up, and push down. However, there are two more natural operations with important consequences: damping and injecting noise. Similarly, a controller need not measure only the current value of other controllers, but can also measure their rate of change in the short term and their domain and instability in the long term.
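
Here's a rough sketch of the oscillation and damping part in code; all the coefficients are made up, and demonstrating noise finding genuinely new equilibria would need a nonlinear system, which this toy omits.

```python
# Two controllers with incompatible references (+1 and -1) fight over
# one variable, producing sustained oscillation ("stress"). A damping
# term quiets it; injected noise keeps it jittering. All coefficients
# are illustrative assumptions.

import random

def residual_oscillation(damping=0.0, noise=0.0, steps=400, dt=0.05):
    random.seed(2)
    x, v, tail = 0.0, 1.0, 0.0
    for i in range(steps):
        force = 3.0 * (1.0 - x) + 3.0 * (-1.0 - x)   # two opposed controllers
        v += (force - damping * v) * dt + noise * random.gauss(0.0, 1.0)
        x += v * dt
        if i >= steps - 100:
            tail += abs(v)                           # oscillation over the last stretch
    return tail / 100

for label, d, n in (("conflict, undamped", 0.0, 0.0),
                    ("conflict, damped  ", 2.0, 0.0),
                    ("damped + noise    ", 2.0, 0.1)):
    print(label, "-> residual oscillation:", round(residual_oscillation(d, n), 3))
```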

Stress seems like it might be represented by the global rate of change and oscillation in a large group of controllers. That would explain why conflicts between controllers induce stress, and why reorganizations that eliminate the conflict can reduce it. Focus meditation is probably best explained as globally damping a large set of normally-oscillating controllers at once, which would explain why it's calming.

Injecting noise into controllers allows them to find new equilibria, where they'll settle when the noise goes away. This seems like a likely purpose for REM sleep. The very-high activity levels recorded during REM using EEG and similar methods suggest that's exactly what it's doing. This would predict that getting more REM sleep would decrease stress, as the new equilibria would have fewer conflicts, and that is indeed the case.

If fMRI studies can confirm that the brain activity they measure corresponds to oscillating controllers, then combined with meditations that dampen and excite particular regions, this could be a powerful crowbar for exposing more of the mind.

Comment author: cousin_it 29 June 2009 09:56:54AM *  9 points [-]

So Vassar was right, we have reached a crisis. A self-help sales pitch with allegations of first-percentile utility right here on LW. This gets my downvote on good old Popperian grounds.

You say this stuff helps with akrasia? However hot your enthusiasm burns, you don't get to skip the "controlled study" part. Come back with citations. At this point you haven't even ruled out the placebo effect, for Bayes' sake!

Comment author: Kaj_Sotala 29 June 2009 04:15:00PM 14 points [-]

However hot your enthusiasm burns, you don't get to skip the "controlled study" part.

While I agree with some of what you're saying, it isn't like "cached thoughts" or many of Eliezer's other classics come with references to controlled studies, either. Like Robin Hanson pointed out in response to my own critique of evpsych:

claims can be "tested" via almost any connection they make with other claims that connect etc. to things we see. This is what intellectual exploration looks like.

No, Eby's article didn't have direct references to empirical work establishing the connection between PCT and akrasia, but it did build on enough existing work about PCT to make the connection plausible and easy to believe. If this were a peer-reviewed academic journal, that wouldn't be enough, and it'd have to be backed with experimental work. But I see no reason to require LW posts to adhere to the same standard as an academic journal - this is also a place to simply toss out interesting and plausible-seeming ideas, so that they can be discussed and examined and somebody can develop them further, up to the point of gathering that experimental evidence.

Comment author: Nick_Tarleton 29 June 2009 05:05:20PM 5 points [-]

However hot your enthusiasm burns, you don't get to skip the "controlled study" part.

To what end do you not get to skip it? Others may legitimately have lower standards for something being interesting, as Kaj said, or for a technique being worth a try.

Honestly, it sounds more like you're trying to take down Kaj for getting uppity and violating the norms of Science, than like you're trying to contribute to finding truth or usefulness.

Comment author: jimrandomh 29 June 2009 12:15:20PM *  6 points [-]

You say this stuff helps with akrasia? However hot your enthusiasm burns, you don't get to skip the "controlled study" part. Come back with citations. At this point you haven't even ruled out the placebo effect, for Bayes' sake!

The term "placebo effect" was coined to refer to phsychological effects intruding on non-psychological studies. In this case, since the desired effect is purely psychological, it's meaningless at best and misleading at worst. There is no self-help advice equivalent to a sugar pill. The closest thing to a sugar pill available is known-bad advice, and giving known-bad advice to a control group strikes me as decidedly unethical.

So, if you have an experimental procedure, go ahead and suggest it. Absent that, the only available data comes from self-experimentation and anecdotes.

Comment author: wedrifid 04 July 2009 10:50:42PM 4 points [-]

The closest thing to a sugar pill available is known-bad advice, and giving known-bad advice to a control group strikes me as decidedly unethical.

It would seem ethically acceptable to give groups advice selected from common social norms. For example, give one group some "Getting Things Done", another group nothing at all, a third some instruction on calculus (irrelevant, but still high-status attention and education), a fourth a motivational yelling-at from a drill sergeant, and a fifth PJ Eby's system.

Comment author: cousin_it 29 June 2009 12:53:30PM *  5 points [-]

What if you're wrong? What if the most effective anti-procrastination technique is tickling your left foot in exactly the right manner, and this works regardless of whether you believe in its efficacy, or even know about it? That (predicated on a correct theory of human motivation) is the kind of stuff we're looking for.

There is no self-help advice equivalent to a sugar pill. The closest thing to a sugar pill available is known-bad advice, and giving known-bad advice to a control group strikes me as decidedly unethical.

You're saying that there's no neutral (non-positive and non-negative) self-help advice? That's a pretty weird statement to make. Some advice is good, some is bad; why do you suspect a gap at zero? Failing all else, you could refrain from telling the subjects that the study is about self-control and anti-procrastination, just tell them to blindly follow some instructions and measure the effects covertly.

No, I have no experimental protocol ready yet, but have the impudence to insist that we as a community should create one or shut up.

Comment author: Nick_Tarleton 29 June 2009 04:47:36PM *  3 points [-]

That (predicated on a correct theory of human motivation) is the kind of stuff we're looking for.

You don't know what "we" are looking for. There is no one thing "we" are looking for. Some of us may be interested in plausible, attested-to self-help methods, even without experimental support.

Comment author: Vladimir_Nesov 29 June 2009 05:01:53PM 2 points [-]

Some of us may be interested in plausible, attested-to self-help methods, even without experimental support.

Without experimental support is fine. But without extraordinary support isn't. Something must make the plausibility of a particular thing stand out, because you can't be interested in all 1000 equally plausible things unless you devote all your time to that.

Comment author: wedrifid 04 July 2009 11:04:24PM 1 point [-]

No, I have no experimental protocol ready yet, but have the impudence to insist that we as a community should create one or shut up.

I certainly agree with the 'create one' part of what you're saying. Not so much the 'shut up'. Talking about the topic (and in so doing dragging all sorts of relevant knowledge out of the community) and self-experimenting both have their uses. Particularly inasmuch as they can tell us whether something is worth testing.

I do note that there are an awful lot of posts here (and on Overcoming Bias) which do not actually have controlled studies backing them. Is there a reason why Kaj's post requires a different standard to be acceptable? (And I ask that non-rhetorically; I can see reasons why you may reasonably do just that.)

Comment author: thomblake 29 June 2009 01:08:00PM 1 point [-]

The closest thing to a sugar pill available is known-bad advice,

  1. One example of a control group in a psychological study (can't find reference): researchers compared Freudian psychoanalysis to merely sitting there and listening.

  2. sugar has physiological effects, so you can't really assume a sugar pill is neutral with no side-effects

Comment author: wedrifid 04 July 2009 10:53:51PM 0 points [-]

sugar has physiological effects, so you can't really assume a sugar pill is neutral with no side-effects

And when you are testing the psychological effects of urea-based salts, you can't really assume lithium salts are neutral with no side-effects.

Comment author: Vladimir_Nesov 29 June 2009 02:02:46PM 0 points [-]

Is that how the real studies view the situation?

Comment author: thomblake 29 June 2009 05:32:52PM *  1 point [-]

This gets my downvote on good old Popperian grounds. ... you don't get to skip the "controlled study" part. Come back with citations.

I'm afraid you have Popper all turned around. According to Popper, one should make claims that are testable, and then it's the job of (usually other) scientists to perform experiments to try to tear them apart.

If you're a Popperian and you disagree, go ahead and perform the experiment. If your position is that the relevant claim isn't testable, that's a different complaint entirely.

Comment author: Annoyance 29 June 2009 05:38:05PM 4 points [-]

You're supposed to try to tear apart your own claims, first. Making random but testable assertions for no particular reason is not part of the methodology.

Comment author: cousin_it 29 June 2009 07:43:36PM *  0 points [-]

Yes, I'm a Popperian. Yes, people should make testable claims and other people should test them. That's how everything is supposed to work. All right so far.

As to the nature of my complaint... Here's a non-trivial question: how do we rigorously test Kaj and Eby's assertions about akrasia? I took Vassar's words very seriously and have been trying to think up an experiment that would (at least) properly control for the belief effect, but came up empty so far. If I manage to solve this problem, I'll make a toplevel post about that.

Comment author: wedrifid 04 July 2009 11:14:17PM 0 points [-]

Why is it so difficult? Even a head-to-head test between PJ's magic and an arbitrarily selected alternative would provide valuable information. Given the claims of, as you pointed out, first-percentile utility, just a couple of tests against arbitrary alternatives should be expected to show drastic differences, and at least tell us whether it is worth thinking harder.

Comment author: Jonathan_Graehl 27 June 2009 09:56:35AM 7 points [-]

I have an alternative theory for why some self-help methods that at first seem to work, eventually don't.

You were excited. You wanted to believe. You took joy in every confirmation. But either you couldn't maintain the effort, or the method became routine, and it seems you have rather less than you first thought.

The next revelation will change EVERYTHING.

Comment author: pjeby 26 June 2009 07:06:04PM 5 points [-]

My worry is that there might be some high-level circuit which is even now coming online to prevent me from using this technique - to make me forget about the whole thing, or to simply not use it even though I know of it.

You don't need to be quite that paranoid. PCT's model of "reorganization" presumes that it is associated with "intrinsic error" -- something we generally perceive as pain, stress, fear, or "pressure".

So if you are experiencing a conflict between controllers that will result in rewiring you, you should be able to notice it as a gradually increasing sense of pressure or stress, at which point you can become aware of the need to resolve a conflict in your goals.
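To make the conflict concrete, here is a toy simulation (my own sketch; all gains and references are made up for illustration, and this is not anything taken from Powers's books). Two proportional controllers act on the same variable but want it at different values; the variable settles in between, so both register a large, persistent error -- the "pressure" that drives reorganization:

    # Toy simulation: two controllers with conflicting references acting
    # on the same shared variable. All numbers are illustrative only.
    def step(value, controllers, dt=0.1):
        # Each controller computes error = reference - perception and
        # pushes the shared variable in proportion to its error.
        total_output = sum(gain * (ref - value) for ref, gain in controllers)
        return value + total_output * dt

    controllers = [(10.0, 1.0),   # controller A wants the variable at 10
                   (0.0, 1.0)]    # controller B wants it at 0

    value = 5.0
    for _ in range(100):
        value = step(value, controllers)

    # The variable settles between the references, so BOTH controllers
    # register a persistent error -- felt as stress or "pressure".
    print(value, [ref - value for ref, _ in controllers])  # 5.0, [5.0, -5.0]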

Remember: your controllers are not alien monsters taking you over; they are you, and reflect variables that, at some point, you considered important to control. They may have been set up erroneously or now be obsolete, but they are still yours, and letting go of them therefore requires actual reflection on whether you still need them, whether there is another way to control the variable, whether the variable should be slightly redefined, etc.

Comment author: Kaj_Sotala 27 June 2009 11:28:41PM 1 point [-]

Ah, yeah. After thinking it through for a while, I realized you were right. At bottom, it's a question of me (or whoever is suffering from the problem) not really wanting to change, and also not wanting to acknowledge this. Not a malevolent demon invisibly rewriting reality whenever things don't go the way it likes.

Comment author: pjeby 28 June 2009 12:02:29AM *  3 points [-]

At bottom, it's a question of me (or whoever is suffering from the problem) not really wanting to change, and also not wanting to acknowledge this.

It's not even that; it's just that unless you make connections between what you want and how to get it, you're just going to end up with whatever slop of a controller structure you previously managed to throw together well enough to just barely work... for your circumstances at the time.

And to get an improved control structure, you have to be willing to look at what's already there, not just throw in a bunch of optimistic new stuff and expect it to work. Most likely, things are the way they are for good reasons, and if your changes don't take those reasons into account, you will end up with conflicts and relapse.

Of course, as long as you take the relapse as feedback that something else in the control structure needs modification, you'll be fine. It's interpreting a relapse as meaning that you lack sufficient willpower, or something like that, that creates problems.

Comment author: djcb 27 June 2009 08:38:11AM *  8 points [-]

So we have:

  • A new metaphor to Finally Explain The Brain;

  • "While Eby provides few references and no peer-reviewed experimental work to support his case [...]"

  • A self-help book: "Thinking things Done(tm) The Effortless way to Start, Focus and finally Finish..." (really, I did not make this up).

I'd say some more skepticism is warranted.

Comment author: pjeby 27 June 2009 05:30:45PM 4 points [-]

A new metaphor to Finally Explain The Brain;

Not even remotely new; "Behavior: The Control Of Perception" was written in 1973, IIRC. And yes, it's cited by other research, and cites prior research that provides evidence for specific control systems in the brain and nervous system, at several of the levels proposed by Powers.

provides few references and no peer-reviewed experimental work

I don't, but "Behavior: The Control Of Perception" has them by the bucketload.

Comment deleted 27 June 2009 03:53:03PM [-]
Comment author: djcb 27 June 2009 07:53:26PM 2 points [-]

I wasn't saying the post wasn't useful - at least it brought my attention to Richard Kennaway's post on the interesting concept of explaining brain functions in terms of control systems.

But the thing is that every day brings us new theories which would have great value - if true. Most of them aren't. Given limited time, we cannot pursue each of them; we have to be selective.

So, when I open the PDF linked in the first line of the article... it is, to put it mildly, not up to LessWrong standards. Is that supposed to be 'more important than [...] 99% of what you or I have ever read'? It even ends in a sales pitch for books and workshops.

So while Control Theory may be useful for understanding the brain, this material is a distraction at best.

Comment deleted 27 June 2009 08:25:02PM [-]
Comment author: pjeby 27 June 2009 10:31:09PM 3 points [-]

There are lots of PCT textbooks out there; I wrote based on two of them (combined with my own prior knowledge): "Behavior: The Control Of Perception" by William T. Powers, and "Freedom From Stress" by Edward E. Ford. The first has math and citations by the bucketload; the latter is a layperson's guide to practical PCT applications, written by a psychologist.

Comment author: Yvain 28 June 2009 07:37:25PM 12 points [-]

Wait a second. There's a guy who writes textbooks about akrasia named Will Powers? That's great.

Comment author: Alicorn 28 June 2009 08:37:20PM 2 points [-]

It is in fact so great that I suspect it might be a pen name.

Comment author: RichardKennaway 07 July 2009 07:35:27AM 6 points [-]

It really is his name. I know him personally. (But he is informally known as Bill, not Will.)

Comment author: [deleted] 04 August 2009 04:35:23PM 1 point [-]

Can you tell him that many of the links on this page are broken? http://www.brainstorm-media.com/users/powers_w/

Comment author: pjeby 28 June 2009 09:25:27PM 1 point [-]

Wait a second. There's a guy who writes textbooks about akrasia named Will Powers? That's great.

"Behavior: The Control of Perception" has very little to say about akrasia actually. The chapter on "Conflict" does a wee bit, I suppose, but only from the perspective of what a PCT perspective predicts should happen when control systems are in conflict.

I haven't actually seen a PCT perspective on akrasia, procrastination, or willpower issues yet, apart from my own.

Comment author: Vladimir_Nesov 28 June 2009 09:33:01PM 2 points [-]

I haven't actually seen a PCT perspective on akrasia, procrastination, or willpower issues yet, apart from my own.

If I'm not mistaken, there has been a little cottage industry researching it for years. See e.g.
Albert Bandura & Edwin A. Locke (2003), "Negative Self-Efficacy and Goal Effects Revisited" (PDF) (it's a critique, but there are references as well).

Comment author: pjeby 28 June 2009 10:06:48PM 2 points [-]

Fascinating. However, it appears that both that paper and the papers it critiques were written by people who've utterly failed to understand PCT -- in particular, the insight that aggregate perceptions are measured over time... which means you can be positively motivated to achieve goals in order to maintain your high opinion of yourself -- and still have it be driven by an error signal.

That is, the mere passage of time without further achievement will cause an increasing amount of "error" to be registered, without requiring any special action.
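A minimal sketch of that idea (the decay rate, reference, and timescale are all invented for illustration): treat the perception as a decaying aggregate of recent achievement, and the error grows on its own as time passes.

    # Perceived "recent achievement" as a decaying aggregate: with no
    # new accomplishments, error = reference - perception grows by itself.
    reference = 1.0    # how accomplished you want to feel
    perception = 1.0   # start the day feeling on top of things
    decay = 0.9        # per-hour decay of the aggregate perception

    for hour in range(8):          # eight hours with no achievements
        perception *= decay        # the aggregate fades over time
        error = reference - perception
        print(f"hour {hour}: perception={perception:.2f}, error={error:.2f}")
    # error climbs from 0.10 toward 1.00 without any event occurring --
    # positive motivation to achieve, still driven by an error signal.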

Both this paper and the paper it critiques got this basic understanding wrong, as far as I can tell. (It also doesn't help that the authors of the paper you linked seem to think that materialistic reduction is a bad thing!)

Comment author: Vladimir_Nesov 27 June 2009 03:59:28PM 0 points [-]

And how's that at all important? The info isn't unique, so the progress in its development and application doesn't depend on whether you or I study it. If the fruits of whatever this thing is (which remains meaningless to me until I study it) prove valuable, I'll hear about them in good time. There is little value in studying it now.

Comment deleted 27 June 2009 04:20:56PM [-]
Comment author: Vladimir_Nesov 27 June 2009 06:05:25PM *  1 point [-]

I was literally asking what in particular makes this topic so important as to qualify it as "something that is more important than - in my estimate - 99% of what you or I have ever read" (and doubting that anything could).

You gave only a meta-reply, saying that if anything important was involved and I chose to ignore it, my strategy would not be a good one. But I don't know that it's important, and that is a relevant fact to consider when selecting a strategy. It's decision-making under uncertainty. Mine is a good strategy a priori: the 99 times out of 100 when the info is in fact dross, I make room for the sure shots.

Comment deleted 27 June 2009 08:33:44PM [-]
Comment author: Vladimir_Nesov 27 June 2009 08:53:12PM 0 points [-]

The info I have gives me good confidence in the belief that studying PCT won't help me with procrastination (as I mentioned, it has been out there for a long time without drastically visible applications of this sort; plus, I skimmed some highly-cited papers via Google Scholar, though I can't be confident in what I read, because I didn't grasp the outline of the field given how little I looked). The things I study and think about these days are good math, tools for better understanding of artificial intelligence. Not terribly good chances for making useful progress, but not woo either (unlike, say, a year ago, and much worse two years ago).

Comment author: SoullessAutomaton 27 June 2009 05:46:52PM 1 point [-]

Perhaps you could clarify why you feel it is urgent?

I agree that if this theory is correct it is of tremendous importance--but I'm not sure I see why it is more urgent than any other scientific theory.

The only thing I can see is the "understanding cognition in order to build AI" angle, and I'm not sure that understanding human cognition specifically is a required step in that.

Comment author: Vladimir_Nesov 27 June 2009 06:45:30PM *  0 points [-]

Secondly, acceptance of this kind of theory - if it is true - could take say 20-30 years by the scientific community. You will then hear about it in the media, as will anyone else with half a brain.

By the way, PJ Eby mentions a relevant fact: PCT was introduced more than 30 years ago.

Comment author: pjeby 27 June 2009 11:26:44PM 0 points [-]

From the second edition of B:CP, commenting on changes in the field since it was first written:

Gradually, the existence of closed causal loops is beginning to demand notice in every field of behavioral science and biology, in cell biology and neuroscience. They are simply everywhere, at every level of organization in every living system. The old concepts are disappearing, not fast enough to suit me but quite fast enough for the good of science, which must necessarily remain conservative.

Comment author: Vladimir_Nesov 28 June 2009 12:06:30AM *  0 points [-]

Sure, there are lots of mentions of the terms, in particular "control system", as something that keeps a certain process in place, guarding it against deviations, sometimes overreacting and swinging the process in the opposite direction, sometimes giving in under external influence. This is all well and good, but it's an irrelevant observation, one that has no influence on whether it's useful for me, personally, to get into this.

If it were feasible for me to develop a useful anti-procrastination technique based on this whatever, I'd expect such techniques to have been developed already, and their efficacy demonstrated. Given that no such thing conclusively exists (and people try, and this stuff is widely known!), I don't expect to succeed either.

I might get a chance if I studied the issue very carefully for a number of years, as that would place me in the same conditions as other people who have studied it carefully for many years (in which case I wouldn't expect to put too much effort into a particular toy classification, as I'd be solving the procrastination problem, not the PCT-death-spiral-strengthening problem), but that's a different game, irrelevant to the present question.

Comment author: pjeby 28 June 2009 12:22:35AM *  2 points [-]

That's not why I referenced the quote; it was to address the "so if it came out 30 years ago, why hasn't anything happened yet?" question. I.e., many things have happened: the general trend in the life sciences is towards discovering negative-feedback continuous control at all levels, from the sub-cellular level on up.

If it's feasible for me to develop a useful anti-procrastination technique

Actually, PCT shows why NO "anti-procrastination" technique that does not take a person's individual controller structure into account can be expected to work for very long, no matter how effective it is in the short run.

That is, in fact, the insight that Kaj's post (and the report I wrote that inspired it) are intended to convey: that PCT predicts there is no "silver bullet" solution to akrasia, without taking into account the specific subjective perceptual values an individual is controlling for in the relevant situations.

That is: no single, rote anti-procrastination technique will solve all problems for all people, nor even all the problems of one person, even if it completely solves one or more problems for one or more people.

This seems like an important prediction, when made by such a simple model!

(By contrast, I would say that Freudian drives and hypnotic "symptom substitution" models are not actually predicting anything, merely stating patterns of observation of the form, "People do X." PCT provides a coherent model for how people do it.)

Comment author: Vladimir_Nesov 28 June 2009 12:28:57AM *  0 points [-]

Rote, not-rote, it doesn't really matter. A technique is a recipe for making the effect happen, whatever the means. If no techniques exist, if it's shown that this interpretation doesn't give a technique, I'm not interested; end of story.

That's not why I referenced the quote; it was to address the "so if it came out 30 years ago, why hasn't anything happened yet?" question. I.e., many things have.

The exact quote is "If the fruits of whatever this thing is (which remains meaningless to me until I study it) prove valuable, I'll hear about them in good time", by which I meant applications to procrastination in particular.

Comment author: pjeby 28 June 2009 12:35:00AM 1 point [-]

A technique is a recipe for making the effect happen, whatever the means. If no techniques exist, if it's shown that this interpretation doesn't give a technique, I'm not interested, end of the story.

To most people, a "technique" or "recipe" would involve a fixed number of steps that are not case-specific or person-specific. At the point where the steps become variable (iterative or recursive), one would have an "algorithm" or "method" rather than a "recipe".

PCT effectively predicts that it is possible for such algorithms or methods to exist, but not techniques or recipes with a fixed number of steps for all cases.

That still strikes me as a significant prediction, since it allows one to narrow the field of techniques under consideration - if the recipe doesn't include a "repeat" or "loop until" component, it will not work for everything or everyone.
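To illustrate the shape of that prediction (the "conflicts" and function names below are invented placeholders, not an actual method): a recipe runs a fixed number of steps, while a method loops until the case-specific work is done.

    # A fixed "recipe" vs. an iterative "method". Conflicts are modeled
    # as items in a list; names and counts are invented for illustration.
    def recipe(conflicts):
        # Fixed number of steps: handles at most two conflicts,
        # no matter how many a particular person actually has.
        for _ in range(2):
            if conflicts:
                conflicts.pop()

    def method(conflicts):
        # The "loop until" shape: repeat until no conflict remains.
        while conflicts:
            conflicts.pop()

    a = ["work-vs-rest"]
    recipe(a); print(a)   # [] -- the recipe happened to be enough here

    b = ["work-vs-rest", "success-vs-safety", "novelty-vs-routine"]
    recipe(b); print(b)   # ['work-vs-rest'] -- one conflict left over
    method(b); print(b)   # [] -- only the looping shape finishes every case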

Comment author: Vladimir_Nesov 27 June 2009 09:23:45AM *  0 points [-]

Which is fishy, given that there is a large literature on interpreting behavior in terms of control systems. Just look at Google Scholar. But forming a representative sample of these works, with adequate understanding of what they are about, would take me a couple of days, I think, so I'd rather someone else more interested in the issue did that.

Comment author: djcb 27 June 2009 10:51:16AM *  2 points [-]

There is also a large literature on understanding the brain in terms of chaos theory, cellular automata, evolution... and all of those can shed light on some aspects. The same is definitely true for control systems theory.

The trouble comes when extrapolating this into a universal hammer, or up to the higher cognitive levels; the literature I could find seems to be mostly about robotics. Admittedly, I did not search very thoroughly, but then again, life is short, and if the poster wants to convince me, the burden of proof lies not on my side.

Comment author: jimrandomh 27 June 2009 03:17:46PM 4 points [-]

There is also a large literature on understanding the brain in terms of chaos theory, cellular automata, evolution... and all of those can shed light on some aspects.

This statement strikes me as false. Evolution says things about what the brain does, and what it ought to do, but nothing about how it does it. Chaos theory and cellular automata are completely unrelated pieces of math. Everything else is either at the abstraction level of neurons, or at the abstraction level of "people like cake"; PCT is the only model I am aware of which even attempts to bridge the gap in between.

life is short and if the poster wants to convince me, the burden of proof lies not on my side.

Reality does not care who has the burden of proof, and it does not always provide proof to either side.

Comment author: Kaj_Sotala 27 June 2009 11:20:08PM 3 points [-]

Evolution says things about what the brain does, and what it ought to do, but nothing about how it does it.

Neural Darwinism?

Comment author: Vladimir_Nesov 27 June 2009 11:51:44PM 0 points [-]

In name only, and probably woo.

Comment author: Vladimir_Nesov 27 June 2009 03:27:46PM *  3 points [-]

Reality does not care who has the burden of proof, and it does not always provide proof to either side.

If I'm only willing to expend a certain amount of effort on understanding a given aspect of reality, then I won't listen to any explanation that requires more effort than that. Preparing a good explanation that efficiently communicates a more accurate picture of that aspect of reality is the burden of proof in question - a quite reasonable requirement in this case, where the topic doesn't appear terribly important.

Comment author: djcb 29 June 2009 06:42:05PM 2 points [-]

I don't see anything 'false' about the statement. I simply listed some other fields that have been used to explain aspects of the brain as well, and noted that, while PCT may be a useful addition, I have seen no evidence yet that it is 'life-changing'.

I enjoy reading LW for all the bright people, the new ideas, and the things to learn. In this case, however, I was a bit disappointed, mainly because of the self-help fluff. There are enough places for that kind of material already, I think.

Of course, I cannot demand anything; it's just some (selfish?) concern for LW's S/N ratio.

Comment author: pjeby 27 June 2009 05:37:52PM 0 points [-]

PCT is the only model I am aware of which even attempts to bridge the gap in between.

FWIW, Hawkins's HTM model (described in "On Intelligence") makes another fair stab at it, and has many similar characteristics to some of PCT's mid-to-high layers, just from a slightly different perspective. HTM (or at least the "memory-prediction framework" aspect of it) also makes much more specific predictions about what we should expect to find at the neuroanatomy level for those layers.

OTOH, PCT makes more predictions about what we should see in large-scale human behavioral phenomena, and those predictions match my experience quite well.

Comment author: timtyler 26 June 2009 06:31:26PM 2 points [-]

This article is quite long. As general feedback, I won't usually bother reading long articles unless they summarise their content up front with an abstract, or something similar. This post starts with more of a teaser. A synopsis at the end would be good as well: tell me three times.

Comment author: Cyan 26 June 2009 06:40:29PM 1 point [-]

I don't mind the length; I second the "tell me three times".

Comment author: thoughtfulape 28 June 2009 05:16:17AM 2 points [-]

An observation: PJeby, if you really have a self-help product that does what it says on the tin for anyone who gives it a fair try, I would argue that the most efficient way of establishing credibility within the Less Wrong community would be to convince a highly regarded poster of that fact. To that end, I would suggest that offering your product to Eliezer Yudkowsky for free - or even paying him to try it, in the form of a donation to his Singularity Institute - would be more effective than the back-and-forth that I see here. It should be possible to establish a mutually satisfactory set of criteria for what constitutes 'really trying it' beforehand, to avoid subsequent accusations of bad faith.

Comment author: pjeby 28 June 2009 05:02:57PM 2 points [-]

I would argue that the most efficient way of establishing credibility among the Less wrong community would be to convince a highly regarded poster of that fact.

What makes you think that that's my goal?

Comment author: thoughtfulape 29 June 2009 01:49:46AM 2 points [-]

Pjeby: If your goal isn't to convince the Less Wrong community of the effectiveness of your methodology, then I am truly puzzled as to why you post here. If convincing others is not your goal, then what is?

Comment author: pjeby 29 June 2009 01:55:01AM 1 point [-]

If convincing others is not your goal, then what is?

Helping others.

Comment author: Alicorn 29 June 2009 02:37:10AM 5 points [-]

Do you expect anyone to benefit from your expertise if you can't convince them you have it?

Comment author: Cyan 28 June 2009 03:57:32PM 0 points [-]

pjeby will be more likely to notice this proposition if you post it as a reply to one of his comments, not one of mine.

Comment author: Vladimir_Nesov 28 June 2009 10:47:25AM *  -2 points [-]

Nope. The fact that you, personally, experienced winning a lottery doesn't support the theory that playing a lottery is a profitable enterprise.

Comment author: conchis 28 June 2009 01:31:00PM 2 points [-]

What? If the odds of the lottery are uncertain, and your sample size is actually one, then surely it should shift your estimate of profitability.

Obviously a larger sample is better, and the degree to which it shifts your estimate will depend on your prior, but to suggest the evidence would be worthless in this instance seems odd.
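As a toy illustration of how much a single observation can shift things (the prior, payout, and ticket price here are all invented for the example), consider a Beta-Bernoulli update on the win probability:

    # One observed win shifts an uncertain win-rate estimate noticeably,
    # even if it doesn't flip the sign of the expected value.
    prior_a, prior_b = 1.0, 99.0           # prior mean win rate: 1%
    post_a, post_b = prior_a + 1, prior_b  # update on a single win

    prior_mean = prior_a / (prior_a + prior_b)   # 0.0100
    post_mean = post_a / (post_a + post_b)       # 0.0198

    ticket, payout = 1.0, 50.0
    print(prior_mean * payout - ticket)  # -0.50 per ticket before
    print(post_mean * payout - ticket)   # -0.01 per ticket after: the
    # estimated win rate nearly doubled, though the gamble still looks
    # unprofitable.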

Comment author: Vladimir_Nesov 28 June 2009 02:18:28PM *  0 points [-]

It's impossible for playing a lottery to be profitable, both before you ever played it, and after you won a million dollars. The tenth decimal place doesn't really matter.

Comment author: Vladimir_Golovin 28 June 2009 03:02:57PM *  1 point [-]

It's impossible for playing a lottery to be profitable, both before you ever played it, and after you won a million dollars

I wonder what your definition of 'profit' is.

True story: when I was a child, I "invested" about 20 rubles in a slot machine. I won about 50 rubles that day and have never played slot machines (or any lottery at all) again. So:

  • Expenses: 20 rubles.
  • Income: 50 rubles.
  • Profit: 30 rubles.

Assuming that we're using a dictionary definition of the word 'profit', the entire 'series of transactions' with the slot machine was de-facto profitable for me.

Comment author: Vladimir_Nesov 28 June 2009 03:12:19PM *  1 point [-]

It's obvious that, to interpret my words correctly (as not being obviously wrong), you need to consider only big (cumulative) profit. And again, even if you did win a million dollars, that still doesn't count; it only counts if you can show that you were likely to win a million dollars (even if you didn't).

Comment author: conchis 28 June 2009 03:36:06PM *  2 points [-]

The only way I can make sense of your comment is to assume that you're defining the word 'lottery' to mean a gamble with negative expected value. In that case, your claim is tautologically correct, but, as far as I can tell, largely irrelevant to a situation such as this, where the point is that we don't know the expected value of the gamble and are trying to discover it by looking at evidence of its returns.

Comment author: Vladimir_Nesov 28 June 2009 03:48:40PM *  1 point [-]

That expected value is negative is a state of knowledge. We need careful studies to show whether a technique/medicine/etc. is effective precisely because, without such a study, our state of knowledge says that the expected value of the technique is negative. At the same time, we expect the new state of knowledge after the study to show either that the technique is useful or that it's not.

That's one of the traps of woo: you often can't efficiently demonstrate that it's effective, and through intuition (probably related to conservation of expected evidence) you insist that if you don't have a better method of showing its effectiveness, the best available method should be enough, because it's ridiculous to hold the claim to a higher standard of proof on one side than on the other. But you have to: prior belief plays its part, and the threshold for changing a decision may be too far away to cross by simple arguments. The intuitive thrust of the principle doesn't carry over to expected utility because of that threshold. It may well be that you have a technique for which there is a potential test that could demonstrate its effectiveness, but the test is unavailable - and without performing the test, the expected value of the technique remains negative.

Comment author: Alicorn 28 June 2009 03:34:30PM 1 point [-]

I don't think the principle of charity generally extends so far as to make people reinterpret you when you don't go to the trouble of phrasing your comments so they don't sound obviously wrong.

Comment author: Vladimir_Nesov 28 June 2009 03:55:12PM *  2 points [-]

If you see a claim that has one interpretation making it obviously wrong and another making it sensible, and you expect a sensible claim, it's a simple matter of robust communication to assume the sensible one and ignore the obviously wrong one. It's much more likely that the intended message behind the inapt textual transcription wasn't the obviously wrong one, and the content of communication is that unvoiced thought, not the text used to communicate it.

Comment author: pjeby 26 June 2009 07:18:58PM 1 point [-]

FWIW, the original article on Kaj's blog is formatted in a way that makes it much easier to read/skim than here.

Comment deleted 27 June 2009 08:51:03PM *  [-]
Comment author: pjeby 27 June 2009 11:07:59PM 2 points [-]
  1. See this comment.

  2. Given your statement #1, why would you want to be on a mailing list of "non-rational, non-high IQ" people? ;-)

(I'm joking, of course; I have many customers who read and enjoy OB and LW, though I don't think any have been top-level posters. Interestingly enough, my customers are so well-read that I usually receive more articles on recent research from them as emailed, "hey didja see"s, than I come across directly or see on LW!)

Comment deleted 27 June 2009 11:49:29PM *  [-]
Comment author: pjeby 27 June 2009 11:56:07PM 1 point [-]

Huh? More articles than you see on LW? That's absurd!

I usually see more articles about recent scientific research from my paying customers than I encounter via LW postings.

Or more precisely, and to be as fair as possible, I remember seeing more articles emailed to me from my customers about relevant research of interest to me than I remember discovering via LW... or such memories are at any rate easier to recall. Less absurd now? ;-)

Comment author: Vladimir_Nesov 28 June 2009 12:21:38AM *  3 points [-]

That's called "irony", hinting at the fact that not a whole lot of articles are cited on LW - too few to warrant mentioning it as a measure of article quantity. Routine research browsing makes such a quantity irrelevant; the only benefit might come from a mention of something you didn't think existed, because if you thought it existed, you'd be able to look it up yourself.

P.S. I deleted my comment (again) before seeing your reply; I thought it too mindless.

Comment author: Kaj_Sotala 27 June 2009 11:23:55PM 1 point [-]

I think I got on the mailing list here. Alternatively, it could've been a result of giving my e-mail addy on this page.

Comment author: timtyler 27 June 2009 10:05:26AM 1 point [-]

I found the article painful reading. Things like the section entitled "Desire minus Perception equals Energy" very rapidly make me switch off.

Comment author: jimrandomh 27 June 2009 04:58:16PM *  21 points [-]

I found the article painful reading.

I've heard this sort of statement repeatedly about pjeby's writing style, from different people, and I have a theory as to why. It's a timing pattern, which I will illustrate with some lorem ipsum:

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec pharetra varius nisl, quis interdum lectus porta vel...

Main point!

Nullam sit amet risus nibh. Suspendisse ut sapien et tellus semper scelerisque.

The main points are set off from the flow of the text by ellipses and paragraph breaks. This gives them much more force, but also brings to mind other works that use the same timing pattern. Most essays don't do this, or do it exactly once when introducing the thesis. On the other hand, television commercials and sales pitches use it routinely. It is possible that some people have built up an aversion to this particular timing pattern, by watching commercials and not wanting to be influenced by them. If that's the problem, then when those people read it they'll feel bothered by the text, but probably won't know why, and will attribute it to whatever minor flaws they happen to notice, even if unrelated. People who only watch DVDs and internet downloads, like me, won't be bothered, nor will people who developed different mechanisms for resisting commercials. This is similar to the "banner blindness" issue identified in website usability testing with eye trackers, where people refuse to look at anything that looks even remotely like a banner ad, even if it's not a banner ad but the very thing they're supposed to be looking for.

If this is true, then fixing the style issue is simply a matter of removing some of the italics, ellipses and paragraph breaks in editing. It should be possible to find out whether this is the problem by giving A/B tests to people who dislike your writing.

Comment author: Eliezer_Yudkowsky 27 June 2009 05:19:26PM 8 points [-]

This is a fascinating suggestion and might well be correct. Certainly, my inability to read more than a paragraph of PJ Eby's writing has something to do with it "sounding like a sales pitch". It may be a matter of word choice or even (gulp) content too, though.

Comment author: derekz 27 June 2009 08:38:40PM *  5 points [-]

I suppose for me it's the sort of breathless, enthusiastic presentation of the latest brainstorm as The Answer. Also, I believe I am biased against ideas that proceed from an assumption that our minds are simple.

Still, in a rationalist forum, if one is to dismiss the content of material based on the form of its presentation without being bothered by doing so, one must be pretty confident of the correlation between the two. Since a few people who seem pretty smart overall think there might be something useful here, I'll spend some time exploring it.

I am wondering about the proposed ease with which we can purposefully rewire control circuits. It is counterintuitive to me, given that "bad" ones (in me at least) do not appear to have popped up one afternoon but rather have been reinforced slowly over time.

If anybody does manage to achieve lasting results that seem like purposeful rewiring, I'm sure we'd all like to hear descriptions of your methods and experience.

Comment author: pjeby 27 June 2009 11:20:53PM 3 points [-]

I am wondering about the proposed ease with which we can purposefully rewire control circuits. It is counterintuitive to me, given that "bad" ones (in me at least) do not appear to have popped up one afternoon but rather have been reinforced slowly over time.

This is one place where PCT is not as enlightening without adding a smidge of HTM, or more precisely, the memory-prediction framework.

The MPF says that we match patterns as sequences of subpatterns: if one subpattern "A" is often followed by "B", our brain compresses this by creating (at a higher layer) a symbol that means "AB". However, in order for this to happen, the A->B correlation has to happen at a timescale where we can "notice" it. If "A" happens today and "B" tomorrow (for example), we are much less likely to notice!

Coming back to your question: most of our problematic controller structures are problematic at too long of a timescale for it to be easily detected (and extinguished). So PCT-based approaches to problem solving work by forcing the pieces together in short-term memory so that an A->B sequence fires off ... at which point you then experience an "aha", and change the intercontroller connections or reference levels. (Part of PCT theory is that the function of conscious awareness may well be to provide this sort of "debugging support" function, that would otherwise not exist.)
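Roughly, in code (a sketch of the chunking idea only; the window size and event stream are invented for illustration):

    # Chunk "A then B" into a higher-layer symbol "AB" only when B
    # follows A within a short time window.
    def chunk(events, window=2):
        # events: list of (time, symbol) pairs, in time order.
        chunks = set()
        for (t1, a), (t2, b) in zip(events, events[1:]):
            if t2 - t1 <= window:   # close enough in time to "notice"
                chunks.add(a + b)   # compress into a higher-layer symbol
        return chunks

    fast = [(0, "A"), (1, "B"), (10, "A"), (11, "B")]
    slow = [(0, "A"), (24, "B")]    # "A today, B tomorrow"
    print(chunk(fast))   # {'AB'} -- correlation noticed, chunk formed
    print(chunk(slow))   # set() -- too far apart; no chunk, no "aha"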

PCT also has some interesting things to say about reinforcement, by the way, that completely turn the standard ideas upside down, and I would really love to see some experiments done to confirm or refute them. In particular, it has a novel and compact explanation of why variable-schedule reinforcement works better for certain things, and why certain schedules produce variable or "superstitious" action patterns.

Comment author: derekz 27 June 2009 11:42:11PM 0 points [-]

Thank you for the detailed reply, I think I'll read the book and revisit your take on it afterward.

Comment author: pjeby 27 June 2009 04:04:44PM 2 points [-]

As SA says, I did not write the article for the LW audience. However, D - P = E is a straightforward colloquial reframing of PCT's "r - p = e" formula: reference signal minus perception signal equals error, which then gets multiplied by something and fed off to an effector.
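For the record, that loop in runnable form (the gain, reference, and the trivially simple "world" are arbitrary illustration choices):

    # reference - perception = error; error * gain -> effector output;
    # the output changes the world, which feeds back into perception.
    def control_loop(reference, perception, gain=0.5, steps=20):
        for _ in range(steps):
            error = reference - perception   # r - p = e
            output = gain * error            # "multiplied by something"
            perception += output             # effector acts; perception updates
        return perception

    print(control_loop(reference=10.0, perception=0.0))  # converges on 10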

Comment author: SoullessAutomaton 27 June 2009 03:16:45PM 1 point [-]

Obviously, it was written with a very different demographic in mind than LW. I imagine many of the people that article was written for would find the material here to be unfriendly, cryptic, and opaque.

This is probably a rational approach to marketing on P. J. Eby's part, but it does make it hard for some people here to read his other work.