On the topic of: Control theory

Yesterday, PJ Eby sent the subscribers of his mailing list a link to an article describing a control theory/mindhacking insight he'd had. With his permission, here's a summary of that article. I found it potentially life-changing. The article seeks to answer the question, "why is it that people often stumble upon great self-help techniques or productivity tips, find that they work great, and then after a short while the techniques either become ineffectual or the people just plain stop using them anyway?", but I found it to have far greater applicability than just that.

Richard Kennaway already mentioned the case of driving a car as an example where the human brain uses control systems, and Eby mentioned another: ask a friend to hold their arm out straight, and tell them that when you push down on their hand, they should lower their arm. What you'll generally find is that when you push down on their hand, the arm will spring back up before they lower it... and the harder you push down on the hand, the harder the arm will pop back up! That's because the control system in charge of maintaining the arm's position keeps trying to restore the old position, until the person consciously registers that the arm has been pushed and changes the setting.
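
For readers who like to see the loop written out, here's a minimal sketch (my own toy illustration, not anything from Eby's article) of the kind of negative-feedback controller being described: it holds a perceived position near a reference value, so an outside push produces an opposing correction rather than compliance, and the harder the push, the larger the correction needed to cancel it.

```python
# Toy proportional controller holding an "arm position" near a reference.
# All names and numbers are illustrative only.

def simulate_push(reference=1.0, gain=5.0, steps=50, dt=0.1):
    position = reference
    for t in range(steps):
        disturbance = -0.5 if 20 <= t < 30 else 0.0  # someone pushes the hand down for a while
        error = reference - position                  # controller compares perception to reference
        correction = gain * error                     # ...and acts to cancel the error
        position += (correction + disturbance) * dt
    return position

print(simulate_push())  # ends back near the reference despite the push
```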

Control circuits aren't used just for guiding physical sequences of actions, they also regulate the workings of our mind. A few hours before typing out a previous version of this post, I was starting to feel restless because I hadn't accomplished any work that morning. This has often happened to me in the past - if, at some point during the day, I haven't yet gotten started on doing anything, I begin to feel anxious and restless. In other words, in my brain there's a control circuit monitoring some estimate of "accomplishments today". If that value isn't high enough, it starts sending an error signal - creating a feeling of anxiety - in an attempt to bring that value into the desired range.

The problem with this is that more often than not, that anxiety doesn't push me into action. Instead I become paralyzed and incapable of getting anything started. Eby proposes that this is because of two things: one, the control circuits are dumb and don't realize what they're doing, so they may take counter-productive action; two, there may be several control circuits in the brain which are opposed to each other.

Here we come to the part about productivity techniques often not working. We also have higher-level controllers - control circuits influencing other control circuits. Eby's theory is that many of us have circuits that try to prevent us from doing the things we want to do. When they notice that we've found a method to actually accomplish something we've been struggling with for a long time, they start sending an error signal... causing neural reorganization, eventually ending up at a stage where we don't use those productivity techniques anymore, thus resolving the "crisis" of us actually accomplishing things. Moreover, these circuits are to a certain degree predictive, and they can start firing when they pick up on a behavior that merely might lead to success - that's when we hear about a great-sounding technique and for some reason never even try it. A higher-level circuit, or a lower-level one set up by the higher-level circuit, actively suppresses the "let's try that out" signals sent by the other circuits.
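
To make the "controllers controlling controllers" idea concrete, here is a rough sketch (my own illustration, not Eby's formalism, with made-up names and numbers) of a two-level hierarchy: the higher-level circuit never acts on the world directly, it acts by resetting the reference of a lower-level circuit - exactly the kind of arrangement that could quietly turn "avoid risk of public failure" into "stop putting effort into the project".

```python
# A toy two-level control hierarchy (illustrative only).

class Controller:
    def __init__(self, reference, gain):
        self.reference = reference  # the value this circuit tries to maintain
        self.gain = gain

    def output(self, perception):
        # Classic negative feedback: output is proportional to the error.
        return self.gain * (self.reference - perception)

# Lower level: "how much effort am I visibly putting into this project?"
effort = Controller(reference=1.0, gain=2.0)

# Higher level: "how much risk of public failure am I exposed to?"
# Its output doesn't move muscles; it adjusts the lower circuit's reference.
risk = Controller(reference=0.0, gain=1.0)

perceived_risk = 1.0    # a promising new technique raises the perceived stakes
perceived_effort = 0.2

effort.reference += risk.output(perceived_risk)  # higher loop lowers the effort goal...
print(effort.output(perceived_effort))           # ...negative output: the lower loop now pushes effort down
```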
But why would we have such self-sabotaging circuits? This ties into Eby's more general theory of the hazards of some kinds of self-motivation. He uses the example of a predator that has chased a human up a tree. The human, sitting on a tree branch, is in a safe position for now, so circuits developed to protect his life send signals telling him to stay there and not move until the danger is gone. Only if the predator actually starts climbing the tree does the danger become more urgent, and only then is the human pushed to actively flee.

Eby then extends this example into a social environment. In a primitive, tribal culture, being seen as useless to the tribe could easily be a death sentence, so we evolved mechanisms to avoid giving the impression of being useless. A good way to avoid showing your incompetence is to simply not do the things you're incompetent at, or things which you suspect you might be incompetent at and that have a great associated cost for failure. If it's important for your image within the tribe that you do not fail at something, then you attempt to avoid doing that.

You might already be seeing where this is leading. The things many of us procrastinate on are exactly the kinds of things that are important to us. We're deathly afraid of the consequences of what might happen if we fail at them, so there are powerful forces in play trying to make us not work on them at all. Unfortunately, for beings living in modern society, this behavior is maladaptive and buggy. It leads to us having control circuits which try to keep us unproductive, and when they pick up on things that might make us more productive, they start suppressing our use of those techniques.

Furthermore, the control circuits are stupid. They are occasionally capable of being somewhat predictive, but they are fundamentally just doing simple pattern-matching, oblivious to deeper subtleties. They may end up reacting to wholly wrong inputs. Consider the example of developing a phobia of a particular place, or a particular kind of environment. Something very bad happens to you in that place once, and as a result, a circuit is formed in your brain that's designed to keep you out of such situations in the future. Whenever it detects that you are in a place resembling the one where the incident happened, it starts sending error signals to get you away from there. The trouble is that this is a very crude and suboptimal way of keeping you out of trouble - if a car hit you while you were crossing the road, you might develop a phobia of crossing the road. Needless to say, this is more trouble than it's worth.

Another common example might be a musician learning to play an instrument. Musicians are taught to practice their instrument in a variety of postures, for otherwise a flutist who's always played his flute sitting down may discover he can't play it while standing up! The reason is that while practicing, he's been setting up a number of control circuits designed to guide his muscles the right way. Those control circuits have no innate knowledge of which aspects of posture are integral to a good performance, however. As a result, the flutist may end up with circuits that try to make sure he is sitting down whenever he plays.

This kind of malcalibration extends to higher-level circuits as well. Eby writes:

I know this now, because in the last month or so, I’ve been struggling to identify my “top-level” master control circuits.

And you know what I found they were controlling for? Things like:

* Being “good”
* Doing the “right” thing
* “Fairness”

But don’t be fooled by how harmless or even “good” these phrases sound.

Because, when I broke them down to what subcontrollers they were actually driving, it turned out that “being good” meant “do things for others while ignoring your own needs and being resentful”!

“Fairness”, meanwhile, meant, “accumulate resentment and injustices in order to be able to justify being selfish later.”

And “doing the right thing” translated to, “don’t do anything unless you can come up with a logical justification for why it’s right, so you don’t get in trouble, and no-one can criticize you.”

Ouch!

Now, if you look at that list, nowhere on there is something like, “go after what I really want and make it happen”. Actually doing anything – in fact, even deciding to do anything! – was entirely conditional on being able to justify my decisions as “fair” or “right” or “good”, within some extremely twisted definitions of those words!

So that's the crux of the issue. We are wired with a multitude of circuits designed for controlling our behavior... but because those circuits are often stupid, they end up in conflict with each other, and end up monitoring values that don't actually represent the things they ought to.

While Eby provides few references and no peer-reviewed experimental work to support his case that motivation is regulated in this way, I find it to mesh very well with everything I know about the brain. I took the phobia example from a textbook on biological psychology, while the flutist example came from a lecture by a neuroscientist emphasizing the stupidity of the cerebellum's control systems. Building on systems that were originally developed to control motion and hacking them to also control higher behavior is a very evolution-like thing to do. We already develop control systems for muscle behavior starting from the time when we first learn to control our bodies as infants, so it's very plausible that we'd also develop such mechanisms for all kinds of higher cognition. The mechanism by which they work is also fundamentally very simple, making it easy for new circuits to form: a person ends up in an unpleasant situation, causing an emotional subsystem to flood the whole brain with negative feedback, which leads the pattern recognizers that were active at the time to trigger the same kind of negative feedback the next time they pick up on the same input. (At its simplest, it's probably a case of simple Hebbian learning.)
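
To spell out that last parenthetical, here's a toy sketch of the Hebbian story (my simplification for illustration, not a claim about actual neural circuitry): features that were active during a strongly negative event get their connections to an "alarm" unit strengthened, so similar-looking inputs later re-trigger the alarm.

```python
import numpy as np

eta = 0.5                      # learning rate
weights = np.zeros(4)          # connections from 4 input features to an "alarm" unit

bad_situation = np.array([1.0, 1.0, 0.0, 0.0])  # features present when the bad thing happened
alarm = 1.0                                     # strong negative feedback flooding the system

# Hebbian update: strengthen weights between co-active inputs and the alarm output.
weights += eta * bad_situation * alarm

similar_place = np.array([1.0, 0.8, 0.0, 0.0])
unrelated_place = np.array([0.0, 0.0, 1.0, 1.0])
print(weights @ similar_place)    # 0.9: the newly formed "phobia" circuit fires
print(weights @ unrelated_place)  # 0.0: no reaction to an unrelated situation
```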

Furthermore, since reading his text, I have noticed several things in myself which could only be described as control circuits. After reading Overcoming Bias and Less Wrong for a long time, I've found myself noticing whenever I have a train of thought that seems indicative of certain kinds of cognitive biases. In retrospect, that is probably a control circuit that has developed to detect the general appearance of a biased thought and to alert me about it. The anxiety circuit I already mentioned. A closely related circuit is one that causes me to need plenty of time to accomplish whatever it is that I'm doing - if I only have a couple of hours before a deadline, I often freeze up and end up unable to do anything. This leads to me being at my most productive in the mornings, when I have the feeling of having the whole day to myself and not being in any rush. That's easily interpreted as a circuit that looks at the remaining time and sends an alarm when the time runs low. Actually, the circuit in question is probably even stupider than that, as the feeling of not having any time is often tied only to what the clock says, not to the time when I'll be going to bed. If I get up at 2 PM and go to bed at 4 AM, I have just as much time as if I'd gotten up at 9 AM and gone to bed at 11 PM, but the circuit in question doesn't recognize this.

So, what can we do about conflicting circuits? Simply recognizing them for what they are is already a big step forward, one which I feel has already helped me overcome some of their effects. Some of them can probably be dismantled simply by identifying them, working out their purpose and deciding that it's unnecessary. (I suspect that this process might actually set up new circuits whose function is to counteract the signals sent by the harmful ones. Maybe. I'm not very sure of what the actual neural mechanism might be.) Eby writes:

So, you want to build Desire and Awareness by tuning in to the right qualities to perceive. Then, you need to eliminate any conflicts that come up.

Now, a lot of times, you can do this by simple negotiation with yourself. Just sit and write down all your objections or issues about something, and then go through them one at a time, to figure out how you can either work around the problem, or find another way to get your other needs met.

Of course, you have to enter this process in good faith; if you judge yourself for say, wanting lots of chocolate, and decide that you shouldn’t want it, that’s not going to work.

But it might work, to be willing to give up chocolate for a while, in order to lose weight. The key is that you need to actually imagine what it would be like to give it up, and then find out whether you can be “okay” with that.

Now, sadly, about 97% of the people who read this are going to take that last paragraph and go, “yeah, sure, I’m going to give up [whatever]”, but without actually considering what it would be like to do so.

And those people are going to fail.

And I kind of debated whether or not I should even mention this method here, because frankly, I don’t trust most people’s controllers any further than I can reprogram them (so to speak).

See, I know from bitter experience that my own controllers for things like “being smart” used to make me rationalize this sort of thing, skipping the actual mental work involved in a technique, because “clearly I’m smart enough not to need to do all that.”

And so I’d assume that just “thinking” about it was enough, without really going through the mental experience needed to make it work. So, most of the people who read this, are going to take that paragraph above where I explained the deep, dark, master-level mindhacking secret, and find a way to ignore it.

They’re going to say things like, “Is that all?” “Oh, I already knew that.” And they’re not going to really sit down and consider all the things that might conflict with what they say they want.

If they want to be wealthy, for example, they’re almost certainly not going to sit down and consider whether they’ll lose their friends by doing so, or end up having strained family relations. They’re not considering whether they’re going to feel guilty for making a lot of money when other people in the world don’t have any, or for doing it easily when other people are working so hard.

They’re not going to consider whether being wealthy or fit or confident will make them like the people they hate, or whether maybe they’re really only afraid of being broke!

But all of them will read everything I’ve just written, and assume it doesn’t apply to them, or that they’ve already taken all that into account.

Only they haven’t.

Because if they had, they would have already changed.

That's a pretty powerful reminder not to ignore your controllers. While you've been reading this, some controller that tries to keep you from doing things has probably already picked up on the excitement some emotional system might now be generating... meaning that you might be about to stumble upon a technique that might actually make you more productive... causing signals to be sent out to suppress attempts to even try it out. Simply acknowledging its existence isn't going to be enough - you need to actively think things out, identify the different controllers within you, and dismantle them.

I feel I've managed to avoid the first pitfall, that of not doing anything even after becoming aware of the problem. I've been actively looking at different control circuits, some of which have plagued me for quite a long time, and I at least seem to have managed to overcome them. My worry is that there might be some high-level circuit which is even now coming online to prevent me from using this technique - to make me forget about the whole thing, or to simply not use it even though I know of it. It feels like the best way to counteract that is to try to consciously set up new circuits dedicated to the task of monitoring for new circuits and alerting me to their presence. In other words, keep actively looking for anything that might be a mental control circuit, and teach myself to notice them.

(And now, Eby, please post any kind of comment here so that we can vote it up and give you your fair share of this post's karma. :))

159 comments

Let me clarify where I do and do not agree with PJ Eby, since we've been involved in some heated arguments that often seem to go nowhere.

I accept that the methods described here could work, and intend to try them myself.

I accept that all of the mechanisms involved in behavior can be restated in the form of a network of feedback loops (or a computer program, etc.).

I accept that Eby is acting as a perfect Bayesian when he says "Liar!" in response to those who claim they "gave it a try" and it didn't work. To the extent that he has a model, that is what it obligates him to believe, and Eliezer Yudkowsky has extensively argued that you should find yourself questioning the data when it conflicts with your model.

So what's the problem, then?

I do not accept that these predictions actually follow from, or were achieved through the insights of, viewing humans as feedback control systems. The explanations here for behavioral phenomena look like commonsense reasoning that is being shoehorned into controls terminology by clever relabeling. (ETA: Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're a... (read more)

Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're afraid of something, for example?

You don't. As it says in the report I wrote, I've been teaching most of these things for years.

I ran the standard check to see if a model is actually constraining expectations by asking pjeby what it rules out, and, more importantly, why PCT says you shouldn't observe such a phenomenon. And I still don't have an answer.

And I'm still confused by what it is you expect to see, that I haven't already answered. AFAIK, PCT doesn't say that no behavior is ever generated except by control systems, it just says that control systems are an excellent model for describing how living systems generate behavior, and that we can make more accurate predictions about how a living system will behave if we know what variables it's controlling for.

Since the full PCT model is Turing complete, what is it exactly that you are asking be "ruled out"?

Personally, I'm more interested in the things PCT rules in -- that is, the things it predicts that other models don't, such as the different timescales for "giving up" and s... (read more)

3SilasBarta15y
Then why does it get you there faster? If someone had long ago proposed to you that the body operates as a network of negative feedback controllers, would that make you more quickly reach the conclusion that "I should rationally think through the reasons I'm afraid of something", as opposed to, say, blindly reflecting on your own conscious experience?

Yes, and that's quite a monster "if". So far, I haven't seen you identify -- in the rationalist sense -- a "variable being controlled". That requires you to be able to explain it in terms "so simple a computer could understand it". So far, no one can do that for any high-level behavior. For example, judging the sexiness of another person.

To phrase it in terms of a feedback controller, I have to identify -- again, in the rationalist sense, not just a pleasant-sounding label -- the reference being controlled. So, that means I have to specify all of the relevant things that affect my attraction level. Then, I have to find how the sensory data is transformed into a comparable format. Only then am I able to set up a model that shows an error signal which can drive behavior.

(Aside: note that the above can be rephrased as saying that you need to find the person's "invariants" of sexiness, i.e., the features that appear the same in the "sexiness" dimension, even despite arbitrary transformations applied to the sense data, like rotation, scaling, environment changes, etc. Not surprisingly, the Hawkins HTM model you love so much also appeals to "invariants" for their explanatory power, and yet leaves the whole concept as an unhelpful black box! At least, that's what I got from reading On Intelligence.)

But all of these tasks are just as difficult as the original problem! Now I'm running into "inferential distance frustration", as I'd have to explain the basics of technical explanation, what it means to really explain something, etc., i.e. all those posts by Eliezer Yudkowsky, starting back from his overcomingbias.com posts. Bu
3pjeby15y
Of course not; the paths between Theory and Practice are not symmetrical. In the context of my work, the usefulness of a theory like this is that it provides me with a metaphorical framework to connect practical knowledge to. Instead of teaching all the dozens of principles, ideas, aphorisms, etc. that I have as isolated units, being able to link each one to a central metaphor of controllers, levels, etc. makes communication and motivation easier.

To be perfectly fair, PCT would be useful for this purpose even if it were not a true or semi-true theory. However, all else being equal, I'd rather have something true that fits my practical observations, and PCT fits more of my practical observations than anything else. And I believe it is, in fact, true. I'm merely stating the above so as to make it clear that if I thought it were only a metaphor, I would have no problem with saying, "it's just a metaphor that aids education and motivation in applying certain practical observations by giving them a common conceptual framework."

I still don't see your point about this. Any model has to do the same thing, doesn't it? So how is this a flaw of PCT?

And on a practical level, I just finished a webinar where we spent time breaking down people's references for things like "Preparedness" and "Being a good father" and showing how to establish controllable perceptions for these things that could be honored in more than the breach. (For example, if your perceptual definition of "being a good father" is based solely on the actions of your kids rather than your actions, then you are in for some pain!)

IOW, I don't actually see a lot of problem with reference breakdowns and even reference design, at the high-level applications for which I'm using the theory. Would I have a hard time defining "length" or "color" in terms of its referents? Sure, but I don't really care. Powers does a good job of explaining what's currently known about such invariants, and pointing to what research still
4SilasBarta15y
Actually, yes, I have downloaded their demos, expecting to be wowed, but then fell over laughing. Specifically, this one. It claims to be able to learn to recognize simple black/white 16x16-pixel images using HTM and saccading the images around. But then I gave it a spin, had it learn the images, and then tested it by drawing one of the figures with a very, very slight rotation, which completely screwed up its ability to identify it. Not impressive.

No, what you have shown is that you learned of PCT and HTM, and then you believe you improved in your work. As per my two previous comments in this thread, I can (edited phrasing) accept both of those claims and still doubt the more interesting claims, specifically, that the model actually did help, rather than you merely thinking it did because you could rephrase your intuitive, commonsense reasoning in the model's terminology. I could also doubt that your ability to help people improved.

Please pay attention. I am R-ingTFM, and I even complained that one of the Powers demos understated the strength of their point about feedback control. I already told you I'm going to try your advice. I've read several of the pdfs you've linked, including TheSelfHelpMyth.pdf linked here, and will read several more, and probably even buy Behavior. (Though I couldn't get the freebie you mentioned to work because the website crapped out after I entered my info.) I am making every effort to consider this model.

But it is simply not acceptable of you to act like the only alternatives are to repeat hundreds of pages, or speak in dumbed-down blackbox terminology. You can e.g. summarize the chain of useful, critical insights that get me from "it's a network of feedback controllers" to a useful model, so I know which part I'd be skeptical of and which parts assume the solution of problems I know to be unsolved, so I know where to direct my attention.
2pjeby15y
I'm not clear on whether you took this bit from their docs into account: That is, I'm not clear whether the steps you're describing include training on rotations or not.

No, I gave you one specific prediction that PCT makes: higher-level controllers operate over longer time scales than low-level ones. This prediction is not a part of any other model I know of. Do you know of another model that makes this prediction? I only know of models that basically say that symptom substitution takes time, with no explanation of how it occurs. This doesn't have anything to do with whether I believe that prediction to be useful; the prediction is still there, the observation that people do it is still there, and the lack of explanation of that fact is still there, even if you remove me from the picture entirely.

I can only do that if I understand specifically what it is you don't get -- and I still don't. For example, I don't see why the existence of unsolved problems is a problem, or even remotely relevant, if all the other models we have have to make the same assumption.

From my POV, you are ignoring the things that make PCT useful: namely that it actually predicts as normal, things that other current behavioral models have to treat as special cases or try to handwave out of existence. It's not that PCT is "simpler" than stimulus-response or "action steps" models, it's that it's the simplest model that improves on our ability to make correct predictions about behavior.

Your argument seems to be, "but PCT requires us to gather more information in order to make those predictions". And my answer to that is, "So what? Once you have that information, you can make way better predictions." And it's not that you could just feed the same information into some other model and get similar predictions - the other models don't even tell you what experiments to perform to get yes-or-no answers. To put it another way, to the extent that PCT requires you to be more precise or gather mo
4SilasBarta15y
But that's a pretty basic transformation, and if they could handle it, they would have done so. In any case, the rotation was very slight, and was only one of many tests I gave it. It didn't merely assign a slightly lower probability to the correct answer, it fell off the list entirely. Considering how tiny the pictures are, this is not encouraging.

No, you misunderstand: my complaint is that PCT requires us to solve problems of equal or greater difficulty than the initial problem being solved. To better explain what I mean, I gave you the example with the literal tape-and-table Turing machine. Watch what happens when I make your same point, but in advocacy of the "Turing machine model of human behavior":

"I've discovered a great insight that helps unify my research and better assist people with their problems. It's to view them as a long, sectioned tape with a reader and state record, which [explanation of Turing machine]. This model is so useful because all I have to do is find out whether people have 1's rather than 0's in places 5000-5500 on their tape, and if they do, I just have to change state 4000 to erase rather than merely move state! This helps explain why people have trouble in their lives, because they don't erase bad memories."

See the problems with my version?

1) Any model of a human as a Turing machine would be way more complex than the phenomenon I'm trying to explain, so the insight it gives is imaginary.
2) Even given a working model, the mapping from any part of the TM model to the human is hideously complex.
3) "Finding someone's 1's and 0's" is near impossible because of the complexity of the mapping.
4) The analogy between erasing memories and erasure operations is only superficial, and not indicative of the model's strength.
5) Because I obviously could not have a TM model of humans, I'm not actually getting my insight from the model, but from somewhere else.

And points 1-5 are exactly what I claim is going on with you and PCT. Nevert
0pjeby15y
I'm sorry, but I'm still utterly baffled by your comments, since your proposed "purely neuronal" model is more analogous to the Turing machine. It sounds a bit like the part you're missing is the PCT experimental design philosophy, aka the Test -- a way of formulating and testing control hypotheses at arbitrary levels of the hierarchy. To test "sexiness" or some other high-level value, it is not necessary to completely specify all its lower-level components, unless of course the goal of your experiment is to identify those components. We don't need, for example, to break down how object invariance happens to be able to do an experiment where a rat presses a bar! We assume the rat can identify the bar and determine whether it is currently pressed. The interesting part is what other things (like food, mate availability, shock-avoidance, whatever) that you can get the rat to control by pressing a bar. (At least, at higher levels.)
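
To sketch roughly what the Test amounts to (a schematic toy of my own, not an actual experimental protocol): apply a disturbance to a candidate controlled variable; if the organism acts so that the variable barely moves, it is probably controlled, and if the variable just drifts with the disturbance, it isn't.

```python
# Toy version of "the Test": compare how a variable responds to a steady
# disturbance with and without a control loop opposing it. Illustrative only.

def run_test(is_controlled, disturbance=1.0, gain=10.0):
    variable = 0.0
    for _ in range(100):
        correction = -gain * variable if is_controlled else 0.0
        variable += 0.1 * (disturbance + correction)
    return variable

print(run_test(is_controlled=True))    # stays small: the disturbance is almost entirely cancelled
print(run_test(is_controlled=False))   # drifts freely: nothing is controlling it
```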
2SilasBarta15y
So? I agree that the "purely neuronal" model would be really complex (though not as complex as the Turing machine would be). I just brought it up in order to show how a model doesn't "need to have a sexiness comparator anyway", so you do have to justify the simplicity gained when you posit that there is one. But if you don't specify all of the lower level components, then your controls explanation is just a restating of the problem, not a simplifying of it. The insight you claim you are getting from it is actually from your commonsense reasoning. Indeed, virtually every insight you "explain" by PCT, you got some other way. Sure, but that's because you don't need to account for the rat's ability to identify the bar in a wide variety of contexts and transformations, which is the entire point of looking for invariants.
-2pjeby15y
Kindly explain what "commonsense reasoning" explains the "symptom substitution" phenomenon in hypnosis, and in particular, explains why the duration of effect varies, using any model but PCT.
1SilasBarta15y
While I can look up "symptom substitution", I'll need to know more specifically what you mean by this. But I'd have to be convinced that PCT explains it first in a way that doesn't smuggle in your commonsense reasoning. Now, if you want examples of how commonsense reasoning leads to the same conclusions that are provided as examples of the success of PCT, that I already have by the boatload. This whole top-level post is an example of using commonsense reasoning but attributing it to PCT. For example, long before I was aware of the concept of a control system, or even feedback (as such), I handled my fears (as does virtually everyone else) by thinking through what exactly it is about the feared thing that worries me. Furthermore, it is obvious to most people that if you believe obstacles X, Y, and Z are keeping you from pursuing goal G, you should think up ways to overcome X, Y, and Z, and yet Kaj here presents that as something derived from PCT.
-1pjeby15y
Specifically, find a "commonsense" explanation that explains why symptom substitution takes time to occur, without reference to PCT's notion of a perception averaged over time.
0CronoDAS15y
Googling "symptom substitution" lead me to a journal article that argued that people have tried and failed to find evidence that it happens...
0pjeby15y
That's Freudian symptom substitution, and in any case, the article is splitting hairs: it says that if you stop a child sucking its thumb, and it finds some other way to get its needs met, then that doesn't count as "symptom substitution". (IOW, the authors of the paper more or less defined it into nonexistence, such that if it exists and makes sense, it's not symptom substitution!) Also, the paper raises the same objection to the Freudian model of symptom substitution that I do: namely, that there is no explanation of the time frame factor. In contrast, PCT unifies the cases both ruled-in and ruled out by this paper, and offers a better explanation for the varying time frame issue, in that the time frame is governed by the perceptual decay of the controlled variable.
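
As a rough numerical illustration of what "perceptual decay" buys you here (a toy of mine, not something out of Powers): the same step change in input is registered almost immediately by a quickly-updated, low-level perception, but only much later by a slowly-averaged, higher-level one, which is the kind of timescale difference at issue.

```python
# Leaky-integrator perceptions with different decay rates (illustrative only).

def steps_until_noticed(decay, threshold=0.9, signal=1.0):
    """Count how many steps a time-averaged perception needs to reach 90% of a step change."""
    p, steps = 0.0, 0
    while p < threshold * signal:
        p += (1.0 - decay) * (signal - p)  # slow exponential averaging of the input
        steps += 1
    return steps

print(steps_until_noticed(decay=0.5))    # fast, low-level perception: a handful of steps
print(steps_until_noticed(decay=0.99))   # slow, high-level perception: hundreds of steps
```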

I purchased Behavior: The Control of Perception and am reading it. Unless someone else does so first, I plan to write a review of it for LW. A key point is that at least part of PCT is actually right. The lowest level controllers, such as those controlling tendon tension, are verifiably there. So far as I can tell, real physical structures corresponding pretty closely to second and third level controllers also exist and have been pointed to by science. I haven't gotten further than this yet, but teasers within the book indicate that (when the book was written, of course) there is good evidence that some fifth level control systems exist in particular places in the brain, and thus fourth level somewhere. Whether it's control systems (or something closish to them) all the way up? Dunno. But the first couple levels, not yet into the realm of traditional psychology or whatnot, those definitely exist in humans. And animals of course. The description of the experiment with scattershot electrodes in hundreds of cats was pretty interesting. :)

That said, you're absolutely right, there should be some definite empirical implications of the theory. For example due to the varying length of the ... (read more)

6Eliezer Yudkowsky15y
Please add PCT to the wiki as Jargon and link there when this term, whatever it means, is used for the first time in a thread. It is not in the first 10 Google hits.
1SoullessAutomaton15y
It seems jimrandomh has taken the time to do so; the wikipedia article should be helpful. In defense of people using the acronym without definition, though, it seemed fairly obvious if you look at the wikipedia disambig page for the acronym in question.
4SoullessAutomaton15y
As a general-purpose prior assumption for systems designed by evolutionary processes, reusing or adapting existing systems is far more likely than spontaneous creation of new systems. Thus, if it can be demonstrated that a model accurately represents low-level hierarchical systems, this is reasonably good evidence in favor of that model applying all the way to the top levels as opposed to other models with similar explanatory power for said upper levels.

My worry is that there might be some high-level circuit which is even now coming online to prevent me from using this technique - to make me forget about the whole thing, or to simply not use it even though I know of it.

You don't need to be quite that paranoid. PCT's model of "reorganization" presumes that it is associated with "intrinsic error" -- something we generally perceive as pain, stress, fear, or "pressure".

So if you are experiencing a conflict between controllers that will result in rewiring you, you should be able to notice it as a gradually increasing sense of pressure or stress, at which point you can become aware of the need to resolve a conflict in your goals.

Remember: your controllers are not alien monsters taking you over; they are you, and reflect variables that at some point, you considered important to control for. They may have been set up erroneously or now be obsolete, but they are still yours, and to let go of them therefore requires actual reflection on whether you still need them, whether there is another way to control the variable, whether the variable should be slightly redefined, etc.

1Kaj_Sotala15y
Ah, yeah. After thinking it through for a while, I realized you were right. At its bottom, it's a question of me (or whoever is suffering from the problem) not really wanting to change and also not wanting to acknowledge this. Not a malevolent demon invisibly rewriting reality whenever things don't go the way it likes.
3pjeby15y
It's not even that; it's just that unless you make connections between what you want and how to get it, you're just going to end up with whatever slop of a controller structure you previously managed to throw together well enough to just barely work... for your circumstances at the time. And to get an improved control structure, you have to be willing to look at what's already there, not just throw in a bunch of optimistic new stuff and expect it to work. Most likely, things are the way they are for good reasons, and if your changes don't take those reasons into account, you will end up with conflicts and relapse. Of course, as long as you take the relapse as feedback that something else in the control structure needs modification, you'll be fine. It's the interpretation of a relapse as meaning that you lack sufficient willpower or something like that, that creates problems.

I have an alternative theory for why some self-help methods that at first seem to work, eventually don't.

You were excited. You wanted to believe. You took joy in every confirmation. But either you couldn't maintain the effort, or the method became routine, and it seems you have rather less than you first thought.

The next revelation will change EVERYTHING.

PCT is the first thing I've encountered that seems like it can make real headway in understanding the brain. Many thanks to PJ, Kaj and the others who've written about it here.

I notice that all of the writings about controllers I've seen so far assume that the only operations controllers can perform on each other are to set a target, push up and push down. However, there are two more natural operations with important consequences: damping and injecting noise. Conversely, a controller need not measure only the current value of other controllers, but can als... (read more)

So Vassar was right, we have reached a crisis. A self-help sales pitch with allegations of first-percentile utility right here on LW. This gets my downvote on good old Popperian grounds.

You say this stuff helps with akrasia? However hot your enthusiasm burns, you don't get to skip the "controlled study" part. Come back with citations. At this point you haven't even ruled out the placebo effect, for Bayes' sake!

However hot your enthusiasm burns, you don't get to skip the "controlled study" part.

While I agree with some of what you're saying, it isn't like "cached thoughts" or many of Eliezer's other classics come with references to controlled studies, either. Like Robin Hanson pointed out in response to my own critique of evpsych:

claims can be "tested" via almost any connection they make with other claims that connect etc. to things we see. This is what intellectual exploration looks like.

No, Eby's article didn't have direct references to empirical work establishing the connection between PCT and akrasia, but it did build on enough existing work about PCT to make the connection plausible and easy to believe. If this were a peer-reviewed academic journal, that wouldn't be enough, and it'd have to be backed with experimental work. But I see no reason to require LW posts to adhere to the same standard as an academic journal - this is also a place to simply toss out interesting and plausible-seeming ideas, so that they can be discussed and examined and somebody can develop them further, up to the point of gathering that experimental evidence.

8jimrandomh15y
The term "placebo effect" was coined to refer to phsychological effects intruding on non-psychological studies. In this case, since the desired effect is purely psychological, it's meaningless at best and misleading at worst. There is no self-help advice equivalent to a sugar pill. The closest thing to a sugar pill available is known-bad advice, and giving known-bad advice to a control group strikes me as decidedly unethical. So, if you have an experimental procedure, go ahead and suggest it. Absent that, the only available data comes from self-experimentation and anecdotes.
7cousin_it15y
What if you're wrong? What if the most effective anti-procrastination technique is tickling your left foot in exactly the right manner, and this works regardless of whether you believe in its efficacy, or even know about it? That (predicated on a correct theory of human motivation) is the kind of stuff we're looking for. You're saying that there's no neutral (non-positive and non-negative) self-help advice? That's a pretty weird statement to make. Some advice is good, some is bad; why do you suspect a gap at zero? Failing all else, you could refrain from telling the subjects that the study is about self-control and anti-procrastination, just tell them to blindly follow some instructions and measure the effects covertly. No, I have no experimental protocol ready yet, but have the impudence to insist that we as a community should create one or shut up.
4Nick_Tarleton15y
You don't know what "we" are looking for. There is no one thing "we" are looking for. Some of us may be interested in plausible, attested-to self-help methods, even without experimental support.
3Vladimir_Nesov15y
Without experimental support is fine. But without extraordinary support isn't. Something must make the plausibility of a particular thing stand out, because you can't be interested in all of the 1000 equally plausible things unless you devote all your time to that.
1wedrifid15y
I certainly agree with the 'create one' part of what you're saying. Not so much the 'shut up'. Talking about the topic (and in so doing dragging all sorts of relevant knowledge from the community) and also self-experimenting have their uses, particularly in as much as they can tell us whether something is worth testing. I do note that there are an awful lot of posts here (and on Overcoming Bias) which do not actually have controlled studies backing them. Is there a reason why Kaj's post requires a different standard to be acceptable? (And I ask that non-rhetorically; I can see reasons why you may reasonably do just that.)
4wedrifid15y
It would seem ethically acceptable to give groups advice selected from common social norms. For example, give one group some "Getting Things Done", another group nothing at all, a third some instruction on calculus (irrelevant but still high-status attention and education), a fourth a motivational yelling-at from a drill sergeant, and the fifth group gets PJEby's system.
1thomblake15y
1. One example of a control group in a psychological study (can't find reference): researchers compared Freudian psychoanalysis to merely sitting there and listening.
2. Sugar has physiological effects, so you can't really assume a sugar pill is neutral with no side effects.
0wedrifid15y
And when you are testing the psychological effects of urea based salts you can't really assume lithium salts are neutral with no side-effects.
0Vladimir_Nesov15y
Is it how the real studies view the situation?
6Nick_Tarleton15y
To what end do you not get to skip it? Others may legitimately have lower standards for something being interesting, as Kaj said, or for a technique being worth a try. Honestly, it sounds more like you're trying to take down Kaj for getting uppity and violating the norms of Science, than like you're trying to contribute to finding truth or usefulness.
1thomblake15y
I'm afraid you have Popper all turned around. According to Popper, one should make claims that are testable, and then it's the job of (usually other) scientists to perform experiments to try to tear them apart. If you're a Popperian and you disagree, go ahead and perform the experiment. If your position is that the relevant claim isn't testable, that's a different complaint entirely.
5Annoyance15y
You're supposed to try to tear apart your own claims, first. Making random but testable assertions for no particular reason is not part of the methodology.
0cousin_it15y
Yes, I'm a Popperian. Yes, people should make testable claims and other people should test them. That's how everything is supposed to work. All right so far. As to the nature of my complaint... Here's a non-trivial question: how do we rigorously test Kaj and Eby's assertions about akrasia? I took Vassar's words very seriously and have been trying to think up an experiment that would (at least) properly control for the belief effect, but came up empty so far. If I manage to solve this problem, I'll make a toplevel post about that.
0wedrifid15y
Why is it so difficult? Even a head to head test between PJ's magic and an arbitrarily selected alternative would provide valuable information. Given the claims made for, as you pointed out, first percentile utility, it seems that just a couple of tests against arbitrary alternatives should be expected to show drastic differences and at least tell us whether it is worth thinking harder.
[-][anonymous]15y30

One particular feature of the mind that PCT explains neatly is the mind's tendency to reject attempts to will oneself to do an unpleasant action. In fact it is often the case that the harder you try, the harder the mind resists. Aaron Swartz calls this the mental force field, and that's just what it often feels like.

What eventually resolves the conflict is not that you are finally able to will yourself to do the action, but usually some sort of context or reference point switch. At day-job, this is typically some kind of realization that you really need to ... (read more)

This article is quite long. As general feedback, I won't usually bother reading long articles unless they summarise their content up front with an abstract, or something similar. This post starts with more of a teaser. A synopsis at the end would be good as well: tell me three times.

1pjeby15y
FWIW, the original article on Kaj's blog is formatted in a way that makes it much easier to read/skim than here.
1Cyan15y
I don't mind the length; I second the "tell me three times".
0thoughtfulape15y
An observation: PJeby, if you really have a self-help product that does what it says on the tin for anyone who gives it a fair try, I would argue that the most efficient way of establishing credibility among the Less Wrong community would be to convince a highly regarded poster of that fact. To that end I would suggest that offering your product to Eliezer Yudkowsky for free, or even paying him to try it in the form of a donation to his Singularity Institute, would be more effective than the back and forth that I see here. It should be possible to establish a mutually satisfactory set of criteria for what constitutes 'really trying it' beforehand, to avoid subsequent accusations of bad faith.
3pjeby15y
What makes you think that that's my goal?
2thoughtfulape15y
Pjeby: If your goal isn't to convince the less wrong community of the effectiveness of your methodology then I am truly puzzled as to why you post here. If convincing others is not your goal, then what is?
1pjeby15y
Helping others.
6Alicorn15y
Do you expect anyone to benefit from your expertise if you can't convince them you have it?
6pjeby15y
Either someone uses the information I give or they don't. One does not have to be "convinced" of the correctness of something in order to test it. But whether someone uses the information or not, what do I or my "expertise" have to do with it?
1arundelo15y
Someone is more likely to spend the time and effort to test something if they think it's more likely to be correct.
0Vladimir_Nesov15y
It's irrational of people who aren't convinced that the information is useful to use it. Either a tiger eats celery or it doesn't. But the tiger has to be "convinced" that celery is tasty in order to taste it.
0pjeby15y
One of the most frustrating things about dealing with LW is the consistent confusion by certain parties between the terms "correct" and "useful". I said "one does not have to be convinced of the correctness of something in order to test it", and you replied with something about usefulness. Therefore, there is nothing I can say about your response except that it's utterly unrelated to what I said.
2LeBleu15y
You are the one who introduced correctness into the argument. Alicorn said:

Feel free to read this as 'convince them your expertise is "useful" ' rather than your assumed 'convince them your expertise is "correct" '.

The underlying point is that there is a very large amount of apparently useless advice out there, and many self-help techniques seem initially useful but then stop being useful (as you are well aware, since your theory claims to explain why it happens). The problem is that to convince someone to try your advice, you have to convince them that the (probability of it being useful × claimed benefit × probability of the claim being correct) is greater than the opportunity cost of the expected effort to try it. Due to others in the self-help market, the prior for it being useful is very low, and the prior for the claimed benefits equaling the actual benefits is low. You also are running into the prior that if someone is trying to sell you something, they are probably exaggerating its claims to make a sale. Dishonest salespeople spoil the sales possibilities for all the honest ones.

If you can convince someone with a higher standing in the community than you to test your advice and comment on the results of their test, you can raise individuals' probability expectations about the usefulness (or correctness) of your advice, and hence help more people than you otherwise would have.

P.S. I did go to your site and get added to your mailing list. However, even if your techniques turn out positively for me, I don't think I have any higher standing in this community than you do, so I doubt my results will hold much weight with this group.
4pjeby15y
Actually, I'm also running into a bias that merely because I have things to sell, I'm therefore trying to sell something in all places at all times... or that I'm always trying to "convince" people of something. Indeed, the fact that you (and others) seem to think I need or even want to "convince" people of things is a symptom of this. Nobody goes around insisting that say, Yvain needs to get some high-status people to validate his ideas and "convince" the "community" to accept them! If I had it all to do over again, I think I would have joined under a pseudonym and never let on I even had a business.
0Technologos15y
You are certainly right that "one does not have to be convinced of the correctness of something in order to test it." But as you also said, immediately prior, "Either someone uses the information I give or they don't." If we test information that we do not have reason to believe is useful, then we have a massive search space to cover. Much of the point of LW is to suggest useful regions for search, based on previous data. So no, correctness is not a necessary condition of usefulness. But things that are correct are usually rather useful, and things that are not correct are less so. To the extent that you or your expertise are reliable indicators of the quality of your information, they help evaluate the probability of your information being useful, and hence the expected benefit of testing it. Perhaps some parties on LW are actually confused by the distinction between truth and utility. I do not suspect Vladimir_Nesov is one of them.
1pjeby15y
Really? With what probability? Or to put it another way: how were people able to start and put out fires for millennia before they had a correct theory of fire? Work metals without a correct atomic or molecular theory? Build catapults without a correct theory of gravity? Breed plants and animals without a correct theory of genetics? In the entire history of humanity, "Useful" is negatively correlated with "Correct theory"... on a grand scale. Sure, having a correct theory has some positive correlation with "useful", but there's usually a ton more information you need besides the correct theory to get to "useful", and more often, the theory ends up being derived from something that's already "useful" anyway.
3Cyan15y
That's a shockingly poor argument. Who can constrain the future more effectively: someone who knows the thermodynamics of combustion engines, or someone who only knows how to start fires with a flint-and-steel and how to stop them with water? Someone who can use X-ray crystallography to assess their metallurgy, or someone who has to whack their product with a mallet to see if it's brittle? Someone who can fire mortars over ranges requiring Coriolis corrections (i.e., someone with a correct theory of mechanics) or someone who only knows how to aim a catapult by trial and error? Someone who can insert and delete bacterial genes, or someone who doesn't even know germ theory? Someone who actually knows how human cognition works on all scales, or someone with the equivalent of a set of flint-and-steel level tools and a devotion to trial and error?
3Sideways15y
'Correctness' in theories is a scalar rather than a binary quality. Phlogiston theory is less correct (and less useful) than chemistry, but it's more correct--and more useful!--than the theory of elements. The fact that the modern scientific theories you list are better than their precursors, does not mean their precursors were useless. You have a false dichotomy going here. If you know of someone who "knows how human cognition works on all scales", or even just a theory of cognition as powerful as Newton's theory of mechanics is in its domain, then please, link! But if such a theory existed, we wouldn't need to be having this discussion. A strong theory of cognition will descend from a series of lesser theories of cognition, of which control theory is one step. Unless you have a better theory, or a convincing reason to claim that "no-theory" is better than control theory, you're in the position of an elementalist arguing that phlogiston theory should be ignored because it can't explain heat generated by friction--while ignoring the fact that while imperfect, phlogiston theory is strictly superior to elemental theory or "no-theory".
3Cyan15y
You've misunderstood my emphasis. I'm an engineer -- I don't insist on correctness. In each case I've picked above, the emphasis is on a deeper understanding (a continuous quantity, not a binary variable), not on truth per se. (I mention correctness in the Coriolis example, but even there I have Newtonian mechanics in mind, so that usage was not particularly accurate.) My key perspective can be found in the third paragraph of this comment. I'm all for control theory as a basis for forming hypotheses and for Seth Roberts-style self-experimentation.
2pjeby15y
As best I can tell, you agree that what I said is true, but nonetheless dispute the conclusion... and you do so by providing evidence that supports my argument. That's kind of confusing. What I said was: And you gave an argument that some correct things are useful. Bravo. However, you did not dispute the part where "useful" almost always comes before "correct"... thereby demonstrating precisely the confusion I posted about. Useful and correct are not the same, and optimizing for correctness does not necessarily optimize usefulness, nor vice versa. That which is useful can be made correct, but that which is merely correct may be profoundly non-useful. However, given a choice between a procedure which is useful to my goals (but whose "theory" is profoundly false), or a true theory which has not yet been reduced to practice, then, all else about these two pieces of information being equal, I'm probably going to pick the former -- as would most rational beings. (To the extent you would pick the latter, you likely hold an irrational bias... which would also explain the fanboy outrage and downvotes that my comments on this subject usually provoke here.)
1Cyan15y
I did not simply argue that some correct things are useful. I pointed out that every example of usefulness you presented can be augmented beyond all recognition with a deeper understanding of what is actually going on. Let me put it this way: when you write, "how were people able to start and put out fires for millennia..." the key word is "start": being satisfied with a method that works but provides no deep understanding is stagnation. Ever seeking more useful methods without seeking to understand what is actually going on makes you an expert at whatever level of abstraction you're stuck on. Order-of-magnitude advancement comes by improving the abstraction. I would also pick the former, provided my number one choice was not practical (perhaps due to time or resource constraints). The number one choice is to devote time and effort to making the true theory practicable. But if you never seek a true theory, you will never face this choice. ETA: I'll address: by saying that you are arguing against, and I am arguing for:
0Vladimir_Nesov15y
Deep theory has profound long-term impact, but is useless for simple stuff.
0Cyan15y
What is considered simple stuff is itself a function of that profound long-term impact.
1Technologos15y
I agree with Cyan, but even more basically, the set of correct beliefs necessarily includes any and all useful beliefs, because anything that is useful but incorrect can be derived from correct beliefs as well (similar to Eliezer's Bayesians vs. Barbarians argument). So, probabilistically, we should note that P(Useful|Correct)>P(Useful|Incorrect) because the space of correct beliefs is much smaller than the space of all beliefs, and in particular smaller than the space of incorrect beliefs. More importantly, as Sideways notes, more correct beliefs produce more useful effects; we don't know now whether we have a "correct" theory of genetics, but it's quite a bit more useful than its predecessor.
5pjeby15y
You still don't get it. Correct beliefs don't spring full-grown from the forehead of Omega - they come from observations. And to get observations, you have to be doing something... most likely, something useful. That's why your math is wrong for observed history - humans nearly always get "useful" first, then "correct". Or to put it another way, in theory you can get to practice from theory, but in practice, you almost never do.
0Technologos15y
Let's assume that what you say is true, that utility precedes accuracy (and I happen to believe this is the case). That does not in any way change the math. Perhaps you can give me some examples of (more) correct beliefs that are less useful than a related and corresponding (more) incorrect belief?
2pjeby15y
It doesn't matter if you have an Einstein's grasp of the physical laws, a Ford's grasp of the mechanics, and a lawyer's mastery of traffic law... you still have to practice in order to learn to drive. Conversely, as long as you learn correct procedures, it doesn't matter if you have a horrible or even ludicrously incorrect grasp of any of the theories involved. This is why, when one defines "rationality" in terms of strictly abstract mentations and theoretical truths, one tends to lose in the "real world" to people who have actually practiced winning.
0Technologos15y
And I wasn't arguing that definition, nor did I perceive any of the above discussion to be related to it. I'm arguing the relative utility of correct and incorrect beliefs, and the way in which the actual procedure of testing a position is related to the expected usefulness of that position. To use your analogy, you and I certainly have to practice in order to learn to drive. If we're building a robot to drive, though, it damn sure helps to have a ton of theory ready to use. Does this eliminate the need for testing? Of course not. But having a correct theory (to the necessary level of detail) means that testing can be done in months or years instead of decades. To the extent that my argument and the one you mention here interact, I suppose I would say that "winning" should include not just individual instances, things we can practice explicitly, but success in areas with which we are unfamiliar. That, I suggest, is the role of theory and the pursuit of correct beliefs.
2pjeby15y
Actually, I suspect that this is not only wrong, but terribly wrong. I might be wrong, but it seems to me that robotics has gradually progressed from having lots of complicated theories and sophisticated machinery towards simple control systems and improved sensory perception... and that this progression happened because the theories didn't work in practice. So, AFAICT, the argument that "if you have a correct theory, things will go better" is itself one of those ideas that work better in theory than in practice, because usually the only way to get a correct theory is to go out and try stuff. Hindsight bias tends to make us completely ignore the fact that most discoveries come about from essentially random ideas and tinkering. We don't like the idea that it's not our "intelligence" that's responsible, and we can very easily say that, in hindsight, the robotics theories were wrong, and of course if they had the right theory, they wouldn't have made those mistakes. But this is delusion. In theory, you could have a correct theory before any practice, but in practice, you virtually never do. (And pointing to nuclear physics as a counterexample is like pointing to lottery winners as proof that you can win the lottery; in theory, you can win the lottery, but in practice, you don't.)
3Eliezer Yudkowsky15y
You are wrong. The above is a myth promoted by the Culture of Chaos and the popular media. Advanced modern robots use advanced modern theory - e.g. particle filters to integrate multiple sensory streams to localize the robot (a Bayesian method).
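(For readers unfamiliar with the term, here is a minimal sketch of a particle filter for a made-up 1-D localization problem. The track length, motion command, and noise levels are illustrative assumptions only, not a description of any particular robot.)

    import numpy as np

    rng = np.random.default_rng(0)

    N = 1000                                  # number of particles
    particles = rng.uniform(0, 10, N)         # initial belief: robot somewhere on a 10 m track
    weights = np.full(N, 1.0 / N)

    def step(particles, weights, control, measurement, motion_noise=0.1, sensor_noise=0.5):
        # Predict: move every particle by the commanded motion plus noise.
        particles = particles + control + rng.normal(0, motion_noise, particles.size)
        # Update: reweight each particle by the likelihood of the position measurement.
        likelihood = np.exp(-0.5 * ((measurement - particles) / sensor_noise) ** 2)
        weights = weights * likelihood
        weights = weights / weights.sum()
        # Resample: keep particles in proportion to their weights.
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        return particles[idx], np.full(particles.size, 1.0 / particles.size)

    # One cycle: the robot is commanded to drive 1 m forward, and a noisy sensor
    # then reports its absolute position as 3.2 m.
    particles, weights = step(particles, weights, control=1.0, measurement=3.2)
    print(particles.mean())                   # posterior estimate of the robot's position

Each cycle fuses the motion command and the sensor reading into a single posterior over position, which is the "integrating multiple sensory streams" being referred to.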
2Technologos15y
And this is even more true when considering elements in the formation of a robot that need to be handled before the AI: physics, metallurgy, engineering, computer hardware design, etc. Without theory--good, workably-correct theory--the search space for innovations is just too large. The more correct the theory, the less space has to be searched for solution concepts. If you're going to build a rocket, you sure as hell better understand Newton's laws. But things will go much smoother if you also know some chemistry, some material science, and some computer science. For a solid example of theory taking previous experimental data and massively narrowing the search space, see RAND's first report on the feasibility of satellites here.
0[anonymous]15y
IAWYC but Procedures are brittle. Theory lets you generalize procedures for new contexts, which you can then practice.
0thomblake15y
I'm not sure I'd grant that unless you can show it mathematically. It seems to me there are infinite beliefs of all sorts, and I'm not sure how their orders compare.
0Technologos15y
A heuristic method that underlies my reasoning: Select an arbitrary true predicate sentence Rab. That sentence almost certainly (in the mathematical sense) is false if an arbitrary c is substituted for b. Thus, whatever the cardinality of the set of true sentences, for every true sentence we can construct infinitely many false sentences, where the opposite is not true. So the cardinality of the set of false sentences is greater than that of the set of true sentences.
0thomblake15y
I don't think that's as rigorous as you'd like it to be. I don't grant the "almost certainly false" step. Take a predicate P which is false for Pab but true in all other cases. Then, you cannot perform the rest of the steps in your proof with P. Consider that there is also the predicate Q such that Qab is true about half the time for arbitrary a and b. How will you show that most situations are like your R? I'm also not sure your proof really shows a difference in cardinality. Even if most predicates are like your R, there still might be infinitely many true sentences you can construct, even if they're more likely to be false.
1Technologos15y
It's definitely not rigorous, and I tried to highlight that by calling it a heuristic. Without omniscience, I can't prove that the relations hold, but the evidence is uniformly supportive. Can you name such a predicate other than the trivial "is not" (which is guaranteed to be true for all but one entity, as in A is not A) which is true for even a majority of entities? The best I can do is "is not describable by a message of under N bits," but even then there are self-referential issues. If the majority of predicates were like your P and Q, then why would intelligence be interesting? "Correctness" would be the default state of a proposition and we'd only be eliminating a (relatively) small number of false hypotheses from our massive pool of true ones. Does that match either your experience or the more extensive treatment provided in Eliezer's writings on AI? If you grant my assertion that Rab is almost certainly false if c is substituted for b, then I think the cardinality proof does follow. Since we cannot put the true sentences in one-to-one correspondence with the false sentences, and by the assertion there are more false sentences, the latter must have a greater (infinite?) cardinality than the former, no?
1JGWeissman15y
The cardinality of the sets of true and false statements is the same. The operation of negation is a bijection between them.
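(Spelled out, with T and F denoting the sets of true and false sentences, the standard argument is a sketch along these lines:)

    \varphi \mapsto \neg\varphi \colon T \hookrightarrow F
    \quad\text{and}\quad
    \varphi \mapsto \neg\varphi \colon F \hookrightarrow T
    \;\Longrightarrow\; |T| = |F| \ \text{(Cantor-Schröder-Bernstein).}

Strictly, negation mapped in one direction is injective but not onto, so the two injections plus Cantor-Schröder-Bernstein give the equality of cardinalities.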
0Technologos15y
You're right. I was considering constructive statements, since the negation of an arbitrary false statement has infinitesimal informational value in search, but you're clearly right when considering all statements.
0thomblake15y
If by "almost certainly false" you mean that say, 1 out of every 10,000 such sentences will be true, then no, that does not entail a higher order of infinity.
-1Technologos15y
I meant, as in the math case, that the probability of selecting a true statement by choosing one at random out of the space of all possible statements is 0 (there are true statements, but as a literal infinitesimal). It's possible that both infinities are countable, as I am not sure how one would prove it either way, but that detail doesn't really matter for the broader argument.
0Technologos15y
See the note by JGWeissman--this is only true when considering constructively true statements (those that carry non-negligible informational content, i.e. not the negation of an arbitrary false statement).
0derekz15y
Which is it? I think the furthest you can go with this line of thought is to point out that lots of things are useful even if we don't have a correct theory for how they work. We have other ways to guess that something might be useful and worth trying. Having a correct theory is always nice, but I don't see that our choice here is between having a correct theory or not having one.
4pjeby15y
Both. Over the course of history: Useful things -> mostly not true theories. True theory -> usually useful, but mostly first preceded by useful w/untrue theory.
0pwno15y
Aren't true theories defined by how useful they are in some application?
0Cyan15y
Perhaps surprisingly, statistics has an answer, and that answer is no. If in your application the usefulness of a statistical model is equivalent to its predictive performance, then choose your model using cross-validation, which directly optimizes for predictive performance. When that gets too expensive, use the AIC, which is asymptotically equivalent to leave-one-out cross-validation as the amount of data grows without bound. But if the true model is among those available, neither AIC nor cross-validation is guaranteed to pick it out of the set of models being considered, even as the amount of data grows without bound.
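(A toy sketch of the procedure being described, with a made-up quadratic data-generating model and polynomial candidates. The numbers are purely illustrative; the point is that both criteria score models by how well they predict, not by whether they are the data-generating model.)

    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up data: a quadratic signal plus Gaussian noise.
    n = 200
    x = rng.uniform(-2, 2, n)
    y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 1.0, n)

    def rss(x_train, y_train, x_test, y_test, degree):
        # Fit a polynomial of the given degree, return residual sum of squares on the test set.
        coeffs = np.polyfit(x_train, y_train, degree)
        return np.sum((y_test - np.polyval(coeffs, x_test)) ** 2)

    for degree in range(1, 7):
        # AIC for Gaussian errors (up to an additive constant): n*log(RSS/n) + 2k.
        k = degree + 2                       # polynomial coefficients plus the noise variance
        aic = n * np.log(rss(x, y, x, y, degree) / n) + 2 * k
        # Leave-one-out cross-validation: mean squared error on each held-out point.
        loo = np.mean([rss(np.delete(x, i), np.delete(y, i), x[i:i+1], y[i:i+1], degree)
                       for i in range(n)])
        print(degree, round(aic, 1), round(loo, 3))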
-1JustinShovelain15y
define: A theory's "truthfulness" as how much probability mass it has after appropriate selection of prior and applications of Bayes' theorem. It works as a good measure for a theory's "usefulness" as long as resource limitations and psychological side effects aren't important. define: A theory's "usefulness" as a function of resources needed to calculate its predictions to a certain degree of accuracy, the "truthfulness" of the theory itself, and side effects. Squinting at it, I get something roughly like: usefulness(truthfulness, resources, side effects) = truthfulness * accuracy(resources) + messiness(side effects) So I define "usefulness" as a function and "truthfulness" as its limiting value as side effects go to 0 and resources go to infinity. Notice how I shaped the definition of "usefulness" to avoid mention of context specific utilities; I purposefully avoided making it domain specific or talking about what the theory is trying to predict. I did this to maintain generality. (Note: For now I'm polishing over the issue of how to deal with abstracting over concrete hypotheses and integrating the properties of this abstraction with the definitions)
3jimrandomh15y
Your definition of usefulness fails to include the utility of the predictions made, which is the most important factor. A theory is useful if there is a chain of inference from it to a concrete application, and its degree of usefulness depends on the utility of that application, whether it could have been reached without using the theory, and the resources required to follow that chain of inference. Measuring usefulness requires entangling theories with applications and decisions, whereas truthfulness does not. Consequently, it's incorrect to treat truthfulness as a special case of usefulness or vice versa.
2pjeby15y
Thank you - that's an excellent summary.
0JustinShovelain15y
From pwno: "Aren't true theories defined by how useful they are in some application?" My definition of "usefulness" was built with the express purpose of relating the truth of theories to how useful they are and is very much a context specific temporary definition (hence "define:"). If I had tried to deal with it directly I would have had something uselessly messy and incomplete, or I could have used a true but also uninformative expectation approach and hid all of the complexity. Instead, I experimented and tried to force the concepts to unify in some way. To do so I stretched the definition of usefulness pretty much to the breaking point and omitted any direct relation to utility functions. I found it a useful thought to think and hope you do as well even if you take issue with my use of the name "usefulness".
-3Vladimir_Nesov15y
Actions of high utility are useful. Of a set of available actions, the correct action to select is the most useful one. A correct statement is one expressing the truth, or probabilistically, an event of high probability. In this sense, a correct choice of action is one of which it is a correct statement to say that it is the most useful one. It's beside the point actually, since you haven't shown that your info is either useful or correct.
0[anonymous]15y
FYI, The Others is a group of fictional characters who inhabit the mysterious island in the American television series Lost.
0Cyan15y
pjeby will be more likely to notice this proposition if you post it as a reply to one of his comments, not one of mine.
-3Vladimir_Nesov15y
Nope. The fact that you, personally, experience winning a lottery, doesn't support a theory that playing a lottery is a profitable enterprise.
3conchis15y
What? If the odds of the lottery are uncertain, and your sample size is actually one, then surely it should shift your estimate of profitability. Obviously a larger sample is better, and the degree to which it shifts your estimate will depend on your prior, but to suggest the evidence would be worthless in this instance seems odd.
0Vladimir_Nesov15y
It's impossible for playing a lottery to be profitable, both before you ever played it, and after you won a million dollars. The tenth decimal place doesn't really matter.
1Vladimir_Golovin15y
I wonder what your definition of 'profit' is. True story: when I was a child, I "invested" about 20 rubles in a slot machine. I won about 50 rubles that day and never played slot machines (or any lottery at all) again since then. So:

  • Expenses: 20 rubles.

  • Income: 50 rubles.

  • Profit: 30 rubles.

Assuming that we're using a dictionary definition of the word 'profit', the entire 'series of transactions' with the slot machine was de-facto profitable for me.
2Vladimir_Nesov15y
It's obvious that to interpret my words correctly (as not being obviously wrong), you need to consider only big (cumulative) profit. And again, even if you did win a million dollars, that still doesn't count; it only counts if you show that you were likely to win a million dollars (even if you didn't).
3conchis15y
The only way I can make sense of your comment is to assume that you're defining the word lottery to mean a gamble with negative expected value. In that case, your claim is tautologically correct, but as far as I can tell, largely irrelevant to a situation such as this, where the point is that we don't know the expected value of the gamble and are trying to discover it by looking at evidence of its returns.
2Vladimir_Nesov15y
That expected value is negative is a state of knowledge. We need careful studies to show whether a technique/medicine/etc is effective precisely because without such a study our state of knowledge shows that the expected value of the technique is negative. At the same time, we expect the new state of knowledge after the study to show either that the technique is useful, or that it's not. That's one of the traps of woo: you often can't efficiently demonstrate that it's effective, and through intuition probably related to conservation of expected evidence you insist that if you don't have a better method to show its effectiveness, the best available method should be enough, because it's ridiculous to hold the claim to a higher standard of proof on one side than on the other. But you have to: the prior belief plays its part, and the threshold for changing a decision may be too far away to cross by simple arguments. The intuitive thrust of the principle doesn't carry over to expected utility because of the threshold; it may well be that you have a technique for which there is a potential test that could demonstrate that it's effective, but the test is unavailable, and without performing the test the expected value of the technique remains negative.
1conchis15y
I'm afraid I'm struggling to connect this to your original objections. Would you mind clarifying? ETA: By way of attempting to clarify my issue with your objection, I think the lottery example differs from this situation in two important ways. AFAICT, the uselessness of evidence that a single person has won the lottery is a result of: 1. the fact that we usually know the odds of winning the lottery are very low, so evidence has little ability to shift our priors; and 2. that in addition to the evidence of the single winner, we also have evidence of incredibly many losers, so the sum of evidence does not favour a conclusion of profitability. Neither of these seem to be applicable here.
3Vladimir_Nesov15y
The analogy is this: using speculative self-help techniques corresponds to playing a lottery. In both cases you expect a negative outcome, and in both cases making one more observation, even if it's an observation of success, even if you experience it personally, means very little for the estimation of the expected outcome. There is no analogy in the lottery for studies that support the efficacy of self-help techniques (or some medicine).
3Benquo15y
It sounds like you're saying:

  1) the range of conceivably effective self-help techniques is very large relative to the number of actually effective techniques;

  2) a technique that is negative-expected-value can look positive with small n;

  3) consequently, using small-n trials on lots of techniques is an inefficient way to look for effective ones, and is itself negative-expected-value, just like looking for the correct lottery number by playing the lottery.

In this analogy, it is the whole self-help space, not the one technique, that is like a lottery. Am I on the right track?
1Alicorn15y
I don't think the principle of charity generally extends so far as to make people reinterpret you when you don't go to the trouble of phrasing your comments so they don't sound obviously wrong.
3Vladimir_Nesov15y
If you see a claim that has one interpretation making it obviously wrong and another one sensible, and you expect a sensible claim, it's a simple matter of robust communication to assume the sensible one and ignore the obviously wrong. It's much more likely that the intended message behind the inapt textual transcription wasn't the obviously wrong one, and the content of communication is that unvoiced thought, not the text used to communicate it.
3thomblake15y
But if the obvious interpretation of what you said was obviously wrong, then it's your fault, not the reader's, if you're misunderstood. All a reader can go by is the text used to communicate the thought. What we have on this site is text which responds to other text. I could just assume you said "Why yes, thoughtfulape, that's a marvelous idea! You should do that nine times. Purple monkey dishwasher." if I was expected to respond to things you didn't say.
2Vladimir_Nesov15y
My point is that the prior under which you interpret the text is shaped by the expectations about the source of the text. If the text, taken alone, is seen as likely meaning something that you didn't expect to be said, then the knowledge about what you expect to be said takes precedence over the knowledge of what a given piece of text could mean if taken out of context. Certainly, you can't read minds without data, but the data is about minds, and that's a significant factor in its interpretation.
9pjeby15y
This is why people often can't follow simple instructions for mental techniques - they do whatever they already believe is the right thing to do, not what the instructions actually say.
0[anonymous]15y
That's overconfidence, a bias, but so is underconfidence.
0[anonymous]15y
I don't see how that's relevant unless we already agree that this is like a lottery. My reading of conchis's reply to your comment is that conchis doesn't think we should have strong priors in that direction. Why do you think this is a lottery-type situation?

Two questions:

  1. The linked PDF is meant for non-rational, non-high IQ people who need everything in short sentences with relevant words in bold so that they can understand. Can PJ produce something that is a little less condescending to read, and is suited to the more intelligent reader? For example, less marketing, more scientific scepticism.

  2. How do I get onto PJ's mailing list that Kaj speaks of?

3pjeby15y
1. See this comment. 2. Given your statement #1, why would you want to be on a mailing list of "non-rational, non-high IQ" people? ;-) (I'm joking, of course; I have many customers who read and enjoy OB and LW, though I don't think any have been top-level posters. Interestingly enough, my customers are so well-read that I usually receive more articles on recent research from them as emailed, "hey didja see"s, than I come across directly or see on LW!)
0[anonymous]15y
More articles than you see on LW? That's absurd!
2pjeby15y
I usually see more articles about recent scientific research from my paying customers than I encounter via LW postings. Or more precisely, and to be as fair as possible, I remember seeing more articles emailed to me from my customers about relevant research of interest to me than I remember discovering via LW... or such memories are at any rate easier to recall. Less absurd now? ;-)
3Vladimir_Nesov15y
That's called "irony", hinting to the fact that not a whole lot of articles are cited on LW, too few to warrant it a mention as a measure for the quantity of articles. Routine research browsing makes such quantity irrelevant, the only benefit might come from a mention of something you didn't think existed, because if you thought it existed, you'd be able to look it up yourself. P.S. I deleted my comment (again) before seeing your reply, thought it's too mindless.
1Kaj_Sotala15y
I think I got on the mailing list here. Alternatively, it could've been a result of giving my e-mail addy on this page.

I found the article painful reading. Things like the section entitled "Desire minus Perception equals Energy" very rapidly make me switch off.

I found the article painful reading.

I've heard this sort of statement repeatedly about pjeby's writing style, from different people, and I have a theory as to why. It's a timing pattern, which I will illustrate with some lorem ipsum:

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec pharetra varius nisl, quis interdum lectus porta vel...

Main point!

Nullam sit amet risus nibh. Suspendisse ut sapien et tellus semper scelerisque.

The main points are set off from the flow of the text by ellipses and paragraph breaks. This gives them much more force, but also brings to mind other works that use the same timing pattern. Most essays don't do this, or do it exactly once when introducing the thesis. On the other hand, television commercials and sales pitches use it routinely. It is possible that some people have built up an aversion to this particular timing pattern, by watching commercials and not wanting to be influenced by them. If that's the problem, then when those people read it they'll feel bothered by the text, but probably won't know why, and will attribute it to whatever minor flaws they happen to notice, even if unrelated. People who only watch DVDs and internet ...

9Eliezer Yudkowsky15y
This is a fascinating suggestion and might well be correct. Certainly, my inability to read more than a paragraph of PJ Eby's writing definitely has something to do with it "sounding like a sales pitch". May be a matter of word choice or even (gulp) content too, though.
5derekz15y
I suppose for me it's the sort of breathless enthusiastic presentation of the latest brainstorm as The Answer. Also I believe I am biased against ideas that proceed from an assumption that our minds are simple. Still, in a rationalist forum, if one is to dismiss the content of material based on the form of its presentation without being bothered by it, one must be pretty confident of the correlation. Since a few people who seem pretty smart overall think there might be something useful here, I'll spend some time exploring it. I am wondering about the proposed ease with which we can purposefully rewire control circuits. It is counterintuitive to me, given that "bad" ones (in me at least) do not appear to have popped up one afternoon but rather have been reinforced slowly over time. If anybody does manage to achieve lasting results that seem like purposeful rewiring, I'm sure we'd all like to hear descriptions of your methods and experience.
4pjeby15y
This is one place where PCT is not as enlightening without adding a smidge of HTM, or more precisely, the memory-prediction framework. The MPF says that we match patterns as sequences of subpatterns: if one subpattern "A" is often followed by "B", our brain compresses this by creating (at a higher layer) a symbol that means "AB". However, in order for this to happen, the A->B correlation has to happen at a timescale where we can "notice" it. If "A" happens today, and "B" tomorrow (for example), we are much less likely to notice! Coming back to your question: most of our problematic controller structures are problematic at too long a timescale for them to be easily detected (and extinguished). So PCT-based approaches to problem solving work by forcing the pieces together in short-term memory so that an A->B sequence fires off ... at which point you then experience an "aha", and change the intercontroller connections or reference levels. (Part of PCT theory is that the function of conscious awareness may well be to provide this sort of "debugging support" function, that would otherwise not exist.) PCT also has some interesting things to say about reinforcement, by the way, that completely turn the standard ideas upside down, and I would really love to see some experiments done to confirm or deny. In particular, it has a novel and compact explanation of why variable-schedule reinforcement works better for certain things, and why certain schedules produce variable or "superstitious" action patterns.
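(A toy illustration of the timescale point only - this is not HTM itself; the event list, window, and threshold below are made up for the example.)

    from collections import Counter

    def find_chunks(events, window=1.0, min_count=3):
        # events: list of (timestamp, symbol) pairs, in time order.
        # Returns A->B pairs that occur within `window` seconds often enough to be
        # compressed into a new higher-layer symbol such as "AB".
        pair_counts = Counter()
        for (t1, a), (t2, b) in zip(events, events[1:]):
            if t2 - t1 <= window:            # the correlation is only "noticed" at short timescales
                pair_counts[(a, b)] += 1
        return {pair: pair[0] + pair[1] for pair, count in pair_counts.items() if count >= min_count}

    events = [(0.0, "A"), (0.5, "B"), (5.0, "A"), (5.4, "B"),
              (9.0, "A"), (9.3, "B"), (20.0, "A"), (50.0, "B")]
    print(find_chunks(events))               # {('A', 'B'): 'AB'} - the widely separated pair is ignored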
0derekz15y
Thank you for the detailed reply, I think I'll read the book and revisit your take on it afterward.
3pjeby15y
As SA says, I did not write the article for the LW audience. However, D-P=E is a straightforward colloquial reframing of PCT's "r-p=e" formula, i.e. reference signal minus perception signal equals error, which then gets multiplied by something and fed off to an effector.
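(A minimal sketch of that loop. The gain value and the toy environment below are illustrative assumptions, not part of PCT itself.)

    def control_step(reference, perception, gain=0.5):
        # reference signal minus perception signal equals error; the error,
        # multiplied by a gain, is what gets sent to the effector.
        error = reference - perception
        return gain * error

    # Made-up environment: the perceived quantity simply accumulates the effector output.
    perception = 0.0
    reference = 10.0
    for _ in range(20):
        perception += control_step(reference, perception)
    print(round(perception, 2))              # the perception is driven toward the reference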
1SoullessAutomaton15y
Obviously, it was written with a very different demographic in mind than LW. I imagine many of the people that article was written for would find the material here to be unfriendly, cryptic, and opaque. This is probably a rational approach to marketing on P. J. Eby's part, but it does make it hard for some people here to read his other work.

So we have:

  • A new metaphor to Finally Explain The Brain;

  • "While Eby provides few references and no peer-reviewed experimental work to support his case [...]"

  • A self-help book: "Thinking things Done(tm) The Effortless way to Start, Focus and finally Finish..." (really, I did not make this up).

I'd say some more skepticism is warranted.

8pjeby15y
Not even remotely new; "Behavior: The Control Of Perception" was written in 1973, IIRC. And yes, it's cited by other research, and cites prior research that provides evidence for specific control systems in the brain and nervous system, at several of the levels proposed by Powers. I don't, but "Behavior: The Control Of Perception" has them by the bucket load.
4Roko15y
You are - I think - ignoring the potential value of this information. When assessing how useful a post is, one should consider the product of the weight of evidence it brings to bear and the importance of the information. In this case, PJ Eby and Kaj are telling us something that is more important than - in my estimate - 99% of what you or I have ever read. We should thank them for this, and instead of complaining about lack of evidence or only weak evidence, we should go forth and find more, for example by doing a literature search or by trying the techniques.
3djcb15y
I wasn't saying the post wasn't useful - at least it brought my attention to Richard Kennaway's post on the interesting concept of explaining brain functions in terms of control systems. But the thing is that every day brings us new theories which have great potential value - if true. But most of them aren't. Given limited time, we cannot pursue each of them. We have to be selective. So, when I open that PDF linked in the first line of the article... what I find is, to put it mildly, not up to LessWrong standards. Is that supposed to be 'more important than [...] 99% of what you or I have ever read'? It even ends in a sales pitch for books and workshops. So while Control Theory may be useful for understanding the brain, this material is a distraction at best.
1Roko15y
Yes, this is true. I wonder if PJ could produce something rigorous and not-for-idiots?
5pjeby15y
There are lots of PCT textbooks out there; I wrote based on two of them (combined with my own prior knowledge): "Behavior: The Control Of Perception" by William T. Powers, and "Freedom From Stress", by Edward E. Ford. The first book has math and citations by the bucketload, the latter is a layperson's guide to practical PCT applications written by a psychologist.

Wait a second. There's a guy who writes textbooks about akrasia named Will Powers? That's great.

2pjeby15y
"Behavior: The Control of Perception" has very little to say about akrasia actually. The chapter on "Conflict" does a wee bit, I suppose, but only from the perspective of what a PCT perspective predicts should happen when control systems are in conflict. I haven't actually seen a PCT perspective on akrasia, procrastination, or willpower issues yet, apart from my own.
2Vladimir_Nesov15y
If I'm not mistaken, there has been a little cottage industry researching it for years. See e.g. Albert Bandura & Edwin A. Locke (2003), "Negative Self-Efficacy and Goal Effects Revisited" (PDF) (it's a critique, but there are references as well).
2pjeby15y
Fascinating. However, it appears that both that paper and the papers it's critiquing are written by people who've utterly failed to understand it, in particular the insight that aggregate perceptions are measured over time... which means you can be positively motivated to achieve goals in order to maintain your high opinion of yourself -- and still have it be driven by an error signal. That is, the mere passage of time without further achievement will cause an increasing amount of "error" to be registered, without requiring any special action. Both this paper and the paper it critiques got this basic understanding wrong, as far as I can tell. (It also doesn't help that the authors of the paper you linked seem to think that materialistic reduction is a bad thing!)
2Alicorn15y
It is in fact so great, that I suspect it might be a pen name.
8Richard_Kennaway15y
It really is his name. I know him personally. (But he is informally known as Bill, not Will.)
1[anonymous]15y
Can you tell him that many of the links on this page are broken? http://www.brainstorm-media.com/users/powers_w/
0[anonymous]15y
Then both are of little relevance. More recent studies and surveys will be closer to the truth.
0Roko15y
Can you name any other theories that have (in your opinion) as great a potential value to you personally as this one that you read yesterday?
0Vladimir_Nesov15y
And how's that at all important? The info isn't unique, so the progress in its development and application doesn't depend on whether you or I study it. If the fruits of whatever this thing is (which remains meaningless to me until I study it) prove valuable, I'll hear about them in good time. There is little value in studying it now.
1Roko15y
Firstly, this reasoning presents a tragedy of the commons scenario. Secondly, acceptance of this kind of theory by the scientific community - if it is true - could take, say, 20-30 years. You will then hear about it in the media, as will anyone else with half a brain. This seems urgent enough to me that it is worth putting a lot of effort into it.
2SoullessAutomaton15y
Perhaps you could clarify why you feel it is urgent? I agree that if this theory is correct it is of tremendous importance--but I'm not sure I see why it is more urgent than any other scientific theory. The only thing I can see is the "understanding cognition in order to build AI" angle and I'm not sure that understanding human cognition specifically is a required step in that.
1Vladimir_Nesov15y
I was literally asking about what in particular makes this topic so important as to qualify it as "something that is more important than - in my estimate - 99% of what you or I have ever read" (and doubting that anything could). You gave only a meta-reply, saying that if anything important was involved and I chose to ignore it, my strategy would not be a good one. But I don't know that it's important, and that's a relevant fact to consider when selecting a strategy. It's decision making under uncertainty. Mine is a good strategy a priori: 99 times out of 100, when the info is in fact dross, I make room for the sure shots.
2Roko15y
Well, it seems to me that the most important knowledge a person can be given is knowledge that will improve their overall productivity and the efficiency with which they achieve their goals. This piece (by Kaj) claims to have found a possible mechanism which prevents humans from applying self-help techniques in general. This knowledge is effectively a universal goal-attainment improver. What were the last 100 pieces of text you read? Some technical documents about static program analysis, some other LW posts, maybe some news or wikipedia articles, etc. It seems to me that none of these would come close to the increased utility that this piece could offer you - if it is correct.
0Vladimir_Nesov15y
The info I have gives me good confidence in the belief that studying PCT won't help me with procrastination (as I mentioned, it has been out there for a long time without drastically visible applications of this sort; plus I skimmed some highly-cited papers via google scholar, but I can't be confident in what I read because I didn't grasp the outline of the field given how little I looked). The things I study and think about these days are good math, tools for better understanding of artificial intelligence. Not terribly good chances for making useful progress, but not woo either (unlike, say, a year ago, much worse two years ago).
0Vladimir_Nesov15y
By the way, PJ Eby mentions a relevant fact: PCT was introduced more than 30 years ago.
2pjeby15y
From the second edition of B:CP, commenting on changes in the field since it was first written:
0Vladimir_Nesov15y
Sure, there are lots of mentions of the terms, in particular "control system", as something that keeps a certain process in place, guarding it against deviations, sometimes overreacting and swinging the process in the opposite direction, sometimes giving in under the external influence. This is all well and good, but this is an irrelevant observation, one that has no influence on it being useful for me, personally, to get into this. If it's feasible for me to develop a useful anti-procrastination technique based on this whatever, I'd expect such techniques to have been developed already, and their efficacy demonstrated. Given that no such thing conclusively exists (and people try, and this stuff is widely known!), I don't expect to succeed either. I might get a chance if I studied the issue very carefully for a number of years, as that would place me in the same conditions as other people who have studied it carefully for many years (in which case I don't expect to place too much effort into a particular toy classification, as I'd be solving the procrastination problem, not the PCT-death-spiral-strengthening problem), but that's a different game, irrelevant to the present question.
2pjeby15y
That's not why I referenced the quote; it was to address the "so if it came out 30 years ago, why hasn't anything happened yet?" question - i.e., many things have happened. That is, the general trend in the life sciences is towards discovering negative-feedback continuous control at all levels, from the sub-cellular level on up. Actually, PCT shows why NO "anti-procrastination" technique that does not take a person's individual controller structure into account can be expected to work for very long, no matter how effective it is in the short run. That is, in fact, the insight that Kaj's post (and the report I wrote that inspired it) are intended to convey: that PCT predicts there is no "silver bullet" solution to akrasia, without taking into account the specific subjective perceptual values an individual is controlling for in the relevant situations. That is: no single, rote anti-procrastination technique will solve all problems for all people, nor even all the problems of one person, even if it completely solves one or more problems for one or more people. This seems like an important prediction, when made by such a simple model! (By contrast, I would say that Freudian drives and hypnotic "symptom substitution" models are not actually predicting anything, merely stating patterns of observation of the form, "People do X." PCT provides a coherent model for how people do it.)
0Vladimir_Nesov15y
Rote, not-rote, it doesn't really matter. A technique is a recipe for making the effect happen, whatever the means. If no techniques exist, if it's shown that this interpretation doesn't give a technique, I'm not interested, end of the story. The exact quote is "If the fruits of whatever this thing is (which remains meaningless to me until I study it) prove valuable, I'll hear about them in good time", by which I meant applications to procrastination in particular.
1pjeby15y
To most people, a "technique" or "recipe" would involve a fixed number of steps that are not case-specific or person-specific. At the point where the steps become variable (iterative or recursive), one would have an "algorithm" or "method" rather than a "recipe". PCT effectively predicts that it is possible for such algorithms or methods to exist, but not techniques or recipes with a fixed number of steps for all cases. That still strikes me as a significant prediction, since it allows one to narrow the field of techniques under consideration - if the recipe doesn't include a "repeat" or "loop until" component, it will not work for everything or everyone.
-1Vladimir_Nesov15y
The statement of results needs to be clear. There are no results, there might be results given more research. It's not knowably applicable as yet. You may try it at home, but you may whistle to the wind as well. My usage of "technique" was appropriate, e.g. surgery is also very much patient-dependent; you cut out a cancer from wherever it is in a particular patient, not only in rigid pre-specified places. Since I made my meaning clear in the context, and you understood it, debating it was useless.
0Vladimir_Nesov15y
Which is fishy, given there is a large literature on the interpretation of behavior in terms of control systems. Just look at google scholar. But forming a representative sample of these works with adequate understanding of what they are about would take me, I think, a couple of days, so I'd rather someone else more interested in the issue do that.
2djcb15y
There is also a large literature on understanding the brain in terms of chaos theory, cellular automata, evolution, and so on, and all of those can shed light on some aspects. The same is definitely true for control systems theory. The trouble comes when extrapolating this to universal hammers or to the higher cognitive levels; the literature I could find seems mostly about robotics. Admittedly, I did not search very thoroughly, but then again, life is short, and if the poster wants to convince me, the burden of proof lies not on my side.
4jimrandomh15y
This statement strikes me as false. Evolution says things about what the brain does, and what it ought to do, but nothing about how it does it. Chaos theory and cellular automata are completely unrelated pieces of math. Everything else is either at the abstraction level of neurons, or at the abstraction level of "people like cake"; PCT is the only model I am aware of which even attempts to bridge the gap in between. Reality does not care who has the burden of proof, and it does not always provide proof to either side.
3Kaj_Sotala15y
Neural Darwinism?
0Vladimir_Nesov15y
In name only, and probably woo.
3Vladimir_Nesov15y
If I'm only willing to expend a certain amount of effort for gaining understanding of a given aspect of reality, then I won't listen to any explanation that requires more effort than that. Preparing a good explanation that efficiently communicates a more accurate picture of that aspect of reality is the burden of proof in question, a quite reasonable requirement in this case, where the topic doesn't appear terribly important.
2djcb15y
I don't see anything 'false' about the statement. I simply stated some other fields that have been used to explain aspects of the brain as well, and that, while PCT may be a useful addition, I have seen no evidence yet that it is 'life changing'. I enjoy reading LW for all the bright people, new ideas, and things to learn. In this case however, I was a bit disappointed, mainly because of the self-help fluff. There are enough places for that kind of material already, I think. Of course, I cannot demand anything; it's just some (selfish?) concern for LW's S/N ratio.
0pjeby15y
FWIW, Hawkins's HTM model (described in "On Intelligence") makes another fair stab at it, and has many similar characteristics to some of PCT's mid-to-high layers, just from a slightly different perspective. HTM (or at least the "memory-prediction framework" aspect of it) also makes much more specific predictions about what we should expect to find at the neuroanatomy level for those layers. OTOH, PCT makes more predictions about what we should see in large-scale human behavioral phenomena, and those predictions match my experience quite well.