(crossposted from Astral Codex Ten)

I.

What is willpower?

Five years ago, I reviewed Baumeister and Tierney's book on the subject. They tentatively concluded it's a way of rationing brain glucose. But their key results have failed to replicate, and people who know more about glucose physiology say it makes no theoretical sense.

Robert Kurzban, one of the most on-point critics of the glucose theory, gives his own model of willpower: it's a way of minimizing opportunity costs. But how come my brain is convinced that playing Civilization for ten hours has no opportunity cost, but spending five seconds putting away dishes has such immense opportunity costs that it will probably leave me permanently destitute? I can't find any correlation between the subjective phenomenon of willpower or effort-needingness and real opportunity costs at all.

A tradition originating in psychotherapy, and ably represented eg here by Kaj Sotala, interprets willpower as conflict between mental agents. One "subagent" might want to sit down and study for a test. But maybe one subagent represents the pressure your parents are putting on you to do well in school so you can become a doctor and have a stable career, and another subagent represents your own desire to drop out and become a musician, and even though the "do well in school" subagent is on top now, the "become a musician" subagent is strong enough to sabotage you by making you feel mysteriously unable to study. This usually ends with something about how enough therapy can help you reconcile these subagents and have lots of willpower again. But this works a lot better in therapy books than it does in real life. Also, what childhood trauma made my subagents so averse to doing dishes?

I've come to disagree with all of these perspectives. I think willpower is best thought of as a Bayesian process, ie an attempt to add up different kinds of evidence.

II.

My model has several different competing mental processes trying to determine your actions. One is a prior on motionlessness; if you have no reason at all to do anything, stay where you are. A second is a pure reinforcement learner - "do whatever has brought you the most reward in the past". And the third is your high-level conscious calculations about what the right thing to do is.

These all submit "evidence" to your basal ganglia, the brain structure that chooses actions. Using the same evidence-processing structures that you would use to resolve ambiguous sense-data into a perception, or resolve conflicting evidence into a belief, it resolves its conflicting evidence about the highest-value thing to do, comes up with some hypothesized highest-value next task, and does it.

I've previously quoted Stephan Guyenet on the motivational system of lampreys (a simple fish used as a model organism). Guyenet describes various brain regions making "bids" to the basal ganglia, using dopamine as the "currency" - whichever brain region makes the highest bid gets to determine the lamprey's next action. "If there's a predator nearby", he writes, "the flee-predator region will put in a very strong bid to the striatum".

The economic metaphor here is cute, but the predictive coding community uses a different one: they describe it as representing the "confidence" or "level of evidence" for a specific calculation. So an alternate way to think about lampreys is that the flee-predator region is saying "I have VERY VERY strong evidence that fleeing a predator would be the best thing to do right now." Other regions submit their own evidence for their preferred tasks, and the basal ganglia weighs the evidence using Bayes and flees the predator.

This ties the decision-making process into the rest of the brain. At the deepest level, the brain isn't really an auction or an economy. But it is an inference engine, a machine for weighing evidence and coming to conclusions. Your perceptual systems are like this - they weigh different kinds of evidence to determine what you're seeing or hearing. Your cognitive systems are like this - they weigh different kinds of evidence to discover what beliefs are true or false. Dopamine affects all these systems in predictable ways. My theory of willpower asserts that it affects decision-making in the same way - it's representing the amount of evidence for a hypothesis.
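(For readers who like to see this concretely: here's a minimal sketch of the kind of calculation I'm gesturing at, with totally made-up numbers - each process submits log-evidence for its favored action, and the basal ganglia adds it up and acts on whichever hypothesis about "the best next thing to do" has the most support.)

```python
import math

# Toy sketch, not actual neuroscience: each process submits log-evidence
# ("bids") for its favored action; the basal ganglia sums the bids per
# candidate action, converts them into a posterior, and acts on the most
# probable hypothesis about "the best thing to do next". Numbers made up.

def choose_action(bids_by_process):
    """bids_by_process: {process_name: {action: log_evidence}}"""
    totals = {}
    for bids in bids_by_process.values():
        for action, log_ev in bids.items():
            totals[action] = totals.get(action, 0.0) + log_ev
    # Softmax over summed log-evidence = posterior over actions
    m = max(totals.values())
    unnorm = {a: math.exp(t - m) for a, t in totals.items()}
    z = sum(unnorm.values())
    posterior = {a: u / z for a, u in unnorm.items()}
    return max(posterior, key=posterior.get), posterior

bids = {
    "prior on motionlessness": {"do nothing": 2.0},
    "reinforcement learner": {"play Civilization": 3.0},
    "conscious calculation": {"do homework": 2.5},
}
print(choose_action(bids)[0])  # "play Civilization" wins this round
```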

III.

In fact, we can look at some of the effects of dopaminergic drugs to flesh this picture out further.

Stimulants increase dopamine in the frontal cortex. This makes you more confident in your beliefs (eg cocaine users who are sure they can outrun that cop car) and sometimes perceptions (eg how some stimulant abusers will hallucinate voices). But it also improves willpower (eg Adderall helping people study). I think all of these are functions of increasing the (apparent) level of evidence attached to "beliefs". Since the frontal cortex disproportionately contains the high-level conscious processes telling you to (eg) do your homework, the drug artificially makes these processes sound "more convincing" relative to the low-level reinforcement-learning processes in the limbic system. This makes them better able to overcome the desire to do reinforcing things like video games, and also better able to overcome the prior on motionlessness (which makes you want to lie in bed doing nothing). So you do your homework.

Antipsychotics decrease dopamine. At low doses of antipsychotics, patients might feel like they have a little less willpower. At high doses, so high we don't use them anymore, patients might sit motionless in a chair, not getting up to eat or drink or use the bathroom, not even shifting to avoid pressure sores. Now not only can the frontal cortex conscious processes not gather up enough evidence to overcome the prior on motionlessness; even the limbic system instinctual processes (like "you should eat food" and "you should avoid pain") can't do it. You just stay motionless forever (or until your doctor lowers your dose of antipsychotics).

In contrast, people on stimulants fidget, pace, and say things like "I have to go outside and walk this off now". They have so much dopamine in their systems that any passing urge is enough to overcome the prior on motionlessness and provoke movement. If you really screw up someone's dopamine system by severe methamphetamine use or obscure side effects of swinging around antipsychotic doses, you can give people involuntary jerks, tics, and movement disorders - now even random neural noise is enough to overcome the prior.
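(A minimal extension of the sketch above, again with made-up numbers: treat dopamine as a gain on how loudly each bid is heard, with the prior on motionlessness as a fixed baseline that the other bids have to beat. Where exactly the gain applies is my guess for illustration, not established physiology.)

```python
# Toy sketch with invented numbers: dopamine modeled as a gain on how
# strongly each bid is heard. The prior on motionlessness is a fixed
# baseline; the other processes' bids must exceed it to produce movement.

def winning_action(frontal_gain=1.0, global_gain=1.0):
    bids = {
        "do nothing": 2.0,                                # prior on motionlessness
        "play Civilization": 3.0 * global_gain,           # reinforcement learner
        "do homework": 2.5 * global_gain * frontal_gain,  # conscious calculation
    }
    return max(bids, key=bids.get)

print(winning_action())                   # "play Civilization": baseline
print(winning_action(frontal_gain=1.5))   # "do homework": stimulant-like
print(winning_action(global_gain=0.3))    # "do nothing": antipsychotic-like
```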

(a quick experiment: wiggle your index finger for one second. Now wave your whole arm in the air for one second. Now jump up and down for one second. Now roll around on the floor for one second. If you're like me, you probably did the index finger one, maybe did the arm one, but the thought of getting up and jumping - let alone rolling on the floor - sounded like too much work, so you didn't. These didn't actually require different amounts of useful resources from you, like time or money or opportunity cost. But the last two required moving more and bigger muscles, so you were more reluctant to do them. This is what I mean when I say there's a prior on muscular immobility)

IV.

I think this theory matches my internal experience when I'm struggling to exert willpower. My intellectual/logical brain processes have some evidence for doing something ("knowing how the education system works, it's important to do homework so I can get into a good college and get the job I want"). My reinforcement-learner/instinctual brain processes have some opposing argument ("doing your homework has never felt reinforcing in the past, but playing computer games has felt really reinforcing!"). These two processes fight it out. If one of them gets stronger (for example, my teacher says I have to do the homework tomorrow or fail the class) it will have more "evidence" for its view and win out.

It also explains an otherwise odd feature of willpower: sufficient evidence doesn't necessarily make you do something, but overwhelming evidence sometimes does. For example, many alcoholics know that they need to quit alcohol, but find they can't. They only succeed after they "hit bottom", ie things get so bad that the evidence against using alcohol gets "beyond a reasonable doubt". Alcoholism involves some imbalance in brain regions such that the reinforcing effect of alcohol is abnormally strong. The reinforcement system is always more convinced in favor of alcohol than the intellectual system is convinced against it - until the intellectual evidence becomes disproportionately strong to an even greater degree than the reinforcement system is.
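(To put toy numbers on that last sentence:)

```python
# Toy numbers only: the alcoholic's reinforcement system is abnormally loud,
# so ordinary intellectual evidence never wins; only "hit bottom" levels do.
drink_bid = 6.0              # abnormally strong reinforcement evidence
quit_bid_ordinary = 4.0      # "I know I should really quit"
quit_bid_rock_bottom = 8.0   # lost the job, the license, the marriage...

print(quit_bid_ordinary > drink_bid)      # False: keeps drinking
print(quit_bid_rock_bottom > drink_bid)   # True: finally quits
```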

Why don't the basal ganglia automatically privilege the intellectual/logical processes, giving you infinite willpower? You could give an evolutionary explanation - in the past, animals were much less smart, and their instincts were much better suited to their environment, so the intellectual/logical processes were less accurate, relative to the reinforcement/instinctual processes, than they are today. Whenever that system last evolved, it was right to weight them however much it weighted them.

But maybe that's giving us too much credit. Even today, logical/intellectual processes can be pretty dumb. Millions of people throughout history have failed to reproduce because they became monks for false religions; if they had just listened to their reinforcement/instinctual processes instead of their intellectual/logical ones, they could have avoided that problem. The moral law says we should spend our money saving starving children instead of buying delicious food and status goods for ourselves; our reinforcement/instinctual processes let us tell the moral law to f#@k off, keeping us well-fed, high-status, and evolutionarily fit. Any convincing sophist can launch an attack through the intellectual/logical processes; when they do, the reinforcement/instinctual processes are there to save us; Henrich argues that the secret of our success is avoiding getting too bogged down by logical thought. Too bad if you have homework to do, though.

Does this theory tell us how to get more willpower? Not immediately, no. I think naive attempts to "provide more evidence" that a certain course of action is good will fail; the brain is harder to fool than people expect. I also think the space of productivity hacks has been so thoroughly explored that it would be surprising if a theoretical approach immediately outperformed the existing atheoretical one.

I think the most immediate gain to having a coherent theory of willpower is to be able to more effectively rebut positions that assume willpower doesn't exist, like Bryan Caplan's theory of mental illness. If I'm right, lack of willpower should be thought of as an imbalance between two brain regions that decreases the rate at which intellectual evidence produces action. This isn't a trivial problem to fix!

The lines here are perfectly straight - feel free to check with a ruler. Can you force yourself to perceive them that way? If not, it sounds like you can’t always make your intellectual/logical system overrule your instincts, which might make you more sympathetic to people with low willpower.

One interesting datum about willpower (which I’ve observed repeatedly in many people and contexts; not sure if it’s officially documented anywhere) is that it’s much easier to take a fully scripted action than to take an action that requires creatively filling in details.

For example, suppose several people are out trying to do "rejection therapy" (a perhaps-dubious game in which folks make requests of strangers that are likely to be rejected, e.g. "Can I listen to the walkman you're listening to for a moment?" or "Can we trade socks?"). Many many people who set out to ask a stranger such a question will… "decide" not to, once the stranger is actually near them. However, people who have written down exactly which words they plan to say in what order to exactly which stranger, with no room for ambiguity, are (anecdotally but repeatedly) more likely to actually say the words. (Or to do many other difficult habits, I think.)

(I originally noticed this pattern in undergrad, when there was a study group I wanted to leave, but whose keeper I felt flinchy about disappointing by leaving. I planned to leave the group and then … didn’t. And the next week, planned again to leave the group and then didn’t. And the third week, came in with a fully written exact sentence, said my scripted sentence, and left.)

Further examples:

  • It’s often easier to do (the dishes, or exercise, or other ‘difficult’ tasks) if there’s a set time for it.
  • Creativity-requiring tasks are often harder to attempt than more structured/directions-following-y tasks (e.g. writing poetry, especially if you’re “actually trying” at it; or attempting alignment research in a more “what actually makes sense here?” way and a less “let me make deductions from this framework other people are using” way; or just writing a blog post vs critiquing one).

— I’ve previously taken the above observations as evidence for the “subparts of my mind with differing predictions” view — if there are different bits of my (mind/brain) that are involved in e.g. assembling the sentence, vs saying the sentence, then if I need to figure out what words to say I’ll need whatever’s involved in calling the “assemble sentences” bit to also be on board, which is to say that more of me will need to be on board.

I guess you could also try this from the "Bayesian evidence" standpoint. But I'm curious how you'd do it in detail. Like, would you say the prior against "moving muscles" extends also to "assembling sentences"?

I suppose that any moment when you have to choose "A or B" is actually a moment when your choices are "A or B or give up". So the more such moments, the greater the chance of giving up.

Which seems like the reason behind one-click shopping (each click is a moment of choice "click or give up").

To handle this, it would probably work to extend the "motionless" prior to a more abstract "status quo" prior. It's not necessarily related to physical movement (it could even be the opposite if it were a sports group instead of a study group in your example) but rather an aversion to expending effort or experiencing potential undesired consequences. The prior says, "Things are fine. It's not worth going out of the way to change that."

With this extended version, creativity-requiring tasks are analogous to physically-demanding tasks. With pre-scripted tasks, it's not as demanding in the moment.

Interestingly there was a recent neuroscience paper that basically said "our computational model of the brain includes this part about mental effort but we have no damn idea why anything should require mental effort, we put it in our model because obviously that's a thing with humans but no theory would predict it":

Unlike the principles of hierarchy and world models, a cost principle was introduced into ACC models primarily for empirical rather than computational reasons. Empirically, the deployment of high-level control over task-execution appears to incur a cost that, phenomenologically, is experienced as effortful and is therefore aversive, and hence is minimized according to a cost-benefit tradeoff [34]. Such a cost was implicit in the seminal response conflict model of ACC, which incorporated a self-regulating control mechanism that increased control only when it was needed [7]. More recent models of ACC have explicitly accounted for the cost of control in regulating effortful behaviors (e.g., [35,36]). For example, in line with recent behavioral evidence [37], the HRL model of ACC regulates control at multiple levels of hierarchy, disengaging when control is not required to maintain a high average reward rate [15].

Yet despite the empirical evidence, this principle provides no obvious benefit to the agent: self-regulating control would appear to be both computationally [38] and evolutionarily [39] maladaptive. Accordingly, it is unclear whether the cost of control is a "bug" or a "feature" of the system. On one side, the bug view proposes that the cost reflects biophysical constraints. For example, self-control could consume a limited glucose supply, though this hypothesis has been vigorously disputed [40]. The exercise of control could also temporarily deplete neurotransmitter levels such as dopamine [36], and/or contribute to the buildup in the brain of toxic waste products like amyloid-beta [38]. In these cases, the relaxation of control would prevent against these negative outcomes.

Alternatively, the feature view proposes computational constraints to control. For example, it has been argued that to accommodate opportunity costs incurred when the value of an alternative task exceeds that of the current task, the relaxation of control facilitates a task switch [41]. However, there is no obvious reason why opportunity costs should impair task performance before the switch. Likewise, capacity constraints associated with difficult neural computations, such as interfering task representations [42], do not address why control over any single task should vary over time. Another possibility is that, when control is not required because the task is very easy, control can actually impair performance by slowing it down [43]. However, this explanation does not address why control sometimes wanes on difficult tasks wherein control is always advantageous.

Taken together, these considerations indicate that ACC self-regulates control levels in order to minimize a concomitant cost, the source of which remains a vexing question in cognitive neuroscience.

Under the bidding system theory, if the non-winning bids still have to pay out some fraction of the amount bid even when they lose, then bidding wars are clearly costly. Even when the executive control agent is winning all the bids, resources are being drained every auction in some proportion to how strongly other agents are still bidding. This seems to align with my own perceptions at first glance and explains how control wanes over time.

I've come to disagree with all of these perspectives.

I'm not sure whether your model actually differs substantially from mine. :-) Or at least not the version of my model articulated in "Subagents, akrasia, and coherence in humans". Compare you:

I've previously quoted Stephan Guyenet on the motivational system of lampreys (a simple fish used as a model organism). Guyenet describes various brain regions making "bids" to the basal ganglia, using dopamine as the "currency" - whichever brain region makes the highest bid gets to determine the lamprey's next action. "If there's a predator nearby", he writes, "the flee-predator region will put in a very strong bid to the striatum".

The economic metaphor here is cute, but the predictive coding community uses a different one: they describe it as representing the "confidence" or "level of evidence" for a specific calculation. So an alternate way to think about lampreys is that the flee-predator region is saying "I have VERY VERY strong evidence that fleeing a predator would be the best thing to do right now." Other regions submit their own evidence for their preferred tasks, and the basal ganglia weighs the evidence using Bayes and flees the predator.

with me:

One model (e.g. Redgrave 2007, McHaffie 2005) is that the basal ganglia receives inputs from many different brain systems; each of those systems can send different “bids” supporting or opposing a specific course of action to the basal ganglia. A bid submitted by one subsystem may, through looped connections going back from the basal ganglia, inhibit other subsystems, until one of the proposed actions becomes sufficiently dominant to be taken. [...]

Some subsystems having concerns (e.g. immediate survival) which are ranked more highly than others (e.g. creative exploration) means that the decision-making process ends up carrying out an implicit expected utility calculation. The strengths of bids submitted by different systems do not just reflect the probability that those subsystems put on an action being the most beneficial. There are also different mechanisms giving the bids from different subsystems varying amounts of weight, depending on how important the concerns represented by that subsystem happen to be in that situation. This ends up doing something like weighting the probabilities by utility, with the kinds of utility calculations that are chosen by evolution and culture in a way to maximize genetic fitness on average. Protectors, of course, are subsystems whose bids are weighted particularly strongly, since the system puts high utility on avoiding the kinds of outcomes they are trying to avoid.

The original question which motivated this section was: why are we sometimes incapable of adopting a new habit or abandoning an old one, despite knowing that to be a good idea? And the answer is: because we don’t know that such a change would be a good idea. Rather, some subsystems think that it would be a good idea, but other subsystems remain unconvinced. Thus the system’s overall judgment is that the old behavior should be maintained.

You don't explicitly talk about the utility side, just the probability, but if the flee-predator region says its proposed course of action is "the best thing to do right now", then that sounds like there's some kind of utility calculation also going on. On the other hand, I didn't think of the "dopamine represents the strength of the bid" hypothesis, but combining that with my model doesn't produce any issues as far as I can see.

Expanding a bit on this correspondence: I think a key idea Scott is missing in the post is that a lot of things are mathematically identical to "agents", "markets", etc. These are not exclusive categories, such that e.g. the brain using an internal market means it's not using Bayes' rule. Internal markets are a way to implement things like (Bayesian) maximum a-posteriori estimates; they're a very general algorithmic technique, often found in the guise of Lagrange multipliers (historically called "shadow prices" for good reason) or intermediates in backpropagation. Similar considerations apply to "agents".
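To make the shadow-price point concrete, here's a toy budget-allocation sketch (the utility weights and numbers are made up): the Lagrange multiplier on the budget constraint is exactly the "price" at which every activity's marginal utility is equalized, so a market-like quantity falls out of an ordinary optimization.

```python
# Toy example of a Lagrange multiplier acting as a "shadow price": allocate
# a budget B across activities with utilities a_i * log(x_i). At the optimum
# x_i = B * a_i / sum(a), every activity's marginal utility a_i / x_i equals
# the same multiplier sum(a) / B - as if competing bidders had equalized
# what they're willing to pay per unit of budget. All numbers invented.

a = {"sleep": 3.0, "work": 2.0, "play": 1.0}   # utility weights (invented)
B = 12.0                                        # total budget, e.g. hours

total = sum(a.values())
allocation = {k: B * w / total for k, w in a.items()}   # closed-form optimum
shadow_price = total / B                                # the Lagrange multiplier

for k, w in a.items():
    print(k, allocation[k], w / allocation[k])   # marginal utilities all equal
print("shadow price:", shadow_price)             # ...and they equal this
```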

Also, you:

I think this theory matches my internal experience when I'm struggling to exert willpower. My intellectual/logical brain processes have some evidence for doing something ("knowing how the education system works, it's important to do homework so I can get into a good college and get the job I want"). My reinforcement-learner/instinctual brain processes have some opposing argument ("doing your homework has never felt reinforcing in the past, but playing computer games has felt really reinforcing!"). These two processes fight it out. If one of them gets stronger (for example, my teacher says I have to do the homework tomorrow or fail the class) it will have more "evidence" for its view and win out.

It also explains an otherwise odd feature of willpower: sufficient evidence doesn't necessarily make you do something, but overwhelming evidence sometimes does. For example, many alcoholics know that they need to quit alcohol, but find they can't. They only succeed after they "hit bottom", ie things get so bad that the evidence against using alcohol gets "beyond a reasonable doubt". Alcoholism involves some imbalance in brain regions such that the reinforcing effect of alcohol is abnormally strong. The reinforcement system is always more convinced in favor of alcohol than the intellectual system is convinced against it - until the intellectual evidence becomes disproportionately strong to an even greater degree than the reinforcement system is.

Me, later in my post:

Note those last sentences: besides the subsystems making their own predictions, there might also be a meta-learning system keeping track of which other subsystems tend to make the most accurate predictions in each situation, giving extra weight to the bids of the subsystem which has tended to perform the best in that situation. We’ll come back to that in future posts.

This seems compatible with my experience in that, I feel like it’s possible for me to change even entrenched habits relatively quickly - assuming that the new habit really is unambiguously better. In that case, while I might forget and lapse to the old habit a few times, there’s still a rapid feedback loop which quickly indicates that the goal-directed system is simply right about the new habit being better.

Or, the behavior in question might be sufficiently complex and I might be sufficiently inexperienced at it, that the goal-directed (default planning) subagent has always mostly remained in control of it. In that case change is again easy, since there is no strong habitual pattern to override.

In contrast, in cases where it’s hard to establish a new behavior, there tends to be some kind of genuine uncertainty:

  • The benefits of the old behavior have been validated in the form of direct experience (e.g. unhealthy food that tastes good, has in fact tasted good each time), whereas the benefits of the new behavior come from a less trusted information source which is harder to validate (e.g. I’ve read scientific studies about the long-term health risks of this food).
  • Immediate vs. long-term rewards: the more remote the rewards, the larger the risk that they will for some reason never materialize.
  • High vs. low variance: sometimes when I’m bored, looking at my phone produces genuinely better results than letting my thoughts wander. E.g. I might see an interesting article or discussion, which gives me novel ideas or insights that I would not otherwise have had. Basically looking at my phone usually produces worse results than not looking at it - but sometimes it also produces much better ones than the alternative.
  • Situational variables affecting the value of the behaviors: looking at my phone can be a way to escape uncomfortable thoughts or sensations, for which purpose it’s often excellent. This then also tends to reinforce the behavior of looking at the phone when I’m in the same situation otherwise, but without uncomfortable sensations that I’d like to escape.

When there is significant uncertainty, the brain seems to fall back to those responses which have worked the best in the past - which seems like a reasonable approach, given that intelligence involves hitting tiny targets in a huge search space, so most novel responses are likely to be wrong.


I think the problem with this theory is similar to evolutionary psychology.

Reading it, it makes perfect sense and explains current observations.  But how do you methodically disprove the other competing theories?  How do we winnow down to this as the actual truth?

You need empirical data that we don't yet have a practical means of collecting. With a system like a neural link, with thousands of electrodes wired to the frontal cortex and data cross-correlating it to dopamine levels, we could actually build a simulation from the data and then validate this theory against that simulation.

(Evolutionary psychology has a worse problem, I suppose - in that case it is making theories about a past that we cannot observe. With your theory we can hope to collect the data.)

(a quick experiment: wiggle your index finger for one second. Now wave your whole arm in the air for one second. Now jump up and down for one second. Now roll around on the floor for one second. If you're like me, you probably did the index finger one, maybe did the arm one, but the thought of getting up and jumping - let alone rolling on the floor - sounded like too much work, so you didn't. These didn't actually require different amounts of useful resources from you, like time or money or opportunity cost. But the last two required moving more and bigger muscles, so you were more reluctant to do them. This is what I mean when I say there's a prior on muscular immobility)

Wow, you got me straight. I did my finger and my arm then stopped.

Though it felt more to me like my reinforcement-learner had strong preferences (i.e. to keep holding my iPad and look at the screen) that outweighed it, rather than muscular immobility in general having a prior. Often when I'm in the middle of exercise, I feel pretty excited about doing more, or I'll notice someone doing something and I'll run over to help. In those situations my reinforcement learner isn't saying anything like "keep holding the iPad, you like the iPad".

I don't understand the point of calling it "evidence" instead of "updating weights" unless the brain literally implements P(A|B) = P(A)*P(B|A)/P(B) for high-level concepts like “it’s important to do homework”. And even then, this story about evidence and beliefs doesn't add anything beyond an explanation in terms of a specific weight-aggregation algorithm.

An alternative explanation of will-power is hyperbolic discounting. Your time discount function is not exponential, and therefore not dynamically consistent. So you can simultaneously (i) prefer gaining short-term pleasure at the expense of long-term goals (e.g. play games instead of studying) and (ii) take actions to prevent future-you from doing the same (e.g. go to rehab).
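A minimal sketch of the preference reversal this implies, with invented payoffs and k = 1:

```python
# Hyperbolic discounting V = A / (1 + k*D) produces preference reversals
# that exponential discounting (V = A * d**D) cannot: which option wins
# depends on how far away the pair is, not just on the gap between them.
# All payoffs and delays below are invented for illustration.

def hyperbolic(amount, delay, k=1.0):
    return amount / (1 + k * delay)

small_soon = 5.0    # e.g. an evening of games
large_late = 10.0   # e.g. the payoff from studying, arriving 5 days later

# Viewed from ten days out, the larger, later reward looks better...
print(hyperbolic(large_late, 15) > hyperbolic(small_soon, 10))   # True
# ...but once the small reward is immediately available, preference flips.
print(hyperbolic(large_late, 5) > hyperbolic(small_soon, 0))     # False
```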

This seems simpler, but it doesn't explain why the same drugs that cause/prevent weird beliefs should add/deplete will-power.

"How to weight evidence vs. the prior" is not a free parameter in Bayesianism. What you can have is some parameter controlling the prior itself (so that the prior can be less or more confident about certain things). I guess we can speculate that there are some parameters in the prior and some parameters in the reward function s.t. various drugs affect both of them simultaneously, and maybe there's a planning-as-inference explanation for why the two are entangled.

I mean, there is something of a free parameter, which is how strong your prior over 'hypotheses' is versus how much of a likelihood ratio you get from observing 'evidence' (if there's a difference between hypotheses and evidence), and you can set your prior joint distribution over hypotheses and evidence however you want.

Maybe this is what you meant by "What you can have is some parameter controlling the prior itself (so that the prior can be less or more confident about certain things)."

Yes, I think we are talking about the same thing. If you change your distribution over hypotheses, or the distribution over evidence implied by each hypothesis, then it means you're changing the prior.


Adding to the metaphor here: suppose every day I, a Bayesian, am deciding what to do. I have some prior on what to do, which I update based on info I hear from a couple of sources, including my friend and the blogosphere. It seems that I should have some uncertainty over how reliable these sources are, such that if my friend keeps giving advice that in hindsight looks better than the advice I'm getting from the blogosphere, I update to thinking that my friend is more reliable than the blogosphere, and in future update more on my friend's advice than on the blogosphere's.

This means that if we take this sort of Bayesian theory of willpower seriously, it seems like you're going to have 'more willpower' if in the past the stuff that your willpower advised you to do seemed 'good'. Which sounds like the standard theory of "if being diligent pays off you'll be more diligent" but isn't: if your 'willpower/explicit reasoning module' says that X is a good idea and Y is a terrible idea, but other evidence comes in saying that Y will be great such that you end up doing Y anyway, and it sucks, you should have more willpower in the future. I guess the way this ends up not being what the Bayesian framework predicts is if what the evidence is actually for is the proposition "I will end up taking so-and-so action" - but that's loopy enough that I at most want to call it quasi-Bayesian. Or I guess you could have an uninformative prior over evidence reliability, such that you don't think past performance predicts future performance.
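For concreteness, here's a toy version of the source-reliability bookkeeping described above (the advisor names and numbers are invented):

```python
# Track each advice source's hindsight hit rate with a Beta distribution and
# weight its future advice by the posterior mean reliability. Everything here
# is a made-up illustration of the idea, not a claim about the brain.

class Advisor:
    def __init__(self, name):
        self.name = name
        self.hits, self.misses = 1, 1   # Beta(1, 1) prior: no opinion yet

    def record(self, advice_looked_good_in_hindsight):
        if advice_looked_good_in_hindsight:
            self.hits += 1
        else:
            self.misses += 1

    def reliability(self):
        return self.hits / (self.hits + self.misses)

willpower = Advisor("explicit reasoning")
other_evidence = Advisor("everything else")

# Suppose explicit reasoning keeps being vindicated in hindsight,
# even on days when its advice wasn't actually followed:
for _ in range(5):
    willpower.record(True)
    other_evidence.record(False)

print(willpower.reliability(), other_evidence.reliability())   # ~0.86 vs ~0.14
```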

But this works a lot better in therapy books than it does in real life.

I'm still confused by you having this experience, since my (admittedly) anecdotal one is that actually all of that stuff does work roughly as well as the therapy books suggest it does. (Different worlds, I guess. :)) Though with some caveats; from my own experience doing this and talking with others who do find it to work:

  • Often an initial presenting symptom is a manifestation of a deeper one. Then one might fix the initial issue so that the person does get genuine relief, but while that helps, the problem will return in a subtler form later. (E.g. my own follow-up on one particular self-intervention, which found some of the issues to have been healed for good but others to have returned in a different form.)
  • The roots of some issues are just really hard to find and blocked behind many, many defenses that are impossible to get to in any short amount of time.

I'd guess that the people who write the therapy books are being mostly honest when they say that the selection of stories is representative, in that they do get quick results with many clients, so it's fair to say that none of the stories are exceptionally rare in nature... but the authors also do want to promote the technique, so they don't go into as much detail on the cases that were long and messy and complicated and for those reasons wouldn't be very pedagogically useful to cover in the book anyway. So then they kind of gloss over the fact that even with these techniques, it can still take several years to get sufficiently far.

I do also think that - as you speculate in that post - therapist quality is definitely a thing. At least a therapist should be capable of being open and non-judgmental towards their clients, and it's not hard to find all kinds of horror stories about bad therapists who thought that a client's problems were all rooted in the client being polyamorous, or something similar.

Also, what childhood trauma made my subagents so averse to doing dishes?

Could be many reasons. :) Though it doesn't need to be a literal childhood trauma; it can just be an aversion picked up through association.

A typical one would be something like being told to do the dishes as a kid, not liking the way your parents go around ordering you, and then picking up a general negative association with various chores.

Someone else might have an aversion due to an opposite problem, like not being allowed to do many chores on their own and then picking up the model that one gets rewarded better by being dependent on others.

A third person might have had a parent who also really disliked doing the dishes themselves (for whatever reason) and internalized the notion that this is something to avoid.

(I don't think these are just-so stories; rather they're all variations of patterns I've encountered either in myself or others.)

The second and third one are also easy to rephrase in an equivalent Bayesian framework (in the second one, the mind comes to predict that independence causes bad outcomes; in the third one, it picks up the belief "doing the dishes is bad" through social learning). For the first one, the prediction is not quite as clear, but at least we know conditioning-by-association to be a thing that definitely happens.

Seems to me that we have two general paradigms for explaining psychological things such as (but not limited to) willpower.

From the perspective of energies, there are some numeric values, which can be increased or decreased by inputs from environment, or by each other. Different models within this paradigm posit different types of values and different relations between them. We can have a simple model with one mysterious value called "willpower"; or we can have a more complicated model with variables such as "glucose", "dopamine", "reward", "evidence", etc. Eating a cake increases the "reward" and "glucose" variables; completing a level on a mobile game increases the "reward" variable; the "reward" variable increases the "dopamine" variable; the "glucose" variable decreases over time as the glucose is processed by the metabolism, etc.

Solutions within this paradigm consist of choosing a variable that can be modified most efficiently, and trying to modify it. We can try to increase the "willpower" by yelling at people to use their willpower, or increase "motivation" by giving them a supposedly motivational speech, we can give rewards or punishment, increase "evidence" by explaining how bad their situation is, and then we can increase "glucose" by giving them a cake, or increase "dopamine" by giving them a pill. The idea is that the change of the value will propagate through the system, and achieve the desired outcome.

From the perspective of contents, there are some beliefs or habits or behaviors or whatever, things that can have many shapes (instead of just one-dimensional value), and the precise shape is what determines the functionality. They are more like lines of code in an algorithm, or perhaps like switches; flipping the switch can redirect the existing energies from one pathway to another. A single update, such as "god does not exist", can redirect a lot of existing motivations, and perceived rewards or punishments. A belief of "X helps to achieve Y" or "X does not help to achieve Y" can connect or disconnect the motivation to do X with the desirability of Y.

Solutions within this paradigm consist of debugging the mental contents, and finding the ones that should be changed, for example by reflecting on them, providing new evidence, experimenting with things. Unlike the energy model, which is supposed to be the same in general for all humans (like, maybe some people metabolize glucose faster than others, but the hypothesis "glucose causes willpower" is either true for everyone or false for everyone), the contents model requires custom-made solutions for everyone, because people have different beliefs, habits, and behaviors.

(Now I suspect that some readers are screaming "the first paradigm is called behaviorism, and the second paradigm is called psychoanalysis". If that is your objection, then I'd like to remind you of CBT.)

Yesterday there was the article How You Can Gain Self Control Without "Self-Control", whose theory seems mostly based on the energies model (variables: awareness, motivation, desires, pain tolerance, energy...), but then it suddenly provides an example based on a real person, whose mental content ("suffering is glorious") simply flips the sign in the equation how two energies relate to each other, and anecdotally the outcomes are impressive.

I guess I am trying to say that if you overly focus on gradually increasing the right energies by training or pills, you might miss the actually high-impact interventions, because they will be out of your paradigm.

(But of course, if your energies are in disorder e.g. because you have a metabolic problem, fix that first, instead of looking for a clever insight or state of mind that would miraculously cure you.)

But maybe that's giving us too much credit. Even today, logical/intellectual processes can be pretty dumb. Millions of people throughout history have failed to reproduce because they became monks for false religions; if they had just listened to their reinforcement/instinctual processes instead of their intellectual/logical ones, they could have avoided that problem.

Becoming a monk isn't as much of a reproductive dead end as you paint it here: yes, the individual won't reproduce, but the individual wasn't spontaneously created from a vacuum. The monk has siblings and cousins who will go on to reproduce, and in the context of a society that places high value on religion, the respect the monk's family earns via the monk demonstrating such a costly sacrifice, will help the monk's genes more than the would-be monk himself could do by trying to reproduce.

It's important to view religion not as a result of our cognitive processes being insufficient to reject religion (which is nearly trivial for an intelligent being to do, by design), but as a result of evolution intentionally sabotaging our cognitive processes to better navigate social dynamics, and for this reason I take issue with this being used as an example to support the point you make with it.

I agree with the point of "belief in religion likely evolved for a purpose so it's not that we're intrinsically too dumb to reject them", but I'm not sure of the reasoning in the previous paragraph. E.g. if religion in the hunter-gatherer period wasn't already associated with celibacy, then it's unlikely for this particular causality to have created an evolved "sacrifice your personal sexual success in exchange for furthering the success of your relatives" strategy in the brief period of time that celibacy happened to bring status. And the plentiful sex scandals associated with various organized religions don't give any indication of religion and celibacy being intrinsically connected; in general, being high status seems to make men more rather than less interested in sex.

A stronger argument would be that regardless of how smart intellectual processes are, they generally don't have "maximize genetic fitness" as their goal, so the monk's behavior isn't caused by the intellectual processes being particularly dumb... but then again, if those processes don't directly care about fitness, then that just gives evolution another reason to have instincts sometimes override intellectual reasoning. So this example still seems to support Scott's point of "if they had just listened to their reinforcement/instinctual processes instead of their intellectual/logical ones, they could have avoided that problem".

So this example still seems to support Scott's point of "if they had just listened to their reinforcement/instinctual processes instead of their intellectual/logical ones, they could have avoided that problem".

But my point is that the process that led to them becoming monks was an instinctual process, not an intellectual one, and the "problem" isn't actually one from the point of view of the genes.

Actually upon further thought, I disagree with Scott's premise that this case allows for a meaningful distinction between "instinctual" and "intellectual" processes, so I guess I agree with you.

What's this supposed to be estimating or predicting with Bayes here? The thing you'll end up doing? Something like this:

Each of the 3 processes has a general prior about how often they "win" (that add up to 100%, or maybe the basal ganglia normalizes them). And a bayes factor, given the specific "sensory" inputs related to their specific process, while remaining agnostic about the options of the other process. For example, the reinforcer would be thinking: "I get my way 30% of the time. Also, this level of desire to play the game is 2 times more frequent when I end up getting my way than when I don't (regardless of which of the other 2 won, let's assume, or I don't know how to keep this modular). Similarly, the first process would be looking at the level of laziness, and the last one at the strength of the arguments or sth. 

Then, the basal ganglia does bayes to update the priors given the 3 pieces of evidence, and gets to a posterior probability distribution among the 3 options.

And finally you'll end up doing what was estimated because, well, the brain does what minimizes the prediction error. Is this the weird sense in which the info is mixed with bayes and this is all bayesian stuff?

I must be missing something. If this interpretation were correct, what would increasing the dopamine in, e.g., the frontal cortex be doing? Increasing the "unnormalized" prior for that process (like falsely thinking it wins more often than it does, regardless of the evidence)? Falsely biasing the Bayes factor (like thinking it almost never happens that it feels this convinced of what should happen in the cases when it doesn't end up winning)?
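For what it's worth, here's a sketch of the calculation as I read this comment proposing it (the structure follows the comment; the numbers, and where exactly a dopamine change would enter, are guesses):

```python
# Sketch of the proposed calculation: each process has a base rate ("prior")
# of winning and reports a Bayes factor for "I should win right now" given
# its own inputs; the basal ganglia multiplies and renormalizes. Numbers
# are invented for illustration.

def posterior(priors, bayes_factors):
    unnorm = {k: priors[k] * bayes_factors[k] for k in priors}
    z = sum(unnorm.values())
    return {k: round(v / z, 3) for k, v in unnorm.items()}

priors = {"motionlessness": 0.40, "reinforcer": 0.35, "planner": 0.25}
factors = {"motionlessness": 1.0, "reinforcer": 2.0, "planner": 1.5}

print(posterior(priors, factors))   # the reinforcer dominates
# One possible reading of "more frontal dopamine": the planner's evidence is
# treated as stronger than it really is, i.e. its Bayes factor gets inflated.
factors["planner"] *= 3
print(posterior(priors, factors))   # now the planner dominates
```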

In fact, I can make myself perceive the dark blue lines as straight by holding the phone on which I read this article about a foot and a half from my face. I don’t know if this is partly because my contact lenses aren’t on at the moment. As I gradually move the screen away, the lines appear to rotate into a horizontal arrangement.

In general, I am more sympathetic to the idea that a well-conceived and consistently executed, theory-based approach to productivity hacks will yield fruit that an atheoretical, haphazard approach can't achieve, and here's why. Between the lack of incentives to help others be more productive, limited bandwidth to explore for ourselves, scientific inadequacies in aligning evidence with reason, and the constant improvements in technology, culture, and scientific understanding, there are a lot of new resources to draw upon.

This theory seems to explain all observations, but I am not able to figure out what it wouldn't be able to explain in day-to-day life.

Also, for the last picture, the key lies in looking straight at the grid and not at the noise; then you can see the straight lines, although it takes a bit of practice to reduce your perception to that.

This reminds me of lukeprog's post on motivation (I can't seem to find it though...; this should suffice). Your model and TMT kind of describe the same thing: you have more willpower for things that are high value and high expectancy. And the impulsiveness factor is similar to how there are different "kinds" of evidence: eg you are more "impulsive" to play video games or not move, even though they aren't high value+expectation logically.