Oh! The metaphor I've been using with my clients for the thing I think you're pointing at is reputation.
If the mind is a group (in this case a group of pattern predictors, but please also imagine it as a group of people), then ask yourself: How does a group of people (with no dictator) make a decision?
Well, they talk. They make bids.
Can one person use "willpower" and force the group to make a decision a particular way? Yes, if they make a strong enough bid and the rest of the group lets them. Why would the rest of the group let them? Reputation. But if they do that too many times with poor results, they lose their reputation and won't be able to dictate to the group anymore. "Willpower" lost.
I suspect this happens in the mind among pattern predictors, too. (I believe @Kaj_Sotala has written about this somewhere wrt Global Workspace Theory? I found this tweet in the meantime.) If a certain part of your mind makes bids that repeatedly turn out badly, that part loses reputation with the other parts and won't be able to make competitive bids anymore. That part's "willpower" has decreased.
There's at least this bit from "Subagents, akrasia, and coherence in humans":
One model (e.g. Redgrave 2007, McHaffie 2005) is that the basal ganglia receives inputs from many different brain systems; each of those systems can send different “bids” supporting or opposing a specific course of action to the basal ganglia. A bid submitted by one subsystem may, through looped connections going back from the basal ganglia, inhibit other subsystems, until one of the proposed actions becomes sufficiently dominant to be taken.
The above image from Redgrave 2007 has a conceptual image of the model, with two example subsystems shown. Suppose that you are eating at a restaurant in Jurassic Park when two velociraptors charge in through the window. Previously, your hunger system was submitting successful bids for the “let’s keep eating” action, which then caused inhibitory impulses to be sent to the threat system. This inhibition prevented the threat system from making bids for silly things like jumping up from the table and running away in a panic. However, as your brain registers the new situation, the threat system gets significantly more strongly activated, sending a strong bid for the “let’s run away” action. As a result of the basal ganglia receiving that bid, an inhibitory impulse is routed from the basal ganglia to the subsystem which was previously submitting bids for the “let’s keep eating” actions. This makes the threat system’s bids even stronger relative to the (inhibited) eating system’s bids.
Soon the basal ganglia, which was previously inhibiting the threat subsystem’s access to the motor system while allowing the eating system access, withdraws that inhibition and starts inhibiting the eating system’s access instead. The result is that you jump up from your chair and begin to run away. Unfortunately, this is hopeless since the velociraptor is faster than you. A few moments later, the velociraptor’s basal ganglia gives the raptor’s “eating” subsystem access to the raptor’s motor system, letting it happily munch down its latest meal.
But let’s leave velociraptors behind and go back to our original example with the phone. Suppose that you have been trying to replace the habit of looking at your phone when bored, to instead smiling and directing your attention to pleasant sensations in your body, and then letting your mind wander.
Until the new habit establishes itself, the two habits will compete for control. Frequently, the old habit will be stronger, and you will just automatically check your phone without even remembering that you were supposed to do something different. For this reason, behavioral change programs may first spend several weeks just practicing noticing the situations in which you engage in the old habit. When you do notice what you are about to do, then more goal-directed subsystems may send bids towards the “smile and look for nice sensations” action. If this happens and you pay attention to your experience, you may notice that long-term it actually feels more pleasant than looking at the phone, reinforcing the new habit until it becomes prevalent.
To put this in terms of the subagent model, we might drastically simplify things by saying that the neural pattern corresponding to the old habit is a subagent reacting to a specific sensation (boredom) in the consciousness workspace: its reaction is to generate an intention to look at the phone. At first, you might train the subagent responsible for monitoring the contents of your consciousness, to output moments of introspective awareness highlighting when that intention appears. That introspective awareness helps alert a goal-directed subagent to try to trigger the new habit instead. Gradually, a neural circuit corresponding to the new habit gets trained up, which starts sending its own bids when it detects boredom. Over time, reinforcement learning in the basal ganglia starts giving that subagent’s bids more weight relative to the old habit’s, until it no longer needs the goal-directed subagent’s support in order to win.
Now this model helps incorporate things like the role of having a vivid emotional motivation, a sense of hope, or psyching yourself up when trying to achieve habit change. Doing things like imagining an outcome that you wish the habit to lead to, may activate additional subsystems which care about those kinds of outcomes, causing them to submit additional bids in favor of the new habit. The extent to which you succeed at doing so, depends on the extent to which your mind-system considers it plausible that the new habit leads to the new outcome. For instance, if you imagine your exercise habit making you strong and healthy, then subagents which care about strength and health might activate to the extent that you believe this to be a likely outcome, sending bids in favor of the exercise action.
On this view, one way for the mind to maintain coherence and readjust its behaviors, is its ability to re-evaluate old habits in light of which subsystems get activated when reflecting on the possible consequences of new habits. An old habit having been strongly reinforced reflects that a great deal of evidence has accumulated in favor of it being beneficial, but the behavior in question can still be overridden if enough influential subsystems weigh in with their evaluation that a new behavior would be more beneficial in expectation.
Some subsystems having concerns (e.g. immediate survival) which are ranked more highly than others (e.g. creative exploration) means that the decision-making process ends up carrying out an implicit expected utility calculation. The strengths of bids submitted by different systems do not just reflect the probability that those subsystems put on an action being the most beneficial. There are also different mechanisms giving the bids from different subsystems varying amounts of weight, depending on how important the concerns represented by that subsystem happen to be in that situation. This ends up doing something like weighting the probabilities by utility, with the kinds of utility calculations that are chosen by evolution and culture in a way to maximize genetic fitness on average. Protectors, of course, are subsystems whose bids are weighted particularly strongly, since the system puts high utility on avoiding the kinds of outcomes they are trying to avoid.
The original question which motivated this section was: why are we sometimes incapable of adopting a new habit or abandoning an old one, despite knowing that to be a good idea? And the answer is: because we don’t know that such a change would be a good idea. Rather, some subsystems think that it would be a good idea, but other subsystems remain unconvinced. Thus the system’s overall judgment is that the old behavior should be maintained.
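(Not from the quoted post itself, but the bid-and-inhibition mechanism it describes can be sketched as a toy model. All names, numbers, and the `inhibition` factor here are my own illustrative assumptions:)

```python
# Toy sketch of subsystems submitting "bids" for actions, with the
# winning bid inhibiting the losers -- loosely mimicking the basal
# ganglia arbitration described in the quoted passage.

def select_action(bids, inhibition=0.5):
    """bids: dict mapping subsystem name -> (action, bid strength).

    The strongest bid wins; every other subsystem's bid is scaled
    down by `inhibition`, standing in for the inhibitory impulses
    routed back from the basal ganglia.
    """
    winner = max(bids, key=lambda s: bids[s][1])
    for s in bids:
        if s != winner:
            action, strength = bids[s]
            bids[s] = (action, strength * inhibition)
    return bids[winner][0]

# Eating at the Jurassic Park restaurant, before the raptors arrive:
bids = {"hunger": ("keep eating", 0.8), "threat": ("run away", 0.1)}
assert select_action(bids) == "keep eating"

# The threat system registers the raptors and sends a much stronger
# bid; the (now inhibited) eating system loses the next round.
bids["threat"] = ("run away", 5.0)
assert select_action(bids) == "run away"
```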
This makes some interesting predictions re: some types of trauma: namely, that they can happen when someone was (probably even correctly!) pushing very hard towards some important goal, and then either they ran out of fuel just before finishing and collapsed, or they achieved that goal and then - because of circumstances, just plain bad luck, or something else - that goal failed to pay off in the way that it usually does, societally speaking. In either case, the predictor/pusher that burned down lots of savings in investment doesn't get paid off. This is maybe part of why "if trauma, and help, you get stronger; if trauma, and no help, you get weaker".
Maybe, but that also requires that the other group members were (irrationally) failing to consider that the “attempt could've been good even if the luck was bad”.
In human groups, people often do gain (some) reputation for noble failures (is this wrong?)
Sure - I can believe that that's one way a person's internal quorum can be set up. In other cases, or for other reasons, they might be instead set up to demand results, and evaluate primarily based on results. And that's not great or necessarily psychologically healthy, but then the question becomes "why do some people end up one way and other people the other way?" Also, there's the question of just how big/significant the effort was, and thus how big of an effective risk the one predictor took. Be it internal to one person or relevant to a group of humans, a sufficiently grand-scale noble failure will not generally be seen as all that noble (IME).
Parts of human mind are not little humans. They are allowed to be irrational. It can't be rational subagents all the way down. Rationality itself is probably implemented as subagents saying "let's observe the world and try to make a correct model" winning a reputational war against subagents proposing things like "let's just think happy thoughts".
But I can imagine how some subagents could have less trust towards "good intentions that didn't bring actual good outcomes" than others. For example, if you live in an environment where it is normal to make dramatic promises and then fail to act on them. I think I have read some books long ago claiming that children of alcoholic parents are often like that. They just stop listening to promises and excuses, because they have already heard too many of them, and they learned that nothing ever happens. I can imagine that they turn this habitual mistrust against themselves, too. That "I tried something, and it was a good idea, but due to bad luck it failed" resembles too much the parent saying how they had the good insight that they need to stop drinking, but only due to some external factor they had to drink yet another bottle today. In short, if your environment fails you a lot, as a response you can become unrealistically harsh on yourself.
Another possible explanation is that different people's attention is focused on different places. Some people pay more attention to the promises, some pay more attention to the material results, some pay more attention to their feelings. This itself can be a consequence of the previous experience with paying attention to different things.
I wouldn’t say the subconscious calibrating on more substantial measures of success, such as “how happy something made me” or “how much status that seems to have brought”, is irrational. What you’re proposing, it seems to me, is calibrating only on how good an idea it was according to the predictor part / System 2. Which gets calibrated, I would guess, when the person analyses the situation? But if System 2 is sufficiently bad, calibrating on pure results is a good way to shield against pursuing some goal whose pursuit yields nothing but System 2’s evaluation that the person did well. Which is bad, if one of the end goals of the subconscious is “objective success”.
For example, a situation I could easily imagine myself to have been in: Every day I struggle to go to bed, because I can’t put away my phone. But when I do, at 23:30, I congratulate myself - it took a lot of effort, and I did actually succeed in giving myself enough time to sleep almost long enough. If I didn’t recalibrate rationally, and “me-who-uses-internal-metrics-of-success” were happy with good effort every day, I’d keep doing it. All while real me would get fed up soon, and get a screen blocker app to turn on at 23:00 every day to sleep well every day at no willpower cost. (+- the other factors and supposing phone after 23 isn’t very important for some parts of me)
In machine-learning terms, this is the difference between model-free learning (reputation based on success/failure record alone) and model-based learning (reputation can be gained for worthy failed attempts, or lost for foolish lucky wins).
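(A toy illustration of this distinction, with a hypothetical per-subagent "reputation" score; the function names and numbers are mine, not standard ML terminology:)

```python
# Two reputation-update rules for a subagent that just made a bet.

def update_model_free(reputation, payoff):
    """Model-free flavour: reputation tracks the raw
    success/failure record, however the luck fell."""
    return reputation + payoff

def update_model_based(reputation, expected_payoff):
    """Model-based flavour: reputation tracks how good the bet
    looked given what was knowable at the time, so a worthy
    failed attempt can still gain credit."""
    return reputation + expected_payoff

# A "noble failure": a bet that was positive in expectation
# (+0.5) but happened to lose (-1.0).
rep_mf = update_model_free(10.0, payoff=-1.0)           # punished
rep_mb = update_model_based(10.0, expected_payoff=0.5)  # rewarded
assert rep_mf < 10.0 < rep_mb
```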
Under that definition you end up saying that what are usually called ‘model-free’ RL algorithms like Q-learning are model-based. E.g. in Connect 4, once you’ve learned that getting 3 in a row has a high value, you get credit for taking actions that lead to 3 in a row, even if you ultimately lose the game.
I think it is kinda reasonable to call Q-learning model-based, to be fair, since you can back out a lot of information about the world from the Q-values with little effort.
Ah, yeah, sorry. I do think about this distinction more than I think about the actual model-based vs model-free distinction as defined in ML. Are there alternative terms you'd use if you wanted to point out this distinction? Maybe policy-gradient vs ... not policy-gradient?
Not sure. I guess you also have to exclude policy gradient methods that make use of learned value estimates. "Learned evaluation vs sampled evaluation" is one way you could say it.
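(The Connect 4 point can be made concrete with a toy sketch; the state names and values below are my own illustrative assumptions, not a real Connect 4 evaluator:)

```python
# Sampled evaluation: credit for an action comes only from the
# final game result. Learned evaluation: credit comes from the
# learned value of the state the action reached, e.g. 3-in-a-row.

learned_value = {"start": 0.0, "three_in_a_row": 0.9}

def sampled_credit(final_outcome):
    # Monte-Carlo style: every action in a lost game is blamed.
    return final_outcome

def learned_credit(next_state, reward=0.0, gamma=0.99):
    # TD style: credit is the learned value of where you landed.
    return reward + gamma * learned_value[next_state]

# You reached three-in-a-row but ultimately lost the game (-1):
assert sampled_credit(-1) < 0               # blamed under sampling
assert learned_credit("three_in_a_row") > 0  # credited under learning
```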
Model-based vs model-free does feel quite appropriate, shame it's used for a narrower kind of model in RL. Not sure if it's used in your sense in other contexts.
Oh wow, this is almost exactly how I model my internal mind. I didn't realize it was a real thing other people have arrived at. Is there a name for this?
I got the bidding idea from Kaj, and “if the mind is a group” is my preferred metaphor/simplification of multi-agent models of mind (writing about this soon). This metaphor naturally implies reputation, as I realized yesterday while working with a client. I don't know if there’s a name for the reputation idea; it may be original.
Reminds me of Internal Family Systems, which has a nice amount of research behind it if you want to learn more.
Thanks! Interesting read. I have one question though, so let me know if I'm following you properly: the moment you go "credibility broke", how could you study which things are / aren't bottom up motivating? Wouldn't the task of self-examining require spare "willpower" on its own?
I suspect the mechanisms that allow you to do these things (self-reflection) are the same ones that consume this living willpower. If you've used that up, you can only do the basic stuff that your visceral process decides: watching netflix and eating junk food, for example.
A way out of my argument could be that there are different willpower-consuming processes and each one has its own "bank account". Getting burnt out from work doesn't make you burn out from playing tennis, and getting burnt out from playing tennis doesn't get you burnt out from playing music. This makes the model more complex, though.
Yes, this is a good point, relates to why I claimed at top that this is an oversimplified model. I appreciate you using logic from my stated premises; helps things be falsifiable.
It seems to me:
- Somehow people who are in good physical health wake up each day with a certain amount of restored willpower. (This is inconsistent with the toy model in the OP, but is still my real / more-complicated model.)

I wish the above were more coherent/model-y.
This fits in with opportunity cost-centered and exploration-exploitation -based views of willpower. Excessive focus on any one task implies that you are probably hitting diminishing returns while accumulating opportunity costs for not doing anything else. It also implies that you are probably strongly in "exploit" mode and not doing much exploring. Under those models, accumulating mental fatigue acts to force some of your focus to go to tasks that feel more intrinsically enjoyable rather than duty-based, which tends to correlate with things like exploration and e.g. social resource-building. And your willpower gets reset during the night so that you could then go back to working on those high-opportunity cost exploit tasks again.
I think those models fit together with yours.
“Living willpower” is willpower that is consciously understood as a bet on unknowns: “I don’t know whether this project will pay off, but I am betting my finite credibility on it anyhow.”
This feels related to the idea of Slack that Scott Alexander writes about here in SSC.
He gives this example towards the end:
7. Ideas. These are in constant evolutionary competition – this is the insight behind memetics. The memetic equivalent of slack is inferential range, aka “willingness to entertain and explore ideas before deciding that they are wrong”.
Willpower and money are both ways to create slack for yourself and others so that you can explore ideas/projects with an uncertain payoff.
I often talk with clients about burnout as your subconscious/parts 'going on strike' because you've ignored them for too long
I never made the analogy to Atlas Shrugged and the live money leaving the dead money because it wasn't actually tending to the needs of the system, but now you've got me thinking
also, this 'subconscious parts going on strike' theory makes slightly different predictions than the 'is it good for the whole system/live' theory
for instance, i predict that you can have 'dead parts' that e.g. give people social anxiety based on past trauma, even though it's no longer actually relevant to their current situation.
and that if you override this social anxiety using 'live willpower' for a while, you can get burnout, even though the willpower is in some sense 'correct' about what would be good for the overall flourishing of the system given the current reality.
There is an ACX article on "trapped priors", which in the Ayn Rand analogy would be... uhm, dunno.
The idea is that a subagent can make a self-fulfilling prophecy like "if you do X, you will feel really bad". You use some willpower to make yourself do X, but the subagent keeps screaming at you "now you will feel bad! bad!! bad!!!" and the screaming ultimately makes you feel bad. Then the subagent says "I told you so" and collects the money.
The business analogy could be betting on company internal prediction market, where some employees figure out that they can bet on their own work ending up bad, and then sabotage it and collect the money. And you can't fire them, because HR does not allow you to fire your "best" employees (where "best" is operationalized as "making excellent predictions on the internal prediction market").
in my model that happens through local updates, rather than a global system
for instance, if i used my willpower to feel my social anxiety completely (instead of the usual strategy of suppression) while socializing, i might get some small or large reconsolidation updates to the social anxiety, such that that part thinks it's needed in fewer situations or not at all
alternatively, the part that has the strategy of going to socialize and feeling confident may gain some more internal evidence, so it wins the internal conflict slightly more (but the internal conflict is still there and causes a drain)
i think the sort of global evaluation you're talking about is pretty rare, though something like it can happen when someone e.g. reaches a deep state of love through meditation, and then is able to access lots of their unloved parts that are downstream TRYING to get to that love, and suddenly a big shift happens to the whole system simultaneously (another type of global reevaluation can take place through reconsolidating deep internal organizing principles like fundamental ontological constraints or attachment style)
"Global evaluation" isn't exactly what I'm trying to posit; more like a "things bottom-out in X currency" thing.
Like, in the toy model about $ from Atlas Shrugged, an heir who spends money foolishly eventually goes broke, and can no longer get others to follow their directions. This isn't because the whole economy gets together to evaluate their projects. It's because they spend their currency locally on things again and again, and the things they bet on do not pay off, do not give them new currency.
I think the analog happens in me/others: I'll get excited about some topic, pursue it for a while, get back nothing, and decide the generator of that excitement was boring after all.
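(A minimal sketch of that "goes broke through repeated local bets" dynamic, with made-up numbers; nothing here audits the subagent's projects globally:)

```python
# Toy version of the heir-goes-broke point: the subagent just keeps
# spending currency on bids, and if the bids never pay off, it is
# eventually priced out of the auction -- no global evaluation needed.

def pursue(currency, bid_cost, payoff_per_round, rounds):
    for _ in range(rounds):
        if currency < bid_cost:
            break               # can no longer make competitive bids
        currency -= bid_cost    # spend reputation/attention locally
        currency += payoff_per_round
    return currency

# An excitement-generator that keeps bidding but never pays off:
broke = pursue(currency=10.0, bid_cost=2.0, payoff_per_round=0.0, rounds=10)
assert broke < 2.0  # after five failed bets it can't bid anymore

# The same generator, had its pursuits paid off, keeps its influence:
rich = pursue(currency=10.0, bid_cost=2.0, payoff_per_round=3.0, rounds=10)
assert rich > 10.0
```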
ah that makes sense
in my mind this isn't resources flowing to elsewhere, it's either:
Related: The monkey and the machine by Paul Christiano. (Bottom-up processes ~= monkey. Verbal planner ~= deliberator. Section IV talks about the deliberator building trust with the monkey.)
A difference between this essay and Paul's is that this one seems to lean further towards "a good state is one where the verbal planner ~only spends attention on things that the bottom-up processes care about", whereas Paul's essay suggests a compromise where the deliberator gets to spend a good chunk of attention on things that the monkey doesn't care about. (In Rand's metaphor, I guess this would be like using some of your investment returns for consumption. Where consumption would presumably count as a type of dead money, although the connotations don't feel exactly right, so maybe it should be in a 3rd bucket.)
Ten-year-old me had an objection to the idea of "willpower" on principle. Obviously, "Willpower" is the process by which people get themselves to do unpleasant things. I don't want to do unpleasant things. Therefore, having as little Willpower as possible will minimize the unpleasant things I end up doing.
Another way I've found myself with a lack of ability to motivate myself seems related to the post's original thesis. Up until I finally graduated college, my typical use of "willpower"-based motivation would be to do something I'd rather not have to do (usually homework) in order to avoid consequences that were supposed to be worse than doing the unpleasant thing. Unfortunately, this led to a bad feedback loop. My brain would predict that homework would be less fun than video games, I'd do it anyway, it would indeed be less fun than video games, and the lesson my brain would learn would be "pay less attention to that voice screaming that undone homework leads to doom" instead of "good, we successfully avoided the problem of undone homework". Eventually, doing homework became so aversive that I actually did stop caring about what might happen if I stopped doing it...
What should happen is that you occasionally fail to do homework and instead play video games. Then there are worse negative consequences as predicted. And then your verbal planner gets more credit and so you have more willpower.
Yeah, except that sometimes I'm weirdly insensitive to punishments and other threats. For some reason, my brain often (mistakenly?) concludes that doing the thing that would let me avoid the punishment is impossible, and I just shut down completely instead of trying to comply.
As I once wrote before:
Guy with a gun: I'm going to shoot you if you haven't changed the sheets on your bed by tomorrow.
Me: AAH I'M GOING TO DIE IT'S NO GOOD I MIGHT AS WELL SPEND THE DAY LYING IN BED PLAYING VIDEO GAMES BECAUSE I'M GOING TO GET SHOT TOMORROW SOMEONE CALL THE FUNERAL HOME AND MAKE PLANS TELL MY FAMILY I LOVE THEM
Guy with a gun: You know, you could always just... change the sheets?
ME: THE THOUGHT HAS OCCURRED TO ME BUT I'M TOO UPSET RIGHT NOW ABOUT THE FACT THAT I'M GOING TO DIE TOMORROW BECAUSE THE SHEETS WEREN'T CHANGED TO ACTUALLY GO AND CHANGE THEM
Also, the "worse consequences" were often projected to happen years in the future: you need good grades to get into a good college and then get a good job, etc. The fear of being homeless years in the future when your money runs out isn't really all that great when the "good" future you can imagine for yourself doesn't seem very appealing either - the idea of having a full-time job horrified ten-year-old me for various reasons, and I never really managed to get over that, to the point where I never did manage to get and keep a "real" job after college. There were years I lived with the constant worry that my parents might one day decide to stop supporting me financially and kick me out of their house...
Could this possibly be what they call ADHD?
(By the way, I seem to have a milder form of this. If the consequences are sufficiently bad or sufficiently near, like when there is deadline, I can make myself do the unpleasant thing. But things that have no external deadline... often just don't happen, sometimes for years.)
Oh, I remember and liked that comment! But I didn't remember your username. I have a bit more information about that now, but I'll write it there.
From the model in this article, I think the way this should work in the high-willpower case is that your planner gets credit aka willpower for accurate short-term predictions and that gives it credit for long-term predictions like "if I get good grades then I will get into a good college and then I will get a good job and then I will get status, power, sex, children, etc".
In your case it sounds like your planner was predicting "if I don't get good grades then I will be homeless" and this prediction was wrong, because your parents supported you. Also it was predicting "if I get a good job then it will be horrifying", which isn't true for most people. Perhaps it was mis-calibrated and overly prone to predicting doom? You mention depression in the linked comment. From the model in this article, someone's visceral processes will respond to a mis-calibrated planner by reducing its influence aka willpower.
I don't mean to pry. The broader point is that improving the planner should increase willpower, with some lag while the planner gets credit for its improved plans. The details of how to do that will be different for each person.
I did have an "internship" right after college for a few months and was completely miserable during it. The other problem was that one thing I valued highly was free time, and regardless of how much money and status a 40 hour a week job gives you, that's still 40 hours a week in which your time isn't free! There are very few jobs in which, like an Uber driver, you have absolute freedom to choose when and how much to work and the only consequence of not working for a period of time is that you don't get paid - you can't "lose your job" for choosing not to show up. Unfortunately, most jobs that fit that description, such as Uber driver or fiction novel writer, usually pay very poorly.
Yet another problem is that I feel like applying for jobs will be futile. Spending time submitting resumes into a metaphorical black hole and never getting any interviews or even a form letter in response, even from grocery stores, has left me in despair and even starting to think about job hunting consistently and reliably makes me start to feel incredibly depressed.
one thing I valued highly was free time, and regardless of how much money and status a 40 hour a week job gives you, that's still 40 hours a week in which your time isn't free!
Yeah, the same here. The harder I work the more money I can get (though the relation is not linear; more like logarithmic), but at this point the thing I want is not money... it is free time!
I guess the official solution is to save money for early retirement. Which requires investing the money wisely, otherwise the inflation eats it.
By the way, perhaps you could have some people check your resume, maybe you are doing something wrong there.
your psyche’s conscious verbal planner “earns” willpower
This seems to assume that there's 1) exactly one planner and 2) it's verbal. I think there are probably different parts that enforce top-down control, some verbal and some maybe not.
For example, exerting willpower to study boring academic material seems like a very different process than exerting willpower to lift weights at the gym.
I think that there is something like:
My model of burnout roughly agrees with both yours and @Matt Goldenberg's. To add to Matt's "burnout as revolt" model, my hunch is that burnout often involves not only a loss of belief that top-down control is beneficial. I think it also involves more biological changes to the neural variables that determine the effectiveness of top-down versus bottom-up control. Something in the physical ability of the top-down processes to control the bottom-up ones is damaged, possibly permanently.
Metaphorically, it's like the revolting parts don't just refuse to collaborate anymore; they also blow up some of the infrastructure that was previously used to control them.
Perhaps the "decisions" that happen in the brain are often accompanied by some change in hormones (I am thinking about Peterson saying how lobsters get depressed after they lose a fight), so we can't just willpower them away. Instead we need to find some hack that reverts the hormonal signal.
Sometimes just taking a break helps, if the change in hormones is temporary and gets restored to the usual level. Or we can do something pleasant to recharge (eat, talk to friends). Or we can try working with unconsciousness and use some visualization or power poses or whatever.
Curated. I resonated with the metaphors here.
For a long time I was skeptical about things like "your mind(s) can learn to trust yourselve(s)." But it's seemed more true to me over time.
It's hard to disentangle various causes since my life has changed a lot. My own experience is that early on, I got more mileage out of "self-coercive techniques" than "internal alignment/self-trust" style techniques, and this was necessary for getting started on "have enough executive function to really get anything done at all". But over the years I think I have developed more self-trust that has paid off in ways similar to how this post describes.
Ironically, I do not know to whom to attribute the notion that 'all problems are credit assignment problems.'
I know that in my intellectual history it was Abram Demski's post The Credit Assignment Problem.
As always, I may not be the intended audience, so please excuse my questions that might be patently obvious to the intended audience.
Am I right in understanding a very simplified version of this model is that if you use willpower too much without deriving any net benefits, eventually you'll suffer 'burnout' which really is just a mistrust of using willpower ever, which may have negative effects on other aspects of your life even where willpower is needed like, say, cleaning your house?
Willpower, as I understand it, is another word for 'patience' or 'discipline', variously described as the ability to choose to endure pain (physical or emotional). Whether willpower actually exists is a question I won't get into here; let's assume for the sake of this model it does, and fits the description of the ability to choose to endure pain.
This sentence I find especially alien:
your psyche’s conscious verbal planner “earns” willpower (earns trust with the rest of your psyche) by choosing actions that nourish your fundamental, bottom-up processes in the long run.
What is the "psyche's conscious verbal planner"? I don't know what part of my mind, person, identity, or totality as an organism I can equate this label to. Also, without examples of which actions nourish (would cleaning the house or cooking healthy meals count?), which are fundamental and which aren't, it's even harder to pin down what this is and why you attribute willpower to it.
It appears to have the ability to force oneself to go on a date, which really makes the "verbal" descriptor confusing, since a lot of the processes involved in going on a date don't feel like they are verbal or lexical, or take the form of the speaker's native language, written or spoken. At least in my experience, a lot of the thoughts, feelings, and motivations behind going on a date are not innately verbal for me, and if you asked me "why did you agree to see this person?", even if I felt no fear of embarrassment explaining my reasons, I'd have a hard time putting that into words. Or the words I'd use would be so impossibly vague ("they seem cool") as to suggest that there was a nonverbal reasoning or motivation.
Would this 'conscious verbal planner' also be the part of my mind and body that searches an online store a week later to see if those shoes I want are on special? Or would you attribute that to a different entity?
Is there an unconscious verbal planner?
When I am thinking very carefully about what I'm saying, but not so minutely that I'm thinking about the correct grammatical use, would the grammar I use be my unconscious verbal planner, while the content of my speech be the conscious verbal planner?
A lot of examples of willpower, for me, are nonverbal and come from guilt. Guilt felt as a somatic or bodily thing. I can't verbalize why I feel guilty, although it verbally equates to the words "should", "must", and even "ought" when used as imperatives, not as modals.
Our conscious thought processes are all the ones we are conscious of. Some of them are verbal, in words, eg thinking about what you want to say before saying it. Some of them are nonverbal, like a conscious awareness of guilt.
Most people have some form of inner monologue, aka self-talk, but not all. It sounds like you may be one of those with limited or no self-talk. Whereas I might think, in words, "I should get up or I'll be late for work", perhaps you experience a rising sense of guilt.
To benefit from this article you'll need to translate it to fit your brain patterns.
I have given you the wrong impression; I assure you I have a very verbal, very long-winded inner monologue which uses a lot of words and sentences. However, I wouldn't consider it the sole or perhaps even the chief source of my planning, although sometimes it is involved in how I plan. So when the author says "verbal conscious planner", are there other 'planners' I should be excluding from my personal translation? How would I know?
I'm just wondering if there's a specific reason that the author has referred to it as a VERBAL conscious planner, and if willpower is therefore only applicable to what is verbal. Because as I understand it, especially in the dual theory of memory which divides memory into Declarative/Explicit Memory and Non-Declarative/Implicit Memory (to which it is easy to draw an analogy with System 1 and 2, or the Elaboration Likelihood model of attitudinal change), the verbal is explicit; the non-verbal is vague in this dichotomy.
Why refer to it as a "verbal conscious planner" - why not just say "conscious planner"? Surely the difference isn't haphazard?
"Our conscious thought processes are all the ones we are conscious of."
Could you rephrase this less tautologically? - because now I'm wondering a lot of perhaps irrelevant things such as: is it necessary to be conscious of the content of a thought, or only that a thought is currently being held? What micro-macro level of abstraction is necessary? For example, if I'm deliberating if I should check if a pair of shoes are available on an online store still discounted am I conscious of the thought if I think "shoes on online store" or must I refer to "that pair of red converses on ASOS"?
I just worry that this is perhaps a logocentric view of willpower.
Thanks for the extra information. Like you, my plans and my planning can be verbal, non-verbal, or a mix.
Why refer to it as a "verbal conscious planner" - why not just say "conscious planner"? Surely the difference isn't haphazard?
I can't speak for the author, but thinking of times where I've "lacked willpower" to follow a plan, or noticed that it's "draining willpower" to follow a plan, it's normally verbal plans and planning. Where "willpower" here is the ability to delay gratification rather than to withstand physical pain. My model here is that verbal plans are shareable and verbal planning is more transparent, so it's more vulnerable to hostile telepaths and so to self-deception and misalignment. A verbal plan is more likely to be optimized to signal virtue.
Suppose I'm playing chess and I plan out a mate in five, thinking visually. My opponent plays a move that lets me capture their queen but forgoes the mate. I don't experience "temptation" to take the queen, or have to use "willpower" to press ahead with the mate. Whereas a verbal plan like "I'm still a bit sick, I'll go to bed early" is more likely to be derailed by temptation. This could of course be confounded by the different situations.
I think you raise a great question, and the more I think about it the less certain I am. This model predicts that people who mostly think visually have greater willpower than those who think verbally. I instinctively doubt this; it doesn't sound right. But then I read about the power of visualization, and maybe I shouldn't? E.g. Trigger-Action Planning specifically calls out rehearsed visualization as helping to install TAPs.
This reminds me of this post from Gena Gorlin (and the themes in her writing more generally): https://builders.genagorlin.com/p/death-is-the-default
It doesn't quite map on precisely, but this quote seems to capture something you're also trying to get at: "On the contrary, remembering that “death is the default” should mobilize us to fight them with everything we’ve got—recognizing that the one thing we’ve got, in the fight against entropy, inertia, and death, is our power of agency."
I also see a parallel to two different conceptions of ethics.
Lastly, I think your observation that healthy processes "must take an active interest in things they don't yet know" is perhaps a recognition that a key component of our finitude is our bounded rationality, and that we must recognize our limitations if we are to live well.
Loved your model of willpower.
How you stretched this idea into what drives us in the first place is brilliant: we tend to value a certain possibility or ideal state, and we think a particular tending of a specific process might get us to it. But we tend to switch from an open process with changing, unknown variables to a closed, static process, which risks collapse. I think that's the core idea, right?
Also, I'm interested in the idea of how important unknownness is in relationships, especially romance. Where did you get this idea specifically?
Nice. Yes.
Learning to distinguish the sensation that comes with living willpower or dead willpower is a high ROI practice. Like red/yellow/green lights. Most times, it serves best to heed the message, even if you might run through a red when the situation warrants ...
What’s the payout of this model? I’m highly skeptical of any metaphor from Ayn Rand, so drawing comparisons to her ideas doesn’t add any insight for me. If I’m just not that target audience, that’s cool.
Thanks for asking. The toy model of “living money”, and the one about willpower/burnout, are meant to appeal to people who don’t necessarily put credibility in Rand; I’m trying to have the models speak for themselves; so you probably *are* in my target audience. (I only mentioned Rand because it’s good to credit models’ originators when using their work.)
Re: what the payout is:
This model suggests what kind of thing an “ego with willpower” is — where it comes from, how it keeps itself in existence:
I find this a useful model.
One way it’s useful:
IME, many people think they get willpower by magic (unrelated to their choices, surroundings, etc., although maybe related to sleep/food/physiology), and should use their willpower for whatever some abstract system tells them is virtuous.
I think this is a bad model (makes inaccurate predictions in areas that matter; leads people to have low capacity unnecessarily).
The model in the OP, by contrast, suggests that it’s good to take an interest in which actions produce something you can viscerally perceive as meaningful/rewarding/good, if you want to be able to motivate yourself to take actions.
(IME this model works better than does trying to think in terms of physiology solely, and is non-obvious to some set of people who come to me wondering what part of their machine is broken-or-something such that they are burnt out.)
(Though FWIW, IME physiology and other basic aspects of well-being also have important impacts, and food/sleep/exercise/sunlight/friends are also worth attending to.)
Thanks for clarifying! Willpower is a tricky concept.
I’ve suffered from depression at times, where getting out of bed felt like a huge exertion of emotional energy. Due to my tenuous control over my focus with ADHD, I often have to repeat in my head what I’m doing so I don’t forget in the middle of it. I’ve also put in 60-hour weeks writing code, both because I’ve had serious deadlines, but also because time disappeared as I got so wrapped up in it. I’ve stayed on healthy diets for years without problem, and had times where I slipped back to high-sugar foods.
All of these are examples of what people refer to as willpower (or lack thereof). Most of them are from times in my life where I haven’t felt really in control. This is especially true regarding memory. It’s not uncommon for me to realize as I am putting my groceries away that I didn’t get the one item I really needed (and have to go back).
That said, I’m pretty good at grit: I’m willing to put in the work, despite hardships and obstacles. I’m also good at leading by example. I’ll fight the good fight, when needed.
All of these different features of me and my brain are wrapped up in the concept of willpower. Each of them is a mixture of conscious and unconscious patterns of behavior (including cognitive).
It’s this distinction that makes me look askance at the concept of willpower. It’s too wrapped up in moral judgement.
I wasn’t diagnosed with ADHD until after my son was. I lived with a lot of guilt and shame because I interpreted the things I struggled with as moral failings, because I just lacked the willpower.
Then I saw how many people struggled with the same sorts of things I did. It was really weird learning that so many things I previously would have described as negative personality traits of mine, turned out to be what happens when someone has this quirk in their brain that me and my son have.
Now, I don’t carry that guilt. Now, I know that despite my best efforts, tools, and practices, there are things I’m just going to always struggle with that neurotypical people find easy, and that’s okay. Now, I don’t see myself as having low willpower because of them. Now, I better understand the quirks of my brain, and I am better equipped to mitigate my weaknesses and play into my strengths.
Now, I’m a lot happier and confident. I wish it hadn’t taken 40 years for me to figure things out, but I’m glad my son is free of that shame and guilt.
I feel pretty lucky: when I was a kid, I had a knack for patterns and abstraction, a fascination with computers, a family that could actually afford one, and people who could help me when I was stuck. I managed to make my hobby into my profession, and I still enjoy it as a hobby.
I totally agree that joy and meaning are a balm to burnout. That and vacations; take more vacations.
I guess what I’m saying is be careful to not stretch your metaphors too far, as the details are messy; however, if it helps you to remember to take care of yourself, find joy, and seek meaning, I’m all for it.
Just some quick guesses:
If you have problems with willpower, maybe you should make your predictions explicit whenever you try to use it. I mean, as a rationalist, you are already trying to be better calibrated, so you could leverage the same mechanism into supporting your willpower. If you predict a 90% chance of success for some action, and you know that you are right, in theory you should feel little resistance. And if you predict a 10% chance of success, maybe you shouldn't be doing it? And it helps you to be honest with yourself.
(This has a serious problem, though. Sometimes the things with 10% chance of success are worth doing, if the cost is small and the potential gain large enough. Maybe in such cases you should reframe it somehow. Either bet on large numbers "if I keep doing X every day, I will succeed within a month", or bet on some different outcome "if I start a new company, there is a 10% chance of financial success, and a 90% chance that it will make a cool story to impress my friends".)
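The "bet on large numbers" reframe above can be made concrete with a quick calculation. (The 10%-per-attempt figure is the hypothetical from the comment; the 30-day window is my own assumption for illustration.)

```python
# Hypothetical numbers: a single attempt has a 10% chance of success,
# and you repeat the attempt once per day for 30 days.
p_single = 0.10
days = 30

# Probability of at least one success = 1 - P(every attempt fails).
p_at_least_one = 1 - (1 - p_single) ** days
print(f"{p_at_least_one:.1%}")  # roughly 96%
```

So a bet that feels hopeless framed as a single attempt ("only a 10% chance") becomes a near-certain win when framed as "if I keep doing X every day, I will succeed within a month" — which is a much easier prediction to stake your internal credibility on.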
This also suggests that it is futile to use willpower in situations where you have little autonomy. If you try hard, and then an external influence ruins all your plans, and this was all entirely predictable, you just burned your internal credibility.
(Again, sometimes you need at least to keep the appearance of trying hard, even if you have little control over the outcome. For example, you have a job where the boss overrides all your decisions and thereby ruins the projects, but you still need the money and can't afford to get fired. It could help to reframe, to make the bet about the part that is under your control. Such as "if I try, I can make this code work, and I will feel good about being competent", even if later I am told to throw the code away because the requirements have changed again.)
This also reminds me about "goals vs systems". If you think about a goal you want to achieve, then every day (except for maybe the last one) is the day when you are not there yet; i.e. almost every day is a failure. Instead, if you think about a system you want to follow, then every day you have followed the system successfully is a success. Which suggests that willpower will work better if you aim it at following a system, and stop thinking about the goal. (You need to think about the goal when you set up the system, but then you should stop thinking about it and only focus on the system.)
The strategy of "success spiral" could be interpreted as a way to get your credibility back. Make many small attempts, achieve many small successes, then attempt gradually larger things. (The financial analogy is that when you are poor, you need to do business that does not require large upfront investments, and gradually accumulate capital for larger projects.)
I’m generally pretty good. I’m way better at communicating when I am having a problem. Plus, with my meds I made it through the end of a ten-year relationship without falling into a deep depression. I haven’t had a weird melt down in some time.
As you say, I can predict certain things, and prepare. Routines help me not forget essential things. For instance, I don’t brush my teeth until after I have taken my pills so I can immediately tell if I already have.
I use way-stations for cleaning up. For example, by the door to my room, I have a spot on a table for things that I want to take out of the room the next time I go, so that I don’t have to interrupt whatever I am doing at the time. I have stations like this in most rooms of my house.
I also compensate by motivating myself to take care of things immediately. “Later is a lie” has become sort of a mantra for me. When I hear myself saying or thinking “later,” I try to remind myself of that mantra. It works for my son too.
All of my bills are on autopay. I put appointments in my calendar so I don’t have to remember them, and set timers all the time for things. It’s generally better than the reminder apps. My phone is basically the third lobe of my brain. I’ve written routines for when to set my alarm, so I don’t forget to do so.
I also work within my limits. For instance, if a project will take more than a weekend, I know I won’t be able to sustain interest in it. I can make exceptions—to varying levels of success—but they have to be things that are really important. I won’t get a pet, and I have just one plant that’s right by my sink.
I don’t generally have problems with willpower; I have problems with the word and concept.
Pets often make their needs quite obvious if you "forget" to take care of them. When my dog wants something from me, he won't leave me alone until I figure out what it is.
They can also be immediately rewarding and stay that way. I wouldn't necessarily recommend a goldfish, but if you're already an animal lover it's hard to become bored with a dog or cat.
Similar here: calendar notifications, special place on the table (phone, keys, etc.).
Also, an "inbox" for important documents that need to be filed, which I process once every few months.
I understand the mechanisms of your metaphor, and while they are undeniable, they are also, IMO, overly simplistic and not helpful, as the metaphor leaves out the most important element of willpower.
Just like free will, willpower implies making a decision and then using your free will or willpower to act on it. Willpower, as with free will, is meaningless without making a conscious and preferably well-informed decision.
Regardless of living or dead, if you do not have reasonably accurate and somewhat complete info (aka less-wrong-ish), it's a crapshoot as to the outcome.
I posit that humans in general, and Americans and most Europeans too, have been so thoroughly and consistently lied to that any discussion of willpower or free will is moot.
What IS being shut down or weeded out is truth, right and wrong, reality, critical thinking, honesty, and so on, without which we all lose our ability to make good decisions.
I don't know how our species can overcome this, but IMO nothing else matters until we do.
Epistemic status: Toy model. Oversimplified, but has been anecdotally useful to at least a couple people, and I like it as a metaphor.
Introduction
I’d like to share a toy model of willpower: your psyche’s conscious verbal planner “earns” willpower (earns trust with the rest of your psyche) by choosing actions that nourish your fundamental, bottom-up processes in the long run. For example, your verbal planner might expend willpower dragging you to disappointing first dates, then regain that willpower, and more, upon finding you a good long-term romance. Wise verbal planners can acquire large willpower budgets by making plans that, on average, nourish your fundamental processes. Delusional or uncaring verbal planners, on the other hand, usually become “burned out” – their willpower budget goes broke-ish, leaving them little to no access to willpower.
I’ll spend the next section trying to stick this discussion into a larger context, then get back to willpower.
On processes that lose their relationship to the unknown
Good scientists seek truth, craft hypotheses that stick their necks out, and send their ideas forth to die in their stead, while retaining their own ability to change their minds.
Actual humans who aspire to be scientists, on the other hand, often forget that this is the goal, and instead get stuck defending some fixed theory.
I suspect this forgetting is a special case of a general phenomenon: many processes, if they are to remain healthy, must take an active interest in things they don’t yet know. And yet those same processes will tend to collapse into structures that actively resist new data. To list a few other examples:
I hope someone someday writes an essay kind of like the Waluigi essay, that gives a decent, rent-paying model of why processes often collapse in this way. (The tie-in to willpower is that IMO a lot of burnout is downstream of this kind of collapse.)
Ayn Rand’s model of “living money”
I love the model of Rand’s that I’m about to share because it offers a crisp microcosm of the junction between “I want power enough to do what I want, whether or not that fits easily with the world” and “if I don’t attend to how the world is shaped, I’ll cease to be.”
The model is set in an ideal economy – the sort of place libertarians would want to build. (If you think this isn’t possible for humans, imagine a different species and context for which it is possible; I want this as a toy model, or a detailed metaphor.)
In this ideal economy, people can only make money by creating something people want. (Otherwise, those others wouldn't part with their money.) So, if you want money, you basically have to attend to things like "what do people want?" and "how does the world work -- e.g. how do I make metal for train tracks that will hold together well, or bread that will be tasty, or etc.?" Even if you're a bit of an idiot who wants only the temporary illusion that you're creating value, by getting to e.g. boss people around and make what looks like a business even though it's destroying value or something -- even then you need to pay attention, because otherwise you can't get people to do what you want, and can't get those with land or factories or whatever to loan it to you. Unless! Unless you have savings.
Savings are interesting in this model because they give you the temporary ability to get people to mix metals or bake bread or whatever in the way you want them to, whether or not your way makes sense to them (by paying them to do this). On Rand’s model, there’re two superficially similar, but really very different, ways this ability can be used:
Living money: Sometimes an entrepreneur sees a new path to produce real value and makes a bet, spending their savings to set up a factory/research lab/etc. that has a real shot at later making things people want (even though others can't see this yet, and so wouldn't yet want in unless paid in advance). Such an entrepreneur’s money is “living” in the sense that it is part of a homeostatic dance that spends and replenishes, sustains itself over time, and creates more life/value.
Dead money: Other times, a person spends down their savings (and thereby gets people to do things those people don’t independently see value in) while taking no interest in the complex and often not-yet-known-to-us processes by which value is created and destroyed. For example, Rand’s fiction depicts characters who inherit large sums of money without any corresponding virtue. Sometimes these characters say they want to “help people,” but “help people” is for them something of a detached symbol; e.g. one such character who inherits a bank preferentially gives bank loans to people who’re pitiable but unlikely to repay the money (“to help people”) until the bank they are running goes bust and loses the entrusted savings of many; another inherits an energy research firm and makes “compassionate” changes that make it far less likely to create good power plant designs. Such an heir’s money is not a living system; it gets used up via paying people to do things that usually cause destruction. And then it’s gone. There is no sustained or life-aligned dance this money takes part in.
The tendency of “dead money” to burn itself out helps keep the idealized economy in a state where businesses are mostly “living,” are mostly curious about and responsive to the question of what will actually create goodness for the economy’s inhabitants.[1] The bankruptcy of particular businesses and of particular piles of savings is part of what keeps the economy in living relationship with the unknown, instead of letting it get closed off and colonized by stasis. Sort of like how the “death” of falsified scientific hypotheses is part of what keeps the process of science alive despite the tendency-toward-closed-offness of the humans who aspire to be scientists.
An analogous model of “living willpower” and burnout
My toy model of willpower is basically the same:
There’s a bottom-up process within us (as within young children and non-human animals) that tends, if left to its own devices, to do a lot of good things (going on a walk when restless, eating when hungry, gravitating toward conversations or play activities that help us make sense of things, etc.). But as humans we can sometimes do better than this – we can sometimes foresee steps that seem dumb to our more visceral processes, but that pay off in the long-run, including from a visceral perspective.
“Living willpower” is willpower that is consciously understood as a bet on unknowns: “I don’t know whether this project will pay off, but I am betting my finite credibility on it anyhow.” “Dead willpower” is willpower that has forgotten where it comes from, that tries to arbitrarily make up some actions that are “good” to do (in some sense of “good” that is not accountable, not based in the conditions required to sustain life and choice, as with the Rand villain who wanted to "help people"). [2]
Basically: “living willpower” understands itself as accountable to the conditions required for life, consciousness, and choice – while “dead willpower” doesn’t.
On this model, burnout is the analog of running out of savings – burnout is running out of the ability to get your visceral process to follow your suggestions. Burnout is what happens either when living willpower makes enough bad bets (despite trying to make good ones) that it “goes credibility-broke,” or when dead willpower makes many bad bets while forgetting it is making bets at all.
This has some upside: once a person is credibility-broke, they can more easily study which things are and aren't bottom-up motivating (since those are ~the only actions they can take); and which uses of their tiny willpower do and don't rapidly produce more; and this is sometimes helpful for learning the difference between uses of willpower that form part of the living dance, and uses that don't.
Again, this is within an idealized economy. IMO it is an important question to what extent today's actual economies in e.g. the US cause "dead money" to burn out, vs having pathways whereby dead money can acquire more dead money via regulatory capture or similar without any need to create value.
This is perhaps something of a digression, but my own experience is that when I’m using “dead willpower,” there is in practice a “tiny note of discord”-type sensation that tugs gently at my sleeve, letting me know that there’re some clues I’m not looking at, concerning whether this is what I want to be doing.
Perhaps another word for what I’m calling “dead willpower,” here, is “being dissociated from some of your own caring, or some of your own capacity to care.” IME it often comes with a desire to stay distracted, to continue blocking out that “tiny note of discord.” Also IME, folks living this way are more likely to get hijacked by ideologies (or to invite ideologies to come distract them?) – more likely to e.g. feel as though they “need to” do what (wokism / EA / staying “okay” relative to some middle class notion of career success / whatever) demands, less likely to enjoy and appreciate getting to choose. (But this is all anecdotal with small n, I admit.)