tl;dr: Dynamic consistency is a better term for "willpower" because its meaning is robust to changes in how we think consistent behavior actually manages to happen. One can boost consistency by fostering interactions between mutually inconsistent sub-agents to help them better empathize with each other.

Despite the common use of the term, I don't think of my "willpower" as an expendable resource, and mostly it just doesn't feel like one. Let's imagine Bob, who is somewhat overweight, likes to eat cake, and wants to lose weight to be more generically attractive and healthy. Bob often plans not to eat cake, but changes his mind, and then regrets it, and then decides he should indulge himself sometimes, and then decides that's just an excuse-meme, etc. Economists and veteran LessWrong readers know this oscillation between value systems is called dynamic inconsistency (q.v. Wikipedia). We can think of Bob as oscillating between being two different idealized agents living in the same body: a WorthIt agent, and a NotWorthIt agent.
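To make the oscillation concrete, here is a minimal numeric sketch (my illustration, not part of the original post): under hyperbolic discounting, a standard model of dynamic inconsistency, the same payoffs rank differently depending on how far away the choice is, so "Bob" flips from NotWorthIt to WorthIt as the cake gets closer. All the reward values and the discount rate are made up.

```python
# A toy illustration of dynamic inconsistency: under hyperbolic discounting,
# which option looks better depends on when you evaluate it. All numbers
# here are arbitrary, chosen only so the preference reversal shows up.

def hyperbolic_value(reward, delay_days, k=0.1):
    """Present value of `reward` received `delay_days` from now."""
    return reward / (1 + k * delay_days)

CAKE = 10      # pleasure of eating the cake, available at the moment of choice
HEALTH = 30    # health/appearance payoff, realized ~30 days after skipping it

for days_until_choice in (7, 0):  # deciding a week ahead vs. with the cake in front of you
    v_cake = hyperbolic_value(CAKE, days_until_choice)
    v_health = hyperbolic_value(HEALTH, days_until_choice + 30)
    winner = "NotWorthIt-Bob (skip it)" if v_health > v_cake else "WorthIt-Bob (eat it)"
    print(f"{days_until_choice} days out: cake={v_cake:.2f}, health={v_health:.2f} -> {winner}")

# A week out, the health payoff wins; at the moment of choice, the cake wins.
```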

The feeling of NotWorthIt-Bob's (in)ability to control WorthIt-Bob is likely to be called "(lack of) willpower", at least by NotWorthIt-Bob, and maybe even by WorthIt-Bob. But I find the framing and language of "willpower" fairly unhelpful. Instead, I think NotWorthIt-Bob and WorthIt-Bob just aren't communicating well enough. They try to ignore each other's relevance, but if they could both be present at the same time and actually talk about it, like two people in a healthy relationship, maybe they'd figure something out. I'm talking about self-empathy here, which is opposite to self-sympathy: relating to emotions of yours that you are not immediately feeling. Haven't you noticed you're better at convincing people to change their minds when you actually empathize with their position during the conversation? The same applies to convincing yourself.

Don't ask "Do I have willpower?", but "Am I a dynamically consistent team?"

The task of keeping your various mental impulses working together coherently is called executive functioning. To deal with an "always eat cake" impulse, some may be lucky enough to win by simply reciting "cake isn't really that tasty anyway". A more potent technique is to practice visualizing the cake making you instantaneously grotesque and extremely ill, creating a psychological flinch-away reflex — a behavioral trigger — which will be activated on the sight of cake and intervene on the usual behavior to eat it. But such behavioral triggers can easily fail if they aren't backed up by an agreement between your WorthIt and NotWorthIt sub-agents: if you end up smelling the cake, or trying "just one bite" to be "polite" at your friend's birthday, it can make you all-of-a-sudden-remember how tasty the cake is, and destroy the trigger.

To really be prepared, Bob needs to vaccinate himself against extenuating circumstances. He needs to admit to himself that cake really is delicious, and decide whether it's worth eating without downplaying how very delicious it is. He needs to sit down with the cake, stare at it, smell it, taste three crumbs of it, and then toss it. (If possible, he should give it away. But note that, despite parentally-entrained guilt about food waste, Bob hurting himself with the cake won't help anyone else help themselves with it: starving person eats cake > no one eats cake > Bob eats cake.)

This admission corresponds to having a meeting between WorthIt-Bob and NotWorthIt-Bob: having both sets of emotions present and salient simultaneously allows them to reach a balance decisively. Maybe NotWorthIt-Bob will decide that eating exactly one slice of cake-or-equivalent tasty food every two weeks really is worth it, and keep a careful log to ensure this happens. Maybe WorthIt-Bob will approve of the cake-is-poison meditation techniques and actually change his mind. Maybe Bob will become one person who consistently values his health and appearance over spurious taste sensations.

Or maybe not. But it sure works for me.

32 comments
pjeby:

This.

More specifically, this is what various gurus mean when they talk about integrating parts, accessing the shadow side, taming your inner enemies, and many other metaphorical terms.

It's also what I mean when I say that akrasia equals conflict, and that attempting to overpower conflicts doesn't work; that you have to actually surface your true desires and objections in order to resolve them.

This discussion is triggering an interesting thought. To learn how willpower works in individuals, we should study how groups come to decisions and stick by them.

(Because we're modeling within-individual conflict as between-subagent conflict, and thus making no relevant distinction between individuals and their subagents. It's subagents all the way down.)

So what do we know about the latter?

pjeby:

So what do we know about the latter?

People get along best when their interactions are non-zero-sum.

Which is why, as I said in the comment above, "you have to actually surface your true desires and objections in order to resolve them."

This need, incidentally, is raised quite often in books on sales, negotiation, etc. -- that in order to succeed, you need to find out what the other person really wants/needs (not just what they say they want), and then find a way to give them that, in exchange for what you really want/need (not just what you'd like to get).

In some cases, it may be easier to do this with another person than with yourself, because, as Feynman says, "you are the easiest person to fool." There's also the additional problem that by self-alienating (i.e., perceiving one of your desires as "other", "bad", or "not you") you can make it virtually impossible to negotiate in good faith.

Actually, scratch that. I hate using "negotiate" as a metaphor for this, precisely because it implies an adversarial, zero-sum interaction. The other pieces of what you want are not alien beings trying to force you to give something up. They are you, even if you pretend they aren't, and until you see through that, you won't see any of the possibilities for resolving the conflict that get you more of everything you want.

Also, while the other party in a negotiation may not tell you what they really want, even if you ask, in internal conflict resolution you will get an answer if you sincerely ask... especially if you accept all your desires and needs as being truly your own, even if you don't always like the consequences of having those desires or needs.

Well, strictly from the theoretical perspective of rational-agent game theory, we know quite a lot.

  1. Subagents need to communicate so as to coordinate. Cooperation works best when there are no secrets.

  2. On the other hand, it is often in the interests of the individual agents to keep some things secret from other agents. There is a fascinating theory of correlated equilibria and mechanism design to enable the sharing of the information you want to share and the hiding of information you wish to keep secret.

  3. Punishment of one agent by other agents, and threats of punishment, are important in bargaining and in incentivizing adherence to bargains. There is no known way to dispense with threatened punishment, and probably no way to dispense entirely with real punishment. Rational cooperation (justified by reciprocity) cannot be built on any other basis.
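To make point 3 slightly more concrete, here is a minimal sketch (my illustration, not the commenter's) of how a threatened punishment sustains cooperation in a standard iterated prisoner's dilemma; the payoff numbers are the usual arbitrary textbook choices.

```python
# Grim-trigger cooperation in an iterated prisoner's dilemma: the standing
# threat of permanent defection makes mutual cooperation the better deal.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def grim_trigger(my_history, their_history):
    """Cooperate until the other side defects once; then defect forever."""
    return "D" if "D" in their_history else "C"

def always_defect(my_history, their_history):
    """Ignore history and defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(grim_trigger, grim_trigger))   # (30, 30): cooperation, sustained by the threat
print(play(always_defect, grim_trigger))  # (14, 9): one round of exploitation, then mutual punishment
```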

To my mind, the idea of modeling mind as a society of autonomous agents is definitely worth exploring. And I see no reason not to treat at least some of those component agents as rational.

Dennett has been on a competitive neuron kick recently. Which would make game theory (or variants of it with applicable assumptions) a central part of understanding how the brain works.

I'm curious what he will come up with.

Rational cooperation (justified by reciprocity) cannot be built on any other basis.

You can get cooperation through kin selection, though. If you are dealing with your brother, reciprocity can be dispensed with. Thus the interest in things like showing others your source code.

Yep. Fully agree, assuming you meant twin brother. I originally left the parenthetical qualification out, then added it when I thought of what you just now said.

It seems as though a lot of your third point unravels, though.

If you are a machine, you can - under some circumstances - rationally arrange cooperation with other machines without threats of punishment. The procedure involves exhibiting your actual source code (and there are ways of doing that convincingly). No punishment is needed, and it can work even if agents are unrelated, and have different goals.

None of my third point unravels. I was talking about bargaining. Bargaining between rational agents with different goals requires threats, if only threats not to make a bargain - threats not to share source.

You talk about cooperation. Certainly cooperation is possible without threats. But what do you cooperate in doing? You need to bargain so as to jointly decide that.

I'm inclined to ask you what you mean by "threat".

However, rather than do that, please imagine two agents bargaining over the price of something, who are prevented from "threatening" each other by a policeman, applying your preferred definition of the term - whatever that may be.

Do you think that the policeman necessarily prevents a bargain being reached?

I'm inclined to ask you what you mean by "threat".

I'm inclined to refer you to the standard literature of game theory. I assure you, you will not be harmed by the information you encounter there.

...who are prevented from "threatening" each other by a policeman ...

I will at least mention that the definition of "threat" is inclusive enough that a constable would not always intervene to prevent a threat.

... Do you think that the policeman necessarily prevents a bargain being reached?

No, the constable's intervention merely alters the bargaining position of the players, thus leading to a different bargain being reached. Very likely, though, one or the other of the players will be harmed by the intervention and the other player helped. Whether this shift in results is or is not a good thing is a value judgment that not even the most ideological laissez-faire advocate would undertake without serious misgivings.

If rational bargainers fail to reach agreement, this is usually because their information is different, thus leading each to believe the other is being unreasonable; it is not because one or another negotiating tactic is disallowed.

ETA: Only after posting this did I look back and see why you asked these questions. It was my statement to the effect that "bargaining requires threats". Let me clarify. The subject of bargaining includes the subject of threats. A theory of bargaining which attempts to exclude threats is not a theory of bargaining at all.

First comment on this thing. I've lurked for a few months now. :)

I appreciate what superpower you're trying to unveil here, and I'd like to contribute a few thoughts from my own experience. I'll try to offer a more robust mechanism for coercing your own behavior: your memory. I think you touched on it when you gave the example of smelling, touching, and staring at the cake followed by throwing it away. You stopped there, however, not citing what about that ritual coerces future behavior. Memory is what makes that happen.

Memory of how good the cake tastes is what makes you want more of it in the first place. Similarly, the memory of rejecting the cake and surviving, especially memories of rejecting the cake and recognizing the rewards (perhaps even because you reward yourself), will coerce your opinion towards rejecting it again in the future. If you can give yourself positive memories about self-denial, you will be able to access self-denial again in the future with greater ease. The more positive memories you have with an activity, the easier it is to repeat that activity.

This is what I've noticed in myself anyway. The studies continue...

Aharon:

This sounds good in theory. But in my experience, WorthIt-Bob doesn't usually argue rationally.

He acknowledges the existence of rational arguments for not fulfilling the wish in question (be it eating a cake, delaying work, whatever). He just doesn't care about these rational arguments.

An internal dialogue, as I've had it (slightly paraphrased, of course): NWI-Bob feels the wish to delay work arise.

Thought: I acknowledge that delaying work right now would be more fun than not delaying work, but the deadline of the project I'm working on is approaching, and if I delay my work now, I will have problems meeting this deadline. Therefore, I should work now and do some leisure activity later on.

WI-Bob: Yeah, these arguments are pretty good, and I know it will be bad. However, that lies in the future, and I want to have fun right now.

NWI-Bob: That's not a very smart approach. Just continuing to work for 2 more hours will make me happy, and you will achieve happiness in the free time afterwards.

WI-Bob: Yes, that sure is an intelligent idea, but I still want to have fun right now.

The dialogue continues in the same vein. Most of the time, it ends with NWI-Bob making a last argument, and WI-Bob not responding because he just doesn't have any arguments besides "I want that right now." However, despite his lack of response, his will often perseveres.

Maybe, but you should consider the possibility that WI-Bob has other reasons that he doesn't articulate consciously: for example, "it's not just that I want to have fun right now, it's that I'm anxious about this particular task, I know I'll feel that anxiety when I'm working on it, and I don't want to face that right now." Bring this into the open, and the negotiation might look quite different. (E.g., are there ways to assuage the anxiety in advance? That might help.)

This sounds good in theory. But in my experience, WorthIt-Bob doesn't usually argue rationally.

I didn't say anything about WorthIt-Bob having to be rational... you've dealt with irrational people before, and you can deal with irrational sub-agents, too. Heck, you can even train your pets, right?

In particular, orthonormal has some great advice for dealing with people and sub-agents alike: figure out all their feelings on the issue, even the ones they didn't know they had. Then they might turn out more rational than you thought, or you might gain access to the root of their irrationality. Either way, you get a better model of them, and you probably need it.

Thank you both for this advice.

Actually, I haven't consciously modelled other people or myself. The feelings of other people usually seem rather clear to me, and for myself, I dealt with these problems on a case-by-case basis until recently. However, I am not content with the success of this case-by-case approach anymore, so I'm interested in changing that.

Concerning modelling people: How would one go about doing that? Are there any sequences on this topic, or could you recommend a book or something?

Exactly. A prerequisite skill here is sufficient introspection to surface the real motivations for a behavior or impulse (where "real" means the motivations that are cruxy, the ones that change based on the circumstances), instead of your confabulations or your self-model.

For this to work, you need self-awareness. I actually think it must happen (it's probably the only solution) if you're trying both to sustain a difficult effort and to maintain a fair model of reality (both of your own feelings and of your position in the world).

People who are already practicing some self-awareness techniques (e.g. Vipassana meditation) would be foolish to not leverage them in the way Academian describes. For everyone else, it would at least make sense to try this approach when you have a strong feeling of inner conflict; where the internal dissenting voices are weak, you can just proceed in the usual way.

This sort of model of how to work with your own psychology seems to have taken off in the rationality community. I certainly rely on it. But I don't actually have strong evidence one way or the other about whether this is the best way to do things.

I'm curious, does anyone have either 1) the history of the uptake of these IFS-style models or 2) specific evidence about the effectiveness of this model over others (such as CBT-style "train better more effective TAPs")?

Nietzsche's view of "free will" comes to mind. He suggested that the mind is in reality a complex of different wills and tendencies, and the wills are in conflict as to who should be in charge at any given time, since each will wants to dominate the entire mind completely. When we manage to organise the various wills into a consistent hierarchy, we experience a feeling of being in control of ourselves, regardless of which will came out on top in the hierarchy - we identify with that particular will. This is what he suggests it really means to have free will: a freedom from inner conflict.

[anonymous]:

I really like this framework, and I'm going to try to make intentional use of it when evaluating decisions. An issue to consider though:

Probable-Future-You is a decided-upon mental construct of Current-You. I'm not saying what Aharon said, that Current-You won't argue rationally. Rather, there may be no argument; Current-You is less likely to conjure up the correct Future-You to begin with!

The necessary fuel to turn that tiny note of discord into a clear mental state is, as ninjacolin said, memory. WI-Bob's brain is busy pretending NWI-Bob doesn't exist. If there's a tiny feeling of discord though (memory leak), WI-Bob can pause for a moment, take the outside view, and construct NWI-Bob, reasoning abstractly that NWI-Bob might be the most representative Future-Bob. Then WI-Bob can run an emotional simulation of being NWI-Bob, looking back with regret.

And really, the committee should consist of several future selves. If there's a decision worth considering, then there is more than one outcome to consider. You need to simulate NWI-Bob after each decision. That's the full argument the subagent constructs can make.

Getting that initial discord, being able to quickly leap to the outside view's conclusion, and being able to run simulations fast enough probably takes a bit of preparation. If Monday-you might have to make a decision that Tuesday-you will regret, Sunday-you should make the decision. But since Sunday-you can't decide entirely (Monday is a weak day), Sunday-you should rehearse, making use of the privileged weekend read-write access to memory. Otherwise, Bob might start or finish this deliberation while taking his last bite of cake.

One way of making your point is that akrasia is a special case of the Hazing Problem (which, in turn, is similar if not isomorphic to counterfactual mugging), where you define yourself at different timeslices to be different people, and decide whether you want to favor your current self by "hazing" a future self.

(As you might expect, Drescher makes this analogy in Good and Real, though I don't think he uses the term "akrasia".)

That is, if you regard it as optimal to give in to your akrasia, that is evidence that you will in the future, and have done so in the past -- and you would prefer that past selves had not done so.

Edit: Interestingly enough, right after I read that part of Drescher, I read the first installment of the Se7en graphic novel, which is about the "gluttony sinner", who fails to lose weight because he uses that very same dynamically inconsistent reasoning!

I think you're making this a little more complicated than necessary. Akrasia (i.e., dynamic inconsistency) is a result of one self being unduly influenced by immediate consequences. Like Bob overweighting the deliciousness of the cake and underweighting the health consequences when the deliciousness is staring him in the face. The multiple selves don't need to discuss and compromise -- the cake-captivated self is just wrong and the self deciding from a distance is right (or, you know, less wrong). So the saner self should use a commitment device to tie the hands of the recalcitrant future self. I just wrote an article arguing this -- http://messymatters.com/akrasia -- based on a previous LessWrong post, http://lesswrong.com/lw/am/how_a_pathological_procrastinor_can_lose_weight/ .

So in a sense I agree that the two selves need to interact; it's just that I think the interaction should take exactly one form: a commitment device whereby the current self constrains the future self.
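For what it's worth, here is a toy sketch (my illustration; real commitment services such as the one linked above work differently) of that one form of interaction: the current self attaches a penalty that changes the future self's payoffs, so that even a purely impulsive chooser sticks to the plan. The payoff numbers are arbitrary.

```python
# Toy model: the future self simply picks whatever has the highest immediate
# payoff. A commitment device works by editing those payoffs in advance.

def impulsive_choice(options):
    """Pick the option with the highest immediate payoff."""
    return max(options, key=options.get)

PLEDGE_PENALTY = 20  # hypothetical stake: money pledged, public embarrassment, etc.

without_commitment = {"eat cake": 10, "skip cake": 7}
with_commitment = {"eat cake": 10 - PLEDGE_PENALTY, "skip cake": 7}

print(impulsive_choice(without_commitment))  # -> "eat cake"
print(impulsive_choice(with_commitment))     # -> "skip cake"
```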

Under this model, what would you consider LeechBlock to be? Sabotage? An attack?

Great advice. Thanks. :)

[anonymous]:

I'm talking about self-empathy here, which is opposite to self-sympathy: relating to emotions of yours that you are not immediately feeling

This is an important concept. It's kind of weird, as this isn't really new to me, but your phrasing it like that somehow made what's going on much clearer to me. Thank you.

[This comment is no longer endorsed by its author]

Thanks for your post; this technique sounds very useful, as does your description of willpower as dynamic consistency. But my personal experience is that I've basically always been able to feel "willpower", or something close to it, and in fact I have a hard time imagining an existence without it.

The only reason I can relate to this claim is that sometimes my state of mind is such that willpower use is barely noticeable, and perhaps only because in my default state it is very noticeable, so I know what to look for: states where forward is my default impetus and all I have to do is steer, or fan the fires.

Off the top of my head, labelled via word association, I might call some of these states executing, dueling*, drunk, writing, and fatalism. There are probably more, and there is some overlap, but writing, dueling, drunk, and executing are all distinct.

But these are specific states of mind that are not my default. My default state is rest. In this state it very clearly and unambiguously takes what I'll call willpower to do things. And the more willpowery I'm feeling, the better I'm able to do anything.

The point of which is that I'd be suspicious of your claim to have never experienced "willpower" if I hadn't put on various thinking hats which also feel a lot like that. I suspect your default thinking hat is one where willpower may be relatively unnoticeable, irrelevant, or even non-existent (if your mind is sufficiently different from mine), and that you're underappreciating how different minds are.

*NOT IRL DUELING. DUELING IN A COMPUTER GAME.

See also: City of Lights

Pretending to be multiple agents is a useful way to represent your psychology and uncover hidden complexities.

You may find your understanding of this post significantly improved if you read the sixth story from Seven Shiny Stories.

[anonymous]:

What is the benefit of thinking in terms of sub-agents instead of cognitive processes?

In some of the relevant situations I've observed, I might describe the conflict more naturally as being between two mental processes. Here's an example. I usually eat lunch around noon and thus often think of eating lunch around noon. The other day I woke up later than usual and ate breakfast later than I normally do. When noon rolled around I still felt an urge to eat lunch even though I wasn't hungry. My goal-oriented reasoning process (focused on why I should be eating lunch) was butting heads with my eat-lunch-because-it's-noon heuristic.

Thinking in terms of sub-agents may be a useful way to cluster mental processes and model them as a group, and perhaps leverage our capacity for social reasoning. Are there situations where we should think in terms of individual mental processes and their structure? What heuristics can we use to switch effectively between these perspectives?

I think part of the problem with resolving dynamic inconsistencies is that it is human nature to reward bad behavior more than good. Consider as an analogy how the medical systems in most developed countries handle pathological behaviors. Suppose you went to your doctor and told them you ate a low-calorie, healthy diet, exercised regularly, practiced safe sex, and had a good handle on your stress. As a result you were pretty healthy. Your reward from your doctor and your medical insurance company would be NOTHING.

Now suppose you smoked heavily so had COPD, ate like a pig and never exercised so were obese, already had HIV and assorted other sexually transmitted diseases, and refused to work due to being stressed out. Society couldn't throw money at you fast enough. You'd get loads of attention from doctors and nurses, a welfare check, support groups eager to have you as a member, free time off, free housing, you name it. As a bonus, if anyone made the mistake of asking you to do SOMETHING to show a sense of responsibility, you could give any number of excuses for doing nothing.

Being a lazy freeloader is the logical choice in any society that rewards it. Likewise with our various selves: the pathological versions are generally rewarded more than the healthy versions. Until and unless we change this, we won't have much "willpower", since it really is most logical to be pathological.

The reward for staying healthy is health. The rewards for unhealth are trash. Show of hands: how many people reading this find the unhealthy lifestyle that you describe attractive? How many repulsive? 1 vote here for "repulsive".

That's a dangerous (and personally I think incorrect) argument, Nerissa, if taken to its logical extremes.

Whether being a lazy freeloader is indeed the logical choice, as you claim, depends on exactly what kind of "rewards" you're looking for from your life. If all you want is medical attention and welfare support, then maybe it's true, although I'd expect that most people would prefer to exchange the medical attention for health, and the welfare support (presumably equivalent to just scraping by) for a more comfortable, while still easily obtainable, income source.

An excellent article, Academian; as others have noted, what you describe sounds a lot like the idea of an "inner balance": being aware of your differing motivations/desires and attempting to form some kind of peace between them.