tl;dr: Dynamic consistency is a better term for "willpower" because its meaning is robust to changes in how we think consistent behavior actually manages to happen. One can boost consistency by fostering interactions between mutually inconsistent sub-agents to help them better empathize with each other.
Despite the common use of the term, I don't think of my "willpower" as an expendable resource, and mostly it just doesn't feel like one. Let's imagine Bob, who is somewhat overweight, likes to eat cake, and wants to lose weight to be more generically attractive and healthy. Bob often plans not to eat cake, but changes his mind, and then regrets it, and then decides he should indulge himself sometimes, and then decides that's just an excuse-meme, etc. Economists and veteran LessWrong readers know this oscillation between value systems is called dynamic inconsistency (q.v. Wikipedia). We can think of Bob as oscillating between being two different idealized agents living in the same body: a WorthIt agent, and a NotWorthIt agent.
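To make "dynamic inconsistency" concrete for anyone who hasn't met the term before, here is a minimal sketch, not from the original discussion and with payoff numbers I made up, of the standard hyperbolic-discounting story: the same agent ranks "cake now" versus "health later" differently depending on how far off the choice is.

```python
# Toy illustration of dynamic inconsistency via hyperbolic discounting.
# All numbers are assumptions for the example, not anything Bob-specific.

def hyperbolic_value(reward, delay, k=1.0):
    """Present value of `reward` received after `delay` days (hyperbolic discounting)."""
    return reward / (1.0 + k * delay)

cake_pleasure, cake_delay = 4.0, 1.0      # small payoff, arrives almost immediately
health_payoff, health_delay = 10.0, 30.0  # larger payoff, arrives much later

for days_until_decision in (20.0, 0.0):
    cake = hyperbolic_value(cake_pleasure, days_until_decision + cake_delay)
    health = hyperbolic_value(health_payoff, days_until_decision + health_delay)
    choice = "skip the cake" if health > cake else "eat the cake"
    print(f"{days_until_decision:>4.0f} days out: cake={cake:.2f}, health={health:.2f} -> {choice}")

# Twenty days out the health payoff dominates (NotWorthIt-Bob is in charge);
# at the moment of decision the cake wins (WorthIt-Bob takes over). Same agent,
# same payoffs, reversed preference: that is dynamic inconsistency.
```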
The feeling of NotWorthIt-Bob's (in)ability to control WorthIt-Bob is likely to be called "(lack of) willpower", at least by NotWorthIt-Bob, and maybe even by WorthIt-Bob. But I find the framing and language of "willpower" fairly unhelpful. Instead, I think NotWorthIt-Bob and WorthIt-Bob just aren't communicating well enough. They try to ignore each other's relevance, but if they could both be present at the same time and actually talk about it, like two people in a healthy relationship, maybe they'd figure something out. I'm talking about self-empathy here, as opposed to self-sympathy: relating to emotions of yours that you are not immediately feeling. Haven't you noticed you're better at convincing people to change their minds when you actually empathize with their position during the conversation? The same applies to convincing yourself.
Don't ask "Do I have willpower?", but "Am I a dynamically consistent team?"
The task of keeping your various mental impulses working together coherently is called executive functioning. To deal with an "always eat cake" impulse, some may be lucky enough to win by simply reciting "cake isn't really that tasty anyway". A more potent technique is to practice visualizing the cake making you instantaneously grotesque and extremely ill, creating a psychological flinch-away reflex — a behavioral trigger — that activates at the sight of cake and intervenes on the usual behavior of eating it. But such behavioral triggers can easily fail if they aren't backed up by an agreement between your WorthIt and NotWorthIt sub-agents: if you end up smelling the cake, or trying "just one bite" to be "polite" at your friend's birthday, it can make you all-of-a-sudden-remember how tasty the cake is, and destroy the trigger.
To really be prepared, Bob needs to vaccinate himself against extenuating circumstances. He needs to admit to himself that cake really is delicious, and decide whether it's worth eating without downplaying how very delicious it is. He needs to sit down with the cake, stare at it, smell it, taste three crumbs of it, and then toss it. (If possible, he should give it away. But note that, despite parentally-entrained guilt about food waste, Bob hurting himself with the cake doesn't help anyone else: starving person eats cake > no one eats cake > Bob eats cake.)
This admission corresponds to having a meeting between WorthIt-Bob and NotWorthIt-Bob: having both sets of emotions present and salient simultaneously allows them to reach a balance decisively. Maybe NotWorthIt-Bob will decide that eating exactly one slice of cake-or-equivalent tasty food every two weeks really is worth it, and keep a careful log to ensure this happens. Maybe WorthIt-Bob will approve of the cake-is-poison meditation techniques and actually change his mind. Maybe Bob will become one person who consistently values his health and appearance over spurious taste sensations.
Or maybe not. But it sure works for me.
This discussion is triggering an interesting thought. To learn how willpower works in individuals, we should study how groups come to decisions and stick by them.
(Because we're modeling within-individual conflict as between-subagent conflict, and thus making no relevant distinction between individuals and their subagents. It's subagents all the way down.)
So what do we know about the latter?
Well, strictly from the theoretical perspective of rational-agent game theory, we know quite a lot.
Subagents need to communicate so as to coordinate. Cooperation works best when there are no secrets.
On the other hand, it is often in the interests of the individual agents to keep some things secret from other agents. There is a fascinating theory of correlated equilibria and mechanism design to enable the sharing of the information you want to share and the hiding of information you wish to keep secret.
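As a concrete illustration of the correlated-equilibrium idea (my own toy example with made-up payoffs, not anything from the discussion above): in the game of Chicken, a shared signal can recommend an action to each player, each player sees only their own recommendation, and neither gains by deviating from it.

```python
# Sketch: checking a correlated equilibrium in the game of Chicken.
# The payoff numbers and the correlating device are assumptions for illustration.

ACTIONS = ("Dare", "Chicken")
# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("Dare", "Dare"): (0, 0),
    ("Dare", "Chicken"): (7, 2),
    ("Chicken", "Dare"): (2, 7),
    ("Chicken", "Chicken"): (6, 6),
}

# The correlating device draws one of three joint recommendations uniformly
# and privately tells each player only their own half of it.
device = {
    ("Chicken", "Chicken"): 1 / 3,
    ("Chicken", "Dare"): 1 / 3,
    ("Dare", "Chicken"): 1 / 3,
}

def is_correlated_equilibrium(device, payoffs):
    """True if obeying every recommendation is optimal for both players."""
    for player in (0, 1):
        for recommended in ACTIONS:
            # Conditional distribution over the opponent's action,
            # given this player's own recommendation.
            weight = {a: 0.0 for a in ACTIONS}
            for joint, p in device.items():
                if joint[player] == recommended:
                    weight[joint[1 - player]] += p
            total = sum(weight.values())
            if total == 0:
                continue  # recommendation never issued; nothing to check

            def expected(own):
                return sum(
                    (w / total) * payoffs[(own, opp) if player == 0 else (opp, own)][player]
                    for opp, w in weight.items()
                )

            if any(expected(dev) > expected(recommended) + 1e-9 for dev in ACTIONS):
                return False
    return True

print(is_correlated_equilibrium(device, payoffs))  # -> True: no one wants to deviate
```

Each player learns only what they need to act, yet the shared signal lets them coordinate on an outcome better than mutual daring, which is the flavor of result mechanism design is after.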
Punishment of one agent by other agents, and the threat of such punishment, is another mechanism for holding a group to the decisions it has made.
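As a rough sketch of how punishment can hold a bargain together (again my illustration, with standard textbook numbers): in an infinitely repeated prisoner's dilemma, a grim-trigger threat of defecting forever after any betrayal makes cooperation self-enforcing whenever the future is weighted heavily enough.

```python
# Sketch: grim-trigger punishment in a repeated prisoner's dilemma.
# Payoff values are the usual textbook assumptions, not from the post.

REWARD = 3.0      # both cooperate
TEMPTATION = 5.0  # defect against a cooperator
PUNISHMENT = 1.0  # both defect

def cooperation_is_stable(delta):
    """Compare cooperating forever with defecting once and being punished forever,
    where `delta` is the per-round discount factor."""
    cooperate_forever = REWARD / (1 - delta)
    defect_once_then_punished = TEMPTATION + delta * PUNISHMENT / (1 - delta)
    return cooperate_forever >= defect_once_then_punished

for delta in (0.3, 0.5, 0.7, 0.9):
    verdict = "holds" if cooperation_is_stable(delta) else "unravels"
    print(f"delta={delta}: cooperation {verdict}")

# With these payoffs cooperation holds exactly when
# delta >= (TEMPTATION - REWARD) / (TEMPTATION - PUNISHMENT) = 0.5:
# the more the sub-agents care about the future, the easier it is
# for the threat of punishment to keep them on the agreed plan.
```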