Stanford Report, the university's news service, has a press release about a recent paper [subscription required] in Psychological Science.  The paper is available for free from one of the authors' websites.

The gist is that they find evidence against the (currently fashionable) hypothesis that willpower is an expendable resource.  Here are the authors and the abstract:

Veronika Job, Carol S. Dweck, and Gregory M. Walton
Stanford University


Abstract:

Much recent research suggests that willpower—the capacity to exert self-control—is a limited resource that is depleted after exertion. We propose that whether depletion takes place or not depends on a person’s belief about whether willpower is a limited resource. Study 1 found that individual differences in lay theories about willpower moderate ego-depletion effects: People who viewed the capacity for self-control as not limited did not show diminished self-control after a depleting experience. Study 2 replicated the effect, manipulating lay theories about willpower. Study 3 addressed questions about the mechanism underlying the effect. Study 4, a longitudinal field study, found that theories about willpower predict change in eating behavior, procrastination, and self-regulated goal striving in depleting circumstances. Taken together, the findings suggest that reduced self-control after a depleting task or during demanding periods may reflect people’s beliefs about the availability of willpower rather than true resource depletion.

(HT: Brashman, as posted on HackerNews.)


Here's a theory I have.

There are fundamental costs to starting a task and continuing a task. Setting up the tools you need to begin working has a fixed cost that does not depend on how long you work. So there are good reasons for splitting up your time into long segments spent on particular sorts of work, rather than switching modes rapidly.

In multi-agent systems -- or, as a special case, in dynamically inconsistent "agents" - this effect will be (vastly?) magnified. Say Bob is in a group, and he wants the group to do task B, but the group is leaning towards doing task A. Once the group has started doing A, it will be much harder to convince them to switch, because they will have already paid the starting costs of A. So if Bob wants to make a case for doing B, the best time to do it will be before the group starts doing A. And since Bob knows this, he has an incentive to filibuster for as long as it looks like he will lose the vote.

Thus, multi-agent systems will have a tendency to spend more time doing nothing than dynamically consistent agents.
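The fixed-cost argument above can be put in a toy numerical model (the cost numbers here are made up purely for illustration): if every switch of task pays a fixed setup cost, batching work beats interleaving it.

```python
# Toy model of task switching: each switch to a task pays a fixed
# setup cost, plus a per-unit cost for the work itself.
SETUP_COST = 10   # hypothetical cost to start (or resume) a task
WORK_COST = 1     # hypothetical cost per unit of work

def total_cost(schedule):
    """Total cost of a schedule, given as a list of task labels, one per
    time unit. A setup cost is paid whenever the task differs from the
    previous time unit's task."""
    cost, last = 0, None
    for task in schedule:
        if task != last:
            cost += SETUP_COST
        cost += WORK_COST
        last = task
    return cost

# Same total work (4 units of A, 4 of B), different orderings:
batched = ["A"] * 4 + ["B"] * 4   # pays 2 setups
interleaved = ["A", "B"] * 4      # pays 8 setups

print(total_cost(batched))      # 2*10 + 8*1 = 28
print(total_cost(interleaved))  # 8*10 + 8*1 = 88
```

The gap between the two schedules is what gives Bob his incentive: once the group has sunk a setup cost into task A, switching to B means paying another one.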

I'm really surprised that no one has mentioned this yet:

I suspect that what people are calling "willpower" is at least two very different things, some of which are a limited resource, and others of which aren't.

Occam's Razor: stats should not be multiplied beyond the space available on one's character sheet.

This paper and its experiments are of higher quality than I'm used to in psychological research.

What's demonstrated: if you prime an excuse for doing poorly, you will do poorly. I think there's already some similar research (different types of excuses, though). They also show that self-reported exhaustion (not just "ego depleting" tasks) leads to a difference in performance that goes in exactly the direction that the subjects are primed to believe (either being reminded of an existing belief, or being tricked into holding it with biased questions).

It surprises me that, of the people who don't claim to expect to flag when fatigued, those who report being exhausted by the depletion task actually make fewer errors than those who don't. Unless this is just due to warming up their inhibition/vigilance (both the initial and final tests require it), it suggests that positive expectations can boost performance, not just that available excuses can harm it.

I like that they demonstrated that errors on IQ problems track errors on mundane rule-following, vigilance-type tasks, but it's amusing to me that people who believe they'll do worse when fatigued actually test as smarter (fewer IQ-test errors) when fresh, whereas those primed to believe they won't effectively fatigue improve slightly, but are still worse than the "limited resource" believers' initial performance. This effect is still there, but probably not significant, for the simple but tiresome "willpower"-testing (Stroop) task. I assume the "limited" believers are more engaged by an IQ-proving question, either for signaling or entertainment, compared to the boring Stroop task. Disclaimer: these differences, taken from figures on p. 5 of the paper, aren't strongly significant (N ≈ 50), so maybe I shouldn't conclude anything (the authors don't pin anything on them).

if you prime an excuse for doing poorly, you will do poorly.

This is the most useful sentence I've read today.

I care strongly about winning. When I look back on a day and ask myself what I could have done better, I want answering to be a struggle, and not for lack of imagination. I'm not content to coast through life, so I optimize relentlessly. This sentiment might be familiar to LW readers. I don't know. Maybe.

When a day goes particularly well or poorly, I want to know why, and over the last few years I've picked a few patterns out of my diary. I know some of my success and failure modes, so I can optimize my working environment in my favor.

In the past, I've often been successful even while sleep-deprived. I may be a bit slower, a bit more forgetful, and significantly less creative, but I can still plow through tasks of moderate difficulty. Two months ago, I activated a difficult project, so I resolved to start getting plenty of sleep all the time, then promptly forgot my original reason and associated "well-rested" with "productive on anything". In the last two months, my rate of even moderate success while sleep-deprived has dropped to almost zero. "I was intending to read that book, or watch that show, or play that game eventually, and I'm not going to be efficient today, so it might as well be now", I'll say.

With this dangerous knowledge that I was irrational enough to misuse, I can predict my days into failure.

I'd previously suspected that the notion of limited willpower was something I'd adopted too readily, as it let me be more self-indulgent, so it got positive, short-term feedback. I was still pretty sure that willpower was limited - because that's how I experienced it - but my prior was weak.

I read this three days ago, and updated almost immediately. So far I've found that:

  • I have increased self-control since reading this abstract.
  • I've spent more thought and energy on long-term goals.
  • I feel more capable than I did three days ago.

Now, clearly, three days isn't long enough for a convincing subjective measure in a self-test. It's certainly less evidence than the study provides. So, I'll return to this post in at least a few more weeks, and see how persistent this effect is.

Anyone else have other relevant experience: similar, contradictory, or enlightening?

Update: Telling myself that I have the will I need to accomplish a task, or update a belief, or take an action that I think is right but feel uneasy about, has turned out to be a pretty effective way to actually gather that will.

"Much recent research suggests that willpower—the capacity to exert self-control—is a limited resource that is depleted after exertion."

Instead of taking this research at face value, shouldn't we wait for new experiments to verify their findings? I am afraid of people just accepting this experiment, and ignoring the many OTHER experiments that said otherwise.

As a personal anecdote, I have never felt anything that I was inclined to call "willpower depletion". As a teenager, I decided that "willpower" was just a loaded term/metaphor for dynamic consistency, and that calling it "willpower" was harmful to the way people thought about themselves as agents. I decided that other people's feeling of "willpower depletion" was nothing more than sensing oneself in transition from one value system to another.

But claims that the theorized "executive system", a cognitive system whose function is almost by definition to maintain dynamic consistency, was seated in the prefrontal cortex and needed more glucose than other brain functions, made me consider that maybe "willpower" is in fact an appropriate term... but I still never actually felt anything like a "depleting resource", which I found confusing.

So I'll be less confused again if the belief dependency you mention is correct, and causal. In any case, I hope it is, so that people can achieve better dynamic consistency by not thinking of it as "expendable". I'm at least one example consistent with that theory.

With respect, I've always found the dynamic inconsistency explanation silly. Such an analysis feels like forcing oneself, in the face of contradictory evidence, to model human beings as rational agents. In other words, you look at a person's behavior, realize that it doesn't follow a time-invariant utility function, and say "Aha! Their utility function just varies with time, in a manner leading to a temporal conflict of interests!" But given sufficient flexibility in utility function, you can model any behavior as that of a utility-maximizing agent. ("Under environmental condition #1, he assigns 1 million utility to taking action A1 at time T_A1, action B1 at time T_B1, etc. and zero utility for other strategies. Under environmental condition #2...")

On the other hand, my personal experience is that my decision of whether to complete some beneficial goal is largely determined by the mental pain associated with it. This mental pain, which is not directly measurable, is strongly dependent on the time of day, my caffeine intake, my level of fear, etc. If you can't measure it, and you were to just look at my actions, this is what you'd say: "Look, some days he cleans his room and some days he doesn't, even though the benefit--a room clean for about 1 day--is the same. When he doesn't clean his room, and you ask him why, he says he just really didn't feel like it even though he now wishes he had. Therefore, the utility he is assigning to a clean room is varying with time. Dynamic inconsistency, QED!" But the real reason is not that my utility function is varying. It's that I find cleaning my room soothing on some days, whereas other days it's torture.

Such an analysis feels like forcing oneself, in the face of contradictory evidence, to model human beings as rational agents.

Utility theory is a normative theory of rationality; it's not taken seriously as a descriptive theory anymore. Rationality is about how we should behave, not how we do.

Look, some days he cleans his room and some days he doesn't even though the benefit--a room clean for about 1 day--is the same.

This is a common confusion about what dynamic inconsistency really means, although I'm now noticing that Wikipedia doesn't explain it so clearly, so I should give an example:

Monday self says: I should clean my room on Thursday, even if it will be extremely annoying to do so (within the usual range of how annoying the task can be), because of the real-world benefits of being able to have guests over on the weekend.

Thursday-self says: Oh, but now that it's Thursday and I'm annoyed, I don't think it's worth it anymore.

This is a disagreement between what your Monday-self and your Thursday-self think you should do on Thursday. It's a straight-up contradiction of preferences among outcomes. There's no need to think about utility theory at all, although preferences among outcomes, not items, are exactly what it's designed to normatively govern.
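One standard formal model of exactly this kind of Monday/Thursday reversal is hyperbolic discounting. This isn't something the commenters invoke, just an illustrative sketch with made-up numbers: a payoff t days away is valued at 1/(1 + k·t) of its face value, and that curve makes the near-term annoyance of cleaning loom larger as Thursday approaches.

```python
# Sketch: hyperbolic discounting produces preference reversals.
# All magnitudes here are hypothetical, chosen only to illustrate.
def discounted(value, delay_days, k=1.0):
    """Hyperbolic discount: value / (1 + k * delay)."""
    return value / (1 + k * delay_days)

CLEANING_PAIN = 6     # cost, paid on Thursday
WEEKEND_BENEFIT = 10  # benefit, enjoyed two days later (Saturday)

def net_value_of_cleaning(days_until_thursday):
    """Net present value of cleaning, judged `days_until_thursday`
    days before Thursday."""
    pain = discounted(CLEANING_PAIN, days_until_thursday)
    benefit = discounted(WEEKEND_BENEFIT, days_until_thursday + 2)
    return benefit - pain

print(net_value_of_cleaning(3))  # Monday's view: positive, clean!
print(net_value_of_cleaning(0))  # Thursday's view: negative, skip it
```

From Monday, both the pain and the benefit are discounted to similar sizes and the benefit wins; on Thursday, the pain is at full strength while the benefit is still discounted, so the same agent now prefers not to clean.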

ETA: The OP now links to a lesswrongwiki article on dynamic inconsistency.

Thank you for introducing me to dynamic (in)consistency; that is extremely helpful for resolving my understanding of willpower. I have similar experiences to those you describe: except in occasions of grave illness, I experience my will as infinite, and any appearance of depletion/giving up seems to arise from a divided will (e.g., wanting to keep riding until the 2 hours are up vs. wanting to be able to move without pain tomorrow).

Dynamic consistency seems like an incredibly worthwhile area to self-improve in.

How would one go about improving in that area? I can't see a straightforward way to do it.

I've never heard of willpower depletion. I've heard people say that they don't have enough willpower, but not that they're out of willpower. Surely willpower is a long-term stat like CON, not a depletable resource like HP.

I've never thought that I've had much willpower (possibly a nocebo effect originally generalised from a few early cases?). But on those occasions where I have used my willpower, this has always made subsequent uses easier. I can't imagine using it up.

But on those occasions where I have used my willpower, this has always made subsequent uses easier. I can't imagine using it up.

Maybe you leveled up.

I've never heard of willpower depletion.... Surely willpower is a long-term stat like CON, not a depletable resource like HP.

In fact, previous research has shown that it is a lot like HP in many situations. See the citations near the beginning of the article.

Yeah, I see that now, but it's still very weird to me. And the new article seems to explain why: I think of willpower as like CON, so for me it's like CON. Others think of it as HP, so for them it's like HP. I just didn't realise that there was anybody like those others before!

For me it's more like the limit break meter. When things get bad enough, it comes.

On the other hand, sometimes it's like a combo meter - the more I do, the more I keep doing.

Maybe it's more that I have a constant amount, and the challenge I'm facing varies in time.

It is usually better to read textbooks in an area before reading papers (particularly just-published ones) in the area. Can anyone recommend a textbook that covers this material, or material that would help one understand it?

I don't know of a textbook treatment, but Baumeister, Vohs, and Tice (2007) (pdf) is a brief review article by the leading proponents of the limited resource theory which summarizes the theory and the research that supports it.

Baumeister, R.F., Vohs, K.D., & Tice, D.M. (2007). The strength model of self-control. Current Directions in Psychological Science, 16, 351–355.

I agree with you in general, and would especially like to hear from some LW psychologists. I think this field is pretty new, though, and not heavily dependent on any canon.

How do they know it's not the person's idiosyncratic "availability of willpower" after a demanding task that shapes their idiosyncratic beliefs about willpower?

I wondered that too. But they covered that objection in the paper. Study #2. They manipulated people's beliefs about willpower by administering a "push poll", and then tested willpower depletion.

Excellent point.

However, it seems reasonable to me that push-polling about someone's future behavior will lead them to act consistently with the signal they just sent in the poll - like in Cialdini's Influence, where people are polled on whether they like to go to the opera, or give charitably, by some attractive person they want to impress, and then after affirming are ambushed with a sales pitch (they thought it was an innocent poll but are trapped by their answers). So it seems reasonable to assume that those who were push-polled into believing they will become either sloppier or more accurate with fatigue would act consonantly.

But I don't think this objection is likely the whole story. The simplest explanation is that people's stated expectations of their performance do shape their performance - the power of positive thinking, and obviously, negative. (possibly unvoiced/persistent expectations as well as explicitly declared, although of course it's nearly impossible to measure such things surreptitiously).

So it seems reasonable to assume that those who were push-polled into believing they will become either sloppier or more accurate with fatigue would act consonantly.

Right. I'm not convinced that priming people so close to a task tells us much about their actual beliefs in general, and how they will behave outside a lab: it just tells us what people believe they believe. It's like the quick fix people get after motivational seminars that fades away.

The manipulation didn't measure the effect of beliefs; it measured the effect of the cognitive affirmation of a belief. That's not really a measure of truly "implicit" beliefs.

The results of this study could still be explained by some third variable underlying both self-control and implicit beliefs about self-control (e.g. Conscientiousness, sleep deprivation, akrasia, perception of a task as difficult...).

Study 4 does shed some light on this problem:

Next, we tested the reverse causal relationship—from self-regulation at T2 to implicit theories at T3. Implicit theories at T3 were regressed on T2 self-regulation, controlling for T2 implicit theories. There was no significant relationship between any T2 self-regulatory variable and T3 implicit theories, ΔFs(1, 38) < 1.30

Basically, your self-control doesn't predict your beliefs a month later. This potentially rules out some stable third variable, like Conscientiousness. But it doesn't rule out other fluctuating third variables. For sleep deprivation, for instance, we would not primarily expect your sleep last month to influence both your self-control and your beliefs about your self-control now; it's your sleep this month that matters most. Same thing with stress, workload, etc...
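For readers unfamiliar with the analysis being quoted: it's a cross-lagged regression, predicting T3 implicit theories from T2 self-regulation while controlling for T2 implicit theories. A minimal sketch of its shape, on randomly generated data (the sample size is a guess from the reported df; nothing here reproduces the study's numbers):

```python
# Sketch of the cross-lagged regression described in the quote:
# regress T3 implicit theories on T2 self-regulation, controlling for
# T2 implicit theories. Data are synthetic, purely to show the shape
# of the analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 41  # hypothetical; the reported df of (1, 38) suggests roughly this

theories_t2 = rng.normal(size=n)
selfreg_t2 = rng.normal(size=n)                       # independent here
theories_t3 = theories_t2 + 0.3 * rng.normal(size=n)  # stable over time

# Design matrix: intercept, T2 theories (the control), T2 self-regulation.
X = np.column_stack([np.ones(n), theories_t2, selfreg_t2])
beta, *_ = np.linalg.lstsq(X, theories_t3, rcond=None)

# beta[2] is the cross-lagged coefficient: the effect of T2
# self-regulation on T3 theories, holding T2 theories fixed.
# A near-zero value is the "no reverse causation" pattern they report.
print(beta)
```

The point of the control term is that implicit theories are themselves stable, so without it, any correlation between self-control and later beliefs could just reflect earlier beliefs.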

The study concludes:

Taken together, the results suggest that in some cases, ego depletion may result not from a true lack of resources after an exhausting task, but from people’s beliefs about their resources.

This study does suggest that beliefs about willpower may have some sort of effect (at least, if you are motivated/demotivated or engaging in affirmations), but it's very weak evidence. The lab manipulation they attempted of people's beliefs about their self-control is consistent with the author's hypotheses, but not terribly convincing. The longitudinal study could still be explained by third underlying causal variables. I still find the willpower depletion hypothesis plausible, at least for some people, on some types of tasks.

UPDATE: I found this observation in the notes:

A possible alternative explanation is that people with a nonlimited resource theory have better self-control than people with a limited resource theory. However, a pilot study (N = 65) did not find a negative relationship between a limited-resource theory and trait self-control (Schwarzer, Diehl, & Schmitz, 1999), r = .17, p > .20.

This does seem to rule out self-control as an underlying third variable. Though it's also a bit strange in light of the results of the current study: if resource theory isn't related to self-control, why is it predicting performance on tasks like the Stroop test that supposedly measure self-control? Maybe it's Jonathan Graehl's hypothesis of making excuses to exercise less self-control than one is capable of. Or maybe "self-control" is being operationalized in different ways.

Unfortunately, I can't find the German study they are referring to through the link in the references; all I can find is the scale it used.


Willpower is a dumb concept. If you look at motivation and sense of agency, together they constitute the term - it's the concept of gaps, if you will.

Just as a sidenote: I remember reading somewhere that the "exerting willpower depletes glucose reserves in the brain" theory is bogus - that studies found no change in glucose levels in the brain after exerting willpower.

Exerting willpower requires thought, which requires glucose. Your statement needs to contrast exerting willpower with some other mental activity.