Willpower: not a limited resource?
The Stanford Report has published a university press release about a recent paper [subscription required] in Psychological Science. The paper is available for free from the website of one of the authors.
The gist is that they find evidence against the (currently fashionable) hypothesis that willpower is an expendable resource. Here are the authors and abstract:
Veronika Job, Carol S. Dweck, and Gregory M. Walton
Stanford University
Abstract: Much recent research suggests that willpower—the capacity to exert self-control—is a limited resource that is depleted after exertion. We propose that whether depletion takes place or not depends on a person’s belief about whether willpower is a limited resource. Study 1 found that individual differences in lay theories about willpower moderate ego-depletion effects: People who viewed the capacity for self-control as not limited did not show diminished self-control after a depleting experience. Study 2 replicated the effect, manipulating lay theories about willpower. Study 3 addressed questions about the mechanism underlying the effect. Study 4, a longitudinal field study, found that theories about willpower predict change in eating behavior, procrastination, and self-regulated goal striving in depleting circumstances. Taken together, the findings suggest that reduced self-control after a depleting task or during demanding periods may reflect people’s beliefs about the availability of willpower rather than true resource depletion.
(HT: Brashman, as posted on HackerNews.)
Strategies for dealing with emotional nihilism
I asked a question in the discussion section a little bit ago and got very productive responses. What follows is mostly a paraphrase of people's comments.
From time to time, like Pierre, I don't care. I get emotionally nihilistic. I find myself doing things that are morally awful in the conventional meaning of the word: procrastinating, sneaking other people's food out of the communal fridge, being casually unkind and unhelpful, breaking promises. I don't doubt that these are awful things to do. I figure any moral theory worth its salt will condemn them -- except the moral theory "I don't care," which sometimes seems strangely compelling.
What I want to know is: what goes through people's heads when they're motivated not to be awful? What could you tell someone as a reason not to be awful? If you are, in fact, not awful, why aren't you awful? What do you think, or feel, when you care about things? What would you tell someone who claims "I just don't care" if you wanted to get her to care? What would you tell yourself, in your nihilistic moments?
The (more) trivial utility function
Nihilism feels like a utility function where everything is set to the value zero. Landing that job offer or school admission letter? That's worth nothing. Making someone smile? Worth nothing. Being in good physical shape? Worth nothing. Living according to moral values? Worth nothing. Nothing is fun, or appealing, or worth looking forward to.
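To make the metaphor concrete, here is a minimal sketch (mine, not from the original discussion) of what such a utility function looks like; the outcome names are just the examples from the paragraph above.

```python
# A toy sketch of nihilism as the trivial utility function:
# every outcome is assigned exactly zero.
def trivial_utility(outcome):
    """The 'I don't care' function: nothing is worth anything."""
    return 0

outcomes = ["job offer", "making someone smile",
            "good physical shape", "living by your values"]

# With a constant utility function, argmax is arbitrary: no outcome is
# preferred, so nothing is fun, appealing, or worth looking forward to.
print(max(outcomes, key=trivial_utility))  # ties everywhere; an arbitrary pick
```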
Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model
Related to: Alien Parasite Technical Guy, A Master-Slave Model of Human Preferences
In Alien Parasite Technical Guy, Phil Goetz argues that mental conflicts can be explained as a conscious mind (the "alien parasite") trying to take over from an unsuspecting unconscious.
Last year, Wei Dai presented a model (the master-slave model) with some major points of departure from Phil's: in particular, the conscious mind was a special-purpose subroutine and the unconscious had a pretty good idea what it was doing. But Wei said at the beginning that his model ignored akrasia.
I want to propose an expansion and slight amendment of Wei's model so it includes akrasia and some other features of human behavior. Starting with the signaling theory implicit in Wei's writing, I'll move on to show why optimizing for signaling ability would produce behaviors like self-signaling and akrasia, speculate on why the same model would also promote some of the cognitive biases discussed here, and finish with even more speculative links between a wide range of conscious-unconscious conflicts.
The Signaling Theory of Consciousness
This model begins with the signaling theory of consciousness. In the signaling theory, the conscious mind is the psychological equivalent of a public relations agency. The mind-at-large (hereafter called U for “unconscious” and similar to Wei's “master”) has the socially unacceptable primate drives you would expect of a fitness-maximizing agent: sex, status, and survival. These are unsuitable for polite society, where only socially admirable values like true love, compassion, and honor are likely to win you friends and supporters. U could lie and claim to support the admirable values, but most people are terrible liars and society would probably notice.
So you wall off a little area of your mind (hereafter called C for “conscious” and similar to Wei's “slave”) and convince it that it has only admirable goals. C is allowed access to the speech centers. Now if anyone asks you what you value, C answers "Only admirable things like compassion and honor, of course!" and no one detects a lie because the part of the mind that's moving your mouth isn't lying.
This is a useful model because it replicates three observed features of the real world: people say they have admirable goals, they honestly believe on introspection that they have admirable goals, but they tend to pursue more selfish goals. But so far, it doesn't explain the most important question: why do people sometimes pursue their admirable goals and sometimes not?
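As a toy formalization, the model might be sketched like this; the class names, goals, and weights below are my own illustration, not anything from Wei Dai's post.

```python
# Toy sketch of the signaling model. All names and numbers are
# illustrative assumptions, not part of the original model.
class Unconscious:
    """The mind-at-large, U (Wei's "master"): holds the real goals."""
    real_goals = {"status": 0.9, "sex": 0.8, "survival": 1.0}

class Conscious:
    """The walled-off PR agency, C (Wei's "slave"): has the speech centers."""
    professed_goals = ["compassion", "honor", "true love"]

    def answer(self, question):
        # C is not lying: these really are the only goals it can see.
        return "Only admirable things like " + ", ".join(self.professed_goals) + "!"

u, c = Unconscious(), Conscious()
print(c.answer("What do you value?"))           # speech reflects C's goals...
print(max(u.real_goals, key=u.real_goals.get))  # ...while action optimizes U's
```

This reproduces the three observations: the professed goals are admirable, the speaker honestly believes them, and behavior nonetheless tracks the unconscious goals.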
Applying Behavioral Psychology on Myself
In which I attempt to apply findings from behavioral psychology to my own life.
Behavioral Psychology Finding #1: Habituation
The psychological process of "extinction" or "habituation" occurs when a stimulus is administered repeatedly to an animal, causing the animal's response to gradually diminish. You can imagine that if you were to eat your favorite food for breakfast every morning, it wouldn't be your favorite food after a while. Habituation tends to happen the fastest when the following three conditions are met:
- The stimulus is delivered frequently
- The stimulus is delivered in small doses
- The stimulus is delivered at regular intervals
Source is here.
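A minimal simulation makes the frequency and regularity conditions concrete; the decay and recovery rates below are made-up parameters of my own toy model, not values from the cited source, and dose size is not modeled.

```python
# A toy model of habituation: each exposure weakens the response by a
# fixed fraction, and the response partially recovers during the gap
# before the next exposure.
def habituation(n_exposures, gap_hours, decay=0.30, recovery=0.02):
    """Return the response strength at each successive exposure."""
    response, history = 1.0, []
    for _ in range(n_exposures):
        history.append(round(response, 3))
        response *= (1 - decay)                               # habituate to the dose
        response = min(1.0, response + recovery * gap_hours)  # recover during the gap
    return history

# Frequent, regular exposures habituate fast; long gaps let recovery win:
print(habituation(8, gap_hours=1))   # steady decline toward a ~0.07 floor
print(habituation(8, gap_hours=24))  # recovery offsets decay; response stays high
```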
Applied Habituation
I had a project I was working on that was really important to me, but whenever I started working on it I would get demoralized. So I habituated myself to the project: I alternated 2 minutes of work with 2 minutes of sitting in the yard for about 20 minutes. This worked.
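For anyone who wants to try the same schedule, here is a minimal timer sketch; the durations are just the ones from my experiment above.

```python
# A minimal sketch of the schedule above: 2 minutes of work alternated
# with 2 minutes of rest, for about 20 minutes total. Adjust to taste.
import time

def alternate(work_min=2, rest_min=2, total_min=20):
    elapsed = 0
    while elapsed < total_min:
        print(f"Work for {work_min} minutes")
        time.sleep(work_min * 60)
        print(f"Rest for {rest_min} minutes")
        time.sleep(rest_min * 60)
        elapsed += work_min + rest_min

if __name__ == "__main__":
    alternate()
```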
Defeating Ugh Fields In Practice
Unsurprisingly related to: Ugh fields.
If I had to choose a single piece of evidence to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it would be this article about financial incentives and medication compliance. In short, offering people small cash incentives vastly improves their adherence to life-saving medical regimens. That's right: for a significant number of people, a small chance at winning $10-100 can be the difference between whether or not they stick to a regimen that has a very good chance of saving their life. This technique has even shown promise in getting drug addicts and psychiatric patients to adhere to their regimens, for as little as a $20 gift certificate. In the aggregate, this problem is estimated to cost about 5% of total health care spending ($100 billion), and that may not properly account for the utility lost by those who are harmed beyond repair. To claim that people are making a reasoned decision between the payoffs of taking and not taking their medication, and that they can be persuaded to change their behavior by a payoff of about $900 a year (or less), is to crush reality into a theory that cannot hold it. This is doubly true when you consider that some of these people were fairly affluent.
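As a back-of-the-envelope check on that $900 figure: the article's exact lottery parameters aren't reproduced here, so the odds and prizes below are assumptions for illustration only.

```python
# Hypothetical lottery parameters (assumed, not from the article):
# each day a compliant patient gets a small chance at $10 or $100.
p_small, prize_small = 0.20, 10   # assumed 20% chance of a $10 win
p_big, prize_big = 0.01, 100      # assumed 1% chance of a $100 win

ev_per_day = p_small * prize_small + p_big * prize_big
print(ev_per_day)        # 3.0 dollars per day in expectation
print(ev_per_day * 365)  # ~1095 dollars per year, the right order of magnitude
```

The point stands either way: an expected value of a few dollars a day moves behavior that the same payoff, framed as an annual sum, does not.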
A likely explanation of this detrimental irrationality is something close to an Ugh field. It must be miserable having a life-threatening illness. Being reminded of it by taking a pill every single day (or more frequently) is not pleasant. Then there's the question of whether you already took the pill. Because if you take it twice in one day, you'll end up in the hospital. And Heaven forfend your treatment involves needles. Thus, people avoid taking their medicine because the process becomes so unpleasant, even though they know they really should be taking it.
As this experiment shows, this serious problem has a simple and elegant solution: make taking their medicine fun. As one person in the article describes it, using a low-reward lottery made taking his meds "like a game"; he couldn't wait to check the dispenser to see if he'd won (and take his meds again). Instead of thinking about how they have some terrible condition, patients get excited thinking about how they could be winning money. The Ugh field has been demolished, with the once-feared procedure now associated with a tried-and-true intermittent reward system. It also wouldn't surprise me in the least if people who are unlikely to adhere to a medical regimen are the kind of people who really enjoy playing the lottery.
Antagonizing Opioid Receptors for (Prevention of) Fun and Profit
Related to: Ugh Fields, Are Wireheads Happy?
In his post Ugh Fields, Roko discussed "temporal difference learning", the process by which the brain propagates positive or negative feedback to the closest cause it can find for the feedback. For example, if he forgets to pay his bills and gets in trouble, the trouble (negative feedback) propagates back to thoughts about bills. Next time he gets a bill, he might paradoxically have even more trouble paying it, because it's become associated with trouble and negative emotions, and his brain tends to unconsciously flinch away from it.
He links to the associated Wikipedia article:
The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appear to mimic the error function in the algorithm. The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward.
Dopamine cells appear to behave in a similar manner. In one experiment measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice. Initially the dopamine cells increased firing rates when exposed to the juice, indicating a difference in expected and actual rewards. Over time this increase in firing back propagated to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. This mimics closely how the error function in TD is used for reinforcement learning.
So if I understand this right, the monkey hears a bell and is unimpressed, having no expectation of reward. Then the monkey gets some juice that tastes really good and activates (opioid dependent?) reward pathways. The dopamine system is pretty surprised, and broadcasts that surprise back to all the neurons that have been especially active recently, most notably the neurons that activated upon hearing the bell. These neurons are now more heavily associated with the dopamine system. So the next time the monkey hears a bell, it has a greater expectation of reward.
And in this case it doesn't matter, because the monkey can't do anything about it. But if it were a circus monkey, and its trainer were trying to teach it to do a backflip to get juice, the association between backflips and juice would be pretty useful. As long as the monkey wanted juice, merely entertaining the plan of doing a backflip would have motivational value that promotes the correct action.
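To make the mechanism concrete, here is a minimal TD(0) sketch: my own toy illustration of the textbook update rule, with state names and parameters chosen to mirror the monkey experiment, not the actual model from the article.

```python
# A minimal TD(0) sketch. The monkey hears a bell, waits, then gets
# juice worth reward 1; values are learned over repeated trials.
ALPHA, GAMMA = 0.1, 1.0                  # learning rate, discount factor
states = ["bell", "delay", "juice", "end"]
reward = {"juice": 1.0}                  # juice is the only rewarding event
V = {s: 0.0 for s in states}             # learned value estimates

for _ in range(200):                     # 200 training trials
    for s, s_next in zip(states, states[1:]):
        r = reward.get(s_next, 0.0)           # reward on arriving at s_next
        delta = r + GAMMA * V[s_next] - V[s]  # TD error: the "surprise" signal
        V[s] += ALPHA * delta                 # propagate surprise backward

print(V)  # V["bell"] ends near 1; the TD error at juice delivery ends near 0
```

After training, the surprise has migrated from the juice back to the bell, the earliest reliable predictor, which is exactly the shift the dopamine recordings describe.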
The Sinclair Method is a promising technique for treating alcoholics that elegantly demonstrates these pathways by sabotaging them.
A meta-anti-akrasia strategy that might just work
For ages I've been trying to wrap my mind around meta thinking - not "what is the best way to do something", but "how do I find out which way is any good?" Meta thinking has many applications, and I am always surprised when I find a new context it can be applied to. Anti-akrasia might be such a context.
The idea I am about to present came to me a few months ago, and I used it to finally overcome my own problem with procrastination. I'll try to present it here as well as I can, in the hope that it might be of use to someone. If so, I am really curious what other people come up with using this technique.
If akrasia is a struggle, continue reading.
Where I come from:
Procrastination was a big topic for me. I spent ages reading, watching videos, thinking, and collecting material, but very little time on actual action. Among the things I read were productivity blogs and books. I assume that some or even many of the posters here share that problem with me. I am familiar with the systems (I even gave a lecture once on GTD) but I struggled to get my own stuff out the door. It surely wasn't for a lack of knowledge, but simply for a lack of doing.
The method used consists of two layers.
(I) the meta concept used to develop a personal system
(II) the highly personalized system I came up with while applying (I)
Eluding Attention Hijacks
Do my taxes? Oh, no! It’s not going to be that easy. It’s going to be different this year, I’m sure. I saw the forms—they look different. There are probably new rules I’m going to have to figure out. I might need to read all that damn material. Long form, short form, medium form? File together, file separate? We’ll probably want to claim deductions, but if we do we’ll have to back them up, and that means we’ll need all the receipts. Oh, my God—I don’t know if we really have all the receipts we’d need, and what if we didn’t have all the receipts and claimed the deductions anyway and got audited? Audited? Oh, no—the IRS—JAIL!!
And so a lot of people put themselves in jail, just glancing at their 1040 tax forms. Because they are so smart, sensitive, and creative.
—David Allen, Getting Things Done
Intro
Very recently, Roko wrote about ugh fields, “an unconscious flinch we have from even thinking about a serious personal problem. The ugh field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.” Suggested antidotes included PJ Eby’s technique to engage with the ugh field, locate its center, and access information—thereupon dissolving the negative emotions.
I want to explore here something else that prevents us from doing what we want. Consider these situations:
Situation 1
You attack a problem that is at least slightly complex (distasteful or not), but are unable to systematically tackle it step by step because your mind keeps diverging wildly within the problem. Your brain starts running simulations and gets stuck. To make things worse, you are biased towards thinking of the worst possible scenarios. Having visualized 30 steps ahead, you panic and do nothing. David Allen's quote in the introduction of this post illustrates that.
Situation 2
You attack a problem of any complexity (anything you need to get done) and your mind keeps diverging in different directions outside the problem. Examples:
a. You decide you need to quickly send an important email before an appointment. You log in. Thirty minutes later, you find yourself watching some motivational PowerPoint presentation your uncle sent you. You stare at the inbox and can't remember what you were doing there in the first place. You log out without sending the email, and leave late for your appointment.*
b. You're working on your computer and some kid playing outside the window brings you vague memories of your childhood, vacations, your father teaching you how to fish, tilapias, earthworms, digging the earth, dirty hands, antibacterial soaps, swine flu, airport announcements, seatbelts, sexual fantasies with that redheaded flight attendant from that flight to Barcelona, and ... "wait, wait, wait! I am losing focus, I need to get this done." Ten minutes had passed (or was it more?).
Repeat this phenomenon many times a day and you won't get very far.
What happened?
While I am aware that situations 1 and 2 are somewhat different in nature (anxiety from “seeing too much into the problem” vs. distraction toward other problems), it seems to me that both have something very fundamental in common. In both situations, you became less efficient at getting things done because your sensitivity allowed your attention to be diverted too easily. You suffered what I shall call an attention hijack.
Pain and gain motivation
Note: this post is basically just summarizing some of PJ Eby's freely available writings on the topic of pain/gain motivation and presenting them in a form that's easier for the LW crowd to digest. I claim no credit for the ideas presented here, other than the credit for summarizing them.
EDIT: Note also Eby's comments and corrections to my summary at this comment.
Eby proposes that we have two different forms of motivation: positive ("gain") motivation, which drives us to do things, and negative ("pain") motivation, which drives us to avoid things. Negative motivation is a major source of akrasia and is mostly harmful for getting anything done. However, sufficiently large amounts of negative motivation can momentarily push us to do things, which frequently causes people to confuse the two.
To understand the function of negative motivation, first consider the example of having climbed a tree to avoid a predator. There's not much you can do other than wait and hope the predator goes away, and if you move around, you risk falling out of the tree. So your brain gets flooded with signals that suppress activity and keep your body still. It is only if the predator ends up climbing the tree that the danger becomes so acute that you're instead pushed to flee.
What does this have to do with modern-day akrasia? Back in the tribal environment, eliciting the disfavor of the tribe could be a death sentence: be cast out by the tribe, and you likely wouldn't live for long. One way to elicit disfavor is to be unmasked as incompetent in some important matter, and a way to avoid such an unmasking is to simply avoid doing anything where the consequences of failure would be severe.
You might see why this would cause problems. Sometimes, when the pain level of not having done a task grows too high (like just before a deadline) it'll push you to do it. But this fools people into thinking that negative consequences alone will be a motivator, so they try to psyche themselves up by thinking about how bad it would be to fail. In truth, this only makes things worse: dwelling on failure feeds the very negative motivation that drives avoidance in the first place.
Necessary, But Not Sufficient
There seems to be something odd about how people reason in relation to themselves, compared to the way they examine problems in other domains.
In mechanical domains, we seem to have little problem with the idea that things can be "necessary, but not sufficient". For example, if your car fails to start, you likely know that several things are necessary for the car to start, but not sufficient for it to do so. It has to have fuel, ignition, compression, and oxygen... each of which in turn has further necessary conditions, such as an operating fuel pump, electricity for the spark plugs, electricity for the starter, and so on.
And usually, we don't go around claiming that "fuel" is a magic bullet for fixing the problem of car-not-startia, or argue that if we increase the amount of electricity in the system, the car will necessarily run faster or better.
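The car example can be rendered directly in code; this is just my toy restatement of the point, not anything from the post.

```python
# Toy restatement of "necessary but not sufficient": the car starts
# only if every condition holds, so boosting any single condition
# guarantees nothing by itself.
necessary_conditions = {
    "fuel": True,
    "ignition": True,
    "compression": True,
    "oxygen": True,
    "working_fuel_pump": False,  # one broken link in the causal chain
}

car_starts = all(necessary_conditions.values())
print(car_starts)  # False: more fuel is no magic bullet for car-not-startia
```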
For some reason, however, we don't seem to apply this sort of necessary-but-not-sufficient thinking to systems above a certain level of complexity... such as ourselves.
When I wrote my previous post about the akrasia hypothesis, I mentioned that there was something bothering me about the way people seemed to be reasoning about akrasia and other complex problems. And recently, with taw's post about blood sugar and akrasia, I've realized that the specific thing bothering me is the absence of causal-chain reasoning there.