
[meta] New LW moderator: Viliam_Bur

37 Kaj_Sotala 13 September 2014 01:37PM

Some time back, I wrote that I was unwilling to continue with investigations into mass downvoting, and asked people for suggestions on how to deal with them from now on. The top-voted proposal in that thread suggested making Viliam_Bur into a moderator, and Viliam graciously accepted the nomination. So I have given him moderator privileges and also put him in contact with jackk, who provided me with the information necessary to deal with the previous cases. Future requests about mass downvote investigations should be directed to Viliam.

Thanks a lot for agreeing to take up this responsibility, Viliam! It's not an easy one, but I'm very grateful that you're willing to do it. Please post a comment here so that we can reward you with some extra upvotes. :)

I'm holding a birthday fundraiser

23 Kaj_Sotala 05 September 2014 12:38PM

EDIT: The fundraiser was successfully completed, raising the full $500 for worthwhile charities. Yay!

Today's my birthday! And per Peter Hurford's suggestion, I'm holding a birthday fundraiser to help raise money for MIRI, GiveDirectly, and Mercy for Animals. If you like my activity on LW or elsewhere, please consider giving a few dollars to one of these organizations via the fundraiser page. You can specify which organization you wish to donate to in the comment of the donation, or just leave it unspecified, in which case I'll give your donation to MIRI.

If you don't happen to be particularly altruistically motivated, just consider it a birthday gift to me - it will give me warm fuzzies to know that I helped move money for worthy organizations. And if you are altruistically motivated but don't care about me in particular, maybe you still can get yourself to donate more than usual by hacky stuff like someone you know on the Internet having a birthday. :)

If someone else wants to hold their own birthday fundraiser, here are some tips: birthday fundraisers.

[meta] Future moderation and investigation of downvote abuse cases, or, I don't want to deal with this stuff

45 Kaj_Sotala 17 August 2014 02:40PM

Since the episode with Eugine_Nier, I have received three private messages from different people asking me to investigate various cases of suspected mass downvoting. And to be quite honest, I don't want to deal with this. Eugine's case was relatively clear-cut, since he had engaged in systematic downvoting of a massive scale, but the new situations are a lot fuzzier and I'm not sure of what exactly the rules should be (what counts as a permitted use of the downvote system and what doesn't?).

At least one person has also privately contacted me and offered to carry out moderator duties if I don't want them, but even if I told them yes (on what basis? why them and not someone else?), I don't know what kind of policy I should tell them to enforce. I only happened to be appointed a moderator because I was in the list of top 10 posters at a particular time, and I don't feel like I should have any particular authority to make the rules. Nor do I feel like I have any good idea of what the rules should be, or who would be the right person to enforce them.

In any case, I don't want to be doing this job, nor do I particularly feel like being responsible for figuring out who should, or how, or what the heck. I've already started visiting LW less often because I dread having new investigation requests to deal with. So if you folks could be so kind as to figure it out without my involvement? If there's a clear consensus that someone in particular should deal with this, I can give them mod powers, or something.

[moderator action] Eugine_Nier is now banned for mass downvote harassment

99 Kaj_Sotala 03 July 2014 12:04PM

As previously discussed, on June 6th I received a message from jackk, a Trike Admin. He reported that the user Jiro had asked Trike to carry out an investigation into the retributive downvoting that Jiro had been subjected to. The investigation revealed that the user Eugine_Nier had downvoted over half of Jiro's comments, amounting to hundreds of downvotes.

I asked for the community's guidance on dealing with the issue, and while the matter was being discussed, I also reviewed previous discussions about mass downvoting and looked for other people who mentioned being the victims of it. I asked Jack to compile reports on several other users who mentioned having been mass-downvoted, and it turned out that Eugine was also overwhelmingly the biggest downvoter of users David_Gerard, daenarys, falenas108, ialdabaoth, shminux, and Tenoke. As this discussion was going on, it turned out that user Ander had also been targeted by Eugine.

I sent two messages to Eugine, requesting an explanation. I received a response today. Eugine admitted his guilt, expressing the opinion that LW's karma system was failing to carry out its purpose of keeping out weak material and that he was engaged in a "weeding" of users who he did not think displayed sufficient rationality.

Needless to say, it is not the place of individual users to unilaterally decide that someone else should be "weeded" out of the community. The Less Wrong content deletion policy contains this clause:

Harassment of individual users.

If we determine that you're e.g. following a particular user around and leaving insulting comments to them, we reserve the right to delete those comments. (This has happened extremely rarely.)

Although the wording does not explicitly mention downvoting, harassment by downvoting is still harassment. Several users have indicated that they have experienced considerable emotional anguish from the harassment, and have in some cases been discouraged from using Less Wrong at all. This is not a desirable state of affairs, to say the least.

I was originally given my moderator powers on a rather ad-hoc basis, with someone awarding mod privileges to the ten users with the highest karma at the time. The original purpose for that appointment was just to delete spam. Nonetheless, since retributive downvoting has been a clear problem for the community, I asked the community for guidance on dealing with the issue. The rough consensus of the responses seemed to authorize me to deal with the problem as I deemed appropriate.

The fact that Eugine remained quiet about his guilt until directly confronted with the evidence, despite several public discussions of the issue, is indicative of him realizing that he was breaking prevailing social norms. Eugine's actions have worsened the atmosphere of this site, and that atmosphere will remain troubled for as long as he is allowed to remain here.

Therefore, I now announce that Eugine_Nier is permanently banned from posting on LessWrong. This decision is final and will not be changed in response to possible follow-up objections.

Unfortunately, it looks like while a ban prevents posting, it does not actually block a user from casting votes. I have asked jackk to look into the matter and find a way to actually stop the downvoting. Jack indicated earlier on that it would be technically straightforward to apply a negative karma modifier to Eugine's account, and wiping out Eugine's karma balance would prevent him from casting future downvotes. Whatever the easiest solution is, it will be applied as soon as possible.

EDIT 24 July 2014: Banned users are now prohibited from voting.

[meta] Policy for dealing with users suspected/guilty of mass-downvote harassment?

28 Kaj_Sotala 06 June 2014 05:46AM

Below is a message I just got from jackk. Some specifics have been redacted 1) so that we can discuss general policy rather than the details of this specific case, and 2) because of the presumption of innocence, just in case there happens to be an innocuous explanation for this.

Hi Kaj_Sotala,

I'm Jack, one of the Trike devs. I'm messaging you because you're the moderator who commented most recently. A while back the user [REDACTED 1] asked if Trike could look into retributive downvoting against his account. I've done that, and it looks like [REDACTED 2] has downvoted at least [over half of REDACTED 1's comments, amounting to hundreds of downvotes] ([REDACTED 1]'s next-largest downvoter is [REDACTED 3] at -15).

What action to take is a community problem, not a technical one, so we'd rather leave that up to the moderators. Some options:

1. Ask [REDACTED 2] for the story behind these votes
2. Use the "admin" account (which exists for sending scripted messages, &c.) to apply an upvote to each downvoted post
3. Apply a karma award to [REDACTED 1]'s account. This would fix the karma damage but not the sorting of individual comments
4. Apply a negative karma award to [REDACTED 2]'s account. This makes him pay for false downvotes twice over. This isn't possible in the current code, but it's an easy fix
5. Ban [REDACTED 2]

For future reference, it's very easy for Trike to look at who downvoted someone's account, so if you get questions about downvoting in the future I can run the same report.

If you need to verify my identity before you take action, let me know and we'll work something out.

-- Jack

So... thoughts? I have mod powers, but when I was granted them I was basically just told to use them to fight spam; there was never any discussion of any other policy, and I don't feel like I have the authority to decide on the suitable course of action without consulting the rest of the community.

Arguments and relevance claims

25 Kaj_Sotala 05 May 2014 04:49PM
The following once happened: I posted a link to some article on an IRC channel. A friend of mine read the article in question and brought up several criticisms. I felt that her criticisms were mostly correct though not very serious, so I indicated agreement with them.

Later on the same link was posted again. My friend commented something along the lines of "that was already posted before, we discussed this with Kaj and we found that the article was complete rubbish". I was surprised - I had thought that I had only agreed to some minor criticisms that didn't affect the main point of the article. But my friend had clearly thought that the criticisms were decisive and had made the article impossible to salvage.

--

Every argument actually has two parts, even if people often only state the first part. There's the argument itself, and an implied claim of why the argument would matter if it were true. Call this implied part the relevance claim.

Suppose that I say "Martians are green". Someone else says, "I have seen a blue Martian", and means "I have seen a blue Martian (argument), therefore your claim of all Martians being green is false (relevance claim)". But I might interpret this as them saying, "I have seen a blue Martian (argument), therefore your claim of most Martians being green is less likely (relevance claim)". I then indicate agreement. Now I will be left with the impression that the other person made a true-but-not-very-powerful claim that left my argument mostly intact, whereas the other person is left with the impression that they made a very powerful claim that I agreed with, and therefore I admitted that I was wrong.

We could also say that the relevance claim is a claim of how much the probability of the original statement would be affected if the argument in question were true. So, for example "I have seen a blue Martian (argument), therefore the probability of 'Martians are green' is less than .01 (relevance claim)", or equivalently, "I have seen a blue Martian" & "P(Martians are green | I have seen a blue Martian) < .01".
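To make the probabilistic reading concrete, here is a toy Bayes update. The priors and likelihoods are made-up illustrative numbers, not anything implied by the discussion above:

```python
# Toy illustration: the same argument ("I have seen a blue Martian")
# paired with two different relevance claims, i.e. two different sets
# of likelihoods. All numbers are invented for illustration.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H | E) from a prior and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

prior = 0.9  # my prior that "Martians are green"

# Strong relevance claim: a blue Martian is near-impossible if the claim holds.
strong = posterior(prior, p_evidence_given_h=0.001, p_evidence_given_not_h=0.5)

# Weak relevance claim: a blue Martian is merely somewhat surprising.
weak = posterior(prior, p_evidence_given_h=0.3, p_evidence_given_not_h=0.5)

print(f"strong relevance claim: P(green) drops to {strong:.3f}")
print(f"weak relevance claim:   P(green) drops to {weak:.3f}")
```

The same argument moves the probability either drastically (to under .02) or barely at all (to about .84) depending on the likelihoods attached to it, and those likelihoods are exactly what the relevance claim asserts.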

If someone says something that I feel is entirely irrelevant to the whole topic, inferential silence may follow.

Therefore, if someone makes an argument that I agree with, but I suspect that we might disagree about its relevance, I now try to explicitly comment on what my view of the relevance is. Example.

Notice that people who are treating arguments as soldiers are more likely to do this automatically, without needing to explicitly remind themselves of it. In fact, for every argument that their opponent makes that they're forced to concede, they're likely to immediately say "but that doesn't matter because X!". Because we like to think that we're not treating arguments as soldiers, we also try to avoid automatically objecting "but that doesn't matter because X" whenever our favored position gets weakened. This is a good thing, but it also means that we're probably less likely than average to comment about an argument's relevance even in cases where we should comment on it.

(Cross-posted from my blog.)

Explanations for Less Wrong articles that you didn't understand

18 Kaj_Sotala 31 March 2014 11:19AM

ErinFlight said:

I'm struggling to understand anything technical on this website. I've enjoyed reading the sequences, and they have given me a lot to think about. Still, I've read the introduction to Bayes' theorem multiple times, and I simply can't grasp it. Even starting at the very beginning of the sequences I quickly get lost because there are references to programming and cognitive science which I simply do not understand.

Thinking about it, I realized that this might be a common concern. There are probably plenty of people who've looked at various more-or-less technical or jargony Less Wrong posts, tried understanding them, and then given up (without posting a comment explaining their confusion).

So I figured that it might be good to have a thread where you can ask for explanations for any Less Wrong post that you didn't understand and would like to, but don't want to directly comment on for any reason (e.g. because you're feeling embarrassed, because the post is too old to attract much traffic, etc.). In the spirit of various Stupid Questions threads, you're explicitly encouraged to ask even for the kinds of explanations that you feel you "should" be able to get by yourself, or where you feel like you could get them if you just put in the effort (but never did).

You can ask to have some specific confusing term or analogy explained, or to get the main content of a post briefly summarized in plain English and without jargon, or anything else. (Of course, there are some posts that simply cannot be explained in non-technical terms, such as the ones in the Quantum Mechanics sequence.) And of course, you're encouraged to provide explanations to others!

Two arguments for not thinking about ethics (too much)

29 Kaj_Sotala 27 March 2014 02:15PM

I used to spend a lot of time thinking about formal ethics, trying to figure out whether I was leaning more towards positive or negative utilitarianism, about the best courses of action in light of the ethical theories that I currently considered the most correct, and so on. From the discussions that I've seen on this site, I expect that a lot of others have been doing the same, or at least something similar.

I now think that doing this has been more harmful than it has been useful, for two reasons: there's no strong reason to assume that this will give us very good insight into our preferred ethical theories, and more importantly, because thinking in those terms will easily lead to akrasia.

1: Little expected insight

This seems like a relatively straightforward inference from all the discussion we've had about complexity of value and the limits of introspection, so I'll be brief. I think that attempting to come up with a verbal formalization of our underlying logic and then doing what that formalization dictates is akin to "playing baseball with verbal probabilities". Any introspective access we have into our minds is very limited, and at best, we can achieve an accurate characterization of the ethics endorsed by the most verbal/linguistic parts of our minds. (At least at the moment; future progress in moral psychology or neuroscience may eventually change this.) Because our morals are also derived from parts of our brains to which we don't have such access, our theories will unavoidably be incomplete. We are also prone to excessive rationalization when it comes to thinking about morality: see Joshua Greene and others for evidence suggesting that much of our verbal moral reasoning is actually just post-hoc rationalization of underlying moral intuitions.

One could try to make the argument from Dutch Books and consistency, and argue that if we don't explicitly formulate our ethics and work out possible contradictions, we may end up doing things that work at cross-purposes. E.g. maybe my morality says that X is good, but I don't realize this and therefore end up doing things that go against X. This is probably true to some extent, but I think that evaluating the effectiveness of various instrumental approaches (e.g. the kind of work that GiveWell is doing) is much more valuable for people who have at least a rough idea of what they want, and that the kinds of details that formal ethics focuses on (including many of the discussions on this site, such as this post of mine) are akin to trying to calculate something to the 6th digit of precision when our instruments only measure things at 3 digits of precision.

To summarize this point, I've increasingly come to think that living one's life according to the judgments of any formal ethical system gets it backwards - any such system is just a crude attempt at formalizing our various intuitions and desires, and they're mostly useless in determining what we should actually do. To the extent that the things that I do resemble the recommendations of utilitarianism (say), it's because my natural desires happen to align with utilitarianism's recommended courses of action, and if I say that I lean towards utilitarianism, it just means that utilitarianism produces the fewest recommendations that conflict with what I would want to do anyway.

2: Leads to akrasia

Trying to follow the formal theories can be actively harmful towards pretty much any of the goals we have, because the theories and formalizations that the verbal parts of our minds find intellectually compelling are different from the ones that actually motivate us to action.

For example, Carl Shulman comments on why one shouldn't try to follow utilitarianism to the letter:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.

Even if one avoided that particular failure mode, there remains the more general problem that very few people find it easy to be generally motivated by things like "what does this abstract ethical theory say I should do next". Rather, they are motivated by e.g. a sense of empathy and a desire to prevent others from suffering. But if we focus too much on constructing elaborate ethical theories, it becomes much too easy to start thinking excessively in terms of "what would this theory say I should do" and forget entirely about the original motivation that led us to formulate that theory. Then, because an abstract theory isn't intrinsically compelling in the same way that an empathetic concern over suffering is, we end up with a feeling of obligation that we should do something (e.g. some concrete action that would reduce the suffering of others), but not an actual intrinsic desire to really do it. This leads to the kinds of action that optimize towards the goal of no longer feeling that obligation, rather than towards the actual goal, which can manifest itself via things such as excessive procrastination. (See also this discussion of how "have-to" goals require willpower to accomplish, whereas "want-to" goals are done effortlessly.)

The following is an excerpt from Trying Not To Try by Edward Slingerland that makes the same point, discussing the example of an ancient king who thought himself selfish because he didn't care about his subjects, but who did care about his family, and who did spare the life of an ox when he couldn't bear to see its distress as it was about to be slaughtered:

Mencius also suggests trying to expand the circle of concern by beginning with familial feelings. Focus on the respect you have for the elders in your family, he tells the king, and the desire you have to protect and care for your children. Strengthen these feelings by both reflecting on them and putting them into practice. Compassion starts at home. Then, once you’re good at this, try expanding this feeling to the old and young people in other families. We have to imagine the king is meant to start with the families of his closest peers, who are presumably easier to empathize with, and then work his way out to more and more distant people, until he finally finds himself able to respect and care for the commoners. “One who is able to extend his kindness in this way will be able to care for everyone in the world,” Mencius concludes, “while one who cannot will find himself unable to care for even his own wife and children. That in which the ancients greatly surpassed others was none other than this: they were good at extending their behavior, that is all.”

Mencian wu-wei cultivation is about feeling and imagination, not abstract reason or rational arguments, and he gets a lot of support on this from contemporary science. The fact that imaginative extension is more effective than abstract reasoning when it comes to changing people’s behavior is a direct consequence of the action-based nature of our embodied mind. There is a growing consensus, for instance, that human thought is grounded in, and structured by, our sensorimotor experience of the world. In other words, we think in images. This is not to say that we necessarily think in pictures. An “image” in this sense could be the feeling of what it’s like to lift a heavy object or to slog in a pair of boots through some thick mud. [...]

Here again, Mencius seems prescient. The Mohists, like their modern utilitarian cousins, think that good behavior is the result of digital thinking. Your disembodied mind reduces the goods in the world to numerical values, does the math, and then imposes the results onto the body, which itself contributes nothing to the process. Mencius, on the contrary, is arguing that changing your behavior is an analog process: education needs to be holistic, drawing upon your embodied experience, your emotions and perceptions, and employing imagistic reflection and extension as its main tools. Simply telling King Xuan of Qi that he ought to feel compassion for the common people doesn’t get you very far. It would be similarly ineffective to ask him to reason abstractly about the illogical nature of caring for an ox while neglecting real live humans who are suffering as a result of his misrule. The only way to change his behavior—to nudge his wu-wei tendencies in the right direction—is to lead him through some guided exercises. We are analog beings living in an analog world. We think in images, which means that both learning and teaching depend fundamentally on the power of our imagination.

In his popular work on cultivating happiness, Jonathan Haidt draws on the metaphor of a rider (the conscious mind) trying to work together with and tame an elephant (the embodied unconscious). The problem with purely rational models of moral education, he notes, is that they try to “take the rider off the elephant and train him to solve problems on his own,” through classroom instruction and abstract principles. They take the digital route, and the results are predictable: “Class ends, the rider gets back on the elephant, and nothing changes at recess.” True moral education needs to be analog. Haidt brings this point home by noting that, as a philosophy major in college, he was rationally convinced by Peter Singer’s arguments for the moral superiority of vegetarianism. This cold conviction, however, had no impact on his actual behavior. What convinced Haidt to become a vegetarian (at least temporarily) was seeing a video of a slaughterhouse in action—his wu-wei tendencies could be shifted only by a powerful image, not by an irrefutable argument.

My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations, "shoulds" that are unlikely to get done; whereas thinking about e.g. the feelings of empathy that motivated me to become utilitarian in the first place produces motivations that feel like internal "wants". I was very close to (yet another) burnout and serious depression some weeks back: a large part of what allowed me to avoid it was that I stopped entirely asking the question of what I should do, and began to focus entirely on what I want to do, including the question of which of my currently existing wants are ones that I'd wish to cultivate further. (Of course there are some things, like doing my tax returns, that I do have to do despite not wanting to, but that's a question of necessity, not ethics.) It's too early to say whether this actually leads to increased productivity in the long term, but for the time being it feels great for my mental health.

Applying reinforcement learning theory to reduce felt temporal distance

10 Kaj_Sotala 26 January 2014 09:17AM

(cross-posted from my blog)

It is a basic principle of reinforcement learning to distinguish between reward and value, where the reward of a state is the immediate, intrinsic desirability of the state, whereas the value of the state is proportional to the rewards of the other states that you can reach from that state.

For example, suppose that I’m playing a competitive game of chess, and in addition to winning I happen to like capturing my opponent’s pieces, even when it doesn’t contribute to winning. I assign a reward of 10 points to winning, -10 to losing, 0 to a stalemate, and 1 point to each piece that I capture in the game. Now my opponent offers me a chance to capture one of his pawns, an action that would give me one point worth of reward. But when I look at the situation more closely, I see that it’s a trap: if I did capture the piece, I would be forced into a set of moves that would inevitably result in my defeat. So the value, or long-term reward, of that state is actually something close to -9.

Once I realize this, I also realize that making that move is almost exactly equivalent to agreeing to resign in exchange for my opponent letting me capture one of his pieces. My defeat won’t be instant, but by making that move, I would nonetheless be choosing to lose.
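The reward/value distinction above can be sketched in a few lines of code. This is a minimal illustration with made-up states and numbers loosely mirroring the chess trap in the text; the discount factor gamma is my own assumption, not something from reinforcement learning applied to actual chess:

```python
# A tiny deterministic "game tree" for the reward vs. value distinction.
# All states and numbers are invented for illustration.

REWARDS = {
    "capture_pawn": 1,     # immediate reward: capturing a piece
    "forced_defeat": -10,  # the losing line that the capture forces
    "decline": 0,          # declining the bait
}

# Deterministic transitions: capturing the pawn forces the losing line.
TRANSITIONS = {
    "capture_pawn": ["forced_defeat"],
    "forced_defeat": [],
    "decline": [],
}

def value(state, gamma=0.9):
    """Value = immediate reward + discounted value of the successor states."""
    v = REWARDS[state]
    for nxt in TRANSITIONS[state]:
        v += gamma * value(nxt, gamma)
    return v

print(value("capture_pawn"))  # 1 + 0.9 * (-10) = -8.0
print(value("decline"))       # 0
```

With these numbers, the capture has a positive immediate reward but a strongly negative value, which is exactly the sense in which taking the pawn is the same thing as choosing to lose.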

Now consider a dilemma that I might be faced with when coming home late some evening. I have no food at home, but I’m feeling exhausted and don’t want to bother with going to the store, and I’ve already eaten today anyway. But I also know that if I wake up with no food in the house, then I will quickly end up with low energy, which makes it harder to go to the store, which means my energy levels will drop further, and so on until I’ll finally get something to eat much later, after wasting a long time in an uncomfortable state.

Typically, temporal discounting means that I’m aware of this in the evening, but nonetheless skip the visit to the store. The penalty from not going feels remote, whereas the discomfort of going feels close, and that ends up dominating my decision-making. Besides, I can always hope that the next morning will be an exception, and I’ll actually get myself to go to the store right from the moment when I wake up!
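That dynamic can be sketched numerically with a standard hyperbolic discounting model, value = utility / (1 + k * delay). The utilities and the discount parameter k here are invented purely to illustrate the flip in preference, not measurements of anything:

```python
# Hyperbolic discounting sketch: a remote penalty shrinks much more than a
# near-term cost, flipping the decision. All numbers are assumptions.

def discounted(utility, delay_hours, k=0.5):
    """Hyperbolic discounting: perceived value = utility / (1 + k * delay)."""
    return utility / (1 + k * delay_hours)

# Option A: go to the store now (immediate discomfort, no morning hunger).
go_now = discounted(-3, 0)

# Option B: skip the store (small comfort now, hunger penalty ~10 hours away).
skip = discounted(+1, 0) + discounted(-8, 10)

print(f"go to store now: {go_now:.2f}")  # -3.00
print(f"skip the store:  {skip:.2f}")    # 1 + (-8/6) = -0.33
```

Undiscounted, skipping is clearly worse (+1 - 8 = -7 versus -3), but after discounting the remote penalty, skipping comes out ahead, which is the pattern of deciding in the evening and regretting it in the morning.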

And I haven’t tried this out for very long, but it feels like explicitly framing the different actions in terms of reward and value could be useful in reducing the impact of that experienced distance. I skip the visit to the store because being hungry in the morning is something that seems remote. But if I think that skipping the visit is exactly the same thing as choosing to be hungry in the morning, and that the value of skipping the visit is not the momentary relief of being home earlier but rather the inevitable consequence of the causal chain that it sets in motion – culminating in hours of hunger and low energy – then that feels a lot different.

And of course, I can propagate the consequences further back in time as well: if I think that I simply won't have the energy to get food when I finally come home, then I should realize that I need to go buy the food before setting out on that trip. Otherwise I'll again set in motion a causal chain whose end result is being hungry. So then not going shopping before I leave becomes exactly the same thing as being hungry next morning.

More examples of the same:

  • Slightly earlier I considered taking a shower, and realized that if I'd take a shower in my current state of mind I'd inevitably make it into a bath as well. So I wasn't really just considering whether to take a shower, but whether to take a shower *and* a bath. That said, I wasn't in a hurry anywhere and there didn't seem to be a big harm in also taking the bath, so I decided to go ahead with it.
  • While in the shower/bath, I started thinking about this post, and decided that I wanted to get it written. But I also wanted to enjoy my hot bath for a while longer. Considering it, I realized that staying in the bath for too long might cause me to lose my motivation for writing this, so there was a chance that staying in the bath would become the same thing as choosing not to get this written. I decided that the risk wasn't worth it, and got up.
  • If I'm going somewhere and I choose a route that causes me to walk past a fast-food place selling something that I know I shouldn't eat, and I know that the sight of that fast-food place is very likely to tempt me to eat there anyway, then choosing that particular route is the same thing as choosing to go eat something that I know I shouldn't.

Related post: Applied cognitive science: learning from a faux pas.

[link] Why Self-Control Seems (but may not be) Limited

34 Kaj_Sotala 20 January 2014 04:55PM

In another attack on the resource-based model of willpower, Michael Inzlicht, Brandon J. Schmeichel, and C. Neil Macrae have a paper titled "Why Self-Control Seems (but may not be) Limited" in press at Trends in Cognitive Sciences. Ungated version here.

Some of the most interesting points:

  • Over 100 studies appear to be consistent with self-control being a limited resource, but these studies generally do not observe resource depletion directly; instead they infer it from whether or not people's performance declines in a second self-control task.
  • The only attempts to directly measure the loss or gain of a resource have been studies measuring blood glucose, but these studies have serious limitations, the most important being an inability to replicate evidence of mental effort actually affecting the level of glucose in the blood.
  • Self-control also seems to be replenished by things such as "watching a favorite television program, affirming some core value, or even praying", which would seem to conflict with the hypothesis of inherent resource limitations. The resource-based model also seems evolutionarily implausible.

The authors offer their own theory of self-control. One-sentence summary (my formulation, not from the paper): "Our brains don't want to only work, because by doing some play on the side, we may come to discover things that will allow us to do even more valuable work."

  • Ultimately, self-control limitations are proposed to be an exploration-exploitation tradeoff, "regulating the extent to which the control system favors task engagement (exploitation) versus task disengagement and sampling of other opportunities (exploration)".
  • Research suggests that cognitive effort is inherently aversive, and that after humans have worked on some task for a while, "ever more resources are needed to counteract the aversiveness of work, or else people will gravitate toward inherently rewarding leisure instead". According to the model proposed by the authors, this allows the organism both to focus on activities that will provide it with rewards (exploitation), and to disengage from them and seek activities which may be even more rewarding (exploration). Feelings such as boredom function to stop the organism from getting too fixated on individual tasks, and allow us to spend some time on tasks which might turn out to be even more valuable.
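The tradeoff described above can be illustrated with a toy model (mine, not the authors'): treat "keep working" versus "switch to leisure" as a softmax choice between two subjective values, where the value of work is eroded by an accumulating effort cost. All the numbers and function names here are hypothetical, chosen only to show the qualitative shift.

```python
import math

def choice_prob_work(work_reward, leisure_reward, effort_cost, temperature=1.0):
    """Softmax probability of choosing the 'have-to' task over the 'want-to' task.

    As effort_cost accumulates, the effective value of continued work drops
    and the choice probability tips toward leisure (exploration).
    """
    effective_work = work_reward - effort_cost
    e_work = math.exp(effective_work / temperature)
    e_leisure = math.exp(leisure_reward / temperature)
    return e_work / (e_work + e_leisure)

# Hypothetical linear cost of sustained effort: aversiveness grows with time on task.
for minutes in range(0, 50, 10):
    effort_cost = 0.05 * minutes
    p = choice_prob_work(work_reward=2.0, leisure_reward=1.0, effort_cost=effort_cost)
    print(f"{minutes:2d} min: P(keep working) = {p:.2f}")
```

With these illustrative numbers, the agent starts out clearly preferring the work task, is indifferent after twenty minutes, and thereafter favors leisure, without any resource ever being "used up" — the shift is purely motivational, matching the paper's claim that depletion effects reflect reluctance rather than incapability.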

The explanation of the actual proposed psychological mechanism is good enough that it deserves to be quoted in full:

Based on the tradeoffs identified above, we propose that initial acts of control lead to shifts in motivation away from “have-to” or “ought-to” goals and toward “want-to” goals (see Figure 2). “Have-to” tasks are carried out through a sense of duty or contractual obligation, while “want-to” tasks are carried out because they are personally enjoyable and meaningful [41]; as such, “want-to” tasks feel easy to perform and to maintain in focal attention [41]. The distinction between “have-to” and “want-to,” however, is not always clear cut, with some “want-to” goals (e.g., wanting to lose weight) being more introjected and feeling more like “have-to” goals because they are adopted out of a sense of duty, societal conformity, or guilt instead of anticipated pleasure [53].

According to decades of research on self-determination theory [54], the quality of motivation that people apply to a situation ranges from extrinsic motivation, whereby behavior is performed because of external demand or reward, to intrinsic motivation, whereby behavior is performed because it is inherently enjoyable and rewarding. Thus, when we suggest that depletion leads to a shift from “have-to” to “want-to” goals, we are suggesting that prior acts of cognitive effort lead people to prefer activities that they deem enjoyable or gratifying over activities that they feel they ought to do because it corresponds to some external pressure or introjected goal. For example, after initial cognitive exertion, restrained eaters prefer to indulge their sweet tooth rather than adhere to their strict views of what is appropriate to eat [55]. Crucially, this shift from “have-to” to “want-to” can be offset when people become (internally or externally) motivated to perform a “have-to” task [49]. Thus, it is not that people cannot control themselves on some externally mandated task (e.g., name colors, do not read words); it is that they do not feel like controlling themselves, preferring to indulge instead in more inherently enjoyable and easier pursuits (e.g., read words). Like fatigue, the effect is driven by reluctance and not incapability [41] (see Box 2).

Research is consistent with this motivational viewpoint. Although working hard at Time 1 tends to lead to less control on “have-to” tasks at Time 2, this effect is attenuated when participants are motivated to perform the Time 2 task [32], personally invested in the Time 2 task [56], or when they enjoy the Time 1 task [57]. Similarly, although performance tends to falter after continuously performing a task for a long period, it returns to baseline when participants are rewarded for their efforts [58]; and remains stable for participants who have some control over and are thus engaged with the task [59]. Motivation, in short, moderates depletion [60]. We suggest that changes in task motivation also mediate depletion [61].

Depletion, however, is not simply less motivation overall. Rather, it is produced by lower motivation to engage in “have-to” tasks, yet higher motivation to engage in “want-to” tasks. Depletion stokes desire [62]. Thus, working hard at Time 1 increases approach motivation, as indexed by self-reported states, impulsive responding, and sensitivity to inherently-rewarding, appetitive stimuli [63]. This shift in motivational priorities from “have-to” to “want-to” means that depletion can increase the reward value of inherently-rewarding stimuli. For example, when depleted dieters see food cues, they show more activity in the orbitofrontal cortex, a brain area associated with coding reward value, compared to non-depleted dieters [64].

See also: Kurzban et al. on opportunity cost models of mental fatigue and resource-based models of willpower; Deregulating Distraction, Moving Towards the Goal, and Level Hopping.
