Followup to: Fighting Akrasia: Incentivising Action

Influenced by: Generalizing From One Example

Previously I looked at how we might fight akrasia by creating incentives for actions.  Based on the comments to the previous article and Yvain's now classic post Generalizing From One Example, I want to take a deeper look at the source of akrasia and the techniques used to fight it.

I feel foolish for not looking at this more closely first, but let's begin by asking what akrasia is and what causes it.  As commonly used, akrasia is the weakness of will we feel when we desire to do something but find ourselves doing something else.  So why do we experience akrasia?  Or, more to the point, why do we feel a desire to take actions contrary to the actions we desire most, as indicated by what we actually do?  Or, if it helps, flip the question and ask why the actions we take are not always the ones we feel the greatest desire for.

First, we don't know the fine details of how the human brain makes decisions.  We know what it feels like to come to a decision about an action (or anything else), but how the algorithm feels from the inside is not a reliable way to figure out how the decision was actually made.  Still, the fact that most people can relate to a feeling of akrasia suggests that there is some disconnect between how the brain decides what actions are most desirable and what actions we believe are most desirable.  The hypothesis I consider most likely is that the ability to form beliefs about desirable actions evolved well after the ability to make decisions about what actions are most desirable, and that the decision-making part of the brain only bothers to consult the belief-about-desirability-of-actions part of the brain when there is a reason to do so from evolution's point of view.1  As a result we end up with a brain that only does what we think we really want when evolutionarily prudent, so we experience akrasia whenever our brain doesn't consider it appropriate to consult what we experience as desirable.

This suggests two main ways of overcoming akrasia, assuming my hypothesis (or something close to it) is correct: make the actions we believe to be desirable also desirable to the decision-making part of the brain, or make the decision-making part of the brain consult the belief-about-desirability-of-actions part of the brain when we want it to.  Most techniques fall into the former category, since it is by far the easier strategy.  But however a technique works, an overriding theme of the akrasia-related articles and comments on Less Wrong is that no technique yet found seems to work for all people.

For convenience, here is a list of some of the techniques discussed here and elsewhere in the productivity literature for fighting akrasia that work for some people but not for everyone.

  • Schedule work times
  • Set deadlines
  • Make to-do lists
  • Create financial consequences for failure
  • Create social consequences for failure
  • Create physical consequences for failure
  • Create existential consequences for failure
  • Create additional rewards for success
  • Set incremental goals
  • Create special environments for working only towards a particular goal

And there are many more tricks and workarounds people have discovered that work for them and some segment of the population.  But so far no one has found a Unifying Theory of Akrasia Fighting; otherwise they would have other-optimized us all and gotten rich.  So all we have is a collection of techniques that sometimes work for some people, but because most promoters of these techniques are busy trying to other-optimize after generalizing from one example, we don't even have a good way to tell whether a technique will work for any particular individual short of having them try it.

I don't expect us to find a universal solution to fighting akrasia any time soon, and it may require the medical technology to "rewire" or "reprogram" the brain (pick your inapt metaphor).  But what we can do is make things a little easier for those looking for what will actually work for them.  In that vein, I've created a survey for the Less Wrong community that will hopefully give us a chance to collect enough data to predict which types of akrasia-fighting techniques will work best for which people.  It asks a number of questions about your behaviors and thoughts and then focuses on what techniques for fighting akrasia you've tried and how well they worked for you.  My hope is that I can put all of this data together to make some predictions about how likely a particular technique is to work for you, assuming I've asked the right questions.
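To give a concrete sense of what I mean by "predictions", here is a minimal sketch, not the actual analysis pipeline, of how the responses could be used to estimate whether a given technique will work for a given person. The column names, the CSV filename, and the choice of a simple logistic regression are all my own illustrative assumptions, not features of the real survey.

```python
# Minimal sketch (hypothetical column names throughout): predict whether the
# "set deadlines" technique works for a respondent from their other answers.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

responses = pd.read_csv("akrasia_survey_responses.csv")   # hypothetical export

# Behavioural/thought questions used as numeric predictors.
features = responses[["hours_of_sleep", "self_rated_impulsivity", "age"]]

# Ratings are coded 0 = "haven't tried", 1-5 = effectiveness, so keep only
# respondents who tried the technique and call a rating of 4 or 5 a success.
tried = responses["deadlines_rating"] > 0
worked = (responses.loc[tried, "deadlines_rating"] >= 4).astype(int)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, features.loc[tried], worked, cv=5)
print("cross-validated accuracy predicting 'deadlines work for me':", scores.mean())
```

With only a few hundred responses, anything fancier than a simple, well-regularized model like this probably isn't warranted; the real value is in which questions turn out to be predictive at all.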

Please feel free to share this survey (and post) with anyone who you think might be interested, even if they would otherwise not be interested in Less Wrong.  The more responses we can get the more useful the data will be.  Thanks!

Take the survey

Footnotes:

1 That is to say, there were statistically regular occasions in the environment of evolutionary adaptedness on which those of our ancestors who consulted the belief-about-desirability-of-actions part of the brain when making decisions reproduced at a higher rate.

48 comments

I have begun wondering whether claiming to be victim of "akrasia" might just be a way of admitting that your real preferences, as revealed in your actions, don't match the preferences you want to signal (believing what you want to signal, even if untrue, makes the signals more effective).

This is an insufficient explanation. I have on many occasions found myself doing superficially enjoyable, instant-gratification, low-effort activities that I actually enjoyed less than some other, delayed-gratification and/or higher-effort activity.

But even aside from that, all that's doing is renaming the problem. "How to fight akrasia" becomes "how to align actual preferences and believed preferences" and you're no closer to a solution.

How to fight akrasia" becomes "how to align actual preferences and believed preferences" and you're no closer to a solution.

On the contrary, you're one step closer, in that you can now begin asking what your actual preferences are, and how to get them all met.

(Note that your "believed" preferences are also preferences; they just don't necessarily have the same practical weight as whatever other preferences are interfering with them. The issue isn't real vs. believed, it's preference A vs. preference B, and resolving any perceived conflicts, by thinking through and discarding the cached thoughts built up around them.)

This observation doesn't seem to undermine the "wrong about what we want" view.

Suppose that your decisions are (imperfectly) optimized for A but you believe that you want B, and hence consciously optimize for B.

When considering a complex procedure which would get you a bunch of A next week, you reason "I want B, so why would I do something that gets me a bunch of A?" and don't do it. You would only pursue such a complex procedure if you believed that you wanted A.

By contrast, given a simple way to get A you could do it without believing that you want to do it. So you do (after all, your decisions are optimized for A), but then believe that you have done something other than what you wanted to do.

Under these conditions it would be possible to get more of both A and B, by pursuing the efficient-but-delayed path to getting A and not pursuing the inefficient-but-immediate path. But in order to do that you would have to believe that you ought to.

That is to say, the question need not be "how to align actual preferences and believed preferences," it could be "how do we organize a mutually beneficial trade?"

Of course there are other problems---for example, we aren't very well optimized for A, and in particular aren't great at looking far into the future. This seems very important, but I think that rationalists tend to significantly underestimate how well optimized we are for A (in part because we take at face value our beliefs about what we want, and observe that we are very poorly optimized for getting that).
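A toy numerical illustration of the trade described above, with made-up payoffs; nothing here is meant to model real preferences, it just shows the arithmetic by which both A and B can come out ahead.

```python
# Toy numbers for the "mutually beneficial trade": you have 10 hours.
# Decisions are (imperfectly) optimized for A; you consciously believe you want B.
HOURS = 10

# Inefficient-but-immediate path to A: 1 unit of A per hour, taken impulsively.
# Efficient-but-delayed path to A: a single 4-hour project yielding 8 units of A.
# Any hour not spent pursuing A produces 1 unit of B.

# Status quo: akrasia grabs 6 hours for the inefficient path; 4 hours go to B.
status_quo = {"A": 6, "B": HOURS - 6}        # A = 6, B = 4

# Trade: consciously schedule the efficient project, spend the rest on B.
trade = {"A": 8, "B": HOURS - 4}             # A = 8, B = 6

print(status_quo, trade)  # the trade yields more of both A and B
```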

That's Bryan Caplan's view. Seems quite plausible to me.


No.


This suggests two main ways of overcoming akrasia, assuming my hypothesis (or something close to it) is correct: make the actions we believe to be desirable also desirable to the decision-making part of the brain, or make the decision-making part of the brain consult the belief-about-desirability-of-actions part of the brain when we want it to.

There is a third way. Why not consider the possibility that the actual decision-making part of the brain holds the more important job because it is more competent than the belief-about-desirability-of-actions part of the brain? In other words, maybe when you are playing video games but you feel like you should be working, then you really should be playing video games and not working. I almost always feel like I should be more productive than I am being, but I have to wonder if the part of my brain that actually regulates my productivity isn't just more aware of my abilities and limits than the part of my brain that desires to get so much more done.

Sometimes it is clearly messed up, though, such as when it's craving nicotine.

Although starting to smoke is a total moron thing to do, there may be less irrationality present in pursuing an existing addiction than commonly believed. Someone who craves nicotine has the choice of attempting to ignore the craving, which has all kinds of bad, immediate consequences (irritability, distraction, twitchiness, insomnia, flu symptoms, sore mouth, sore throat, cough, headache, intestinal protestations, etc.) until you keep it up for a good long time, or giving into it, which has (per cigarette) very small and distant negative consequences (tiny increase in risk of assorted diseases which might kill you or not).

It's possible I'm only defending this because I have the exact same problem with chocolate. (I can resist my chocolate cravings, but only if I'm not doing anything else, and so far I haven't had a few weeks where I can afford to be totally out of commission to detox from delicious chocolatey goodness in the hopes that this would make the cravings go away forever.)

I could probably brainstorm several things that might be able to kill chocolate cravings permanently (for example, animals quickly learn to avoid foods that make them sick), but most of them, if they worked, would probably have the side effect of causing you to no longer be able to enjoy "delicious chocolatey goodness" at all.

If I can go a month without eating any chocolate, I stop craving chocolate. So I think it's addictive, but not permanently/physically.

I've read that self-described chocolate addicts don't get any craving relief from flavorless pills with chocolate on the inside, while "white chocolate" that contains no cocoa does have an effect. So whatever makes chocolate addictive doesn't have all that much to do with what happens after it's swallowed.

Theobromine, an analogue of caffeine not found in white chocolate, is definitely psycho-active, though I think it's unclear if it's addictive. I wouldn't be surprised if you got similar results with other drugs. I've certainly heard anecdotes of people switching to decaf by accident and experiencing the morning coffee as the usual "hit" but then feeling withdrawal later in the day or in later days. It's probably just that directly experienced "cravings" are high-level effects not highly tied to the chemical effects of addiction.

I tend to think of it as placebo addiction.

And yes, I drink decaf sometimes for exactly that reason. Interestingly, being conscious of it doesn't seem to reduce the strength of the "ah, I needed that coffee" feeling by much.

At least ostensibly, white chocolate contains cocoa butter.

White chocolate doesn't always contain cocoa butter, and the FDA, like most chocolate connoisseurs, doesn't consider it chocolate because it doesn't contain chocolate liquor.

This is certainly a possibility as we are not rational agents, nor do we contain (reliable) simulators of anything but the most basic rational agents running on our brains. So it may not always be rational to overcome akrasia, although it certainly is sometimes. And, as others have noted, we are not yet dealing with akrasia-fu that makes irreversible changes to our mental states, so we can always backpedal if we discover the results are not what we later decide we really want.

And, as others have noted, we are not yet dealing with akrasia-fu that makes irreversible changes to our mental states, so we can always backpedal if we discover the results are not what we later decide we really want.

The idea that there even could be such an asymmetry is a confusion: if you manage to sort out your preferences so that you do something different than you're doing now, and that turns out to be a mistake, then it's no different than your current preferences turning out to be a mistake!

It's an illusion, in other words, that akrasia actually exists in the first place: you are simply acting on whatever preferences you happen to have at the moment, combined with whatever models you have for how best to achieve them. To do something different, you'd need to adjust either the preferences or the models.

Your current desires are not -- and cannot be -- acausal; they are and must be determined by the laws of physics. If you "overcome akrasia", then that too must be determined by physical cause and effect.

Neither state appears spontaneously out of the void, nor is akrasia a state of failure to follow the laws of physics. Akrasia is merely a state where the conscious model you have of your preferences fails to match your complete, actual preferences.

As long as you try to single out akrasia as if it were some kind of special case, you'll miss the point entirely. There is nothing in the human anatomy you can point to and say, "aha, there's akrasia", because that is merely a label for your confusion regarding what you (as a complete organism) "want".

This is an interesting line of argument, but I have a strong sense you are missing something here. I don't believe that anyone here is claiming that akrasia is caused by acausal desires or that fighting akrasia is fighting physical reality (at the ontological level), rather that

To do something different, you'd need to adjust either the preferences or the models.

is not so easily achieved. I think your characterization is right when you say

Akrasia is merely a state where the conscious model you have of your preferences fails to match your complete, actual preferences.

and

There is nothing in the human anatomy you can point to and say, "aha, there's akrasia", because that is merely a label for your confusion regarding what you (as a complete organism) "want".

because certainly akrasia isn't being caused by some akrasia organ, but is the result of the interaction between several adaptations that are at odds with each other. Akrasia is a sort of confusion, though, where your brain seems to resolve to act or believe against what you calculate to be the best.

There's an important distinction that you seem to be missing: the brain appears to be designed in such a way that we are able to think and decide on best courses of action but find ourselves not actually implementing them. This isn't some abstract argument, either; just look at the literature on heuristics and biases and you'll see that even when people are able to sit down and reason through a situation to come to a rational action, they still don't always act on it.

There's an important distinction that you seem to be missing: the brain appears to be designed in such a way that we are able to think and decide on best courses of action but find ourselves not actually implementing them.

I'm not missing that distinction; what you're missing is that this is an indication that the conscious "decision" being made is based on incomplete information about one's preferences.

That is, if you experience akrasia, this is an indication that your conscious reasoning is flawed. (Your "unconscious" reasoning may also be flawed or based on cached thoughts, but this does not exempt your conscious reasoning from its failure to take into account your actual preferences.)

Thus, saying we should "fight akrasia" is like saying that we should fight the "low fuel" warning light on our car, when what's actually needed is to put gas in! The warning light is doing you a favor, so calling it bad names isn't helping.

It seems problematic to me that your survey asks people to give the same zero rating for techniques they've never tried (and which may or may not be effective) and techniques they've found to be totally ineffective.

It seems much preferable to me to have no answer given for techniques that haven't been tried.

Actually, the directions to the survey state that a 0 is to be marked for those techniques not tried and a 1 is to be marked as the least effective ranking. To quote: "If you have not tried a technique, rate it as a 0. If you have tried a technique, rate its effectiveness from 1 to 5, with 1 being not very effective and 5 being very effective."

It's a design limitation of Google Docs that I had to format it this way: with radio buttons there's no way to un-choose an answer if you accidentally mark a technique you haven't tried.
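For whoever ends up analysing the exported responses, here is a minimal sketch, with hypothetical column names and assuming the data is loaded with pandas, of keeping the 0 = "haven't tried" coding separate from the 1-5 effectiveness scale.

```python
# Minimal sketch: treat 0 ("haven't tried") as missing rather than as the
# lowest effectiveness score. Column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("akrasia_survey_responses.csv")
rating_cols = ["deadlines_rating", "todo_list_rating", "financial_stakes_rating"]

ratings = df[rating_cols].replace(0, np.nan)

print(ratings.mean())                   # mean effectiveness among people who tried
print(df[rating_cols].eq(0).mean())     # fraction of respondents who never tried each
```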

Oops. I apologize for not reading carefully enough. I was looking at the text on the left-hand side of the 0-5 options ("0 - Haven't tried / 1 - Not very effective") and parsing that as "0 - haven't tried/not very effective".

I guess many other people made the same mistake I did though, since nobody else corrected me, and my comment was voted up and received another comment that presupposed that what I said was true.

Yeah, definitely may not have been clear enough. Given the responses I've gotten so far, though, I think I'll be revising the survey and doing a repost after asking for input from LW on the content.

My guess is that it's a limitation of Google Docs.

The form/survey functionality of Google Docs allows a question to be required or not, so people could just not answer questions related to techniques they've never tried.

Are there any scientific comparative studies on the effectiveness of various existing methods?

For me, a to-do list/GTD-inspired approach works well, and has improved my productivity quite a bit. I have no idea how this works out in the brain, but maybe it's about adding one more force to some weighted average which determines one's actions. But I suspect this is all at the psychological level, far away from neurons and synapses.

Likely, there's also some evolutionary angle to this - but can we distinguish something that merely sounds plausible from something that is true? Do we have some testable prediction?


My pet hypothesis is that akrasia was adaptive for ancestors who did not want to waste too many resources on an idea that might or might not work. Overall, it seems better to continually "refresh" your list of goals, and sometimes a later refresh is not as rationally calibrated, or clashes with earlier or later refresh states.

How do we test it? Gee. You've got me. I'm posting this because I hope a person more creative than me might have an idea.

Indeed... that is a bit of a weak point of evolutionary psychology; it's likely that quite a few behaviours have an evolutionary background, but it's very hard to prove -- and in quite a few cases you could 'prove' opposite conclusions; as Chomsky said:

"You find that people cooperate, you say, ‘Yeah, that contributes to their genes’ perpetuating.’ You find that they fight, you say, ‘Sure, that’s obvious, because it means that their genes perpetuate and not somebody else’s. In fact, just about anything you find, you can make up some story for it."

This is not to say that EP is wrong, just that things are hard to prove. E.g., compared to biological evolution, there are no fossils of ancient behaviours or intermediate forms and so on.


"Fossils of ancient behaviors" is brilliant. I can't believe I never thought about that, great example.

I'm going to drift a little offtrack for a moment, but I don't expect it to go too off topic. I've read in Wired that they're working on altering the genome of chickens by looking at specific DNA markers and trying to revert them to be more like their reptilian ancestor. I can see technology progressing so that we can use computers to look at possible previous genomes of humans and what kind of psychologies the genome could create.... The only problem here is the complex interactions between humans and the environment, but it would be a great leap forward for evo-psych I think.

I suspect that part of the reason that akrasia is a problem is that there is a discrepancy between how System 1 and System 2 judge the task to be performed, with System 2 judging it desirable and System 1 screaming that it's not at all desirable.

Setting existential consequences for failure to achieve desired actions/goals

Does this mean "promising to kill oneself for failure to achieve desired actions/goals"?

It might, but my thinking here was to encompass all those techniques people might try along the lines of "do it or God will send me to hell" or "do it or I am failing my ancestors" or "do it or the universe suffers".

Probably should include that in the survey description... :) I would never have known, and might even have given an actual answer to that one.

PS: I know this is way too late... but it's left here for future survey-makers...

My guess is that akrasia could be more effectively fought if this community called it was it is -- laziness -- instead of using a fancy Greek name.

In what way is "I want not to bother doing it" the same as "I have spent huge amounts of money, effort, emotion, social favors and time trying and failing to do it"? The latter is typical in akrasia. Funny way to be lazy, if you ask me.

Procrastination and laziness may be kinds of akrasia, but simply because they are the kinds most talked about here does not mean that they are an exhaustive description of "weaknesses of will". One example I find easy to bring up is trying to move while we are in pain. There are definite moments where a crisis of will occurs, and if you have a sharp shooting pain in your leg while walking you will either change your movement against your intended direction or overcome that moment and escape the akrasia for a time.

I do, however, suspect that this community would do a better job at fighting akrasia if we did not confound it solely with procrastination and "laziness".

One may keep oneself occupied with unimportant but important-seeming work (checking emails twice an hour, gold-plating etc.) instead of concentrating on implementing one's best judgment. I don't think that laziness is a good word to describe this condition.

Aren't that simple.

Or "procastination".

Saying "akrasia" is signaling.

Good point.

Esp. for the kind of things you can solve with to-do lists and the like, procrastination is a better name. The only reason to use the term akrasia seems to be that a web-search will give better-quality results...

I personally consider the distinction this way: procrastination is avoiding something, whereas akrasia is doing something other than what you intend. The distinction is useful, in that there are some differences in how to approach fixing them.

It sounds like the distinction you're making is between whether you do or don't intend to not do what you think you should be doing. Is that correct?

It sounds like the distinction you're making is between whether you do or don't intend to not do what you think you should be doing. Is that correct?

The intention distinction has to do with why I'm doing what I'm doing instead of what I "should".

If I'm procrastinating on X, then I'll do anything but X (to avoid it), even if it's otherwise of low value or not very pleasant. The intent is "not do X".

However, if I'm experiencing akrasia, then I might do Y or Z, because I want to do them more. The intention is, "I want to do Y or Z, but it can wait".

we end up with a brain that only does what we think we really want when evolutionarily prudent

Do we have a good understanding of when this is the case and why? Is there a unifying theory of when our opinions are at odds with our decisions?

Since you assume this behavior is partly beneficial, or at the least was beneficial in the ancestral environment, I think we ought to fully understand this question before deciding (as this post implicitly does) that we'd be better off without akrasia entirely and should fight it anywhere it rears its head. Clearly akrasia is harmful sometimes, but is it always harmful, and can we effectively separate the cases where it is harmful from those where it is beneficial? (Effectively wrt. whatever techniques we might use against akrasia.)

Especially since if we somehow magically gained real desire-based control over our actions, our inexperience in living that way might lead to a lot of mistakes at first. It's easy enough to think of examples where we shouldn't immediately act on our desires, but assuming we can handle these more obvious cases, do we know what else the neural mechanisms of akrasia are responsible for?

If Omega were to offer you complete effortless conscious control over all your decisions and actions, as an irrevocable change, would you accept without knowing the answer to the above questions? What if it was PJEby offering an amazing new technique that's worked for 50% of those who tried it and didn't affect the other 50% in any way?

Of course most or all anti-akrasia techniques being proposed are revocable. But when people propose any kind of changes to my subconscious decision-making procedures, which is to say my rationality (or lack of it), I get scared about a small chance of ending up worse off than I started. Hence my desire for a deeper understanding before trying to change things. More than likely someone'll point me at a few books that I should read before wildly speculating - I would be grateful for the pointer.

To the best of my knowledge, no, no one has made a systematic study of this, probably largely because so much of it falls into the realm of issues tackled by "productivity experts" and other gurus, so discussing it academically seems to be unpopular because it doesn't feel prestigious enough. But maybe I'm wrong and I just have never, ever seen any of this stuff for some strange reason.

no technique yet found seems to work for all people

No external, rote technique can work for all people. Specifically:

it is possible for such algorithms or methods to exist, but not techniques or recipes with a fixed number of steps for all cases.

And regarding this:

it may require the medical technology to "rewire" or "reprogram" the brain (pick your inapt metaphor).

Do note that this "medical" technology is already available, courtesy of your brain, which already has the ability to "rewire" and "reprogram" itself upon demand. In fact, it is precisely this ability to rewire itself that causes external techniques of the type you describe to fail in the long run. If you bypass your brain's attempts to avoid or obtain something, it will literally begin reprogramming itself to route around your bypass.

The trick is to get this rewiring process to work for you, instead of against you.

What I mean here is that it may require fundamentally changing aspects of how the brain works that are outside the realm of things you can control.