Followup to: Fighting Akrasia: Incentivising Action
Influenced by: Generalizing From One Example
Previously I looked at how we might fight akrasia by creating incentives for actions. Based on the comments to the previous article and Yvain's now classic post Generalizing From One Example, I want to take a deeper look at the source of akrasia and the techniques used to fight it.
I feel foolish for not looking at this more closely first, but let's begin by asking what akrasia is and what causes it. As commonly used, akrasia is the weakness of will we feel when we desire to do something but find ourselves doing something else. So why do we experience akrasia? Or, more to the point, why do we feel a desire to take actions contrary to the actions we desire most, as indicated by what we actually do? Or, if it helps, flip that question and ask why the actions we take are not always the ones we feel the greatest desire for.
First, we don't know the fine details of how the human brain makes decisions. We know what it feels like to come to a decision about an action (or anything else), but how the algorithm feels from the inside is not a reliable way to figure out how the decision was actually made. Still, the fact that most people can relate to a feeling of akrasia suggests that there is some disconnect between how the brain decides what actions are most desirable and what actions we believe are most desirable. The hypothesis I consider most likely is that the ability to form beliefs about desirable actions evolved well after the ability to decide which actions are most desirable, and the decision-making part of the brain only bothers to consult the belief-about-desirability-of-actions part when there is a reason to do so from evolution's point of view.1 As a result we end up with a brain that does what we think we really want only when evolutionarily prudent, and we experience akrasia whenever our brain doesn't consider it appropriate to consult what we experience as desirable.
This suggests two main ways of overcoming akrasia, assuming my hypothesis (or something close to it) is correct: make the actions we believe to be desirable also desirable to the decision-making part of the brain, or make the decision-making part of the brain consult the belief-about-desirability-of-actions part when we want it to. Most techniques fall into the former category, since it is by far the easier strategy. But however a technique works, an overriding theme of the akrasia-related articles and comments on Less Wrong is that no technique yet found seems to work for all people.
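The two-part hypothesis above can be sketched as a toy model. Everything here is invented for illustration: the module names, the example actions, and the numeric scores are hypothetical, not a claim about actual brain mechanisms.

```python
# Hypothetical two-module model of the akrasia hypothesis above.
# All scores and the trigger condition are invented for illustration.

def evolved_desirability(action):
    # What the older decision-making machinery scores highly.
    return {"browse_web": 5, "write_report": 1}[action]

def believed_desirability(action):
    # What the newer belief-forming machinery endorses.
    return {"browse_web": 0, "write_report": 5}[action]

def decide(actions, consult_beliefs):
    # The decision module consults the belief module only when triggered
    # (in the hypothesis, only when evolutionarily prudent).
    score = believed_desirability if consult_beliefs else evolved_desirability
    return max(actions, key=score)

actions = ["browse_web", "write_report"]
assert decide(actions, consult_beliefs=False) == "browse_web"   # akrasia
assert decide(actions, consult_beliefs=True) == "write_report"  # no akrasia
```

On this picture, the two strategies in the paragraph above correspond to changing `evolved_desirability` so it agrees with `believed_desirability`, or to forcing `consult_beliefs` on when we want it on.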
For convenience, here is a list of some of the techniques discussed here and elsewhere in the productivity literature for fighting akrasia that work for some people but not for everyone.
- Schedule work times
- Set deadlines
- Make to-do lists
- Create financial consequences for failure
- Create social consequences for failure
- Create physical consequences for failure
- Create existential consequences for failure
- Create additional rewards for success
- Set incremental goals
- Create special environments for working only towards a particular goal
And there are many more tricks and workarounds people have discovered that work for them and some segment of the population. But so far no one has found a Unifying Theory of Akrasia Fighting; otherwise they would have other-optimized us all and be rich. So all we have is a collection of techniques that sometimes work for some people, but because most promoters of these techniques are busy trying to other-optimize because they generalized from one example, we don't even have a good way to tell whether a technique will work for any particular individual short of having them try it.
I don't expect us to find a universal solution to fighting akrasia any time soon; it may require the medical technology to "rewire" or "reprogram" the brain (pick your inapt metaphor). But what we can do is make things a little easier for those looking for something that will actually work. In that vein, I've created a survey for the Less Wrong community that will hopefully let us collect enough data to predict which types of akrasia-fighting techniques will work best for which people. It asks a number of questions about your behaviors and thoughts and then focuses on what techniques for fighting akrasia you've tried and how well they worked for you. My hope is that I can put all of this data together to make some predictions about how likely a particular technique is to work for you, assuming I've asked the right questions.
Please feel free to share this survey (and post) with anyone who you think might be interested, even if they would otherwise not be interested in Less Wrong. The more responses we can get the more useful the data will be. Thanks!
Footnotes:
1 That is to say, there were statistically regular occasions in the environment of evolutionary adaptedness on which consulting the belief-about-desirability-of-actions part of the brain when making decisions led those of our ancestors who did so to reproduce at a higher rate.
This observation doesn't seem to undermine the "wrong about what we want" view.
Suppose that your decisions are (imperfectly) optimized for A but you believe that you want B, and hence consciously optimize for B.
When considering a complex procedure which would get you a bunch of A next week, you reason "I want B, so why would I do something that gets me a bunch of A?" and don't do it. You would only pursue such a complex procedure if you believed that you wanted A.
By contrast, given a simple way to get A you could do it without believing that you want to do it. So you do (after all, your decisions are optimized for A), but then believe that you have done something other than what you wanted to do.
Under these conditions it would be possible to get more of both A and B, by pursuing the efficient-but-delayed path to getting A and not pursuing the inefficient-but-immediate path. But in order to do that you would have to believe that you ought to.
That is to say, the question need not be "how to align actual preferences and believed preferences," it could be "how do we organize a mutually beneficial trade?"
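The trade described above can be made concrete with a toy numeric sketch. The numbers are invented for illustration: the impulsive A-path is assumed to cost more effort than the efficient-but-delayed one, and leftover effort is assumed to go toward consciously pursuing B.

```python
# Toy model of the "mutually beneficial trade": invented numbers only.
BUDGET = 3  # total effort available

paths = {
    "simple_immediate": {"A": 3, "effort": 2},   # taken without belief buy-in
    "complex_delayed":  {"A": 10, "effort": 1},  # requires believing you want A
}
B_PER_EFFORT = 5  # leftover effort is spent consciously pursuing B

def outcome(path_name):
    # Returns (units of A gained, units of B gained) for a chosen A-path.
    path = paths[path_name]
    leftover = BUDGET - path["effort"]
    return path["A"], leftover * B_PER_EFFORT

status_quo = outcome("simple_immediate")  # impulsive A, rest spent on B
trade = outcome("complex_delayed")        # endorsed A-path, rest spent on B

# The trade yields strictly more of both A and B.
assert trade[0] > status_quo[0] and trade[1] > status_quo[1]
```

Under these (made-up) numbers the status quo yields (3, 5) and the trade yields (10, 10), so endorsing the efficient A-path leaves more for both optimizers, which is the sense in which the trade is mutually beneficial.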
Of course there are other problems; for example, we aren't very well optimized for A, and in particular aren't great at looking far into the future. This seems very important, but I think rationalists tend to significantly underestimate how well optimized we are for A (in part because we take our beliefs about what we want at face value and observe that we are very poorly optimized for getting that).