A year and a half ago I wrote a LessWrong post on anti-akrasia that generated some great discussion. Here's an extended version of that post: messymatters.com/akrasia
And here's an abstract:
The key to beating akrasia (i.e., procrastination, addiction, and other self-defeating behavior) is constraining your future self -- removing your ability to make decisions under the influence of immediate consequences. When a decision involves some consequences that are immediate and some that are distant, humans irrationally over-weight the immediate ones (no consistent rate of future discounting can account for it). To be rational, you need to make the decision at a time when all the consequences are distant. And to make your future self actually stick to that decision, you need to enter into a binding commitment. Ironically, you can do that by imposing an immediate penalty, i.e., by making the distant consequences immediate. Now your impulsive future self faces a decision in which all the consequences are immediate, and will presumably make the same choice as your dispassionate current self, who decided when all the consequences were distant. I argue that real-world commitment devices, even the popular stickK.com, don't fully achieve this, and I introduce Beeminder as a tool that does.
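The "no consistent discount rate can account for it" point is the standard preference-reversal argument: a hyperbolic discounter's choice between a smaller-sooner and a larger-later reward flips as the rewards draw near, while an exponential discounter's never does. Here's a minimal sketch; the dollar amounts, delays, and the discount parameters `rate` and `k` are all made up for illustration:

```python
# Hyperbolic vs. exponential discounting: a preference-reversal sketch.
# All numbers here are illustrative, not empirical.

def exponential(value, delay, rate=0.9):
    """Time-consistent discounting: value * rate**delay."""
    return value * rate**delay

def hyperbolic(value, delay, k=1.0):
    """Time-inconsistent discounting: value / (1 + k*delay)."""
    return value / (1 + k * delay)

small_soon = (50, 1)    # $50, 1 day away
large_late = (100, 4)   # $100, 4 days away

# Judged from a distance (add 30 days to both delays), the hyperbolic
# discounter prefers the larger-later reward...
far = (hyperbolic(50, 31), hyperbolic(100, 34))
# ...but up close, the smaller-sooner reward wins: preference reversal.
near = (hyperbolic(50, 1), hyperbolic(100, 4))

print(far[0] < far[1], near[0] > near[1])   # True True

# The exponential discounter never reverses: adding the same delay to
# both options multiplies both discounted values by the same factor,
# so whichever option wins at a distance also wins up close.
```

The self-binding trick in the abstract amounts to forcing the decision to be made (and locked in) while both options are still in the `far` regime, where the ranking matches the exponential, time-consistent one.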
(Also related is this LessWrong post from last month, though I disagree with the second half of it.)
My new claim is that akrasia is simply irrationality in the face of immediate consequences. It's not about willpower, nor is it about a compromise between multiple selves. Your true self is the one deciding what to do when all the consequences are distant. To beat akrasia, make sure that's the self calling the shots.
And although I'm using the multiple selves / sub-agents terminology, I think it's really just a rhetorical device. There are not multiple selves in any real sense. It's just the one true you whose decision-making is sometimes distorted in the presence of immediate consequences, which act like a drug.
This is a great point. But my position is that self-binding accelerates the possible discovery that your dispassionate current self is wrong about what you want. If you believe you want to be a writer but never write, then you never find out whether you in fact hate writing! Eventually you'll concede that your id is telling you something, but the id might actually be wrong. It might just be a problem of activation energy, for example.
So I still side with the long-term self. Decide what you want from a distance, commit yourself for some reasonable amount of time, then reassess. It's the rationalist way: gather data and test hypotheses (in this case about your own preferences). Would you agree that it's hard for the delusion to persist under that scheme?
Upon rethinking it, I decided that my original position missed the mark somewhat, because it's not clear how "rationality" plays into an id-ego-superego model (which could map the three either to short-term desires / decider / long-term desires, or to immoral desires / decider / moral desires; the first mapping seems more useful for this discussion).
It seems to me that rationality is not superego strengthening but ego strengthening, and the best way to do that is to elevate whichever self isn't present at the moment. If your superego wants you to embark on some plan, consult yo...