This reminds me of a webcomic where the author justifies his lack of self-improvement and his continual sucking at life:
"Pfft. I'll let Future Scott deal with it. That guy's a dick!"
http://kol.coldfront.net/comic/ (No permalink; it's comic 192, if a new one has been posted since I wrote this.)
When dealing with your future self there's an economic balancing act at play, because Future Self's values will inevitably shift. On the extreme side, if Omega had told Aurini'1989 that if he saves his $10 for ten years, it will grow to the point where he can buy every Ninja Turtle action figure out there, Aurini'1989 would have said, "Yes, but Aurini'1999 won't want Ninja Turtles anymore - however, he will likely value the memory of having played with Ninja Turtles." To hold the Future Self completely hostage to the desires of the present makes as little sense as holding the Present Self hostage to the desires of the future.
It breaks down to a tactical problem (which units do you build first in Civ 4?); I'm glad I spent money on that beer five years ago, because I still find value in the memory. What makes the problem difficult to solve is our fuzzy perceptions. First...
Am I allowed to play my own devil's advocate? Autodevil's advocate, if you will (writing down my ideas often helps me criticize them).
Aurini¹'s premise: short-term examples of akrasia are due primarily to variability of self. Self¹ and Self² are both pursuing their own interests in a rational manner; it's just that their interests are dissonant.
I still think this is largely the case; most instances of regret are either "Knowing what I know now, I wish I hadn't put all my money in Enron," i.e. "I based my choices on incorrect data"; or "I wish I hadn't done that last night, but if you press me, I'll admit that I plan to do it again tonight." The second may be foolish, it may be hypocritical, but it's not akrasia per se, because the regret is temporary, not existential.
There is a third type, however, which is distinctly counter-rational. We'll need an example: getting drunk the night before, and failing to show up to traffic court (thus defaulting on an $X00.00 fine which you could have avoided). All Self(x) where x > n agree that this was a poor choice. While there are substantial differences of Self over time, and this does not...
Akrasia is the tendency to act against your own long-term interests
No, akrasia is acting against your better judgment. This comes apart from imprudence in both directions: (i) someone may be non-akratically imprudent, if they whole-heartedly endorse being biased towards the near; (ii) we may be akratic by failing to act according to other norms (besides prudence) that we reflectively endorse, e.g. morality.
I just have one question; it's so obvious, but I don't remember it being asked anywhere.
Humans and all animals tested use hyperbolic discounting + hacks on top of it to deal with paradoxes. Why hasn't evolution implemented exponential discounting in any animal? Is it technically impossible given the way the brain works (perhaps a local optimum), or is hyperbolic discounting + hacks actually better in the real world than exponential discounting?
I think this is a far more fundamental problem than anything else about akrasia.
I used to have a system of implicit moral contract with myself.
I saw my own situation as an iterated prisoner's dilemma; any of my future selves could defect against the other selves, negating the hopes and dreams of past selves and depriving further selves of certain prospects, all for a short-term benefit with negative long-term consequences. The first to defect would win something in the moment; the others lose their investment or their potential. So I tried to keep to my word and plan ahead.
Not sure if I'm still strong-willed enough to affirm I'm working like that. Actually, probably not in most cases.
I found this article both interesting and informative. I definitely plan to spend some time studying picoeconomics.
One interesting effect that I have found in personal productivity efforts is that applying techniques to enforce resolution and overcome passive resistance can change the perceived emotional weighting between alternatives, often quite rapidly.
For example, let's say I'm reading LW instead of writing a term paper. I've made a (probably irrational) decision that the negative emotion of exerting the effort to write the term paper exceeds the n...
You've actually missed a key distinction here: the negative emotion of the incomplete assignment is almost certainly what makes you procrastinate... and you're mistakenly interpreting that negative emotion as being about the writing.
What happens is this: since you feel the unfinished-item pressure every time you think about doing the task, you literally condition yourself to feel bad about doing it. It becomes a cached thought (actually a cached somatic marker), tagging the task with the same unpleasantness as having it "hanging over you".
So, it's not that the process of writing really bothers you, it's the unfinishedness of the task that's bothering you. However, your logical brain assumes that it means you don't want to write (because it doesn't have any built-in grasp of how emotional conditioning works), and so it looks for logical explanations why the writing would be hard.
When you're busy writing, however, you're not thinking about that unfinishedness, so it doesn't come up -- the somatic marker isn't being triggered. That's not at all the same thing as "shifting the balance".
The actual way to fix this is to make it so you don't fee...
Does hyperbolic discounting mean that the sunk-cost fallacy can be adaptive in certain situations, by "locking in" previous decisions?
Does evolutionary psychology provide an explanation for hyperbolic discounting? I found one explanation at http://www.daviddfriedman.com/Academic/econ_and_evol_psych/economics_and_evol_psych.html#fnB27 but it doesn't seem to apply to the example of preference reversal between sleeping early and staying up.
The problem I have with considering future discounting is that it forces me to formulate a consistent personal identity across time scales longer than a few moments. I've never successfully managed that.
To my horror, my future self has different interests to my present self
Can you describe 'my future self' without any sort of pronoun? If you could do that, the horror might, y'know, go away a bit. Thou art physics, after all.
as surely as if I knew the day a murder pill would be forced upon me.
Not quite as surely, otherwise you'd be taking steps to s...
I wonder if hyperbolic discounting uses the visual processing system? It certainly works like perspective foreshortening.
Excellent article and topic. I suffer from this. My main problem (which is merely an excuse) is that there is a difference between what I think I want to do and what my body and mind actually want to do when they're doing the things I tell them to do. Multiple selves become evident when this happens: the self that has planned the actions, and the self that - in doing those actions - gives up and does other (more fun) things. My approach is to make successive changes to the self who does the actions that 'I' plan, by trying to implement rules for him to follow. But I find it a constant uphill struggle, because he always outsmarts me.
His extraordinary proposal takes insights given us by economics into how conflict is resolved and extends them to conflicts of different agencies within a single person, an approach he terms "picoeconomics".
I haven't read Breakdown of Will, but Thomas Schelling makes a similar proposal in his articles on "egonomics".
On the broader question of how to respond to irrationality, I strongly recommend Jon Elster's chapter (including the references) in Explaining Social Behavior.
I haven't worked out the mathematical details, but qualitatively it seems to me that we discount the future more than expected because we don't know what our desires will be like in the future (but nevertheless want to maximize the happiness of our future selves, whatever that may consist in). This means whatever action I take now to benefit my future self has an extra decrement in (present) utility because of my uncertainty in how much the future self will be benefited. And then there's the higher-order effect that the future self may have turned into ...
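To make that concrete, here is one crude way the extra decrement could be modelled; the daily "preference drift" probability and the base discount rate below are invented numbers for illustration, not anything from the post or the literature:

```python
from math import exp

# Sketch: uncertainty about future desires acting as an extra discount factor.
# p_drift is a hypothetical daily chance that my future self no longer values
# the reward; it multiplies whatever base time-discount I already apply.
def expected_benefit(value, days, r=0.01, p_drift=0.01):
    base_discount = exp(-r * days)          # ordinary exponential discounting
    still_valued = (1 - p_drift) ** days    # chance the future self still cares
    return value * base_discount * still_valued

print(expected_benefit(100, 30))    # ~54.8: a month out
print(expected_benefit(100, 365))   # ~0.07: a year out, mostly eaten by drift
```

The drift term compounds, so even a small daily chance of changed tastes adds a lot of extra discounting at long horizons.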
If we assume that (a) future discounting is potentially rational, and that (b) to be rational, the relative weightings we give to March 30 and March 31 should be the same whether it's March 29 or Jan 1, does it follow that rational future discounting would involve exponential decay? Like, a half-life?
For example, assuming the half-life is a month: a day a month from now has half the weighting of today, a day two months from now has half the weighting of that, and so on?
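Sanity-checking this numerically (half-life taken as 30 days; the hyperbolic curve 1/(1 + t), with a made-up unit constant, is included only for contrast), the relative weighting does stay put:

```python
# Half-life discounting: w(t) = 0.5 ** (t / HALF_LIFE).
# The relative weighting of two dates a fixed distance apart is the same
# no matter when you evaluate them -- which is condition (b) above.
HALF_LIFE = 30.0  # days ("a month", as in the example)

def w(days_away):
    return 0.5 ** (days_away / HALF_LIFE)

# Weight of "March 31" relative to "March 30", judged from near and from far:
print(w(2) / w(1))      # ~0.977, judged one day before
print(w(89) / w(88))    # ~0.977, judged about three months before

# The hyperbolic weighting 1 / (1 + t) has no such invariance:
print((1 / 3) / (1 / 2))      # ~0.667 when the pair of days is close
print((1 / 90) / (1 / 89))    # ~0.989 when the pair of days is far away
```

So a half-life (or any other exponential decay rate) satisfies (b); as far as I know, exponential decay is the only functional form that does.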
I was reminded of this post by a blog article I've just read: http://youarenotsosmart.com/2010/10/27/procrastination/ - it covers the same topic, but I think it presents it in an easier-to-grasp way for folks who aren't actively trying to be more rational.
Excellent food for thought. I especially loved the point relating the distance of the reward to the actual rewarding process itself, and yes, defeating akrasia is the one thing that is (probably) most relevant to LessWrong readers. This is because most LW-ers are by nature (probably) smarter than their immediate surroundings, and it is not understanding of situations that is holding them back. It then (yes, probably!) boils down to either interpersonal skills and/or akrasia. And the two are not completely mutually exclusive.
This is, at least at first glance, an important step in the right direction.
Ainslie has written quite a few interesting papers in the meantime: http://picoeconomics.org/articles2.html
I generally object to use of the term "rational" as a moral pejorative in the way it's used in the article. We are all dealing with imperfect information, quantum uncertainty and human identity issues. We may suck at it, but the advice to just be more "rational" is insulting to people who are trying their best in an imperfect world. If you think discounting or valuing the future is so easy that you can bandy about words like "rational", then I can show you how to make a killing in the mortgage-backed securities market.
Akrasia is the tendency to act against your own long-term interests, and is a problem doubtless only too familiar to us all. In his book "Breakdown of Will", psychologist George C Ainslie sets out a theory of how akrasia arises and why we do the things we do to fight it. His extraordinary proposal takes insights given us by economics into how conflict is resolved and extends them to conflicts of different agencies within a single person, an approach he terms "picoeconomics". The foundation is a curious discovery from experiments on animals and people: the phenomenon of hyperbolic discounting.
We all instinctively assign a lower weight to a reward further in the future than one close at hand; this is "discounting the future". We don't just account for a slightly lower probability of receiving a more distant reward; we value it inherently less for being further away. It's been an active debate on overcomingbias.com whether such discounting can be rational at all. However, even if we allow that discounting can be rational, the way that we and other animals do it has a structure which is inherently irrational: the weighting we give to a future event is, roughly, inversely proportional to how far away it is. This is hyperbolic discounting, and it is an empirically very well confirmed result.
I say "inherently irrational" because it is inconsistent over time: the relative cost of a day's wait is considered differently whether that day's wait is near or far. Looking at a day a month from now, I'd sooner feel awake and alive in the morning than stay up all night reading comments on lesswrong.com. But when that evening comes, it's likely my preferences will reverse; the distance to the morning will be relatively greater, and so my happiness then will be discounted more strongly compared to my present enjoyment, and another groggy morning will await me. To my horror, my future self has different interests to my present self, as surely as if I knew the day a murder pill would be forced upon me.
If I knew that a murder pill really would be forced upon me on a certain date, after which I would want nothing more than to kill as many people as possible as gruesomely as possible, I could not sit idly by waiting for that day to come; I would want to do something now to prevent future carnage, because it is not what the me of today desires. I might attempt to frame myself for a crime, hoping that in prison my ability to go on a killing spree would be contained. And this is exactly the behaviour we see in people fighting akrasia: consider the alcoholic who moves to a town in which alcohol is not sold, anticipating a change in desires and deliberately constraining their own future self. Ainslie describes this as "a relationship of limited warfare among successive selves".
And it is this warfare which Ainslie analyses with the tools of behavioural economics. His analysis accounts for the importance of making resolutions in defeating akrasia, and the reasons why a resolution is easier to keep when it represents a "bright clear line" that we cannot fool ourselves into thinking we haven't crossed when we have. It also discusses the dangers of willpower, and the ways in which our intertemporal bargaining can leave us acting against both our short-term and our long-term interests.
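As an illustration of why resolutions have this power (this is my gloss on the bundling idea in Ainslie's analysis, with made-up numbers, not a quotation of the book): a resolution treats tonight's choice as a precedent for a whole series of future choices, and the summed hyperbolically discounted values of that bundle can favour the long-term option even when tonight's choice, taken alone, would not.

```python
# Bundling sketch: each of the next N evenings offers the same choice as in
# the sketch above. Judged alone, tonight's immediate reward wins; judged as
# a bundle ("tonight's choice decides all of them"), the larger-later rewards
# win. All numbers are illustrative only.
def weight(delay_days, k=3.0):
    return 1.0 / (1.0 + k * delay_days)

STAY_UP, FRESH_MORNING = 60, 100
N = 10  # evenings covered by the resolution

single_stay = STAY_UP * weight(0)
single_fresh = FRESH_MORNING * weight(1 / 3)
print(f"tonight alone: stay up = {single_stay:.1f}, fresh = {single_fresh:.1f}")

bundle_stay = sum(STAY_UP * weight(i) for i in range(N))
bundle_fresh = sum(FRESH_MORNING * weight(i + 1 / 3) for i in range(N))
print(f"{N} nights bundled: stay up = {bundle_stay:.1f}, fresh = {bundle_fresh:.1f}")
```

With these numbers, tonight alone favours staying up (60 vs 50), but the ten-night bundle favours the early nights (roughly 108 vs 121). On this reading, a "bright clear line" matters because a single clear violation breaks the precedent, and the choice collapses back to one evening at a time.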
I can't really do more than scratch the surface on how this analysis works in this short article; you can read more about the analysis and the book on Ainslie's website, picoeconomics.org. I have the impression that defeating akrasia is the number one priority for many lesswrong.com readers, and this work is the first I've read that really sets out a mechanism that underlies the strange battles that go on between our shorter and longer term interests.