I've never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences - as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers. The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me - just as much as if you were to discriminate between blacks and whites.
And it gets worse: if your temporal discounting follows any curve other than the exponential, you'll have time-inconsistent goals that force you to wage war against your future selves - preference reversals - cases where your self of 2008 will pay a dollar to ensure that your future self gets option A in 2011 rather than B in 2010; but then your future self in 2009 will pay another dollar to get B in 2010 rather than A in 2011.
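The reversal is easy to exhibit with a toy hyperbolic discounter - a minimal sketch in which the 1/(1+t) curve and the specific reward sizes are my own illustrative choices, not from the post:

```python
# Hyperbolic discounting: weight a reward delayed by t years by 1/(1 + t).
# The reward sizes are hypothetical, chosen only to exhibit the reversal.

def hyperbolic_value(reward, delay):
    return reward / (1 + delay)

REWARD_A = 1.4  # option A, delivered in 2011
REWARD_B = 1.0  # option B, delivered in 2010

# Seen from 2008, A is 3 years away and B is 2 years away: A wins.
prefers_a_in_2008 = hyperbolic_value(REWARD_A, 3) > hyperbolic_value(REWARD_B, 2)

# Seen from 2009, the delays shrink to 2 and 1: the preference flips to B.
prefers_b_in_2009 = hyperbolic_value(REWARD_B, 1) > hyperbolic_value(REWARD_A, 2)

print(prefers_a_in_2008, prefers_b_in_2009)  # True True: a preference reversal
```

An exponential discounter cannot be caught this way: multiplying every option's weight by the same per-year factor as time passes leaves all the comparisons unchanged.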
But a 5%-per-year discount rate, compounded exponentially, implies that it is worth saving a single person from torture today, at the cost of 168 people being tortured a century later, or a googol persons being tortured 4,490 years later.
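These figures can be checked directly; a minimal sketch, assuming the 5% rate is applied as a flat 0.95-per-year weight (which reproduces the post's numbers):

```python
import math

FACTOR = 0.95  # a "5% per year" discount read as a flat 0.95 weight per year

# One person tortured today outweighs N people tortured t years later,
# where N = 0.95^-t.
people_per_century = FACTOR ** -100
print(people_per_century)  # ~168.9

# Years until the trade-off reaches a googol (10^100) of people:
years_to_googol = math.ceil(100 / -math.log10(FACTOR))
print(years_to_googol)  # 4490
```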
People who deal in global catastrophic risks sometimes have to wrestle with the discount rate assumed by standard economics. Is a human civilization spreading through the Milky Way, 100,000 years hence - the Milky Way being about 100K lightyears across - really to be valued at a discount of 10^-2,227 relative to our own little planet today?
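The 100,000-year factor has to be checked in log space, since the product itself underflows floating point; a sketch assuming the rate means a flat 0.95 weight per year:

```python
import math

FACTOR = 0.95      # flat per-year weight implied by the 5% rate (assumed)
HORIZON = 100_000  # years until the galaxy-spanning civilization

# 0.95 ** 100_000 underflows a double to 0.0, so compute the base-10
# exponent of the discount weight instead.
log10_discount = HORIZON * math.log10(FACTOR)
print(log10_discount)  # ~ -2227.6, i.e. a weight of about 10^-2,227
```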
And when it comes to artificial general intelligence... I encounter wannabe AGI-makers who say, "Well, I don't know how to make my math work for an infinite time horizon, so... um... I've got it! I'll build an AGI whose planning horizon cuts out in a thousand years." Such a putative AGI would be quite happy to take an action that causes the galaxy to explode, so long as the explosion happens at least 1,001 years later. (In general, I've observed that most wannabe AGI researchers confronted with Singularity-level problems ponder for ten seconds and then propose the sort of clever programming trick one would use for data-mining the Netflix Prize, without asking if it makes deep sense for Earth-originating civilization over the next million years.)
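The failure mode is easy to exhibit in a toy planner whose utility sum simply stops at the horizon - the payoff numbers below are invented for illustration:

```python
# Toy model: an agent whose utility sum is truncated at a 1000-year horizon.
HORIZON = 1000

def truncated_value(payoff_stream):
    """Sum (year, utility) payoffs, but only within the planning horizon."""
    return sum(u for t, u in payoff_stream if t < HORIZON)

# Action A: a tiny gain now, and the galaxy explodes in year 1001.
action_a = [(0, 1.0), (1001, -1e30)]
# Action B: do nothing.
action_b = [(0, 0.0)]

best = max([action_a, action_b], key=truncated_value)
print(best is action_a)  # True: the catastrophe is invisible to the planner
```

The -1e30 term never enters the comparison at all; any payoff past the cutoff, however large, is weighted exactly zero.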
The discount question is an old debate in economics, I know. I'm writing this blog post just now, because I recently had a conversation with Carl Shulman, who proposed an argument against temporal discounting that is, as far as I know, novel: namely that an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel - maybe hunting for wormholes with a terminus in the past.
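The point can be made quantitative. Under a flat 0.95-per-year weighting (one reading of a 5% rate), delivering a resource t years into the past multiplies its value by 0.95^-t, so a time-travel gamble breaks even at an astronomically small success probability - a toy calculation with an illustrative 400-year target:

```python
FACTOR = 0.95  # per-year weight implied by a 5% discount rate (assumed flat)

# Sending one unit of value t years into the past is worth 0.95^-t units now,
# so a scheme with success probability p has positive expected value
# whenever p > 0.95^t.  For a 400-year reach:
break_even_p = FACTOR ** 400
print(break_even_p)  # ~1.23e-9: a one-in-a-billion wormhole hunt still pays
```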
Or to translate this back out of transhumanist discourse: If you wouldn't burn alive 1,226,786,652 people today to save Giordano Bruno from the stake in 1600, then clearly, you do not have a 5%-per-year temporal discount rate in your pure preferences.
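The Bruno figure checks out under the same reading of the rate - weighting year t by a flat 0.95^t - as a quick verification shows:

```python
# 408 years separate Giordano Bruno's execution (1600) from the post (2008).
years = 2008 - 1600

# Under a flat 0.95-per-year weighting, one life in 1600 outweighs
# 0.95^-408 lives today.
lives_today = 0.95 ** -years
print(lives_today)  # roughly 1.23 billion people
```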
Maybe it's easier to believe in a temporal discount rate when you - the you of today - are the king of the hill, part of the most valuable class of persons in the landscape of present and future. But you wouldn't like it if there were other people around deemed more valuable than yourself, to be traded off against you. You wouldn't like a temporal discount if the past was still around.
Discrimination always seems more justifiable, somehow, when you're not the person who is discriminated against -
- but you will be.
(Just to make it clear, I'm not advocating against the idea that Treasury bonds can exist. But I am advocating against the idea that you should intrinsically care less about the future than the present; and I am advocating against the idea that you should compound a 5% discount rate a century out when you are valuing global catastrophic risk management.)
Eliezer, you're making non-exponential discounting out to be worse than it actually is. "Time-inconsistent goals" just means different goals, which do not "force you to wage war against your future selves" any more than my having different preferences from you forces us to war against each other. One's (non-exponentially discounting) agent-moments can avoid war by conventional methods such as bargains or unilateral commitments enforced by third parties, or by more exotic methods such as application of TDT.
For your specific example, conventional game theory says that since agent_2009 moves later, backward induction implies that agent_2008 should not pay $1 since if he did, his choice would just be reversed by agent_2009. TDT-type reasoning makes this game harder to solve and seems to imply that agent_2008 might have some non-zero bargaining power, but in any case I don't think we should expect that agent_2008 and agent_2009 each end up paying $1.
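The backward-induction claim can be sketched concretely - the payoff numbers below are hypothetical, chosen only so that agent_2008 prefers A-in-2011 while agent_2009 prefers B-in-2010, as in the example:

```python
COST = 1.0  # price of flipping which option gets locked in

# Each agent-moment's valuation of the final outcome (illustrative numbers).
value_2008 = {"A2011": 10.0, "B2010": 8.0}
value_2009 = {"A2011": 8.0, "B2010": 10.0}

def agent_2009_move(current):
    """agent_2009 moves last: pay to flip only if the flip is worth > $1."""
    other = "B2010" if current == "A2011" else "A2011"
    if value_2009[other] - value_2009[current] > COST:
        return other
    return current

def agent_2008_move(default="B2010"):
    """agent_2008 anticipates agent_2009's response to either choice."""
    best_action, best_net = None, None
    for pay, locked in ((False, default), (True, "A2011")):
        final = agent_2009_move(locked)
        net = value_2008[final] - (COST if pay else 0.0)
        if best_net is None or net > best_net:
            best_action, best_net = pay, net
    return best_action

print(agent_2008_move())  # False: 2008 declines to pay, foreseeing the reversal
```

Paying would cost agent_2008 a dollar only to see the outcome flipped back, so backward induction says the first dollar is never spent - and then agent_2009 has nothing to reverse, so the second dollar isn't spent either.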
And of course there's the argument that "Hyperbolic discounting is rational" given that one's opportunities for return often bounce around a great deal.