Obviously there's another sort of discounting that does make sense. If you offer me a choice of a dollar now or $1.10 in a year, I am almost certain you will make good on the dollar now if I accept it, whereas there are many reasons why you might fail to make good on the $1.10. This sort of discounting is rationally hyperbolic, and so doesn't lead to the paradoxes of magnitude over time that you highlight here.
Yes, that discounting makes sense, but it's explicitly not what Eliezer is talking about. His very first sentence:
"I've never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences - as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers."
(Also, I don't see how that example is 'hyperbolic'.)
Put baldly, the main underlying question is: how do you compare the value of (a) a unit of work expended now, today, on the well-being of a person alive, now, today, with the value of (b) the same unit of work expended now, today, for the well-being of 500 potential people who might be alive in 500 years' time, given that units of work are in limited supply. I suspect any attempt at a mathematical answer to that would only be an expression of a subjective emotional preference. What is more, the mathematical answer wouldn't be a discount function, it would...
You might need a time machine to give a better experience to someone long dead, but not to give them more of what they wanted. For example, if they wanted to be remembered and revered, we can do that for them today. But we don't do much of that for them. So we don't need time machines to see that we don't care that much about our ancestors. We do in fact have conflicts across time, where each time would prefer to allocate resources differently. That is why we should try to arrange deals across time, where, for example, we agree to invest for the future and they agree to remember and revere us.
A discount rate takes care of the effect your effort can have on the future, relative to the effect it will have on the present; it has nothing to do with the 'intrinsic utility' of things in the future. The future doesn't exist in the present; you only have a model of the future when you make decisions in the present. Your current decisions are only as good as your ability to anticipate their effects on the future, and the process Robin described in his blog post reply is how it can proceed: it assumes that you know very little and will be better off just passing resources to future folk to take care of whatever they need themselves.
My first reaction is to guess that people now are "worth" more than people in 1600 because they have access to more productivity-enhancing equipment, including life-extending equipment. So a proper accounting would make it more like 6000 people. Furthermore, more productivity from someone in the year 1600 would have facilitated exponentially more resources (including life-saving resources) over the time since, saving more than 6000 people. After all, that's why interest exists -- because the forgone opportunity grows exponentially! So, even valuing ...
Robin,
"That is why we should try to arrange deals across time, where for example, we agree to invest for the future, and they agree to remember and revere us." Consider an agent that at any time t does not discount benefits received between t and t+1 year, discounts benefits between t+1 years and t+100 years by half, and does not value benefits realized after t+100 years. If the agent is capable of self-modification, then at any particular time it will want to self-modify to replace the variable 't' with a constant, the time of self-modification,...
Eli said: I encounter wannabe AGI-makers who say, "Well, I don't know how to make my math work for an infinite time horizon, so... um... I've got it! I'll build an AGI whose planning horizon cuts out in a thousand years."
I'm not sure if you're talking about me. I have said that I think we need some sort of bounded utility function, but that doesn't mean it has to be an integral of discounted time-slice values.
Peter, it wasn't just you, it was Marcus Hutter's AIXI formalism, and I think at least one or two other people.
Nonetheless, what you proposed was indeed a grave sin. If your own utility function is not bounded, then don't build an AI with a bounded utility function, full stop. This potentially causes infinite damage. Just figure out how to deal with unbounded utility functions. Just deal, damn it.
Of all the forms of human possibility that you could destroy in search of cheap math, going from infinite potential to finite potential has to be one of the worst.
Carl, yes, agents who care little about the future can, if so empowered, do great damage to the future.
I agree with much of the thrust of this post. It is very bad that the causes of discount rates (such as opportunity costs) exist. But your reaction to Carl Shulman's time travel argument leaves me wondering whether you have a coherent position. If a Friendly AI with a nonzero discount rate concluded that it had a chance of creating time travel, and that time travel would work in a way that abolished opportunity costs, then I would conclude that devoting a really large fraction of available resources to creating time travel is what a genuine altrui...
There's a typo in your math. 1/(0.95^408) = 1,226,786,652, not 12,226,786,652. But what's a factor of 10 between friends?
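A one-line check of the corrected figure, computed the same way as in the comment (408 years from 1600 to 2008, compounding a 0.95-per-year factor):

```python
# 408 years of a 5%-per-year discount, compounded as 0.95 per year
print(f"{1 / 0.95 ** 408:,.0f}")   # ~1.23e9, matching the corrected figure (not ~1.2e10)
```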
That wasn't me; I guess I'll post under this name from now on.
"There is a financial argument against the possibility time travel that was published in a journal (don't have the citation offhand): if it were possible to time-travel, interest rates would be arbitraged to zero."
I'd say this is a special case of the Fermi Paradox for Time Travel. If people can reach us from the future, where are they?
With respect to discount rates: I understand your argument(s) against the discount rate living in one's pure preferences, but what is it you offer in its stead? No discount rate at all? Should one care the same about all time periods? Isn't this a touch unfair for any single person who values internal discount rates? For global catastrophic risk management: should there be no discount rate applied for valuing and modeling purposes? Isn't this the same as modeling a 0% discount rate?
With respect to AI (excuse my naivety): It seems that if a current hum...
Eliezer: Why would you assume that Pete's utility function, or any human's utility function is not bounded (or wouldn't be bounded if humans had utility functions)?
I think there are many serious theoretical errors in this post.
When we say that the interest rate is 5%, that means that in general people would trade $1.05 next year for $1 today. It's basically a fact that they would be willing to do that - if people's real discount rate were lower, they would lend money to the future at a lower interest rate. Eliezer finds it absurd that it's 5% more important to give clean water to a family this year than next year, but how is it absurd when this is what consumers are saying they want for themselves as well? It's revealed ...
The answer to 'shut up and multiply' is 'that's the way people are, deal with it'. One thing apparent from these exchanges is that 'inferential distance' works both ways.
Three points in response to Eliezer's post and one of his replies:
* A limited time horizon works better than he says. If an AI wants to put its world into a state desired by humans, and it knows that the humans don't want to live in a galaxy that will explode in a year, then an AI that closes its books in 1000 years will make sure that the galaxy won't explode one year later.
* An unbounded utility function works worse than he says. Recall the ^^^^ operator, originally due to Knuth (see http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation), that was used in the Pas...
Interestingly enough, Schumpeter essentially makes this argument in his Theory of Economic Development. He is against the view that humans have intrinsic discount rates, an innate time preference, which was one of the Austrian school's axioms. He thinks that interest is a phenomenon of economic development - resources need to be withdrawn from their customary usage, to allow entrepreneurs to find new combinations of things, and that requires compensation. Once this alternative use of resources is available, however, it becomes an opportunity cost for all other possible actions, which is the foundation of discount rates.
If an agent with no intrinsic utility discounting still has effective discounting in its instrumental values because it really can achieve exponential growth in such values, would it not still be subject to the same problem of expending all resources on attempting time travel?
[...] an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel [...]
But wouldn't an AI without temporal discounting have an infinite incentive to expend all available resources on attempting to leave the universe to avoid the big freeze? It seems that discounting is a way to avoid Pascal's Mugging scenarios, where a huge payoff can outweigh its tiny probability. Or isn't it similar to Pascal's Mugging if an AI tries to build a time machine regardless of the possibility of success ...
I'm writing this blog post just now, because I recently had a conversation with Carl Shulman, who proposed an argument against temporal discounting that is, as far as I know, novel: namely that an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel - maybe hunting for wormholes with a terminus in the past.
Probably not if it knows how hopeless that is - or if it has anything useful to be getting on with.
With discounting, time is of the essence - it is not to be wasted on idle fantasies.
And there's worse: If your temporal discounting follows any curve other than the exponential, you'll have time-inconsistent goals that force you to wage war against your future selves - preference reversals - cases where your self of 2008 will pay a dollar to ensure that your future self gets option A in 2011 rather than B in 2010; but then your future self in 2009 will pay another dollar to get B in 2010 rather than A in 2011.
Eliezer, you're making non-exponential discounting out to be worse than it actually is. "Time-inconsistent goals" just means different goals, and they do not "force you to wage war against your future selves" any more than my having different preferences from you forces us to war against each other. One's (non-exponential discounting) agent-moments can avoid war by conventional methods such as bargains or unilateral commitments enforced by third parties, or by more exotic methods such as application of TDT.
For your specific example, conventional game theory says that since agent_2009 moves later, backward induction implies that agent_2008 should not pay $1 since if he did, his choice would just be reversed by agent_2009. TDT-type reasoning makes this game harder to solve and seems to imply that agent_2008 might have some non-zero bargaining power, but in any case I don't think we should expect that agent_2008 and agent_2009 each end up paying $1.
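A small Python sketch of Wei Dai's point, using a hyperbolic weight and payoff numbers I picked only so that Eliezer's reversal appears (none of these specific numbers come from the thread):

```python
def w(delay):
    """A simple hyperbolic discount weight: 1 / (1 + delay in years)."""
    return 1.0 / (1.0 + delay)

def value(eval_year, outcome):
    """Value, to the self living in eval_year, of an outcome (payoff, delivery_year)."""
    payoff, year = outcome
    return payoff * w(year - eval_year)

A = (140, 2011)   # option A: 140 utils delivered in 2011 (assumed numbers)
B = (100, 2010)   # option B: 100 utils delivered in 2010
COST = 1          # switching costs 1 util at the moment of payment

# The bare preference reversal Eliezer describes:
print(value(2008, A) > value(2008, B))   # True:  the 2008 self prefers A
print(value(2009, A) > value(2009, B))   # False: the 2009 self prefers B

# Backward induction: agent_2009 moves last. If the default is A, it will
# pay to switch to B whenever that beats the switching cost.
switch_2009 = value(2009, B) - COST > value(2009, A)   # True here

# Knowing this, agent_2008 gains nothing by paying to lock in A:
# the final outcome is B either way, minus whatever was spent switching.
final_if_2008_pays = B if switch_2009 else A
print(final_if_2008_pays)   # (100, 2010): agent_2008's dollar would be wasted
```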
If you wouldn't burn alive 1,226,786,652 people today to save Giordano Bruno from the stake in 1600
Your choice of an example makes the bullet unduly easy for me to swallow. I had to pretend you had said "to save a random peasant from pneumonia in 1600" instead for my System 1 to get your point.
I've never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences - as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers. The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me - just as much as if you were to discriminate between blacks and whites.
And there's worse: If your temporal discounting follows any curve other than the exponential, you'll have time-inconsistent goals that force you to wage war against your future selves - preference reversals - cases where your self of 2008 will pay a dollar to ensure that your future self gets option A in 2011 rather than B in 2010; but then your future self in 2009 will pay another dollar to get B in 2010 rather than A in 2011.
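One way to see why the exponential is the unique exception, as a small illustrative sketch (the 5% factor is the one discussed in this post; the hyperbolic curve is just one example of a non-exponential alternative):

```python
def exp_w(d):
    return 0.95 ** d          # exponential discounting at 5% per year

def hyp_w(d):
    return 1.0 / (1.0 + d)    # one non-exponential (hyperbolic) alternative

# Relative weight placed on "payoff in 2011" versus "payoff in 2010",
# judged from 2008 and again from 2009. If this ratio never changes,
# the ranking of the two options can never flip between selves.
for now in (2008, 2009):
    print(now,
          exp_w(2011 - now) / exp_w(2010 - now),   # 0.95 both times
          hyp_w(2011 - now) / hyp_w(2010 - now))   # 0.75, then ~0.67
```

Only a weight of the form c^d keeps that ratio fixed as the evaluation date advances, which is why every other curve opens the door to preference reversals.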
But a 5%-per-year discount rate, compounded exponentially, implies that it is worth saving a single person from torture today, at the cost of 168 people being tortured a century later, or a googol persons being tortured 4,490 years later.
People who deal in global catastrophic risks sometimes have to wrestle with the discount rate assumed by standard economics. Is a human civilization spreading through the Milky Way, 100,000 years hence - the Milky Way being about 100K lightyears across - really to be valued at a discount of 10^-2,227 relative to our own little planet today?
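The figures in the last two paragraphs follow directly from compounding that 0.95-per-year factor; a quick check (the rounding is mine):

```python
import math

r = 0.95   # one year at a 5%-per-year discount rate

print(1 / r ** 100)                    # ~168.9: people a century out per person today
print(math.log(1e100) / -math.log(r))  # ~4,489 years until the factor reaches a googol
print(100_000 * math.log10(r))         # ~-2227.7: the exponent behind the
                                       # 10^-2,227 figure for 100,000 years
```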
And when it comes to artificial general intelligence... I encounter wannabe AGI-makers who say, "Well, I don't know how to make my math work for an infinite time horizon, so... um... I've got it! I'll build an AGI whose planning horizon cuts out in a thousand years." Such a putative AGI would be quite happy to take an action that causes the galaxy to explode, so long as the explosion happens at least 1,001 years later. (In general, I've observed that most wannabe AGI researchers confronted with Singularity-level problems ponder for ten seconds and then propose the sort of clever programming trick one would use for data-mining the Netflix Prize, without asking if it makes deep sense for Earth-originating civilization over the next million years.)
The discount question is an old debate in economics, I know. I'm writing this blog post just now, because I recently had a conversation with Carl Shulman, who proposed an argument against temporal discounting that is, as far as I know, novel: namely that an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel - maybe hunting for wormholes with a terminus in the past.
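A rough back-of-the-envelope sketch of why the incentive is so large (the horizons and the break-even framing are my own illustration; nothing here comes from Carl's argument beyond the 5% rate):

```python
# Under a 5%-per-year pure discount rate, a unit of value delivered T years
# in the past carries weight 1 / 0.95**T.  Gambling resources on time travel
# therefore beats spending them now whenever the probability of success
# exceeds 0.95**T.  All horizons below are illustrative.
r = 0.95

def breakeven_probability(years_back):
    return r ** years_back

for T in (100, 408, 1000, 10_000):
    print(T, breakeven_probability(T))
# 100     ~5.9e-3
# 408     ~8.2e-10   (the 1600-to-2008 horizon of the Bruno example)
# 1000    ~5.3e-23
# 10000   ~1.7e-223  -- essentially any nonzero credence justifies the attempt
```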
Or to translate this back out of transhumanist discourse: If you wouldn't burn alive 1,226,786,652 people today to save Giordano Bruno from the stake in 1600, then clearly, you do not have a 5%-per-year temporal discount rate in your pure preferences.
Maybe it's easier to believe in a temporal discount rate when you - the you of today - are the king of the hill, part of the most valuable class of persons in the landscape of present and future. But you wouldn't like it if there were other people around deemed more valuable than yourself, to be traded off against you. You wouldn't like a temporal discount if the past was still around.
Discrimination always seems more justifiable, somehow, when you're not the person who is discriminated against -
- but you will be.
(Just to make it clear, I'm not advocating against the idea that Treasury bonds can exist. But I am advocating against the idea that you should intrinsically care less about the future than the present; and I am advocating against the idea that you should compound a 5% discount rate a century out when you are valuing global catastrophic risk management.)