
Against Discount Rates

Post author: Eliezer_Yudkowsky 21 January 2008 10:00AM

I've never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences - as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers.  The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me - just as much as if you were to discriminate between blacks and whites.

And there's worse:  If your temporal discounting follows any curve other than the exponential, you'll have time-inconsistent goals that force you to wage war against your future selves - preference reversals - cases where your self of 2008 will pay a dollar to ensure that your future self gets option A in 2011 rather than B in 2010; but then your future self in 2009 will pay another dollar to get B in 2010 rather than A in 2011.
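To see the reversal numerically - a minimal sketch, with an illustrative 1/(1 + delay) hyperbolic curve and made-up payoffs:

```python
def hyperbolic(delay):
    """Hyperbolic discount factor with k = 1 (illustrative)."""
    return 1.0 / (1.0 + delay)

def exponential(delay, factor=0.95):
    """Exponential discount factor (illustrative rate)."""
    return factor ** delay

# Option A: payoff 1.4 arriving in 2011; Option B: payoff 1.0 arriving in 2010.
def prefers_A(discount, now):
    return 1.4 * discount(2011 - now) > 1.0 * discount(2010 - now)

# Hyperbolic: the 2008 self and the 2009 self disagree - a preference reversal.
print(prefers_A(hyperbolic, 2008))   # True  -- pays a dollar for A over B
print(prefers_A(hyperbolic, 2009))   # False -- now pays a dollar for B over A

# Exponential: the comparison is multiplied by the same factor each year,
# so the ranking can never flip.
print(prefers_A(exponential, 2008))  # True
print(prefers_A(exponential, 2009))  # True
```

Only the exponential curve keeps the A-versus-B ranking fixed as the dates approach; any other shape produces the war between selves described above.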

But a 5%-per-year discount rate, compounded exponentially, implies that it is worth saving a single person from torture today, at the cost of 168 people being tortured a century later, or a googol persons being tortured 4,490 years later.

People who deal in global catastrophic risks sometimes have to wrestle with the discount rate assumed by standard economics.  Is a human civilization spreading through the Milky Way, 100,000 years hence - the Milky Way being about 100K lightyears across - really to be valued at a discount of 10^-2,227 relative to our own little planet today?
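The figures above can be reproduced in a few lines, taking the 5%-per-year rate as a 0.95-per-year discount factor (a sketch; small rounding differences aside, this appears to be the convention the numbers follow):

```python
import math

ANNUAL = 0.95  # the 5%-per-year discount factor, compounded

# One person saved from torture today vs. people tortured a century later:
print(round(ANNUAL ** -100))  # 169 (rounded to 168 in the post)

# The Giordano Bruno figure: 408 years from 1600 to 2008.
print(round(ANNUAL ** -408))  # roughly 1.23 billion

# Years until a googol persons discount down to one person:
years = 100 * math.log(10) / -math.log(ANNUAL)
print(round(years))  # 4489 -- "4,490 years" in round figures

# Exponent of the discount on a civilization 100,000 years hence:
print(math.floor(100_000 * -math.log10(ANNUAL)))  # 2227 -> a discount of 10^-2,227
```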

And when it comes to artificial general intelligence... I encounter wannabe AGI-makers who say, "Well, I don't know how to make my math work for an infinite time horizon, so... um... I've got it!  I'll build an AGI whose planning horizon cuts out in a thousand years."  Such a putative AGI would be quite happy to take an action that causes the galaxy to explode, so long as the explosion happens at least 1,001 years later.  (In general, I've observed that most wannabe AGI researchers confronted with Singularity-level problems ponder for ten seconds and then propose the sort of clever programming trick one would use for data-mining the Netflix Prize, without asking if it makes deep sense for Earth-originating civilization over the next million years.)

The discount question is an old debate in economics, I know.  I'm writing this blog post just now, because I recently had a conversation with Carl Shulman, who proposed an argument against temporal discounting that is, as far as I know, novel: namely that an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel - maybe hunting for wormholes with a terminus in the past.

Or to translate this back out of transhumanist discourse:  If you wouldn't burn alive 1,226,786,652 people today to save Giordano Bruno from the stake in 1600, then clearly, you do not have a 5%-per-year temporal discount rate in your pure preferences.

Maybe it's easier to believe in a temporal discount rate when you - the you of today - are the king of the hill, part of the most valuable class of persons in the landscape of present and future.  But you wouldn't like it if there were other people around deemed more valuable than yourself, to be traded off against you.  You wouldn't like a temporal discount if the past was still around.

Discrimination always seems more justifiable, somehow, when you're not the person who is discriminated against -

- but you will be.

(Just to make it clear, I'm not advocating against the idea that Treasury bonds can exist.  But I am advocating against the idea that you should intrinsically care less about the future than the present; and I am advocating against the idea that you should compound a 5% discount rate a century out when you are valuing global catastrophic risk management.)

Comments (80)

Comment author: Paul_Crowley 21 January 2008 11:33:12AM -1 points [-]

Obviously there's another sort of discounting that does make sense. If you offer me a choice of a dollar now or $1.10 in a year, I am almost certain you will make good on the dollar now if I accept it, whereas there are many reasons why you might fail to make good on the $1.10. This sort of discounting is rationally hyperbolic, and so doesn't lead to the paradoxes of magnitude over time that you highlight here.

Comment author: gwern 21 January 2011 10:42:14PM 5 points [-]

Yes, that discounting makes sense, but it's explicitly not what Eliezer is talking about. His very first sentence:

"I've never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences - as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers."

(Also, I don't see how that example is 'hyperbolic'.)

Comment author: Perplexed 21 January 2011 11:11:56PM *  0 points [-]

Also, I don't see how that example is 'hyperbolic'.

Agree. Not hyperbolic.

Assuming, in Paul Crowley's example, that there is a constant rate of failure (conditional on not having already failed), this yields well-behaved exponential discounting, which is relatively paradox-free.
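Concretely (a sketch, with an arbitrary failure probability): a constant per-year chance that the promise fails multiplies the survival odds by the same factor each year, which is exactly an exponential discount.

```python
p = 0.05  # constant annual probability that the promised payoff fails (illustrative)

def survival(years):
    # Probability the payoff is still good after `years`,
    # with failure independent in each year.
    return (1 - p) ** years

# The implied discount factor between any two adjacent years is constant,
# so preferences stay time-consistent:
for t in range(5):
    print(survival(t + 1) / survival(t))  # always 0.95
```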

Comment author: waveman 19 March 2014 04:49:42AM 0 points [-]

Good point.

More generally as per the wikipedia article http://en.wikipedia.org/wiki/Hyperbolic_discounting#Criticism exponential discounting is only correct if you are equally certain of the payoffs at all the different times.

More broadly it assumes no model error. Whatever decision model you are using you need to be 100% certain of it to justify exponential discounting.

Nassim Taleb points out that quite a few alleged biases are actually quite rational when taking into account model error and he includes a derivation of why the hyperbolic discounting formula is actually valid in many situations.

Silent Risk Section 4.6 Psychological pseudo-biases under second layer of uncertainty. Draft at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2392310
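The mechanism waveman describes can be seen in a few lines: if you are uncertain which exponential rate is the true one, the *expected* discount factor falls more slowly than any single exponential, and the implied rate declines with horizon - qualitatively the hyperbolic shape (a sketch; the two candidate rates are arbitrary, and the effect is the one known in economics as Weitzman's "gamma discounting"):

```python
import math

# Two equally likely exponential discount rates (illustrative).
rates = [0.01, 0.10]

def expected_factor(t):
    # Average the discount factors, not the rates.
    return sum(math.exp(-r * t) for r in rates) / len(rates)

def implied_rate(t, dt=1.0):
    # Instantaneous rate implied by the averaged curve at horizon t.
    return -math.log(expected_factor(t + dt) / expected_factor(t)) / dt

print(implied_rate(1))    # near the average of the rates early on
print(implied_rate(100))  # approaches the *lowest* rate at long horizons
```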

Comment author: tcpkac 21 January 2008 12:03:33PM 0 points [-]

Put baldly, the main underlying question is : how do you compare the value of (a) a unit of work expended now, today, on the well-being of a person alive, now, today, with the value of (b) the same unit of work expended now, today, for the well-being of 500 potential people who might be alive in 500 years' time, given that units of work are in limited supply. I suspect any attempt at a mathematical answer to that would only be an expression of a subjective emotional preference. What is more, the mathematical answer wouldn't be a discount function, it would be a compounding function, as it would be the result of comparing all the AI units of work available between now and time t in the future, with the units of work required between now and time t to address all the potential needs of humanity and trans-humanity between now and the end of time, which looks seriously like infinity.

Comment author: RobinHanson 21 January 2008 01:51:19PM 7 points [-]

You might need a time machine to give a better experience to someone long dead, but not to give them more of what they wanted. For example, if they wanted to be remembered and revered, we can do that for them today. But we don't do much of that, for them. So we don't need time machines to see we don't care that much about our ancestors. We do in fact have conflicts across time, where at each time we would prefer to allocate resources differently. That is why we should try to arrange deals across time, where for example, we agree to invest for the future, and they agree to remember and revere us.

Comment author: Vladimir_Nesov2 21 January 2008 02:17:00PM 0 points [-]

The discount rate captures the effect your effort can have on the future, relative to the effect it will have on the present; it has nothing to do with the 'intrinsic utility' of things in the future. The future doesn't exist in the present; you only have a _model_ of the future when you make decisions in the present. Your current decisions are only as good as your ability to anticipate their effects in the future, and the process Robin described in his reply is how it can proceed: it assumes that you know very little, and will be better off just passing resources to future folk to take care of whatever they need themselves.

Comment author: Silas 21 January 2008 02:18:42PM 2 points [-]

My first reaction is to guess that people now are "worth" more than people in 1600 because they have access to more productivity-enhancing equipment, including life-extending equipment. So a proper accounting would make it more like 6000 people. Furthermore, more productivity from someone in the year 1600 would facilitate exponentially more resources (including life-saving resources) over the time since, saving more than 6000 people. After all, that's why interest exists -- because the forgone opportunity grows exponentially! So, even valuing the people equally may justify sacrificing the 12 million for the one.

But I freely admit I may need to rethink that.

Also,

an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel - maybe hunting for wormholes with a terminus in the past.

There is a financial argument against the possibility of time travel that was published in a journal (don't have the citation offhand): if it were possible to time-travel, interest rates would be arbitraged to zero. Realizing this, wouldn't the AI give up on that goal? [/possibly naive]

Comment author: CarlShulman 21 January 2008 02:46:59PM 1 point [-]

Robin,

"That is why we should try to arrange deals across time, where for example, we agree to invest for the future, and they agree to remember and revere us." Consider an agent that at any time t does not discount benefits received between t and t+1 year, discounts benefits between t+1 years and t+100 years by half, and does not value benefits realized after t+100 years. If the agent is capable of self-modification, then at any particular time it will want to self-modify to replace the variable 't' with a constant, the time of self-modification, locking in its preferences over world histories for its future selves. The future selves will then expend all available resources in rushed consumption over the next 100 years. So I would expect the bargaining position of the future to get progressively worse with advancing technology.
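Carl's schedule can be written out directly (a sketch; the benefit dates are arbitrary), which makes the lock-in incentive easy to see:

```python
def weight(delay):
    """Carl Shulman's example schedule, as a function of delay in years."""
    if delay < 0:
        return 0.0
    if delay < 1:
        return 1.0   # no discount inside one year
    if delay < 100:
        return 0.5   # benefits between t+1 and t+100 count half
    return 0.0       # nothing beyond t+100 is valued

# A benefit arriving in absolute year 150:
print(weight(150 - 0))    # 0.0 -- the year-0 self doesn't value it at all
print(weight(150 - 100))  # 0.5 -- but the year-100 self would

# So the year-0 self, if able to self-modify, freezes the clock at t = 0:
# every later decision then uses weight(s - 0), nothing after year 100 is
# ever valued, and consumption is rushed into that window.
```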

Comment author: Peter_de_Blanc 21 January 2008 04:27:15PM 0 points [-]

Eli said: I encounter wannabe AGI-makers who say, "Well, I don't know how to make my math work for an infinite time horizon, so... um... I've got it! I'll build an AGI whose planning horizon cuts out in a thousand years."

I'm not sure if you're talking about me. I have said that I think we need some sort of bounded utility function, but that doesn't mean it has to be an integral of discounted time-slice values.

Comment author: Eliezer_Yudkowsky 21 January 2008 06:15:03PM 10 points [-]

Peter, it wasn't just you, it was Marcus Hutter's AIXI formalism, and I think at least one or two other people.

Nonetheless, what you proposed was indeed a grave sin. If your own utility function is not bounded, then don't build an AI with a bounded utility function, full stop. This potentially causes infinite damage. Just figure out how to deal with unbounded utility functions. Just deal, damn it.

Of all the forms of human possibility that you could destroy in search of cheap math, going from infinite potential to finite potential has to be one of the worst.

Comment author: timtyler 16 March 2011 08:11:02PM 1 point [-]

I don't think the human brain's equivalent to a utility function is unbounded. Dopamine levels and endorphin levels are limited - and it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation. So, this is all very hypothetical.

Comment author: JoshuaZ 16 March 2011 09:15:29PM 7 points [-]

I don't think the human brain's equivalent to a utility function is unbounded. Dopamine levels and endorphin levels are limited - and it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation. So, this is all very hypothetical.

This doesn't have much to do with my preferences. I might experience the same level of negative emotion when thinking about Busy Beaver (10) people being tortured as opposed to Busy Beaver(1000) people being tortured, but I still have a preference for which one I'd like if I have to choose between the two.

Comment author: timtyler 16 March 2011 10:12:15PM -2 points [-]

Some mechanism in your (finite) brain is still making that decision.

Comment author: JoshuaZ 16 March 2011 10:23:59PM *  3 points [-]

Some mechanism in your (finite) brain is still making that decision.

Sure. But I can express a preference about infinitely many cases in a finite statement. In particular, my preferences includes something like the following: given the existence of k sentient, sapient entities, and given i < j <= k, I prefer i entities getting tortured to j entities getting tortured assuming everything else is otherwise identical.

Comment author: Perplexed 16 March 2011 09:24:01PM 0 points [-]

No it is not hypothetical. If you build an AI with unbounded utility functions, yet human utility functions are (mostly) bounded, then you have built a (mostly) unfriendly AI. An AI that will be willing to sacrifice arbitrarily large amounts of current human utility in order to gain the resources to create a wonderful future for hypothetical future humans.

Comment author: timtyler 16 March 2011 09:58:25PM *  -1 points [-]

That's different, though. The hypothetical I was objecting to was humans having unbounded utility functions. I think that idea is a case of making things up.

FWIW, I stand by the idea that instrumental discounting means that debating ultimate discounting vs a lack of ultimate discounting mostly represents a storm in a teacup. In practice, all agents do instrumental discounting - since the future is uncertain and difficult to directly influence.

Any debate here should really be over whether ultimate discounting on a timescale of decades is desirable - or not.

Comment author: benelliott 16 March 2011 09:59:36PM 0 points [-]

it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation.

Unbounded is not the same as infinite. The integers are unbounded but no integer is infinite. In the same way I can have a utility function with no upper bound on the values it outputs without it ever having to output infinity.
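The distinction is trivial to exhibit in code (a toy sketch):

```python
def utility(n):
    """Toy utility, linear in people saved -- finite for every input n."""
    return n

def exceeds(bound):
    # For any claimed upper bound, some finite input beats it,
    # so the function is unbounded without ever outputting infinity.
    return utility(bound + 1) > bound

print(all(exceeds(b) for b in [10, 10**6, 10**100]))  # True
```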

Comment author: wnoise 16 March 2011 11:07:10PM *  0 points [-]

Dopamine levels and endorphin levels are not utility functions. At best they are "hedons", and even that's not indisputably clear -- there's more to happiness than that.

A utility function is itself not something physical. It is one (often mathematically convenient) way of summarizing an agent's preferences in making decisions. These preferences are of course physical. Note, for instance, that everything observable is completely invariant under arbitrary positive affine transformations. Even assuming our preferences can be described by a utility function (i.e. they are consistent -- but we know they're not), it's clear that putting an upper bound on it would no longer agree with the decisions made by a utility function without such a bound.

Comment author: timtyler 16 March 2011 11:31:42PM *  0 points [-]

Dopamine levels and endorphin levels are not utility functions. At best they are "hedons", and even that's not indisputably clear -- there's more to happiness than that.

Well, the brain represents utility somehow, as part of its operation. It rather obviously compares expected utilities of future states.

I didn't say dopamine levels and endorphin levels were utility functions. The idea is that they are part of the brain's representation of expected utility - and utility.

Comment author: [deleted] 16 March 2011 11:58:17PM 1 point [-]

You argued that human utility is bounded because dopamine is bounded, and dopamine is part of how utility is represented. Yes? The obvious objection to your argument is that the representation could in principle take one of many different forms, some of which allow us to represent something unbounded by means of something bounded. If that were the case, then the boundedness of dopamine would not imply the boundedness of utility.

If you want an example of how this representation might be done, here's one: if you prefer state A to state B, this is (hypothetically) represented by the fact that if you move from state B to state A your dopamine level is raised temporarily - and after some interval, it drops again to a default level. So, every time you move from a less preferred state to a more preferred state, i.e. from lower utility to higher utility, your dopamine level is raised temporarily and then drops back. The opposite happens if you move from higher utility to lower utility.

Though I have offered this as a hypothetical, from the little bit that I've read in the so-called "happiness" literature, something like this seems to be what actually goes on. If you receive good fortune, you get especially happy for a bit, and then you go back to a default level of happiness. And conversely, if you suffer some misfortune, you become unhappy for a bit, and then you go back to a default level of happiness.

Unfortunately, a lot of people seem to draw what I think is a perverse lesson from this phenomenon, which is that good and bad fortune do not really matter, because no matter what happens to us, in the end we find ourselves at the default level of happiness. In my view, utility should not be confused with happiness. If a man becomes rich and, in the end, finds himself no happier than before, I don't think that that is a valid argument against getting rich. Rather, temporary increases and decreases in happiness are how our brains mark permanent increases and decreases in utility. That the happiness returns to default does not mean that utility returns to default.
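The representational point in this comment can be illustrated in a few lines: a signal clamped to a fixed range can still track an unbounded running total, if what it reports is *changes* rather than levels (a sketch of the comment's hypothesis, not a claim about the brain; the clamp range and trajectory are arbitrary):

```python
def clamp(x, lo=-1.0, hi=1.0):
    """A bounded, 'dopamine-like' per-step signal."""
    return max(lo, min(hi, x))

# An arbitrary utility trajectory -- in principle unbounded.
utilities = [0, 2, 2.5, 10, 9, 100]

# The per-step signal reports changes, and is always within [-1, 1]...
signals = [clamp(b - a) for a, b in zip(utilities, utilities[1:])]
assert all(-1.0 <= s <= 1.0 for s in signals)

# ...yet it still ranks any pair of adjacent states correctly,
# since a comparison only needs the sign of the change.
ranks = [s > 0 for s in signals]
print(ranks)  # [True, True, True, False, True]
```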

Comment author: timtyler 17 March 2011 12:03:45AM *  -2 points [-]

You argued that human utility is bounded because dopamine is bounded, and dopamine is part of how utility is represented. Yes?

No. What I actually said was:

The idea is that they [Dopamine levels and endorphin levels] are part of the brain's representation of expected utility - and utility.

I do think an unbounded human-equivalent utility function is not supported by any evidence. I reckon Hutter's [0,1] utility would be able to simulate humans just fine on digital hardware.

Comment author: [deleted] 17 March 2011 12:16:56AM *  2 points [-]

I didn't say that you equated utility with dopamine. [edit: I was replying to an earlier draft of your comment. As of now you've changed the comment to delete the claim that I had said that you equated utility with dopamine, though you retained an unexplained "no".] I said that you said that dopamine is part of how utility is represented. Your quote appears to confirm my statement. You quote yourself saying "[Dopamine levels and endorphin levels] are part of the brain's representation of expected utility - and utility." Among other things, this says that dopamine is part of the brain's representation of utility. Which is virtually word for word what I said you said, the main difference being that instead of saying "the brain's representation of utility", I said, "how utility is represented". I don't see any real difference here - just slightly different wording.

Moreover, the key statement that I am basing my interpretation on is not that, but this:

I don't think the human brain's equivalent to a utility function is unbounded. Dopamine levels and endorphin levels are limited - and it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation. So, this is all very hypothetical.

Here you are arguing that the human brain's equivalent to a utility function is bounded, and your apparent argument for this is that dopamine and endorphin levels are limited.

I argued that the limitation of dopamine and endorphin levels does not imply that the human brain's equivalent to a utility function is bounded. You have not addressed my argument, only claimed - incorrectly, it would appear - that I had misstated your argument.

Comment author: timtyler 17 March 2011 12:23:34AM *  -1 points [-]

I note that your characterisation of my argument models very, very poorly all the times I talked about the finite nature of the human brain on this thread.

Comment author: [deleted] 17 March 2011 12:32:22AM *  1 point [-]

You are seriously referring me to your entire oeuvre as a supposed explanation of what you meant in the specific comment that I was replying to?

Comment author: timtyler 17 March 2011 12:58:09AM 0 points [-]

I was pointing out that there was more to the arguments I have given than what you said. The statement you used to characterise my position was a false syllogism - but it doesn't represent my thinking on the topic very well.

Comment author: wnoise 17 March 2011 08:15:21AM 4 points [-]

Well, the brain represents utility somehow, as part of its operation. It rather obviously compares expected utilities of future states.

No. You've entirely missed my point. The brain makes decisions. Saying it does so via representing things as utilities is a radical and unsupported assumption. It can be useful to model people as making decisions according to a utility function, as this can compress our description of it, often with only small distortions. But it's still just a model. Unboundedness in our model of a decision maker has nothing to do with unboundedness in the decision maker we are modeling. This is a basic map/territory confusion (or perhaps advanced: our map of their map of the territory is not the same as their map of the territory).

Comment author: timtyler 17 March 2011 10:33:35AM *  -1 points [-]

Not exactly an assumption. We can see - more-or-less - how the fundamental reward systems in the brain work. They use neurotransmitter concentrations and firing frequencies to represent desire and aversion - and pleasure and pain. These are the physical representation of utility, the brain's equivalent of money. Neurotransmitter concentrations and neuron firing frequencies don't shoot off to infinity. They saturate - resulting in pleasure and pain saturation points.

Comment author: FAWS 17 March 2011 10:45:15AM 3 points [-]

I see little indication that the brain is in the assigning absolute utilities business at all. Things like scope insensitivity seem to suggest that it only assigns relative utilities, comparing to a context-dependent default.

Comment author: rwallace 17 March 2011 01:24:02PM 0 points [-]

They are feedback signals, certainly. Every system with any degree of intelligence must have those. But feedback signals, utility and equivalent of money are not synonyms. To say a system's feedback signals are equivalent to money is to make certain substantive claims about its design. (e.g. some but not most AI programs have been designed with those properties.) To say they are utility measurements is to make certain other substantive claims about its design. Neither of those claims is true about the human brain in general.

Comment author: timtyler 17 March 2011 01:27:26AM *  -2 points [-]

This has been rather surreal. I express what seems to me to be a perfectly ordinary position - that the finite human brain is unlikely to represent unbounded utilities - or to go in for surreal utilities - and a bunch of people have opined, that somehow, the brain does represent unboundedly large utilities - using mechanisms unspecified.

When pressed, infinite quantities of time are invoked. Omega is invited onto the scene - to represent the unbounded numbers for the human. Uh...

I don't mean to be rude - but do you folk really think you are being rational here? This looks more like rationalising to me.

Is there any evidence for unbounded human utilities? What would make anyone think this is so?

Comment author: CuSithBell 17 March 2011 01:33:51AM 4 points [-]

Several mechanisms for expressing unbounded utility functions (NOT unbounded utilities) have been explained. The distinction has been explained. Several explicit examples have been provided.

At the very least, you should update a little based on the resistance you're experiencing.

As it stands, it looks like you're not making a good-faith attempt to understand the arguments against your position.

Comment author: timtyler 17 March 2011 01:58:09AM *  0 points [-]

Well, I think I can see the other side. People seem to be thinking that utility in deaths (for example) behaves linearly out to infinity. The way utilitarian ethicists dream about.

I don't think that is how the brain works. Scope insensitivity shows that most humans deal badly with the large numbers involved - in a manner quite consistent with bounded utility. There is a ceiling effect for pain and for various pleasure-inducing drugs. Those who claim to have overcome scope insensitivity haven't really changed the underlying utility function used by the human brain. They have just tried to hack it a little - using sophisticated cultural manipulations. Their brain still uses the same finite utilities and utility functions underneath - and it can still be well-modelled that way.

Indeed, I figure you will get more accurate models that way than if you project out to infinity - more accurately reproducing some types of scope insensitivity, for instance.

Comment author: CuSithBell 17 March 2011 03:09:23AM 1 point [-]

Sorry, I think I'm going to have to bow out at this point. It still looks like you're arguing against fictitious positions (like "unbounded utility functions produce infinite utilities") and failing to deal with the explicit counterexamples provided.

Comment author: RobinHanson 21 January 2008 06:23:14PM 4 points [-]

Carl, yes, agents who care little about the future can, if so empowered, do great damage to the future.

Comment author: Peter_McCluskey2 21 January 2008 06:25:01PM 3 points [-]

I agree with much of the thrust of this post. It is very bad that the causes of discount rates (such as opportunity costs) exist. But your reaction to Carl Shulman's time travel argument leaves me wondering whether you have a coherent position.

If a Friendly AI with a nonzero discount rate would conclude that it has a chance of creating time travel, and that time travel would work in a way that would abolish opportunity costs, then I would conclude that devoting a really large fraction of available resources to creating time travel is what a genuine altruist would want. Can you clarify whether you really mean to say that an AI shouldn't devote a lot of resources toward something which would abolish opportunity costs (i.e. give everyone everything they can possibly have)?

Of course, it's not clear to me that an AI would believe it has a chance of creating time travel. And it's not clear to me that time travel would be sufficient to abolish opportunity costs, arbitrage interest rates to zero, etc. I sometimes attempt to imagine a version of time travel which would do those things, but my mind boggles before I get close to deciding whether such a version is logically consistent. The only model of time travel I understand well enough to believe it is coherent is the one proposed by David Deutsch, which does not appear powerful enough to abolish opportunity costs or arbitrage interest rates to zero. If you were thinking of this model of time travel, then please clarify why you think it says anything interesting about the existence of discount rates.

Comment author: Steven2 21 January 2008 06:26:38PM 1 point [-]

There's a typo in your math. 1/(0.95^408) = 1,226,786,652, not 12,226,786,652. But what's a factor of 10 between friends?

Comment author: steven04612 21 January 2008 07:30:41PM 9 points [-]

That wasn't me; I guess I'll post under this name from now on.

"There is a financial argument against the possibility of time travel that was published in a journal (don't have the citation offhand): if it were possible to time-travel, interest rates would be arbitraged to zero."

I'd say this is a special case of the Fermi Paradox for Time Travel. If people can reach us from the future, where are they?

Comment author: Chris_Jeffords 21 January 2008 10:31:20PM 1 point [-]

With respect to discount rates: I understand your argument(s) against the discount rate living in one's pure preferences, but what is it you offer in its stead? No discount rate at all? Should one care the same about all time periods? Isn't this a touch unfair for any single person who values internal discount rates? For global catastrophic risk management: should there be no discount rate applied for valuing and modeling purposes? Isn't this the same as modeling a 0% discount rate?

With respect to AI (excuse my naivety): It seems that if a current human created AI it would ultimately bias an AI being toward having some type of human traits or incentive mapping. Otherwise we are assuming that the "human-creators" have a knowledge base beyond the understanding of "non-creator-humans" where they could create an AI which had no ties to (or resemblance of) human wants, needs, incentives, values, etc. This seems rather implausible to me.

Without omniscient human-creators, I get the feeling that an AI would be inherently biased toward having human characteristics otherwise why wouldn't the humans creating AI try to "create" themselves in the likeness of an "ideal" AI? Furthermore, in keeping with this theme, do you think humans and an AI would have the same incentive for time travel?

Thank you for your time and consideration.

Comment author: michael_vassar3 22 January 2008 05:53:27AM 2 points [-]

Eliezer: Why would you assume that Pete's utility function, or any human's utility function is not bounded (or wouldn't be bounded if humans had utility functions)?

Comment author: Maxim_Lott 23 January 2008 05:07:05AM 0 points [-]

I think there are many serious theoretical errors in this post.

When we say that the interest rate is 5%, that means that in general people would trade $1.05 next year for $1 today. It's basically a fact that they would be willing to do that - if people's real discount rate were lower, they would lend money to the future at a lower interest rate. Eliezer finds it absurd that it's 5% more important to give clean water to a family today than tomorrow, but how is it absurd when this is what consumers are saying they want for *themselves* as well? It's revealed preference.

That stat and the Bruno one are also misleading because:

1) 5% is too high because it is the nominal interest rate, not the real one.

The first reason why the Giordano Bruno number is misleading is that most of that interest rate is due to inflation. Current inflation is around 3%... so that leaves about 2% of the interest rate that is due to default risk and the inconvenience of having the money tied up. Historically, inflation was probably much higher and the actual return on investment may have been closer to zero percent. It's fine to use the nominal interest rate if we're comparing dollars today to dollars tomorrow, but lives and clean drinking water don't inflate like dollars do (so the interest rate on lives, so to speak, should not include that factor.)
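The decomposition in this point follows the standard Fisher relation; a sketch with the rates quoted in the comment:

```python
nominal = 0.05    # the quoted 5% interest rate
inflation = 0.03  # the comment's rough current-inflation figure

# Exact Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation)
real = (1 + nominal) / (1 + inflation) - 1
print(round(real, 4))  # ~0.0194, i.e. roughly the "about 2%" in the comment
```

The familiar shortcut real ≈ nominal - inflation is just the first-order approximation of this relation.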

2) "Default risk" is huge. Looking at history in retrospect is unfair.

Looking at things from the perspective of someone in Rome in 1600, a dollar could legitimately have been worth tens of thousands of times more than a promised dollar today. Rome could have been invaded in that time, the cold war could have gone nuclear, your investment company could simply have gone bankrupt or swindled you, etc. In fact, would an investment made in Rome in 1600 still be redeemable today? Would it really survive the period in Italian history labeled on wikipedia as "Foreign domination and unification (16th to 19th c.)" and Mussolini?
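To make point 1 concrete, here is a quick arithmetic sketch (my own illustration, not from the original comment). The 5% nominal rate, the roughly 2% real rate, and the 1600-2008 span come from the discussion above; the point is just how different the compounded "exchange rate" between past and present looks under each:

```python
# Compound a nominal rate (inflation included) vs. a real rate over the
# span from Bruno's execution (1600) to the original post (2008).
YEARS = 2008 - 1600  # 408 years

nominal_factor = 1.05 ** YEARS  # ~5% nominal, as in the interest-rate stat
real_factor = 1.02 ** YEARS     # ~2% real, after stripping ~3% inflation

print(f"5% nominal over {YEARS} years: {nominal_factor:,.0f}x")
print(f"2% real over {YEARS} years:    {real_factor:,.0f}x")
```

The nominal rate compounds to a factor in the hundreds of millions, while the real rate compounds to only a few thousand - five orders of magnitude apart, which is the whole argument that quoting the nominal number for lives is misleading.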

Any thoughts?

Comment author: soreff 22 August 2010 03:43:19AM 3 points [-]

how is it absurd when this is what consumers are saying they want for themselves as well. It's revealed preference.

Very much agreed. Perhaps one component is a kind of identity drift. I'm not quite the same person I was a year ago, nor am I quite the same person that I will be a year from now. To say that $1 I get now goes strictly to me, while $1 "I" get a year from now goes 99% to the "me" I am now and 1% to something different seems like a plausible part of the temporal preference.

Comment author: tcpkac 23 January 2008 10:51:39AM 2 points [-]

The answer to 'shut up and multiply' is 'that's the way people are, deal with it'. One thing apparent from these exchanges is that 'inferential distance' works both ways.

Comment author: Tim_Freeman 29 April 2008 11:39:15AM 2 points [-]

Three points in response to Eliezer's post and one of his replies:

*** A limited time horizon works better than he says. If an AI wants to put its world into a state desired by humans, and it knows that the humans don't want to live in a galaxy that will explode in a year, then an AI that closes its books in 1000 years will make sure that the galaxy won't explode one year later.

*** An unbounded utility works worse than he says. Recall the ^^^^ operator originally by Knuth (see http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation) that was used in the Pascal's Mugging article at http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/.

If one allows unbounded utilities, then one has allowed a utility of about 3^^^^3 that has no low-entropy representation. In other words, there isn't enough matter to represent such a utility.

Humans have heads of a limited size that don't use higher math to represent their desires, so bounding the utility function doesn't limit our ability to describe human desire.

*** Ad-hominem is a fallacy. The merit of a proposed FAI solution is a function of the solution, not who proposed it or how long it took them. An essential step toward overcoming bias is to train oneself not to commit well-known fallacies. There's a good list in "The Art of Controversy" by Schopenhauer, see http://www.gutenberg.org/etext/10731.

Of course, I'm bothering to say this because I have a proposed solution out. See http://www.fungible.com/respect/paper.html.

Comment author: Cosmos 23 September 2009 04:46:10PM 4 points [-]

Interestingly enough, Schumpeter essentially makes this argument in his Theory of Economic Development. He is against the view that humans have intrinsic discount rates, an innate time preference, which was one of the Austrian school's axioms. He thinks that interest is a phenomenon of economic development - resources need to be withdrawn from their customary usage, to allow entrepreneurs to find new combinations of things, and that requires compensation. Once this alternative use of resources is available, however, it becomes an opportunity cost for all other possible actions, which is the foundation of discount rates.

Comment author: JGWeissman 23 September 2009 05:19:38PM 0 points [-]

If an agent with no intrinsic utility discounting still has effective discounting in its instrumental values because it really can achieve exponential growth in such values, would it not still be subject to the same problem of expending all resources on attempting time travel?

Comment author: pengvado 23 September 2009 08:43:11PM *  2 points [-]

An agent with no intrinsic utility discounting doesn't care whether it starts an exponential investment now, or travels to a zillion years in the past and does its investing then. Either way ends up with the same total assets after any given amount of subjective time. (This assumes you can't just chuck some money through the time machine and instantly end up with a zillion years' interest. And that you aren't using the time machine to avoid the end of the universe or some other time limit on future investment. And that you aren't doing anything else interesting with it, like building a PSPACE oracle. I don't think the original statement was about those.)

Whereas an agent with intrinsic discounting would rather live in a universe where the clock says "a zillion BC" than "2009 AD", even if the situations are otherwise identical.

Comment author: XiXiDu 16 March 2011 06:14:09PM *  1 point [-]

[...] an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel [...]

But wouldn't an AI without temporal discounting have an infinite incentive to expend all available resources on attempting to leave the universe to avoid the big freeze? It seems that discounting is a way to avoid Pascal's Mugging scenarios where expected utility can outweigh tiny probabilities. Or isn't it similar to Pascal's Mugging if an AI tries to build a time machine regardless of the possibility of success, just because the expected utility outweighs any uncertainty? It seems to me that in such cases one is being mugged by one's own expectation. I suppose that is why many people disregard mere possibilities, or logical implications, if they are not backed by other kinds of evidence than their personal "betting preferences".

Anyway, I only came across this and the Pascal's Mugging post yesterday (which describe my two biggest problems) only to find out that they are still unsolved problems. Or are they dissolved somewhere else?

XiXiDu's Mugging

SIAI guy: We need money to mitigate risks from artificial intelligence.

XiXiDu: I see, but how do you know there are risks from artificial intelligence?

SIAI guy: Years worth of disjunctive lines of reasoning!

XiXiDu: Ok, so given what we know today we'll eventually end up with superhuman AI. But we might err as we've been wrong in the past. Is it wise to decide against other risks in favor of risks from AI given all the uncertainty about the nature of intelligence and its possible time frame? Shouldn't we postpone that decision?

SIAI guy: That doesn't matter. Even given a tiny probability, the expected utility will outweigh it. If we create friendly AI we'll save a galactic civilization from not being created. So we should err on the side of caution.

Comment author: orthonormal 15 April 2011 09:26:44PM 1 point [-]

This should be two separate comments, the first of which is quite insightful, but the second of which belongs in a more relevant thread.

Comment author: timtyler 16 March 2011 08:01:10PM -2 points [-]

I'm writing this blog post just now, because I recently had a conversation with Carl Shulman, who proposed an argument against temporal discounting that is, as far as I know, novel: namely that an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel - maybe hunting for wormholes with a terminus in the past.

Probably not if it knows how hopeless that is - or if it has anything useful to be getting on with.

With discounting, time is of the essence - it is not to be wasted on idle fantasies.

Comment author: Wei_Dai 15 May 2011 10:37:25PM *  14 points [-]

And there's worse: If your temporal discounting follows any curve other than the exponential, you'll have time-inconsistent goals that force you to wage war against your future selves - preference reversals - cases where your self of 2008 will pay a dollar to ensure that your future self gets option A in 2011 rather than B in 2010; but then your future self in 2009 will pay another dollar to get B in 2010 rather than A in 2011.

Eliezer, you're making non-exponential discounting out to be worse than it actually is. "Time-inconsistent goals" just means different goals, and do not "force you to wage war against your future selves" more than my having different preferences from you forces us to war against each other. One's (non-exponential discounting) agent-moments can avoid war by conventional methods such as bargains or unilateral commitments enforced by third parties, or by more exotic methods such as application of TDT.

For your specific example, conventional game theory says that since agent_2009 moves later, backward induction implies that agent_2008 should not pay $1 since if he did, his choice would just be reversed by agent_2009. TDT-type reasoning makes this game harder to solve and seems to imply that agent_2008 might have some non-zero bargaining power, but in any case I don't think we should expect that agent_2008 and agent_2009 each end up paying $1.
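The preference reversal in Eliezer's A/B example is easy to exhibit numerically. Here is a toy sketch (my own; the utility values and discount parameters are made up for illustration), using the standard hyperbolic form value/(1 + k*delay) versus an exponential factor d^delay:

```python
# Option B: 100 utils delivered in 2010; option A: 140 utils in 2011.
# Hyperbolic discounting reverses the ranking as the viewpoint advances;
# exponential discounting cannot, since a one-year shift multiplies both
# options by the same constant d.

def hyperbolic(value, delay, k=1.0):
    return value / (1.0 + k * delay)

def exponential(value, delay, d=0.8):
    return value * d ** delay

# Viewed from 2008: A (delay 3) beats B (delay 2).
assert hyperbolic(140, 3) > hyperbolic(100, 2)  # 35.0 > 33.3

# Viewed from 2009: B (delay 1) now beats A (delay 2) - a reversal.
assert hyperbolic(100, 1) > hyperbolic(140, 2)  # 50.0 > 46.7

# Under exponential discounting the ranking is viewpoint-independent.
for viewpoint in (2008, 2009):
    a = exponential(140, 2011 - viewpoint)
    b = exponential(100, 2010 - viewpoint)
    assert a > b
```

Which agent-moment's ranking actually prevails is then the bargaining problem Wei Dai describes; the code only shows that the raw rankings conflict under the hyperbolic curve and cannot under the exponential one.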

Comment author: timtyler 18 May 2011 07:23:51PM *  0 points [-]

This is often called dynamic inconsistency.

It is not the end of the world - but it is easy enough to avoid.

Comment author: gwern 01 December 2011 12:15:25AM 0 points [-]

And of course there's the argument that "Hyperbolic discounting is rational" given that one's opportunities for return often bounce around a great deal.

Comment author: Good_Burning_Plastic 06 March 2015 10:58:01PM 0 points [-]

If you wouldn't burn alive 1,226,786,652 people today to save Giordano Bruno from the stake in 1600

Your choice of an example makes the bullet unduly easy for me to swallow. I had to pretend you had said "to save a random peasant from pneumonia in 1600" instead for my System 1 to get your point.