Related to: Exterminating life is rational.

ADDED: Standard assumptions about utility maximization and time-discounting imply that we shouldn't care about the future.  I will lay out the problem in the hope that someone can find a convincing way around it.  This is the sort of problem we should think about carefully, rather than grasping for the nearest apparent solution.  (In particular, the solutions "If you think you care about the future, then you care about the future" and "So don't use exponential time-discounting" are easily grasped, but vacuous; see the bullet points at the end.)

The math is a tedious proof that exponential time discounting trumps polynomial expansion into space.  If you already understand that, you can skip ahead to the end.  I have fixed the point raised by Dreaded_Anomaly; it doesn't change my conclusion.

Suppose that we have Planck technology such that we can utilize all our local resources optimally to maximize our utility, nearly instantaneously.

Suppose that we colonize the universe at light speed, starting from the center of our galaxy.  (We aren't actually at the center of our galaxy, but this assumption makes the computations easier and is conservative: starting from the center is more favorable to worrying about the future, since it lets us grab lots of utility quickly near our starting point.)

Suppose our galaxy is a disc, so we can consider it two-dimensional.  (The number of star systems expanded into per unit time is well-modeled in 2D, because the galaxy's thickness is small compared to its diameter.)

The Milky Way is approx. 100,000 light-years in diameter, with perhaps 100 billion stars.  These stars are denser at its center.  Suppose density changes linearly (which Wikipedia says is roughly true), from x stars/sq. light year at its center, to 0 at 50K light-years out, so that the density at radius r light-years is x(50000 − r).  We then have that the integral over r = 0 to 50K of 2πrx(50000 − r)dr equals 100 billion:

2πx(50000∫r dr − ∫r² dr) = 100 billion
x = 100 billion / [π(50000r² − 2r³/3)] evaluated at r = 50K
  = 100 billion / [π · 50000³ · (1 − 2/3)]
  = 100 billion / 130,900 billion
  = .0007639
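Here is a minimal sketch (assuming Python with SciPy is available; not part of the original derivation) that reproduces this normalization numerically:

```python
# Sketch: check the value of x numerically, using the density model x*(50000 - r) above.
from math import pi
from scipy.integrate import quad

R = 50_000      # galactic radius in light-years
N = 100e9       # total number of stars

# Total stars = x * integral over the disc of 2*pi*r*(R - r) dr
disc_integral, _ = quad(lambda r: 2 * pi * r * (R - r), 0, R)
x = N / disc_integral
print(x)        # ~0.000764, matching the 0.0007639 above
```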

We expand from the center at light speed, so our radius at time t (in years) is t light-years.  The additional area enclosed in time dt is 2πtdt, which contains 2πtx(50000-t)dt stars.

Suppose that we are optimized from the start, so that expected utility at time t is proportional to the number of stars consumed at time t.  Suppose, in a fit of wild optimism, that our resource usage is always sustainable.  (A better model would be that we completely burn out resources as we go, so utility at time t is simply proportional to the ring of colonization at time t.  This would result in worrying a lot less about the future.)  Total utility at time t is 2πx∫s(50000 − s)ds from s = 0 to t = 2πx(50000t²/2 − t³/3) ≈ 120t² − .0016t³.
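A quick sanity check of those two coefficients (a sketch, using the value of x computed above):

```python
# Sketch: verify the polynomial coefficients 120 and 0.0016 from x = 0.0007639.
from math import pi

x = 0.0007639
print(2 * pi * x * 50_000 / 2)   # coefficient of t^2: ~120
print(2 * pi * x / 3)            # coefficient of t^3: ~0.0016
```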

Our time discounting for utility is related to what we find empirically today, encoded in our rate of return on investment, under which investments roughly double in value every ten years.  Suppose that, with our Planck technology, subjective time runs Y times faster than Earth time (Y subjective years per Earth year), so our time discounting says that utility x at time t is worth utility x/2^(.1Y) at time t+1.  Thus, the utility that we, at time 0, assign to time t, with time discounting, is (120t² − .0016t³) / 2^(.1Yt).  The total utility we assign to all time from now to infinity is the integral, from t = 0 to infinity, of (120t² − .0016t³) / 2^(.1Yt).

Look at that exponential, and you see where this is going.

Let's be optimistic again, and drop the .0016t³ term, even though including it would make us worry less about the future.  [Correction due to Dreaded_Anomaly:]  Rewrite 2^(.1Yt) as (2^(.1Y))^t = e^(at), with a = .1Y·ln 2.  Integration by parts gives ∫t²e^(−at)dt = −e^(−at)(t²/a + 2t/a² + 2/a³).  Then ∫120t²/2^(.1Yt)dt = 120∫t²e^(−at)dt = −120e^(−at)(t²/a + 2t/a² + 2/a³), evaluated from t = 0 to infinity.  [End correction.]
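The antiderivative above can be checked symbolically; a minimal sketch, assuming SymPy is available:

```python
# Sketch: verify the integration-by-parts result and the value of the improper integral.
import sympy as sp

t, a = sp.symbols('t a', positive=True)
F = -sp.exp(-a * t) * (t**2 / a + 2 * t / a**2 + 2 / a**3)

# d/dt of the claimed antiderivative should recover t^2 * e^(-a*t)
print(sp.simplify(sp.diff(F, t) - t**2 * sp.exp(-a * t)))        # 0

# The discounted integral of 120 t^2 from 0 to infinity, in closed form
print(sp.integrate(120 * t**2 * sp.exp(-a * t), (t, 0, sp.oo)))  # 240/a**3
```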

For Y = 1 (no change in subjective time), integrating from t = 0 to infinity gives about 6006.  For comparison, the integral from t = 0 to 10 years is about 5805.  Everything after the first 10 years accounts for 3.3% of total utility over all time, as viewed by us in the present.  For Y = 100, the first 10 years account for all but 1.95 × 10^−27 of the total utility.

What all this math shows is that, even making all our assumptions so as to unreasonably favor getting future utility quickly and having larger amounts of utility as time goes on, time discounting plus the speed of light plus the Planck limit mean that the future does not matter to utility maximizers.  The exponential loss due to time-discounting always wins out over the polynomial gains due to expansion through space.  (Any space: even a higher-dimensional space would probably not change the results significantly.)
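That the dimension of space cannot rescue things is a standard calculus fact (not an argument made in the post itself): in d spatial dimensions the colonized volume grows like t^d, so undiscounted utility grows at most polynomially, while for any exponential discount rate a > 0

∫₀^∞ tⁿ e^(−at) dt = n! / a^(n+1) < ∞,

so the discounted total stays finite no matter how large the fixed power n is.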

Here are some ways of making the future matter:

  • Assume that subjective time will change gradually, so that each year of real time brings in more utility than the last.
  • Assume that the effectiveness at utilizing resources to maximize utility increases over time.
  • ADDED, hat tip to Carl Shulman: Suppose some loophole in physics that lets us expand exponentially, whether through space, additional universes, or downward in size.
  • ADDED: Suppose that knowledge can be gained forever at a rate that lets us increase our utility per star exponentially forever.

The first two don't work:

  • Both these processes run up against the Planck limit pretty soon.
  • However far the colonization has gone when we run up against the Planck limit, the situation at that point will be worse (from the perspective of wanting to care about the future) than starting from Earth, since the rate of gain per year in utility divided by total utility is smaller the further out you go from the galactic core.

So it seems that, if we maximize expected total utility with time discounting, we need not even consider expansion beyond our planet.  Even the inevitable extinction of all life in the Universe from being restricted to one planet scarcely matters in any rational utility calculation.

Among other things, this means we might not want to turn the Universe over to a rational expected-utility maximizer.

I know that many of you will reflexively vote this down because you don't like it.  Don't do that.  Do the math.

ADDED: This post makes it sound like not caring about the future is a bad thing.  Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.  For example, while a FAI that doesn't care about the future might neglect expansion into space, it at least won't kill 90% of the people on Earth on the grounds that they pose a threat during this precarious transition period.

ADDED: Downvoting this is saying, "This is not a problem".  And yet, most of those giving their reasons for downvoting have no arguments against the math.

  • If you do the math, and you find you don't like the outcome, that does not prove that your time-discounting is not exponential.  There are strong reasons for believing that time-discounting is exponential; whereas having a feeling that you hypothetically care about the future is not especially strong evidence that your utility function is shaped in a way that makes you care about the future, or that you will in fact act as if you cared about the future.  There are many examples where people's reactions to described scenarios do not match utility computations!  You are reading LessWrong; you should be able to come up with a half-dozen off the top of your head.  When your gut instincts disagree with your utility computations, it is usually evidence that you are being irrational, not proof that your utility computations are wrong.
  • I am fully aware that saying "we might not want to turn the Universe over to a rational expected-utility maximizer" shows I am defying my utility calculations.  I am not a fully-rational expectation maximizer.  My actions do not constitute a mathematical proof; even less, my claims in the abstract about what my actions would be.  Everybody thinks they care about the future; yet few act as if they do.
  • The consequences are large enough that it is not wise to say, "We can dispense with this issue by changing our time-discounting function".  It is possible that exponential time-discounting is right, and caring about the future is right, and that there is some subtle third factor that we have not thought of that works around this.  We should spend some time looking for this answer, rather than trying to dismiss the problem as quickly as possible.
  • Even if you conclude that this proves that we must be careful to design an AI that does not use exponential time-discounting, downvoting this topic is a way of saying, "It's okay to ignore or forget this fact even though this may lead to the destruction of all life in the universe."  Because the default assumption is that time-discounting is exponential.  If you conclude, "Okay, we need to not use an exponential function in order to not kill ourselves", you should upvote this topic for leading you to that important conclusion.
  • Saying, "Sure, a rational being might let all life in the Universe die out; but I'm going to try to bury this discussion and ignore the problem because the way you wrote it sounds whiny" is... suboptimal.
  • I care about whether this topic is voted up or down because I care (or at least think I care) about the fate of the Universe.  Each down-vote is an action that makes it more likely that we will destroy all life in the Universe, and it is legitimate and right for me to argue against it.  If you'd like to give me a karma hit because I'm an ass, consider voting down Religious Behaviourism instead.
Rationalists don't care about the future

Concise version:
If we have some maximum utility per unit space (reasonable, since there is a maximum entropy, and therefore probably a maximum information content, per unit space), and we do not break the speed of light, our maximum possible utility will expand polynomially. If we discount future utility exponentially, as the 10-year doubling time of the economy suggests we do, the merely polynomial growth gets damped exponentially and we don't care about the far future.

Big problem:
Assumes exponential discounting. However this can also be seen as a reductio of exponential discounting - we don't want to ignore what happens 50 years from now, and we exhibit many behaviors typical of caring about the far future. There's also a sound genetic basis for caring about our descendants, which implies non-exponential discounting programmed into us.

5Morendil
Alternately, as a reductio of being strict utility maximizers.

Or or or maybe it's a reductio of multiplication!

3Normal_Anomaly
As of right now, I'd rather be a utility maximizer than an exponential discounter.
7Morendil
Oh, don't worry. Your time preferences being inconsistent, you'll eventually come around to a different point of view. :)
1Manfred
That's tricky, since utility is defined as the stuff that gets maximized - and it can be extended beyond just consequentialism. What it relies on is the function-like properties of "goodness" over the relevant domain (world histories and world states being notable domains). So a reductio of utility would have to contrast the function-ish properties of utility with some really compelling non-function-ish properties of our judgement of goodness. An example would be if world state A was better than B, which was better than C, but C was better than A. This qualifies as "tricky" :P
4Morendil
Why should the objection "actual humans don't work that way" work to dismiss exponential discounting, but not work to dismiss utility maximization? Humans have no genetic basis to either maximize their utility OR discount exponentially. My (admittedly sketchy) understanding of the argument for exponential discounting is that any other function leaves you vulnerable to a money pump, IOW the only rational way for a utility maximizer to behave is to have that discount function. Is there a counter-argument?
1Manfred
Ah, that's a good point - to have a constant utility function over time, things have to look proportionately the same at any time, so if there's discounting it should be exponential. So I agree, this post is an argument against making strict utility maximizers with constant utility functions and also discounting. So I guess the options are to have either non-constant utility functions or no discounting. (random links!)
0Morendil
It seems very difficult to argue for a "flat" discount function, even if (as I can do only with some difficulty) one sees things from a utilitarian standpoint: I am not indifferent between gaining 1 utilon right away, versus gaining 1 utilon in one hundred years. Probing to see where this intuition comes from, the first answer seems to be "because I'm not at all sure I'll still be around in one hundred years". The farther in the future the consequences of a present decision, the more uncertain they are.
0Wei Dai
I guess you're referring to this post by Eliezer? If so, see the comment I just made there.
0dspeyer
Do things become any clearer if you figure that some of what looks like time-discounting is actually risk-aversion with regard to future uncertainty? Ice cream now or more ice cream tomorrow? Well tomorrow I might have a stomach bug and I know I don't now, so I'll take it now. In this case, changing the discounting as information becomes available makes perfect sense.
2Wei Dai
Yes, there's actually a literature on how exponential discounting combined with uncertainty can look like hyperbolic discounting. There are apparently two lines of thought on this: 1. There is less hyperbolic discounting than it seems. What has been observed as "irrational" hyperbolic discounting is actually just rational decision making using exponential discounting when faced with uncertainty. See Can We Really Observe Hyperbolic Discounting? 2. Evolution has baked hyperbolic discounting into us because it actually approximates optimal decision making in "typical" situations. See Uncertainty and Hyperbolic Discounting.
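The first point has a compact illustration (a sketch, assuming NumPy; the exponential prior over the unknown hazard/discount rate is an illustrative choice, not taken from either paper):

```python
# Sketch: exponential discounting with an uncertain rate looks hyperbolic.
# Each "agent" discounts exponentially with its own rate r, but r is unknown
# and drawn from an exponential prior with rate k. The average discount factor
# at delay t is then k/(k + t) -- a hyperbola, not an exponential.
import numpy as np

rng = np.random.default_rng(0)
k = 5.0                                  # prior rate for the unknown discount rate r
r = rng.exponential(scale=1.0 / k, size=200_000)

for t in [1, 5, 10, 50]:
    simulated = np.exp(-r * t).mean()    # average exponential discount factor
    hyperbolic = k / (k + t)             # closed form of the mixture
    print(t, round(simulated, 4), round(hyperbolic, 4))
```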
4sark
May I ask how the doubling time of the economy can suggest how we discount future utility?
1Manfred
People are willing to pay future money that increases exponentially in exchange for money now (stock trends bear this out, and many other sorts of investments are inherently exponential). If we make the (bad, unendorsed by me) simplification that utility is proportional to money, people are willing to pay an exponentially growing amount of future utility for current utility - that is, they discount the value of future utility.
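As a concrete illustration of the numbers being assumed here (a sketch; the 10-year doubling is the figure used in the post):

```python
# Sketch: a 10-year doubling time for investments implies roughly 7% annual
# growth, i.e. a dollar t years away is "worth" 2**(-t/10) dollars today.
annual_factor = 2 ** (1 / 10)
print(annual_factor)                 # ~1.072, about 7.2% per year
for t in [1, 10, 30, 100]:
    print(t, 2 ** (-t / 10))         # present value of $1 delivered t years out
```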
-1Morendil
Name three.
0Manfred
Okay, I thought of some. Exercise completed!
0[anonymous]
For a typical value of "we":

* We do have retirement funds, or your local country's equivalent.
* We want to educate our grandchildren, for a time when we expect to already be dead.
* We value fundamental research (i.e. give status to it) just for the possibility that something interesting for the future comes out of it, even if it may not help us.

Point being: we want to, people just fail to do so.  Because of a lack of rationality, and knowledge.  Which is the reason why LW ever came into being.  Now, you can argue about "far".  But someone with more brains than me should not find it difficult to make the same point.

Downvoted for (1) being an extraordinarily laborious way of saying "decaying exponential times modest-degree polynomial is rapidly decaying", (2) only doing the laborious calculations and not mentioning why the result was pretty obvious from the outset, (3) purporting to list ways around the problem (if it is one) and not so much as mentioning "don't discount exponentially", (4) conflating "rationalist" with "exponentially discounting expected-utility maximizer", and most of all (5) the horrible, horrible I-know-I'm-going-to-be-downvoted-for-this-and-you're-all-so-stupid sympathy-fishing.

[EDITED to fix a typo: I'd numbered my points 1,2,3,5,5. Oops.]

5luminosity
I would have downvoted it for 5 alone, if I had enough karma to.
5wedrifid
Unless the reference is obsolete we can make 4 downvotes per karma point. If so you must be really laying down the quality control. Bravo!

What information can be derived about utility functions from behavior?

(Here, "information about utility functions" may be understood in your policy-relevant sense, of "factors influencing the course of action that rational expected-utility maximization might surprisingly choose to force upon us after it was too late to decommit.")

Suppose you observe that some agents, when they are investing, take into account projected market rates of return when trading off gains and losses at different points in time. Here are two hypotheses about the utility functions of those agents.

Hypothesis 1: These agents happened to already have a utility function whose temporal discounting was to match what the market rate of return would be. This is to say: The utility function already assigned particular intrinsic values to hypothetical events in which assets were gained or lost at different times. The ratios between these intrinsic values were already equal to what the appropriate exponential of the integrated market rate of return would later turn out to be.

Hypothesis 2: These agents have a utility function in which assets gained or lost in the near term are valued because of an in... (read more)

-2PhilGoetz
Any temporal discounting other than temporal is provably inconsistent, a point Eliezer makes in his post against temporal discounting. Exponential temporal discounting is the default assumption. My post works with the default assumption. Arguing that you can use an alternate method of discounting would require a second post. When you have a solution that is provably the only self-consistent solution, it's a drastic measure to say, "I will simply override that with my preferences. I will value irrationality. And I will build a FAI that I am entrusting with the future of the universe, and teach it to be irrational." It's not off the table. But it needs a lot of justification. I'm glad the post has triggered discussion of possible other methods of temporal discounting. But only if it leads to a serious discussion of it, not if it just causes people to say, "Oh, we can get around this problem with a non-exponential discounting", without realizing all the problems that entails.

Any temporal discounting other than temporal is provably inconsistent

The conditions of the proof are applicable only to reinforcement agents which, as a matter of architecture, are forced to integrate anticipated rewards using a fixed weighting function whose time axis is constantly reindexed to be relative to the present. If we could self-modify to relax that architectural constraint -- perhaps weighting according to some fixed less temporally indexical schedule, or valuing something other than weighted integrals of reward -- would you nonetheless hold that rational consistency would require us to continue to engage in exponential temporal discounting? Whether or not the architectural constraint had previously been a matter of choice? (And who would be the "us" who would thus be required by rational consistency, so that we could extract a normative discount rate from them? Different aspects of a person or civilization exhibit discount functions with different timescales, and our discount functions and architectural constraints can themselves partially be traced to decision-like evolutionary and ecological phenomena in the biosphere, whose "reasoning" we may ... (read more)

0timtyler
To recap, the idea is that it is the self-similarity property of exponential functions that produces this result - and the exponential function is the only non-linear function with that property. All other forms of discounting allow for the possibility of preference reversals with the mere passage of time - as discussed here. This idea has nothing to do with reinforcement learning.
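The preference-reversal point is easy to make concrete (a sketch; the rewards, dates, and discount parameters are invented for illustration):

```python
# Sketch: hyperbolic discounting produces preference reversals with the mere
# passage of time; exponential discounting does not.
def hyperbolic(delay, k=0.1):
    return 1.0 / (1.0 + k * delay)

def exponential(delay, rate=0.02):
    return (1.0 - rate) ** delay

# Option A: 10 utilons at day 100.  Option B: 15 utilons at day 110.
for now in (0, 99):
    for discount in (hyperbolic, exponential):
        value_a = 10 * discount(100 - now)
        value_b = 15 * discount(110 - now)
        print(now, discount.__name__, "prefers A" if value_a > value_b else "prefers B")
```

Under the hyperbolic curve the agent prefers the larger, later reward from far away but switches to the smaller, sooner one as it approaches; under the exponential curve the ranking never changes.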
3CarlShulman
People are in fact inconsistent, and would like to bind their future selves and future generations. Folk care more about themselves than future generations, but don't care much more about people 100 generations out than 101 generations out. If current people could, they would commit to a policy that favored the current generation, but was much more long-term focused thereafter.
0timtyler
You meant to say: "other than exponential".

You keep constructing scenarios whose intent, as far as I can tell, is to let you argue that in those scenarios any currently imaginable non-human system would be incapable of choosing a correct or defensible course of action. By comparison, however, you must also be arguing that some human system in each of those scenarios would be capable of choosing a correct or defensible course of action. How?

And: Suppose you knew that someone was trying to understand the answer to this question, and create the field of "Artificial Ability to Choose Correct and Defensible Courses of Action The Way Humans Apparently Can". What kinds of descriptions do you think they might give of the engineering problem at the center of their field of study, of their criteria for distinguishing between good and bad ways of thinking about the problem, and of their level of commitment to any given way in which they've been trying to think about the problem? Do those descriptions differ from Eliezer's descriptions regarding "Friendly AI" or "CEV"?

You seem to be frustrated about some argument(s) and conclusions that you think should be obvious to other people. The above is an explanation of how some conclusions that seem obvious to you could seem not obvious to me. Is this explanation compatible with your initial model of my awareness of arguments' obviousnesses?

Rational expected-utility-maximizing agents get to care about whatever the hell they want. Downvoted.

0wedrifid
Most inspirational philosophical quote I've seen in a long time! Up there as a motivational quote too.
-1PhilGoetz
If an agent explicitly says, "My values are such that I care more about the state of the universe a thousand years from now than the state of the universe tomorrow", I have no firm basis for saying that's not rational. So, yes, I can construct a "rational" agent for which the concern in this post does not apply. If I am determined simply to be perverse, that is, rather than to be concerned with preventing the destruction of the universe by the sort of agents anyone is likely to actually construct.

An agent like that doesn't have a time-discounting function. It only makes sense to talk about a time-discounting function when your agent - like every single rational expectation-maximizing agent ever discussed, AFAIK, anywhere, ever, except in the above comment - has a utility function that evaluates states of the world at a given moment, and whose utility function for possible timelines specifies some function (possibly a constant function) describing its level of concern for the world state as a function of time. When your agent is like that, it runs into the problem described in this post. And, if you are staying within the framework of temporal discounting, you have only a few choices:

* Don't care about the future. Eventually, accidentally destroy all life, or fail to preserve it from black swans.
* Use hyperbolic discounting, or some other irrational discounting scheme, even though this may be like adding a contradiction into a system that uses resolution. (I think the problems with hyperbolic discounting may go beyond its irrationality, but that would take another post.)
* Use a constant function weighting points in time (don't use temporal discounting). Probably end up killing lots of humans.

If you downvoted the topic as unimportant because rational expectation-maximizers can take any attitude towards time-discounting they want, why did you write a post about how they should do time-discounting?
2PhilGoetz
BTW, genes are an example of an agent that arguably has a reversed time-discounting function. Genes "care" about their eventual, "equilibrium" level in the population. This is a tricky example, though, because genes only "care" about the future retrospectively; the more-numerous genes that "didn't care", disappeared. But the body as a whole can be seen as maximizing the proportion of the population that will contain its genes in the distant future. (Believing this is relevant to theories of aging that attempt to explain the Gompertz curve.)
0timtyler
Kinda - but genes are not, in practice, looking a million years ahead - they are lucky if they can see or influence two generations ahead - so instrumental discounting applies here too.

Both Eliezer and Robin Hanson have argued strongly against time discounting of utility.

EDIT: I'm partially wrong, Hanson is in favour. Sorry and thanks for the correction.

Roko once argued to me that if we are to discount the future, we should use our true discounting function: a hyperbolic function. Because even if that's inherently irrational, it's still what we want. This would also not display the behaviour you discuss here.

Both Eliezer and Robin Hanson have argued strongly against time discounting of utility.

Not Robin Hanson AFAIK - see his: For Discount Rates. Here's YuEl's Against Discount Rates.

3PhilGoetz
I'm bothered by saying, "even if that's inherently irrational, it's still what we want." Do you really want to deliberately make your AI irrational? Seems that the subject merits more discussion before committing to that step.
2Rain
Assigning value to things is an arational process.
1Will_Newsome
I think that is a dangerous anti-epistemic meme. (Especially since calling the process of determining our values "assigning value to things" is misleading, though I get what you mean.) You use the art of rationality to determine what you value, and you use the art of rationality to determine how you should reflect on or change the process of determining what you value. Instrumental and epistemic rationality do not decouple nearly so easily.
3Vladimir_Nesov
Probably nothing short of a good post focused on this single idea will change minds.
-1Rain
I didn't even realize it was controversial. Evolution created our core values > evolution is arational > our core values are arational.
4wedrifid
I don't disagree with the conclusion but the reasoning does not follow.
4Vladimir_Nesov
If one can make mistakes in deciding what to value, what goals to set (in explicit reasoning or otherwise), then there is a place for pointing out that pursuing certain goals is an error (for so and so reasons), and a place for training to not make such errors and to perceive the reasons that point out why some goal is right or wrong. Also, if the goals set by evolution should indeed be seen as arbitrary on reflection, you should ignore them. But some of them are not (while others are).
2Rain
As I've mentioned before, I hated the 'arbitrary' article, and most of the meta-ethics sequence. Value is arational, and nobody's provided a coherent defense otherwise. You're not discovering "rational value", you're discarding irrational instrumental values in a quest to achieve or discover arational terminal values. Heh. And after looking up that link, I see it was you I was arguing with on this very same topic back then as well. Around and around we go...
-2Will_Newsome
This idea of "rational value" you think is incoherent is perhaps a straw-man. Let's instead say that some people think that those methods you are using to discard instrumental values as irrational or find/endorse arational terminal values, might be generalized beyond what is obvious, might assume mistaken things, or might be an approximation of rules that are more explicitly justifiable. For example, I think a lot of people use a simple line of reasoning like "okay, genetic evolution led me to like certain things, and memetic evolution led me to like other things, and maybe quirks of events that happened to me during development led me to like other things, and some of these intuitively seem more justified, or upon introspecting on them they feel more justified, or seem from the outside as if there would be more selection pressure for their existence so that probably means they're the real values, ..." and then basically stop thinking, or stop examining the intuitions they're using to do that kind of thinking, or continue thinking but remain very confident in their thinking despite all of the known cognitive biases that make such thinking rather difficult. Interestingly very few people ponder ontology of agency, or timeless control, or the complex relationship between disposition and justification, or spirituality and transpersonal psychology; and among the people who do ponder these things it seems to me that very few stop and think "wait, maybe I am more confused about morality than I had thought". It seems rather unlikely to me that this is because humans have reached diminishing marginal returns in the field of meta-ethics.
0Rain
My "straw-man" does appear to have defenders, though we seem to agree you aren't one of them. I've admitted great confusion regarding ethics, morality, and meta-ethics, and I agree that rationality is one of the most powerful tools we have to dissect and analyze it.
0Friendly-HI
What other valid tools for dissecting and analyzing morality are there again? I'm not facetiously nit-picking, just wondering about your answer if there is one.
0Rain
Before rationality can be applied, there has to be something there to say 'pick rationality'. Some other options might include intuition, astrology, life wisdom, or random walk. You required a very narrow subset of possibilities ("valid tools for analyzing and dissecting"), so I'm sure the above options aren't included in what you would expect; it seems to me that you've got an answer already and are looking for a superset.
0Friendly-HI
Thanks for your reply. Reading the sentence "rationality is one of the most powerful tools we have to dissect and analyze [morality]" seemed to imply that you thought there were other "equally powerful" (powerful = reliably working) tools to arrive at true conclusions about morality. As far as I'm concerned rationality is the whole superset, so I was curious about your take on it. And yes, your above options are surely not included in what I would consider to be "powerful tools to arrive at true conclusions". Ultimately I think we don't actually disagree about anything - just another "but does it really make a sound" pitfall.
0Will_Newsome
To some extent I am one such defender in the sense that I probably expect there to be a lot more of something like rationality to our values than you do. I was just saying that it's not necessary for that to be the case. Either way the important thing is that values are in the territory where you can use rationality on them.
0Vladimir_Nesov
For reference, this point was discussed in this post:
0Rain
The point at which I think rationality enters our values is when those values are self-modifying, at which point you must provide a function for updating. Perhaps we only differ on the percentage we believe to be self-modifying.
3Richard_Kennaway
Evolution created our rationality > evolution is arational > our rationality is arational. Genetic fallacy.
0Rain
Yeah, I should really stop linking to anything written by Eliezer. Putting it in my own words invariably leads to much better communication, and everyone is quite content to tear it apart should I misunderstand the slightest nuance of "established material".
2Richard_Kennaway
What does the link have to do with it? There just isn't any way to get from the two premises to the conclusion.
2Rain
The link gave me a reason to think I had explained myself, when I obviously hadn't included enough material to form a coherent comment. I know that what I'm thinking feels to be correct, and people do seem to agree with the core result, but I do not have the words to attempt to explain my thinking to you and correct it just now.
1Wei Dai
Why do you use the phrase "art of rationality", as opposed to say, "philosophy"? Can you suggest a process for determine what you value, and show how it is related to things that are more typically associated with the word "rationality", such as Bayesian updating and expected utility? Or is "art of rationality" meant to pretty much cover all of philosophy or at least "good" philosophy?
1Vladimir_Nesov
Primarily training of intuition to avoid known failure modes, implicit influence on the process of arriving at judgments, as compared to explicit procedures for pre- or post-processing interactions with it.
0Will_Newsome
I haven't found any system of thought besides LW-style rationality that would be sufficient to even start thinking about your values, and even LW-style rationality isn't enough. More concretely, very few people know about illusion of introspection, evolutionary psychology, verbal overshadowing, the 'thermodynamics of cognition', revealed preference (and how 'revealed' doesn't mean 'actual'), cognitive biases, and in general that fundamental truth that you can't believe everything you think. And more importantly, the practical and ingrained knowledge that things like those are always sitting there waiting to trip you up, if you don't unpack your intuitions and think carefully about them. Of course I can't suggest a process for determining what you value (or what you 'should' value) since that's like the problem of the human condition, but I know that each one of those things I listed would most likely have to be accounted for in such a process. Hm... the way you say it makes me want to say "no, that would be silly and arrogant, of course I don't think that", but ya know I spent a fair amount of time using 'philosophy' before I came across Less Wrong, and it turns out philosophy, unlike rationality, just isn't useful for answering the questions I care about. So, yeah, I'll bite that bullet. The "art of rationality" covers "good" philosophy, since most philosophy sucks and what doesn't suck has been absorbed. But that isn't to say that LW-style philosophy hasn't added a huge amount of content that makes the other stuff look weak by comparison. (I should say, it's not like something like LW-style rationality didn't exist before; you, for instance, managed to find and make progress on interesting and important questions long before there were 'sequences'. I'm not saying LW invented thinking. It's just that the magic that people utilized to do better than traditional rationality was never really put down in a single place, as far as I know.)
0Wei Dai
I don't disagree with what you write here, but I think if you say something like "You use the art of rationality to determine what you value" you'll raise the expectation that there is already an art of rationality that can be used to determine what someone values, and then people will be disappointed when they look closer and find out that's not the case.
0Will_Newsome
Ah, I see your point. So the less misleading thing to say might be something roughly like: "We don't yet know how to find or reason about our values, but we have notions of where we might start, and we can expect that whatever methods do end up making headway are going to have to be non-stupid in at least as many ways as our existing methods of solving hard problems are non-stupid."
0Thomas
You can't go all the way down the turtles, firmly resting each one on the one below. Adopting axioms is always a somewhat arbitrary thing. There are no axioms deeper than the deepest ones - and how they are set is always quite arational.
-1Rain
A process of discovery which uncovers things which came from where? A process of change which determines the direction things should go based on what?
0Will_Newsome
You misunderstand, I'm not saying your values necessarily have to be 'rational' in some deep sense. I'm just saying that in order to figure out what your values might be, how they are related to each other, and what that means, you have to use something like rationality. I would also posit that in order to figure out what rocks are, how they are related to each other, and what that means, you have to use something like rationality. That obviously doesn't mean that values or rocks are 'rational', but it might mean you can notice interesting things about them you wouldn't have otherwise.
0Rain
I agree with this statement. I'm sorry to have continued the argument when I was apparently unclear.
0Paul Crowley
I agree, but I think there's at least a case to be made that we take that step when we decide we're going to discount the future at all.
3PhilGoetz
I don't think it makes sense to not discount utility; or, at least, it is moving from agent utilities to some kind of eternal God's utility. Hard to see how to relate that to our decisions. Using a different function seems more promising. Why does Roko say our true discounting function is hyperbolic?
6timtyler
Some of the evidence that that is true is summarised here.
4Will_Newsome
I think it's an SIAI meme caused by excitement about George Ainslie's work in psychology, briefly summarized here: http://en.wikipedia.org/wiki/George_Ainslie_(psychologist) . I'm not sure if Roko picked it up from there though. There is some debate over whether human discount rates are in fact generally hyperbolic, though Ainslie's book Breakdown of Will is pretty theoretically aesthetic; worth checking out in any case, both for potential insight into human psychology and firepower for practical self-enhancement plans. ETA: Apparently ciphergoth made a post about it: http://lesswrong.com/lw/6c/akrasia_hyperbolic_discounting_and_picoeconomics/
0[anonymous]
At all? Clearly, if the market rate of return were different, the wealthiest agents would tend to discount investments and payoffs according to that rate of return, and not by accident. Some kind of relating has to be happening there. This would also be true if it were common knowledge that projected market returns followed a non-exponential schedule. (E.g. temporarily subexponential for a previously unforeseen resource crunch, or temporarily superexponential for a previously unforeseen technology improvement feedback loop). The more general problem is that our present concept of "utility" affords no direct way to argue an observable difference between an instrumental and intrinsic value, if there is no available condition that would make an observable difference in the results of the hypothetical instrumental value calculation. So you take the "intrinsic" branch, and assume a hypothesis about rational utility that extrapolates rigidly from the present; it attributes to timeless "rational utility" the intrinsic relative value ratios associated with infinite-term perfect exponential discounting. And I take something more like the "instrumental" branch, yielding a hypothesis that "rational utility" would contain only the instrumental value ratios associated with near-term exponential discounting. Because of how we reason about utility, the two hypotheses about utility have equal "preference likelihood" on present surface-level behavior, and so we have to argue about priors. There is probably something wrong with how we reason about utility. Is that the motivation for posts like this one? Are you trying to reductio what you see as the only possible way of reasoning about rational utility, or trying to reductio what you see as the only possible way of making something formal out of local informal discourse about utility maximization, and trying to force people to move on? What would a discourse look like that was about the theory of judgment and desirable actions
0Normal_Anomaly
According to Against Discount Rates, it does make you vulnerable to a Dutch Book.
7Paul Crowley
Damn, can't find the cartoon now that says "Pffft, I'll let Future Me deal with it. That guy's a dick!"
1Vaniver
I think you are referring to this comic which was referred to in this comment which I found by googling your quote without quotes.
0Paul Crowley
Thanks - though the lmgtfy link was unnecessarily rude, I did try various Google combinations without success.
1magfrump
Perhaps this?
0XiXiDu
I don't understand, what are their arguments?

* Isn't time discounting mainly a result of risk aversion? What is wrong with being risk averse?
* If an agent's utility function does place more weight on a payoff that is nearer in time, should that agent alter its utility function? Rationality is a set of heuristics used to satisfy one's utility function. What heuristics are applicable when altering one's own utility function?
* The expected utility of the future can grow much faster than its prior probability shrinks. Without any time preferences, at what point does a rational agent stop its exploration and start the exploitation to actually "consume" utility?
3Kaj_Sotala
Eliezer's arguments here.
0timtyler
I think it is better to think of temporal discounting and risk aversion as orthogonal. Exploration vs exploitation is based on what the utility function says.

Downvoted, because your math is wrong.

(2^100)^t = exp(t·ln(2^100)), so the factor you call 'c' is not a constant multiplier for the integral; in fact, that combination of constants doesn't even show up. The (approximated) integral is actually b∫t²·exp(−at)dt, where a = 100·ln(2) and b = 120. Evaluating this from 0 to T produces the expression: (2b/a³)·(1 − exp(−aT)·(1 + aT + ½(aT)²)).

These factors of exp(-aT) show up when evaluating the integral to T<∞. (Obviously, when T → ∞, the integral converges to (2b/a³).) For a ~ O(5) or higher, then, the entire total utility is found in 10 years, within double precision. That corresponds to a ≈ 7*ln(2). I think this indicates that the model may not be a good approximation of reality. Also, for slower subjective time (a < ln(2) ≈ 0.693), the percentage of total utility found in 10 years drops. For a = 0.1*ln(2), it's only 3.33%.
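The fraction in that expression is easy to reproduce numerically (a sketch, assuming SciPy; a = 0.1·ln 2 and a = 10·ln 2 correspond to the post's Y = 1 and Y = 100 cases):

```python
# Sketch: fraction of the total (discounted) utility that arrives by year T,
# using the closed form (1 - exp(-a*T)*(1 + a*T + 0.5*(a*T)**2)) and checking
# it against direct numerical integration of 120*t^2*exp(-a*t).
from math import exp, log
from scipy.integrate import quad

def fraction_by(T, a):
    return 1 - exp(-a * T) * (1 + a * T + 0.5 * (a * T) ** 2)

for a in (0.1 * log(2), 10 * log(2)):
    total = 240 / a**3                   # closed form of the full integral to infinity
    first10, _ = quad(lambda t: 120 * t**2 * exp(-a * t), 0, 10)
    print(round(a, 3), round(first10 / total, 6), round(fraction_by(10, a), 6))
    # ~0.033 for a = 0.1*ln2, ~1.0 for a = 10*ln2, as described above
```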

Also, you defined either 'x' or your linear density function incorrectly. If you want x to be stars/ly^2, the density function should be ρ = x(1 - r/50000). If you do all of the calculations symbolically and don't plug in values until the end, the equation for total utility as a function of time (before discountin... (read more)

1PhilGoetz
You are correct! However, you note yourself that "For a ~ O(5) or higher, then, the entire total utility is found in 10 years, within double precision." So the result does depend on subjective time, which is what I had expected. This is important - but it still doesn't change the conclusion, that rational expected utility-maximizers operating in this framework don't care about the future. I'm grateful to you for finding the flaw in my math, and am upvoting you on that basis. But I don't think you should say, "I found an error, therefore I downvote your whole post" instead of "Here is a correction to your work on this interesting and important topic; your conclusion is now only valid under these conditions." A general comment to everybody: Most of you are in the habit of downvoting a post if you find a single flaw in it. This is stupid. You should upvote a post if it illuminates an important topic or makes you realize something important. Einstein didn't say, "Newton's law is inaccurate. Downvoted." Voting that way discourages people from ever posting on difficult topics. No; x is the density as a function of r, and it varies linearly from a maximum at r=0, to zero at r=50,000. The way I wrote it is the only possible function satisfying that.
3Dreaded_Anomaly
I downvoted not simply because there was a math error, but because of your aggressive comments about downvoting this topic without having arguments against your math, which was in fact faulty. When you spend a significant portion of the post chastising people about downvoting, you should take a little more time to make sure your arguments are as ironclad as you think they are. The point is not to discourage people from posting on difficult topics; it's to discourage unwarranted arrogance. You define x as having units of stars/ly^2. Because you're approximating the galaxy in two dimensions, the general density function should also have units of stars/ly^2. You wrote this density function: x(50000-r), which has units of stars/ly. I wrote the density function ρ = x(1 - r/50000) which has the correct units and fits the boundary conditions you described: ρ(r=0) = x, and ρ(r=50000) = 0. In your post, you end up redefining x when you solve for it from your density function, so it does not affect the final result. However, in a model that's supposed to be physically motivated, this is sloppy.

Among other things, this means we might not want to turn the Universe over to a rational expected-utility maximizer.

So this is just a really long way of saying that your utility function doesn't actually include temporal discounting.

I think this post (Evolution and irrationality) is interesting but don't know what to make of it due to a lack of general expertise:

Sozou’s idea is that uncertainty as to the nature of any underlying hazards can explain time inconsistent preferences. Suppose there is a hazard that may prevent the pay-off from being realised. This would provide a basis (beyond impatience) for discounting a pay-off in the future. But suppose further that you do not know what the specific probability of that hazard being realised is (although you know the probability dis

... (read more)
1nazgulnarsil
this was my initial reaction to the OP, stated more rigorously. Our risk assessment seems to be hardwired into several of our heuristics. Those risk assessments are no longer appropriate because our environment has become much less dangerous.
0sark
It seems to me that utility functions are not only equivalent up to affine transformations. Both utility functions and subjective probability distributions seem to take some relevant real-world factor into account. And it seems you can move these representations between your utility function and your probability distribution while still giving exactly the same choices over all possible decisions. In the case of discounting, you could for example represent uncertainty in a time-discounted utility function, or you could do it with your probability distribution. You could even throw away your probability distribution and have your utility function take into account all subjective uncertainty. At least I think that's possible. Have there been any formal analyses of this idea?
3Oscar_Cunningham
There's this post by Vladimir Nesov.

Utility functions are calculated from your preferences, not vice-versa. (To a first approximation.)

This would explain the Fermi paradox. Would.

Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.

Indeed! I am still waiting for this problem to be tackled. At what point is an expected utility maximizer (without time preferences) going to satisfy its utility function, or is the whole purpose of expected utility maximization to maximize expected utility rather than actual utility?

People here talk about the possibility of a positive Singularity as if it was some sort of payoff. I don't see that. If you think it ... (read more)

4Perplexed
This is the problem of balance. It is easy enough to solve, if you are willing to discard some locally cherished assumptions.

First, discard the assumption that every agent ought to follow the same utility function (assumed because it seems to be required by universalist, consequentialist approaches to ethics). Second, discard the assumption that decision making is to be done by a unified (singleton) agent which seeks to maximize expected utility.

Replace the first with the more realistic and standard assumption that we are dealing with a population of interacting egoistic agents, each with its own personal utility function - a population whose agent membership changes over time with agent births (commissionings) and deaths (decommissionings). Replace the second with the assumption that collective action is described by something like a Nash bargaining solution - that is, it cannot be described by just a composite utility function. You need a multi-dimensional composite utility (to designate the Pareto frontier) and "fairness" constraints (to pick out the solution point on the Pareto surface).

Simple example (to illustrate how one kind of balance is achieved): Alice prefers the arts to the outdoors; Bob is a conservationist. Left to herself, rational Alice would donate all of her charity budget to the municipal ballet company; Bob would donate to the Audubon Society. Bob and Alice marry. How do they make joint charitable contributions? Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.

More pertinent example: generation X is in a society with generation Y and (expected, not-yet-born) generation Z. GenX has the power to preserve some object which will be very important to GenZ. But it has very little direct incentive to undertake the preservation, because it discounts the future. However, GenZ has some
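A toy numerical version of the Alice-and-Bob example above (a sketch; the square-root utilities are invented purely for illustration):

```python
# Sketch: a Nash bargaining solution for the joint-donation example. Each
# agent has its own utility over how a fixed charity budget is split; the
# bargaining solution maximizes the product of gains over the no-deal point.
import numpy as np

splits = np.linspace(0, 1, 1001)        # fraction of the budget going to ballet

# Invented concave utilities: diminishing returns for each agent's cause.
alice = np.sqrt(splits)                 # Alice cares about the ballet share
bob = np.sqrt(1 - splits)               # Bob cares about the Audubon share

# Disagreement point: no joint donation at all.
d_alice, d_bob = 0.0, 0.0

nash_product = (alice - d_alice) * (bob - d_bob)
best = splits[np.argmax(nash_product)]
print(best)                             # 0.5 -- an even split, balancing the two interests
```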
0XiXiDu
Nicely put, very interesting. What about Aumann's agreement theorem? Doesn't this assume that contributions to a charity are based upon genuinely subjective considerations that are only "right" from the inside perspective of certain algorithms? Not to say that I disagree. Also, if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how much of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?
2Perplexed
Bob comes to agree that Alice likes ballet - likes it a lot. Alice comes to agree that Bob prefers nature to art. They don't come to agree that art is better than nature, nor that nature is better than art. Because neither is true! "Better than" is a three-place predicate (taking an agent id as an argument). And the two agree on the propositions Better(Alice, ballet, Audubon) and Better(Bob, Audubon, ballet). Assume that individual humans are compounds? That is not what I am suggesting in the above comment. I'm talking about real compound agents created either by bargaining among humans or by FAI engineers. But the notion that the well-known less-than-perfect rationality of real humans might be usefully modeled by assuming they have a bunch of competing and collaborating agents within their heads is an interesting one which has not escaped my attention. And, if pressed, I can even provide an evolutionary psychology just-so-story explaining why natural selection might prefer to place multiple agents into a single head.
3steven0461
Would you accept "at some currently unknown point" as an answer? Or is the issue that you think enjoyment of life will be put off infinitely? But whatever the right way to deal with possible infinities is (if such a way is needed), that policy is obviously irrational.
1nazgulnarsil
your risk of dying function determines the frontier between units devoted to hedonism and units devoted to continuation of experience.
0Perplexed
Ok, but which side of the frontier is which? I have seen people argue that we discount the future since we fear dying, and therefore are devoted to instant hedonism. But if there were no reason to fear death, we would be willing to delay gratification and look to the glorious future.
0timtyler
It doesn't seem to be much of a problem to me - because of instrumental discounting.
0loqi
Enjoying life and securing the future are not mutually exclusive.
2Document
Optimizing for enjoyment of life and optimizing for security of the future superficially are mutually exclusive, if resources are finite and fungible between the two goals.
0loqi
Agreed. I don't see significant fungibility here.
0[anonymous]
Downvoted for being simple disagreement.
-1benelliott
Why not try tackling it yourself?

After spending some time thinking about the result from the correct math, here are my conclusions:

You claimed that the percentage of total utility attained in the first 10 years was independent of the level of time discounting. This is clearly not the case, as the percentage of total utility attained in the first T years with time discounting factor a is given by (1 - exp(-aT)*(1 + aT + ½(aT)²)). The expression -exp(-aT)*(1 + aT + ½(aT)²) (the difference between the previous expression and 1) goes to zero within double precision when the combined factor a... (read more)

0PhilGoetz
I agree with the math; I disagree that my time-discounting constant is arbitrary. I take my empirical doubling time from the average returns on investment in the Western world; that is the correct time-discounting to use in our current environment, as computed by the collective intelligence of all the investors on Earth. Anticipating that human-level software will eventually operate at 1000 times the speed of a human is conservative; I do not believe it is necessary to make any arguments to defend that figure. If I said 1 billion instead of 1000, I might be on shaky ground. Also, note that with the new improved math, if I say there is no difference in subjective time, I still get 97% of my utility in 10 years. If I say there is a speedup of 100, I get all but 2 × 10^−27 of it in 10 years. This is worse than before! (I'm upvoting this comment because it enlightened me, even though I take issue with part of it.)
2timtyler
I already observed: I do not think that interest rates are really a reflection of human temporal discounting. Why would anyone think that they were?
0Dreaded_Anomaly
The revised math shows that the percentage of total utility within T years depends on the level of time discounting. Because your conclusion comes directly from that result, I think it's important to spend some time motivating your chosen level of time discounting. For a = 0.1*ln(2), the value of the integral from t=0..10 is ~24008. The value of the integral from t=0..∞ is ~720667. There is an order of magnitude difference between those two values. 97% of the utility comes after the first 10 years if there's no difference in subjective time.
0PhilGoetz
Yes, I agree, and I just did. We must be evaluating different integrals. I wrote my calculations up in the main post. I'm evaluating -120e^(-at)(t^2/a + 2t/a^2 + 2/a^3) from t=0 to whatever, where a=.1ln2. For t=0..10 this is 5805; for t=0..infinity it is 6006. What are you evaluating? You know that with a halving time of 10 years, if you evaluate the function once every 10 years, half of the total utility would come at 10 years; so the 97% after 10 years figure doesn't pass the sanity check.
0Dreaded_Anomaly
I just plugged your expression directly into Matlab, in case there was a typo in the form that I was using, and I get the same result that I was getting before. I agree with your calculation for Y=100, though. Edit: Wolfram Alpha's results for t=0..10 and t=0..∞.
0PhilGoetz
Neat! I enter it differently, but still get the same result. It seems either my math is wrong, or Wolfram alpha is wrong. Since Wolfram Alpha agrees with me for Y=100 while disagreeing with me for Y=1, I think my math is probably right, and something funny happens with Wolfram Alpha for the Y=1 case. But I'm not going to take the time to figure it out for a post with 3 votes. This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.

This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.

For the record, I dispute your causal model of the audience's response.

In particular, I dispute your model of the audience's moral reasoning as to what is inevitably being approved of or disapproved of by expressions of approval or disapproval of your actions relating to the post.

I also dispute your model of the audience's factual and moral reasoning about the gravity of the problem you suggest. I dispute specifically your model of the audience's process of choosing to suppose that non-exponential weighting functions could be considered sufficiently indicative of potential solutions as to justify relative unconcern. (This is because I dispute your model of the utility function structures initially familiar to the audience. As part of this, I dispute your model of their descriptions of discounting functions, according to which it apparently would be impossible for them to intend to refer to a function which was to be applied on a prespecified absolute timescale, without being translated to start at an agent's present time. If that was not your model, then I dispute your confusing apparent claim that such ... (read more)

2[anonymous]
This is a critical topic, but not as critical as how much karma you get on LessWrong? Please care about karma less.
1Dreaded_Anomaly
I get my result with Matlab, Wolfram Alpha/Mathematica, Maple, Google calculator, and my TI-84+ graphing calculator. The more likely conclusion is that your math is off for the Y=1 case. I think you have neglected the presentation of the topic as a confounding variable in that analysis.
0Thomas
Where could your mistake be? If it is nowhere to be seen, it is possible that there isn't one. In that case, it is quite a crisis here.

I feel like my concern for the well-being of people I don't know does not change at all with time, but my concern for people I do know is discounted, and for myself, I discount more heavily. This seems to imply that we do not discount with increasing time but instead with decreasing association. As in, we care more about minds more similar to our own, or with whom we interact more, and our own minds become more and more different the farther into the future we look.

I second Manfred and gjm's comments.

One additional point regarding subjective time. You say:

Strange but true. (If subjective time is slower, the fact that t=20 matters more to us is balanced out by the fact that t=2 and t=.2 also matter more to us.)

But even if I temporally discount by my subjective sense of time, if I can halt subjective time (e.g. by going into digital or cryonic storage) then the thing to do on your analysis is to freeze up as long as possible while the colonization wave proceeds (via other agents, e.g. Von Neumann probes or the res... (read more)

0steven0461
Is there a better word for what you call "fanaticism"? Too many connotations.

Didn't like this post much either (sorry!). Yes, if you assume a substantial level of temporal discounting, that makes the future matter less. If you don't like that conclusion, perhaps do not apply so much temporal discounting.

The dense maths hinders the reader here. I don't really approve of the dissing of expected utility maximizers at the end either.

2[anonymous]
"The dense maths hinders the reader here." This is an argument against the reader, not the post. Anyone interested in these matters should be able to handle basic calculus, or else should withhold voting on such matters.
5Vaniver
I would agree, if the post treated the reader that way. When you multiply a polynomial by a decaying exponential, the exponential wins. That's all the author needed to get to his point; instead we have dense paragraphs poorly explaining why that is the case.
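A toy illustration of that point, using the post's 120t² growth against a discount that halves every 10 years (a sketch, not the post's full expression):

```python
# Sketch: a polynomial times a decaying exponential eventually goes to zero.
# Here, 120 t^2 growth against a discount factor that halves every 10 years.
import math

for t in (10, 100, 1000):
    print(t, 120 * t**2 * 2 ** (-0.1 * t))
# t=10:   6000.0
# t=100:  ~1172      (the polynomial is still holding its own)
# t=1000: ~9.5e-23   (the exponential has long since won)
```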
-1[anonymous]
"Dense paragraphs" and poor/unclear wording is not the same thing as "dense maths". So I disagree with timtyler's point, but not with yours. Presumably, even if the exposition were phrased more clearly, timtyler would still have a problem with the "dense maths".

Could someone please explain any possible justification for exponential discounting in this situation? I asked earlier, but got voted below the threshold. If this is a sign of disagreement, then I would like to understand why there is disagreement.

Robin Hanson's argument for exponential discounting derives from an exponential interest rate. Our current understanding of physics implies there won't be an exponential interest rate forever (in fact this is the point of the present article). So Robin Hanson's argument doesn't apply at all to the situation in th... (read more)

2timtyler
To quote from: http://en.wikipedia.org/wiki/Dynamically_inconsistent
0paulfchristiano
At best, this is an argument not to use non-exponential, translation invariant discounting. You can discount in a way that depends on time (for example, Robin Hanson would probably recommend discounting by current interest rate, which changes over time; the UDASSA recommends discounting in a way that depends on absolute time) or you can not discount at all. I know of plausible justifications for these approaches to discounting. I know of no such justification for exponential discounting. The wikipedia article does not provide one.
2timtyler
It is an argument not to use non-exponential discounting. Exponential discounting depends on time; it is exponential temporal discounting being discussed here. So: values are scaled by k·e^(-ct), where the t is for "time". The prevailing interest rate is normally not much of a factor, since money is only instrumentally valuable. Not discounting at all is just the trivial kind of exponential discounting, where the exponent is zero. The bit I quoted was a justification: exponential discounting yields time-consistent preferences, and only exponential discounting does that.
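A small sketch of that time-consistency claim, with illustrative payoffs and discount curves (none of these numbers come from the post):

```python
# Sketch: exponential vs. hyperbolic discounting and preference reversal.
# Option A: 10 utility at t = 10;  Option B: 15 utility at t = 15.
import math

def exp_weight(delay, halflife=10):        # exponential: 2^(-delay/halflife)
    return 2 ** (-delay / halflife)

def hyp_weight(delay, k=1.0):              # hyperbolic: 1 / (1 + k*delay)
    return 1.0 / (1.0 + k * delay)

for now in (0, 9):                         # evaluate at t = 0, then again at t = 9
    b_beats_a_exp = 15 * exp_weight(15 - now) > 10 * exp_weight(10 - now)
    b_beats_a_hyp = 15 * hyp_weight(15 - now) > 10 * hyp_weight(10 - now)
    print(now, b_beats_a_exp, b_beats_a_hyp)
# Exponential: B is preferred from both vantage points.
# Hyperbolic: B is preferred at t=0 but A at t=9 -- a preference reversal.
```

The exponential agent ranks the two payoffs the same way no matter when it evaluates them; the hyperbolic agent flips as the earlier payoff draws near, which is the dynamic inconsistency at issue.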
1Wei Dai
Not sure why your earlier comment got voted down. I voted it up to -1. I think exponential discounting has gotten ingrained in our thinking mostly for historical reasons. Quoting from the book Time and Decision: Economic and Psychological Perspectives on Intertemporal Choice ("DU model" here being the 1937 model from Paul Samuelson that first suggested exponential discounting): I notice that axiomatizations in economics/theory of rationality seem to possess much more persuasive power than they should. (See also vNM's axiom of independence.) People seem to be really impressed that something is backed up by axioms and forget to check whether those axioms actually make sense for the situation.

People pay an exponential amount of future utility for utility now because we die. We inappropriately discount the future because our current environment has a much longer life expectancy than the primitive one did. One should discount according to actual risk, and I plan on self-modifying to do so when the opportunity arises.
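One way to make "discount according to actual risk" concrete (a sketch with illustrative hazard rates, not data): a constant per-year probability of death gives exactly exponential discounting, with the half-life set by that hazard.

```python
# Sketch: discounting by survival probability. With a constant annual hazard
# rate h, the chance of being alive to enjoy utility at year t is (1 - h)^t,
# i.e. exponential discounting with half-life ln(2) / -ln(1 - h).
import math

for h in (0.02, 0.001):            # illustrative "primitive" vs. modern-ish hazards
    halflife = math.log(2) / -math.log(1.0 - h)
    print(h, round(halflife, 1))   # ~34.3 years vs. ~692.8 years
```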

Could you perhaps give some plausible argument for exponential discounting, or some record of anyone who has seriously considered applying it universally? I appear to discount approximately exponentially in the near term, but really it's a reflection of my uncertainty about the future. I value future humans about as much as present humans; I just doubt my ability to understand my influence on them (in most but not all cases).

Even if you accept exponential discounting, your physical arguments seem pretty weak. How confident are you that faster than light tra... (read more)

So, this calculation motivates non-expansion, but an agent with an identical utility function that is expansionist anyway attains greater utility and for a longer time... is that right?

0PhilGoetz
No, because maximizing utility involves tradeoffs. Being expansionist means expanding instead of doing something else.
0CuSithBell
By my reading, that contradicts your assumption that our utility is a linear function of the number of stars we have consumed. (And, moreover, you seem to say that never running out of starstuff is a conservative assumption which makes it less likely we will go seek out more starstuff.)

Rewrite 2^(100t) as (2^100)^t = ln(2^100)e^t.

Plugging in t=2 gives me 2^(100t) = 1.6×10^60 and ln(2^100)·e^t = 512.17.

Is this an error or did I read it wrong?
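For reference, a quick check of the two expressions, taking 2^(100t) = e^((100 ln 2)·t) to be the identity the rewrite was presumably aiming for:

```python
# Sketch: 2^(100 t) equals e^((100 ln 2) t); ln(2^100) * e^t is a different function.
import math

t = 2
print(float(2 ** (100 * t)))               # ~1.6e60
print(math.exp(100 * math.log(2) * t))     # ~1.6e60 (same, up to float error)
print(math.log(2 ** 100) * math.exp(t))    # ~512.2  (not the same)
```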

There are strong reasons for believing that time-discounting is exponential.

For all utility functions a human may have? What are these reasons?

2Dreaded_Anomaly
As I described below, his math is wrong.
0PhilGoetz
Yep. Sorry. Fixing it now. The impact on the results is that your time horizon depends on your discount rate.

If we use time discounting, we should care about the future, because it's possible that time machines can be made, but are difficult. If so, we'd need a lot of people to work it out. A time machine would be valuable normally, but under time discounting, it gets insane. I don't know what half-life you're using, but let's use 1000 years, just for simplicity. Let's say that we bring a single person back to the beginning of the universe, for one year. This would effectively create about 8.7×10^4,154,213 QALYs. Any chance of time travel would make this worthwhile.

I... (read more)
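The 8.7×10^4,154,213 figure is reproducible if the comment is taking the age of the universe to be roughly 13.8 billion years (an assumption on my part; the comment doesn't say). A sketch, working in logs since the number itself won't fit in a float:

```python
# Sketch: value (in discounted QALYs) of one year lived at the beginning of the
# universe, if utility doubles for every 1000 years you go into the past.
import math

age_of_universe = 13.8e9            # years (assumed; not stated in the comment)
half_life = 1000                    # years, as in the comment
log10_value = (age_of_universe / half_life) * math.log10(2)
mantissa = 10 ** (log10_value % 1)
print(f"{mantissa:.1f}e+{int(log10_value)}")   # 8.7e+4154213
```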

2CarlShulman
This has been previously discussed on Less Wrong.
0PhilGoetz
That's interesting. I don't think you should extend your utility function into the past that way. You have to go back to the application, and ask why you're doing discounting in the first place. It would be more reasonable to discount for distance from the present, whether forwards or backwards in time.
1benelliott
Elsewhere in this thread, you have criticised hyperbolic discounting for being 'irrational', by which I presume you mean the fact that it is inconsistent under reflection, while exponentials are not. Your new function is also inconsistent under reflection. Maybe this is an argument for not discounting, since that is the only possible way to have a past-future symmetric, reflexively consistent utility function. Just a thought.
0timtyler
Not really - this is the problem.
0benelliott
We are referring to the same fact. Reflective inconsistency is a trivial consequence of dynamic inconsistency.

If we assume that our time-discounting function happens to be perfectly adjusted to match our rate of economic growth now, is it wise to assume that eventually the latter will change drastically but the former will remain fixed?

ADDED: Downvoting this is saying, "This is not a problem". And yet, most of those giving their reasons for downvoting have no arguments against the math.

A major problem with simple voting systems like that used on LW is that people impute meanings to voters more confidently than they should. I've seen this several times here.

If people give a reason for downvoting, they're probably not being deceptive and may even be right about their motives, but most who vote will not explain why in a comment and you're overstepping the bounds of what you ca... (read more)

0[anonymous]

Value is arational.

If you compute the implications of a utility function and they do not actually agree with observed preferences, then that is an argument that the utility function you started with was wrong. In this case, you seem to have an argument that our utility function should not have time discounting that's stronger than polynomial.

Discussion with Jeff Medina made me realize that I can't even buy into the model needed to ask how to time discount. That model supposes you compute expected utility out into the infinite future. That means that, for every action k, you compute the sum, over every timestep t and every possible world w, of p(w(t)) · U_t(w(t), k), where U_t is the time-discounted utility at step t (written out below).

If any of these things have countably many objects - possible worlds, possible actions, or timesteps - then the decision process is uncomputable. It can't terminate after finitely many steps. This is a fatal flaw with the standard approach to computing utility forward to the infinite future.
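Written out, the sum in question is something like the following (a sketch; γ is the per-year discount factor, e.g. 2^(-.1Y), and the inner sum ranges over the possible worlds at step t):

$$\mathrm{EU}(k)\;=\;\sum_{t=0}^{\infty}\;\sum_{w} p\bigl(w(t)\bigr)\,\gamma^{\,t}\,U\bigl(w(t),k\bigr),\qquad 0<\gamma<1.$$

With countably many worlds, actions, or timesteps, the outer sum never runs out of terms, which is the point being made above.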

Assuming people who have children are rational, then either the time-discounting factor is not what you claim, or (perhaps more likely), since people love and expect to love their children and grandchildren (and further descendants), the descendants' expected utility is not time-discounted even while one's own is. I likewise imagine that some people will treat long-lived versions of themselves in a similar fashion, such that they discount their own expected utility for the near term but do not discount their expected utility by the same amount for the version... (read more)