Rationalists don't care about the future

Post author: PhilGoetz 15 May 2011 07:48AM

Related to Exterminating life is rational.

ADDED: Standard assumptions about utility maximization and time-discounting imply that we shouldn't care about the future.  I will lay out the problem in the hopes that someone can find a convincing way around it.  This is the sort of problem we should think about carefully, rather than grasping for the nearest apparent solution.  (In particular, the solutions "If you think you care about the future, then you care about the future", and, "So don't use exponential time-discounting," are easily-grasped, but vacuous; see bullet points at end.)

The math is a tedious proof that exponential time discounting trumps polynomial expansion into space.  If you already understand that, you can skip ahead to the end.  I have fixed the point raised by Dreaded_Anomaly.  It doesn't change my conclusion.

Suppose that we have Planck technology such that we can utilize all our local resources optimally to maximize our utility, nearly instantaneously.

Suppose that we colonize the universe at light speed, starting from the center of our galaxy (we aren't in the center of our galaxy; but it makes the computations easier, and our assumptions more conservative, since starting from the center is more favorable to worrying about the future, as it lets us grab lots of utility quickly near our starting point).

Suppose our galaxy is a disc, so we can consider it two-dimensional.  (The number of star systems expanded into per unit time is well-modeled in 2D, because the galaxy's thickness is small compared to its diameter.)

The Milky Way is approx. 100,000 light-years in diameter, with perhaps 100 billion stars.  These stars are denser at its center.  Suppose density changes linearly (which Wikipedia says is roughly true), from x stars/sq. light-year at its center, to 0 at 50K light-years out, so that the density at radius r light-years is x(50000 − r).  We then require that the integral over r = 0 to 50000 of 2πr·x(50000 − r)dr equal 100 billion.  The integral is 2πx(50000r²/2 − r³/3) evaluated from r = 0 to 50000, which is 2πx·50000³(1/2 − 1/3) = πx·50000³/3 ≈ 130,900 billion · x.  Setting this equal to 100 billion gives x = 100 billion / 130,900 billion ≈ .0007639.
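For readers who would rather check than trust the algebra, here is a quick numerical sketch (mine, not part of the original derivation) that recovers the density constant from the same normalization:

```python
import math

# Midpoint-rule check of the normalization: integrate the star density
# 2*pi*r * x*(50000 - r) over r in [0, 50000] with x = 1, then solve for
# the x that makes the total come out to 100 billion stars.
R = 50_000            # galactic radius in light-years
TOTAL_STARS = 100e9

def unnormalized_total(steps=100_000):
    dr = R / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr               # midpoint of the i-th slice
        total += 2 * math.pi * r * (R - r) * dr
    return total                          # closed form: pi * R**3 / 3

x = TOTAL_STARS / unnormalized_total()
print(round(x, 7))  # 0.0007639 stars per square light-year
```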

We expand from the center at light speed, so our radius at time t (in years) is t light-years.  The additional area enclosed in time dt is 2πtdt, which contains 2πtx(50000-t)dt stars.

Suppose that we are optimized from the start, so that expected utility at time t is proportional to number of stars consumed at time t.  Suppose, in a fit of wild optimism, that our resource usage is always sustainable.  (A better model would be that we completely burn out resources as we go, so utility at time t is simply proportional to the ring of colonization at time t.  This would result in worrying a lot less about the future.)  Total utility at time t is 2πx∫s(50000 − s)ds from s = 0 to t = 2πx(50000t²/2 − t³/3) ≈ 120t² − .0016t³.
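As a sanity check on those two coefficients (again mine, purely illustrative), plug x ≈ .0007639 into 2πx(50000t²/2 − t³/3):

```python
import math

x = 0.0007639                             # stars per sq. light-year
coeff_t2 = 2 * math.pi * x * 50000 / 2    # coefficient of t^2
coeff_t3 = 2 * math.pi * x / 3            # coefficient of t^3
print(round(coeff_t2, 1), round(coeff_t3, 4))  # 120.0 0.0016
```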

Our time discounting for utility is related to what we find empirically today, encoded in our rate of return on investment, which roughly doubles every ten years.  Suppose that, with our Planck technology, subjective time is Y Planck-tech years = 1 Earth year, so our time discounting says that utility x at time t is worth utility x/2^(.1Y) at time t+1.  Thus, the utility that we, at time 0, assign to time t, with time discounting, is (120t² − .0016t³) / 2^(.1Yt).  The total utility we assign to all time from now to infinity is the integral, from t=0 to infinity, of (120t² − .0016t³) / 2^(.1Yt).

Look at that exponential, and you see where this is going.

Let's be optimistic again, and drop the .0016t³ term, even though including it would make us worry less about the future. <CORRECTION DUE TO Dreaded_Anomaly> Rewrite 2^(.1Yt) as (2^(.1Y))^t = e^(at), with a = .1Y·ln 2.  Integrate by parts to see that ∫t²e^(−at)dt = −e^(−at)(t²/a + 2t/a² + 2/a³).  Then ∫120t²/2^(.1Yt)dt = 120∫t²e^(−at)dt = −120e^(−at)(t²/a + 2t/a² + 2/a³), evaluated from t=0 to infinity.</CORRECTION DUE TO Dreaded_Anomaly>

For Y = 1 (no change in subjective time), the integral from t=0 to infinity is about 6006 (dropping the constant factor of 120, which cancels out of the ratios below).  The integral from t=10 to infinity is about 5805, leaving about 200 for the first 10 years; so for Y = 1, the first decade accounts for only 3.3% of total utility, as viewed by us in the present.  For Y = 100, however, the first 10 years account for all but a fraction 1.95 × 10⁻²⁷ of the total utility.
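The closed form makes it easy to see how the split between the first decade and everything after depends on Y. A small sketch (mine; it drops the overall factor of 120, which cancels in the ratio):

```python
import math

def F(t, a):
    # antiderivative of t^2 * e^(-a*t), from the integration by parts above
    return -math.exp(-a * t) * (t**2 / a + 2 * t / a**2 + 2 / a**3)

def first_decade_fraction(Y):
    a = 0.1 * Y * math.log(2)
    total = -F(0, a)               # integral from t = 0 to infinity
    first10 = F(10, a) - F(0, a)   # integral from t = 0 to 10
    return first10 / total

print(round(first_decade_fraction(1), 3))   # 0.033: for Y = 1
print(first_decade_fraction(100))           # 1.0 to double precision: Y = 100
```

For Y = 1 the first decade holds only a few percent of the discounted total; for Y = 100 it holds essentially all of it.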

What all this math shows is that, even making all our assumptions so as to unreasonably favor getting future utility quickly and having larger amounts of utility as time goes on, time discounting plus the speed of light plus the Planck limit mean that the future does not matter to utility maximizers.  The exponential loss due to time-discounting always wins out over the polynomial gains due to expansion through space.  (Any space.  Even supposing we lived in a higher-dimensional space would probably not change the results significantly.)

Here are some ways of making the future matter:

  • Assume that subjective time will change gradually, so that each year of real time brings in more utility than the last.
  • Assume that the effectiveness at utilizing resources to maximize utility increases over time.
  • ADDED, hat tip to Carl Shulman: Suppose some loophole in physics that lets us expand exponentially, whether through space, additional universes, or downward in size.
  • ADDED: Suppose that knowledge can be gained forever at a rate that lets us increase our utility per star exponentially forever.

The first two don't work:

  • Both these processes run up against the Planck limit pretty soon.
  • However far the colonization has gone when we run up against the Planck limit, the situation at that point will be worse (from the perspective of wanting to care about the future) than starting from Earth: the rate of utility gain per year, divided by total utility, shrinks the further out you go from the galactic core.

So it seems that, if we maximize expected total utility with time discounting, we need not even consider expansion beyond our planet.  Even the inevitable extinction of all life in the Universe from being restricted to one planet scarcely matters in any rational utility calculation.

Among other things, this means we might not want to turn the Universe over to a rational expected-utility maximizer.

I know that many of you will reflexively vote this down because you don't like it.  Don't do that.  Do the math.

ADDED: This post makes it sound like not caring about the future is a bad thing.  Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.  For example, while an FAI that doesn't care about the future might neglect expansion into space, it won't kill 90% of the people on earth because they pose a threat during this precarious transition period.

ADDED: Downvoting this is saying, "This is not a problem".  And yet, most of those giving their reasons for downvoting have no arguments against the math.

  • If you do the math, and you find you don't like the outcome, that does not prove that your time-discounting is not exponential.  There are strong reasons for believing that time-discounting is exponential; whereas having a feeling that you hypothetically care about the future is not especially strong evidence that your utility function is shaped in a way that makes you care about the future, or that you will in fact act as if you cared about the future.  There are many examples where people's reactions to described scenarios do not match utility computations!  You are reading LessWrong; you should be able to come up with a half-dozen off the top of your head.  When your gut instincts disagree with your utility computations, it is usually evidence that you are being irrational, not proof that your utility computations are wrong.
  • I am fully aware that saying "we might not want to turn the Universe over to a rational expected-utility maximizer" shows I am defying my utility calculations.  I am not a fully-rational expectation maximizer.  My actions do not constitute a mathematical proof; even less, my claims in the abstract about what my actions would be.  Everybody thinks they care about the future; yet few act as if they do.
  • The consequences are large enough that it is not wise to say, "We can dispense with this issue by changing our time-discounting function".  It is possible that exponential time-discounting is right, and caring about the future is right, and that there is some subtle third factor that we have not thought of that works around this.  We should spend some time looking for this answer, rather than trying to dismiss the problem as quickly as possible.
  • Even if you conclude that this proves that we must be careful to design an AI that does not use exponential time-discounting, downvoting this topic is a way of saying, "It's okay to ignore or forget this fact even though this may lead to the destruction of all life in the universe."  Because the default assumption is that time-discounting is exponential.  If you conclude, "Okay, we need to not use an exponential function in order to not kill ourselves", you should upvote this topic for leading you to that important conclusion.
  • Saying, "Sure, a rational being might let all life in the Universe die out; but I'm going to try to bury this discussion and ignore the problem because the way you wrote it sounds whiny" is... suboptimal.
  • I care about whether this topic is voted up or down because I care (or at least think I care) about the fate of the Universe.  Each down-vote is an action that makes it more likely that we will destroy all life in the Universe, and it is legitimate and right for me to argue against it.  If you'd like to give me a karma hit because I'm an ass, consider voting down Religious Behaviourism instead.

Comments (143)

Comment author: Manfred 15 May 2011 08:56:26AM 18 points

Concise version:
If we have some maximum utility per unit space (reasonable, since there is maximum entropy, and therefore probably a maximum information, per unit space), and we do not break the speed of light, our maximum possible utility will expand polynomially. If we discount future utility exponentially, like the 10-year doubling time of the economy can suggest, the merely polynomial growth gets damped exponentially and we don't care about the far future.

Big problem:
Assumes exponential discounting. However this can also be seen as a reductio of exponential discounting - we don't want to ignore what happens 50 years from now, and we exhibit many behaviors typical of caring about the far future. There's also a sound genetic basis for caring about our descendants, which implies non-exponential discounting programmed into us.

Comment author: Morendil 15 May 2011 09:16:30AM 3 points

can also be seen as a reductio of exponential discounting

Alternately, as a reductio of being strict utility maximizers.

Comment author: steven0461 15 May 2011 06:00:29PM 12 points

Or or or maybe it's a reductio of multiplication!

Comment author: Normal_Anomaly 15 May 2011 04:18:35PM *  2 points

As of right now, I'd rather be a utility maximizer than an exponential discounter.

Comment author: Morendil 15 May 2011 04:31:46PM 4 points

Oh, don't worry. Your time preferences being inconsistent, you'll eventually come around to a different point of view. :)

Comment author: Manfred 15 May 2011 12:15:40PM *  1 point

That's tricky, since utility is defined as the stuff that gets maximized - and it can be extended beyond just consequentialism. What it relies on is the function-like properties of "goodness" of the relevant domain (world histories and world states being notable domains).

So a reductio of utility would have to contrast the function-ish properties of utility with some really compelling non-function-ish properties of our judgement of goodness. An example would be if world state A was better than B, which was better than C, but C was better than A. This qualifies as "tricky" :P

Comment author: Morendil 15 May 2011 04:30:26PM 3 points

Why should the objection "actual humans don't work that way" work to dismiss exponential discounting, but not work to dismiss utility maximization? Humans have no genetic basis to either maximize their utility OR discount exponentially.

My (admittedly sketchy) understanding of the argument for exponential discounting is that any other function leaves you vulnerable to a money pump, IOW the only rational way for a utility maximizer to behave is to have that discount function. Is there a counter-argument?

Comment author: Manfred 16 May 2011 02:16:01AM 1 point

Ah, that's a good point - to have a constant utility function over time, things have to look proportionately the same at any time, so if there's discounting it should be exponential. So I agree, this post is an argument against making strict utility maximizers with constant utility functions and also discounting. So I guess the options are to have either non-constant utility functions or no discounting.

Comment author: Morendil 16 May 2011 06:37:38PM 0 points

It seems very difficult to argue for a "flat" discount function, even if (as I can do only with some difficulty) one sees things from a utilitarian standpoint: I am not indifferent between gaining 1 utilon right away, versus gaining 1 utilon in one hundred years.

Probing to see where this intuition comes from, the first answer seems to be "because I'm not at all sure I'll still be around in one hundred years". The farther in the future the consequences of a present decision, the more uncertain they are.

Comment author: Wei_Dai 15 May 2011 10:42:02PM 0 points

My (admittedly sketchy) understanding of the argument for exponential discounting is that any other function leaves you vulnerable to a money pump, IOW the only rational way for a utility maximizer to behave is to have that discount function. Is there a counter-argument?

I guess you're referring to this post by Eliezer? If so, see the comment I just made there.

Comment author: dspeyer 17 May 2011 03:36:26PM 0 points

Do things become any clearer if you figure that some of what looks like time-discounting is actually risk-aversion with regard to future uncertainty? Ice cream now or more ice cream tomorrow? Well tomorrow I might have a stomach bug and I know I don't now, so I'll take it now. In this case, changing the discounting as information becomes available makes perfect sense.

Comment author: Wei_Dai 17 May 2011 04:29:52PM 2 points

Yes, there's actually a literature on how exponential discounting combined with uncertainty can look like hyperbolic discounting. There are apparently two lines of thought on this:

  1. There is less hyperbolic discounting than it seems. What has been observed as "irrational" hyperbolic discounting is actually just rational decision making using exponential discounting when faced with uncertainty. See Can We Really Observe Hyperbolic Discounting?
  2. Evolution has baked hyperbolic discounting into us because it actually approximates optimal decision making in "typical" situations. See Uncertainty and Hyperbolic Discounting.
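A toy illustration of point 2 (my own construction, not from the cited papers): an exponential discounter who is merely uncertain about the hazard rate behaves, on average, exactly like a hyperbolic discounter. If the rate λ has an exponential prior with mean 1/β, then E[e^(−λt)] = 1/(1 + t/β):

```python
import math, random

random.seed(0)
beta = 10.0
# lam ~ exponential prior with mean 1/beta
samples = [random.expovariate(beta) for _ in range(200_000)]

for t in (1, 5, 20):
    # average exponential discount factor over the uncertain rate...
    mc = sum(math.exp(-lam * t) for lam in samples) / len(samples)
    # ...compared to the hyperbolic curve it should reproduce
    hyperbolic = 1 / (1 + t / beta)
    print(t, round(mc, 2), round(hyperbolic, 2))  # the two columns agree
```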
Comment author: sark 15 May 2011 12:41:33PM 2 points

May I ask how the doubling time of the economy can suggest how we discount future utility?

Comment author: Manfred 16 May 2011 12:25:43AM *  1 point

People are willing to promise exponentially increasing amounts of future money in exchange for money now (stock trends bear this out, and many other sorts of investments are inherently exponential). If we make the (bad, unendorsed by me) simplification that utility is proportional to money, people are willing to pay an exponential amount of future utility for current utility - that is, they discount the value of future utility.

Comment author: Morendil 15 May 2011 04:34:55PM *  2 points

we exhibit many behaviors typical of caring about the far future

Name three.

Comment author: Manfred 16 May 2011 12:40:12AM 0 points

Okay, I thought of some. Exercise completed!

Comment author: Steve_Rayhawk 15 May 2011 07:09:11PM *  10 points

What information can be derived about utility functions from behavior?

(Here, "information about utility functions" may be understood in your policy-relevant sense, of "factors influencing the course of action that rational expected-utility maximization might surprisingly choose to force upon us after it was too late to decommit.")

Suppose you observe that some agents, when they are investing, take into account projected market rates of return when trading off gains and losses at different points in time. Here are two hypotheses about the utility functions of those agents.

Hypothesis 1: These agents happened to already have a utility function whose temporal discounting was to match what the market rate of return would be. This is to say: The utility function already assigned particular intrinsic values to hypothetical events in which assets were gained or lost at different times. The ratios between these intrinsic values were already equal to what the appropriate exponential of the integrated market rate of return would later turn out to be.

Hypothesis 2: These agents have a utility function in which assets gained or lost in the near term are valued because of an intrinsic good which could be purchased with those assets at a point in the distant future. These agents evaluate near-term investments and payoffs happening at different times in terms of market rates of return, for understandable and purely instrumental reasons relating to opportunity cost.

Neither hypothesis is quite plausible psychologically or historically, but the second hypothesis is closer to being plausible, and each hypothesis makes the same predictive distribution about the agents' near-term investment behaviors. This is to say that the "preference likelihood" ratio between the two hypotheses is flat.

(In your apparent policy terms, this would correspond roughly to the idea that, while rational expected-utility maximization may be trying to "choose" which of these two utility functions to define as normative, so that it can then "force" the courses of action dictated by the chosen utility function "upon" the agents, in this case the balance of factors affecting rational expected-utility maximization's "choice" evens out. Therefore, rational expected-utility maximization's "decision" will depend on its prior disposition to "prefer" one or the other utility function, for reasons unrelated to observation.)

Now, suppose that the agents from the second hypothesis forecast market rates of return for some period, and then create new agents. These new agents have recognizable internal data structures representing utility functions in a form as per the first hypothesis, and these data structures will be queried to determine the new agents' decisions about near-term trades. However, the new agents' only source of information about their utility functions comes from observing their own behavior: they do not have direct introspective access to their internal data structure, and they do not know about the asset conversion event in the future. (However, they will convert their holdings at that time, as a hard-coded instinct; in terms of revealed preference, this can be interpreted as having a utility function that assigns the purchased good infinite relative value). Now, which hypothesis should we say is "really" true of these new agents' utility functions?

(And how do we delineate what the parts of this situation even are, that supposedly "have" the utility functions we want to inquire about?)

This is a general problem with our present framework for reasoning about utility. The predictions and recommendations from a hypothesized utility function are invariant under various transformations of the hypothesis; in particular, transformations that preserve relative intervals of expected utility between available actions at each juncture. For example, for a perfect expected-utility maximizer, the reward function constructed by a perfectly trained temporal-difference reinforcement learning system motivates exactly the same behavior as the reward function whose integrals the TD learner was trained to predict. (This is quite apart from the problem of invariance under transformations that stretch or squeeze probability and reward simultaneously, such as the transformations that relate different methods of anthropic reasoning.)

As if to add to the confusion, when humans are informed about utility theory, and asked to interpret their introspective information about their preferences in terms of utility, they will report different preferences as being "intrinsic" vs. "instrumental" at different points in time [citation: folk belief]. There may be a psychological process related to temporal-difference reinforcement learning which converts preferences which introspectively appear "instrumental" into preferences which introspectively appear "intrinsic".

Why were you so certain, in your original draft, that exponential temporal discounting behavior was a matter of intrinsic value rather than instrumental value, so that a normative framework of utilitarian reasoning would force it upon us, and the alternative possibility was not worth mentioning?

Comment author: PhilGoetz 15 May 2011 08:10:47PM *  0 points

Why were you so certain, in your original draft, that exponential temporal discounting behavior was a matter of intrinsic value rather than instrumental value, so that a normative framework of utilitarian reasoning would force it upon us, and the alternative possibility was not worth mentioning?

Any temporal discounting other than temporal is provably inconsistent, a point Eliezer makes in his post against temporal discounting. Exponential temporal discounting is the default assumption. My post works with the default assumption. Arguing that you can use an alternate method of discounting would require a second post.

When you have a solution that is provably the only self-consistent solution, it's a drastic measure to say, "I will simply override that with my preferences. I will value irrationality. And I will build a FAI that I am entrusting with the future of the universe, and teach it to be irrational."

It's not off the table. But it needs a lot of justification. I'm glad the post has triggered discussion of possible other methods of temporal discounting. But only if it leads to a serious discussion of it, not if it just causes people to say, "Oh, we can get around this problem with a non-exponential discounting", without realizing all the problems that entails.

Comment author: Steve_Rayhawk 15 May 2011 09:31:03PM *  9 points

Any temporal discounting other than temporal is provably inconsistent

The conditions of the proof are applicable only to reinforcement agents which, as a matter of architecture, are forced to integrate anticipated rewards using a fixed weighting function whose time axis is constantly reindexed to be relative to the present. If we could self-modify to relax that architectural constraint -- perhaps weighting according to some fixed less temporally indexical schedule, or valuing something other than weighted integrals of reward -- would you nonetheless hold that rational consistency would require us to continue to engage in exponential temporal discounting? Whether or not the architectural constraint had previously been a matter of choice? (And who would be the "us" who would thus be required by rational consistency, so that we could extract a normative discount rate from them? Different aspects of a person or civilization exhibit discount functions with different timescales, and our discount functions and architectural constraints can themselves partially be traced to decision-like evolutionary and ecological phenomena in the biosphere, whose "reasoning" we may wish to re-examine.)

(ETA: Maybe I should be less uncharitable about your implied position, since you may not have been aware of the conditions of the proof you cited, or not thought to consider a wider range of agent motivational architectures. But if that was the sort of thing you didn't know, and it was crucial to your original case, you should have known to state your case in more measured and careful language. If you commit strongly to a hostile conclusion that seems unjustifiable, I unthinkingly respond by exploiting the unjustifiability and strength of commitment to make the hostile conclusion look bad, using lines of modus tollens reasoning that wouldn't be able to rhetorically connect if your commitment had been weaker.)

To my current thinking, preferences would be one form of information about desirability of events, and any information about desirability of events would be timeless -- even if the events that were desirable were within time, and even if the information about their desirability must have been acquired within time. There's no direct reason why questions of "when you learned about the desirability" or "when you had to act on the desirability" should enter into it.

Why were you so certain [...] that exponential temporal discounting behavior was a matter of intrinsic value rather than instrumental value [...]?

Perhaps I should have left out the distraction of the term "exponential", and asked: "Why were you so certain that temporal discounting in behavior was a matter of intrinsic value rather than instrumental value?" In part my comment was to argue that:

  • discounting behavior can be generated for instrumental reasons;
  • we may reach different conclusions as to whether discounting behavior is a matter of intrinsic or instrumental value, depending on the level of analysis at which we identify agency (and/or instrumental agency);
  • there are reasons to expect that, in interpreting utility functions from preference claims, we may easily become confused and inappropriately assign intrinsicality to values or rules of valuation which were actually instrumental.

I should have argued more explicitly that:

  • Instrumental exponential discounting is conditional, not eternal; it lasts only as long as the exponentially growing opportunity costs which motivate it.
  • Inappropriate hypotheses of intrinsicality of values can lead to paradoxes, which the corresponding hypotheses of instrumentality may avoid. This is because instrumental values have effect conditionally while intrinsic values have effect unconditionally. Thus, if you observe an apparent paradox during an analysis that assumes intrinsicality, you should put more weight on competing analyses that assume instrumentality, on the theory that you missed a relevant condition which prevents a conflicting value from extending to the paradoxical case.

(My argument was meant to cover non-exponential discounting as well, and show that exponential discounting behavior can be caused by a same mechanism as non-exponential discounting behavior, since I did not specify that market rates of return were constant.)

My comment was also to argue that we are simply confused about the right way to extract utility functions from information about behavior or reported preferences, and therefore that apparent paradoxes do not necessarily mean that the premises are wrong which they appear to mean are wrong.

Comment author: timtyler 17 May 2011 04:54:40PM *  0 points

Any temporal discounting other than temporal is provably inconsistent

The conditions of the proof are applicable only to reinforcement agents which, as a matter of architecture, are forced to integrate anticipated rewards using a fixed weighting function whose time axis is constantly reindexed to be relative to the present.

To recap, the idea is that it is the self-similarity property of exponential functions that produces this result - and the exponential function is the only non-linear function with that property.

All other forms of discounting allow for the possibility of preference reversals with the mere passage of time - as discussed here.

This idea has nothing to do with reinforcement learning.
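To make the preference-reversal point concrete, here is a toy sketch (the reward amounts and discount parameters are mine, purely illustrative):

```python
def hyperbolic_value(amount, delay, k=1.0):
    # hyperbolic weighting: a reward arriving after delay d is worth 1/(1+k*d)
    return amount / (1 + k * delay)

def exponential_value(amount, delay, base=0.9):
    # exponential weighting: constant per-period discount factor
    return amount * base ** delay

# Option A: 100 at t=10.  Option B: 110 at t=11.
# Viewed from t=0, the hyperbolic agent prefers the larger-later B...
assert hyperbolic_value(110, 11) > hyperbolic_value(100, 10)
# ...but viewed from t=9 (delays now 1 and 2) it prefers the smaller-sooner A.
assert hyperbolic_value(100, 1) > hyperbolic_value(110, 2)

# The exponential agent ranks the options the same way from both vantage
# points (here it prefers A both times), so the mere passage of time never
# reverses its preference.
assert exponential_value(100, 10) > exponential_value(110, 11)
assert exponential_value(100, 1) > exponential_value(110, 2)
print("preference reversal only under hyperbolic discounting")
```

The ratio of the two exponential values is the same at every vantage point, which is exactly the self-similarity property; any other discount curve lets the ranking flip as the rewards draw nearer.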

Comment author: CarlShulman 15 May 2011 08:34:57PM *  3 points

Any temporal discounting other than temporal is provably inconsistent, a point Eliezer makes in his post against temporal discounting.

People are in fact inconsistent, and would like to bind their future selves and future generations. Folk care more about themselves than future generations, but don't care much more about people 100 generations out than 101 generations out. If current people could, they would commit to a policy that favored the current generation, but was much more long-term focused thereafter.

Comment author: timtyler 17 May 2011 04:49:27PM *  0 points

Any temporal discounting other than temporal is provably inconsistent [...]

You meant to say: "other than exponential".

Comment author: Steve_Rayhawk 19 May 2011 07:04:48AM *  8 points

You keep constructing scenarios whose intent, as far as I can tell, is to let you argue that in those scenarios any currently imaginable non-human system would be incapable of choosing a correct or defensible course of action. By comparison, however, you must also be arguing that some human system in each of those scenarios would be capable of choosing a correct or defensible course of action. How?

And: Suppose you knew that someone was trying to understand the answer to this question, and create the field of "Artificial Ability to Choose Correct and Defensible Courses of Action The Way Humans Apparently Can". What kinds of descriptions do you think they might give of the engineering problem at the center of their field of study, of their criteria for distinguishing between good and bad ways of thinking about the problem, and of their level of commitment to any given way in which they've been trying to think about the problem? Do those descriptions differ from Eliezer's descriptions regarding "Friendly AI" or "CEV"?

You seem to be frustrated about some argument(s) and conclusions that you think should be obvious to other people. The above is an explanation of how some conclusions that seem obvious to you could seem not obvious to me. Is this explanation compatible with your initial model of my awareness of arguments' obviousnesses?

Comment author: ciphergoth 15 May 2011 08:12:51AM *  8 points

Both Eliezer and Robin Hanson have argued strongly against time discounting of utility.

EDIT: I'm partially wrong, Hanson is in favour. Sorry and thanks for the correction.

Roko once argued to me that if we are to discount the future, we should use our true discounting function: a hyperbolic function. Because even if that's inherently irrational, it's still what we want. This would also not display the behaviour you discuss here.

Comment author: timtyler 15 May 2011 10:30:11AM *  6 points

Both Eliezer and Robin Hanson have argued strongly against time discounting of utility.

Not Robin Hanson AFAIK - see his: For Discount Rates. Here's YuEl's Against Discount Rates.

Comment author: PhilGoetz 15 May 2011 07:44:08PM 2 points

I'm bothered by saying, "even if that's inherently irrational, it's still what we want." Do you really want to deliberately make your AI irrational? Seems that the subject merits more discussion before committing to that step.

Comment author: Rain 15 May 2011 07:48:33PM 2 points

Assigning value to things is an arational process.

Comment author: Will_Newsome 22 May 2011 06:50:53AM 1 point

I think that is a dangerous anti-epistemic meme. (Especially since calling the process of determining our values "assigning value to things" is misleading, though I get what you mean.) You use the art of rationality to determine what you value, and you use the art of rationality to determine how you should reflect on or change the process of determining what you value. Instrumental and epistemic rationality do not decouple nearly so easily.

Comment author: Vladimir_Nesov 22 May 2011 01:15:56PM 2 points [-]

Probably nothing short of a good post focused on this single idea will change minds.

Comment author: Rain 22 May 2011 01:37:08PM *  0 points [-]

I didn't even realize it was controversial.

Evolution created our core values > evolution is arational > our core values are arational.

Comment author: wedrifid 22 May 2011 02:49:43PM 3 points [-]

Evolution created our core values > evolution is arational > our core values are arational.

I don't disagree with the conclusion but the reasoning does not follow.

Comment author: RichardKennaway 23 May 2011 10:41:25AM 2 points [-]

Evolution created our core values > evolution is arational > our core values are arational.

Evolution created our rationality > evolution is arational > our rationality is arational.

Genetic fallacy.

Comment author: Rain 23 May 2011 01:08:47PM 0 points [-]

Yeah, I should really stop linking to anything written by Eliezer. Putting it in my own words invariably leads to much better communication, and everyone is quite content to tear it apart should I misunderstand the slightest nuance of "established material".

Comment author: RichardKennaway 23 May 2011 01:17:30PM 1 point [-]

What does the link have to do with it? There just isn't any way to get from the two premises to the conclusion.

Comment author: Rain 23 May 2011 01:21:42PM *  1 point [-]

The link gave me a reason to think I had explained myself, when I obviously hadn't included enough material to form a coherent comment. I know that what I'm thinking feels correct, and people do seem to agree with the core result, but I don't have the words to explain my thinking to you and correct it just now.

Comment author: Vladimir_Nesov 22 May 2011 02:16:23PM *  2 points [-]

If one can make mistakes in deciding what to value, what goals to set (in explicit reasoning or otherwise), then there is a place for pointing out that pursuing certain goals is an error (for so and so reasons), and a place for training to not make such errors and to perceive the reasons that point out why some goal is right or wrong.

Also, if the goals set by evolution should indeed be seen as arbitrary on reflection, you should ignore them. But some of them are not (while others are).

Comment author: Rain 22 May 2011 07:50:51PM *  2 points [-]

As I've mentioned before, I hated the 'arbitrary' article, and most of the meta-ethics sequence. Value is arational, and nobody's provided a coherent defense otherwise. You're not discovering "rational value", you're discarding irrational instrumental values in a quest to achieve or discover arational terminal values.

Heh. And after looking up that link, I see it was you I was arguing with on this very same topic back then as well. Around and around we go...

Comment author: Will_Newsome 23 May 2011 05:09:22AM *  0 points [-]

This idea of "rational value" you think is incoherent is perhaps a straw man. Let's instead say that some people think that the methods you are using to discard instrumental values as irrational, or to find and endorse arational terminal values, might be generalized beyond what is obvious, might rest on mistaken assumptions, or might be approximations of rules that are more explicitly justifiable.

For example, I think a lot of people use a simple line of reasoning like "okay, genetic evolution led me to like certain things, and memetic evolution led me to like other things, and maybe quirks of events that happened to me during development led me to like other things, and some of these intuitively seem more justified, or upon introspecting on them they feel more justified, or seem from the outside as if there would be more selection pressure for their existence so that probably means they're the real values, ..." and then basically stop thinking, or stop examining the intuitions they're using to do that kind of thinking, or continue thinking but remain very confident in their thinking despite all of the known cognitive biases that make such thinking rather difficult.

Interestingly very few people ponder ontology of agency, or timeless control, or the complex relationship between disposition and justification, or spirituality and transpersonal psychology; and among the people who do ponder these things it seems to me that very few stop and think "wait, maybe I am more confused about morality than I had thought". It seems rather unlikely to me that this is because humans have reached diminishing marginal returns in the field of meta-ethics.

Comment author: Rain 23 May 2011 01:14:00PM *  0 points [-]

My "straw-man" does appear to have defenders, though we seem to agree you aren't one of them. I've admitted great confusion regarding ethics, morality, and meta-ethics, and I agree that rationality is one of the most powerful tools we have to dissect and analyze it.

Comment author: Wei_Dai 23 May 2011 12:29:50AM 1 point [-]

You use the art of rationality to determine what you value, and you use the art of rationality to determine how you should reflect on or change the process of determining what you value.

Why do you use the phrase "art of rationality", as opposed to, say, "philosophy"? Can you suggest a process for determining what you value, and show how it is related to things that are more typically associated with the word "rationality", such as Bayesian updating and expected utility? Or is "art of rationality" meant to pretty much cover all of philosophy, or at least "good" philosophy?

Comment author: Vladimir_Nesov 23 May 2011 01:18:25AM *  1 point [-]

Primarily training of intuition to avoid known failure modes, implicit influence on the process of arriving at judgments, as compared to explicit procedures for pre- or post-processing interactions with it.

Comment author: Will_Newsome 23 May 2011 02:58:38AM *  0 points [-]

I haven't found any system of thought besides LW-style rationality that would be sufficient to even start thinking about your values, and even LW-style rationality isn't enough. More concretely, very few people know about illusion of introspection, evolutionary psychology, verbal overshadowing, the 'thermodynamics of cognition', revealed preference (and how 'revealed' doesn't mean 'actual'), cognitive biases, and in general that fundamental truth that you can't believe everything you think. And more importantly, the practical and ingrained knowledge that things like those are always sitting there waiting to trip you up, if you don't unpack your intuitions and think carefully about them. Of course I can't suggest a process for determining what you value (or what you 'should' value) since that's like the problem of the human condition, but I know that each one of those things I listed would most likely have to be accounted for in such a process.

Or is "art of rationality" meant to pretty much cover all of philosophy or at least "good" philosophy?

Hm... the way you say it makes me want to say "no, that would be silly and arrogant, of course I don't think that", but ya know I spent a fair amount of time using 'philosophy' before I came across Less Wrong, and it turns out philosophy, unlike rationality, just isn't useful for answering the questions I care about. So, yeah, I'll bite that bullet. The "art of rationality" covers "good" philosophy, since most philosophy sucks and what doesn't suck has been absorbed. But that isn't to say that LW-style philosophy hasn't added a huge amount of content that makes the other stuff look weak by comparison.

(I should say, it's not like something like LW-style rationality didn't exist before; you, for instance, managed to find and make progress on interesting and important questions long before there were 'sequences'. I'm not saying LW invented thinking. It's just that the magic that people utilized to do better than traditional rationality was never really put down in a single place, as far as I know.)

Comment author: Wei_Dai 24 May 2011 06:30:09AM *  0 points [-]

I don't disagree with what you write here, but I think if you say something like "You use the art of rationality to determine what you value" you'll raise the expectation that there is already an art of rationality that can be used to determine what someone values, and then people will be disappointed when they look closer and find out that's not the case.

Comment author: Will_Newsome 24 May 2011 06:35:11AM 0 points [-]

Ah, I see your point. So the less misleading thing to say might be something roughly like: "We don't yet know how to find or reason about our values, but we have notions of where we might start, and we can expect that whatever methods do end up making headway are going to have to be non-stupid in at least as many ways as our existing methods of solving hard problems are non-stupid."

Comment author: Thomas 22 May 2011 07:44:07AM *  0 points [-]

You can't go all the way down the turtles, firmly resting each one on the one below. Adopting axioms is always a somewhat arbitrary thing: there are no axioms deeper than the deepest ones, and how those are set is always somewhat arational.

Comment author: Rain 22 May 2011 12:46:48PM *  0 points [-]

You use the art of rationality to determine what you value

A process of discovery which uncovers things which came from where?

you use the art of rationality to determine how you should reflect on or change the process of determining what you value

A process of change which determines the direction things should go based on what?

Comment author: Will_Newsome 23 May 2011 03:06:21AM *  0 points [-]

You misunderstand, I'm not saying your values necessarily have to be 'rational' in some deep sense. I'm just saying that in order to figure out what your values might be, how they are related to each other, and what that means, you have to use something like rationality. I would also posit that in order to figure out what rocks are, how they are related to each other, and what that means, you have to use something like rationality. That obviously doesn't mean that values or rocks are 'rational', but it might mean you can notice interesting things about them you wouldn't have otherwise.

Comment author: Rain 23 May 2011 01:06:26PM 0 points [-]

I agree with this statement.

I'm sorry to have continued the argument when I was apparently unclear.

Comment author: ciphergoth 16 May 2011 06:07:23AM 0 points [-]

I agree, but I think there's at least a case to be made that we take that step when we decide we're going to discount the future at all.

Comment author: PhilGoetz 15 May 2011 08:23:08AM 2 points [-]

I don't think it makes sense to not discount utility; or, at least, it is moving from agent utilities to some kind of eternal God's utility. Hard to see how to relate that to our decisions.

Using a different function seems more promising. Why does Roko say our true discounting function is hyperbolic?

Comment author: timtyler 15 May 2011 10:19:31AM *  3 points [-]

Why does Roko say our true discounting function is hyperbolic?

Some of the evidence that that is true is summarised here.

Comment author: Will_Newsome 15 May 2011 08:58:20AM *  3 points [-]

I think it's an SIAI meme caused by excitement about George Ainslie's work in psychology, briefly summarized here: http://en.wikipedia.org/wiki/George_Ainslie_(psychologist) . I'm not sure if Roko picked it up from there though. There is some debate over whether human discount rates are in fact generally hyperbolic, though Ainslie's book Breakdown of Will is pretty theoretically aesthetic; worth checking out in any case, both for potential insight into human psychology and firepower for practical self-enhancement plans. ETA: Apparently ciphergoth made a post about it: http://lesswrong.com/lw/6c/akrasia_hyperbolic_discounting_and_picoeconomics/

Comment author: XiXiDu 15 May 2011 10:30:41AM 0 points [-]

Both Eliezer and Robin Hanson have argued strongly against time discounting of utility.

I don't understand, what are their arguments?

  • Isn't time discounting mainly a result of risk aversion? What is wrong with being risk averse?
  • If an agent's utility function places more weight on a payoff that is nearer in time, should that agent alter its utility function? Rationality is a set of heuristics used to satisfy one's utility function. What heuristics are applicable when altering one's own utility function?
  • The expected utility of the future can grow much faster than its prior probability shrinks. Without any time preferences, at what point does a rational agent stop its exploration and start the exploitation to actually "consume" utility?
Comment author: Kaj_Sotala 15 May 2011 10:49:31AM 3 points [-]
Comment author: timtyler 15 May 2011 11:39:27AM *  1 point [-]

Isn't time discounting mainly a result of risk aversion?

I think it is better to think of temporal discounting and risk aversion as orthogonal.

at what point does a rational agent stop its exploration and start the exploitation

Exploration vs exploitation is based on what the utility function says.

Comment author: Normal_Anomaly 15 May 2011 04:20:42PM 0 points [-]

Roko once argued to me that if we are to discount the future, we should use our true discounting function: a hyperbolic function. Because even if that's inherently irrational, it's still what we want. This would also not display the behaviour you discuss here.

According to Against Discount Rates, it does make you vulnerable to a Dutch Book.

Comment author: ciphergoth 15 May 2011 05:39:33PM 4 points [-]

Damn, can't find the cartoon now that says "Pffft, I'll let Future Me deal with it. That guy's a dick!"

Comment author: Vaniver 17 May 2011 12:34:11PM 1 point [-]

I think you are referring to this comic which was referred to in this comment which I found by googling your quote without quotes.

Comment author: ciphergoth 17 May 2011 02:38:22PM 0 points [-]

Thanks - though the lmgtfy link was unnecessarily rude, I did try various Google combinations without success.

Comment author: magfrump 17 May 2011 04:44:15AM *  1 point [-]

Perhaps this?

Comment author: Dreaded_Anomaly 15 May 2011 09:38:48PM *  6 points [-]

Downvoted, because your math is wrong.

(2^100)^t = exp(t*ln(2^100)), so the factor you call 'c' is not a constant multiplier for the integral; in fact, that combination of constants doesn't even show up. The (approximated) integral is actually b∫t²*exp(-at)dt, where a = 100*ln(2) and b = 120. Evaluating this from 0 to T produces the expression: (2b/a³)*(1 - exp(-aT)*(1 + aT + ½(aT)²)).

These factors of exp(-aT) show up when evaluating the integral to T<∞. (Obviously, when T → ∞, the integral converges to (2b/a³).) For a ~ O(5) or higher, then, the entire total utility is found in 10 years, within double precision. That corresponds to a ≈ 7*ln(2). I think this indicates that the model may not be a good approximation of reality. Also, for slower subjective time (a < ln(2) ≈ 0.693), the percentage of total utility found in 10 years drops. For a = 0.1*ln(2), it's only 3.33%.
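The corrected closed form can be sanity-checked numerically. A quick sketch (the trapezoidal step count and the 2000-year integration horizon are arbitrary choices; the factor b cancels out of the ratio):

```python
import math

def utility_fraction(a, T):
    # Closed-form fraction of total utility b*t^2*exp(-a*t) accrued by time T.
    return 1.0 - math.exp(-a * T) * (1.0 + a * T + 0.5 * (a * T) ** 2)

def numeric_fraction(a, T, horizon=2000.0, n=100_000):
    # Crude trapezoidal estimate of the same fraction.
    def integral(lo, hi):
        h = (hi - lo) / n
        s = 0.5 * (lo**2 * math.exp(-a * lo) + hi**2 * math.exp(-a * hi))
        s += sum((lo + i * h) ** 2 * math.exp(-a * (lo + i * h)) for i in range(1, n))
        return s * h
    return integral(0.0, T) / integral(0.0, horizon)

a = 0.1 * math.log(2)  # the slow-discounting case mentioned above
print(round(utility_fraction(a, 10), 4))  # 0.0333, i.e. the 3.33% figure
print(abs(utility_fraction(a, 10) - numeric_fraction(a, 10)) < 1e-4)  # True
```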

Also, you defined either 'x' or your linear density function incorrectly. If you want x to be stars/ly^2, the density function should be ρ = x(1 - r/50000). If you do all of the calculations symbolically and don't plug in values until the end, the equation for total utility as a function of time (before discounting) is exact, not approximate.

Edit: Actually, I made an error when referencing slower subjective time above. That would be a < 0, not a < ln(2). The model doesn't make sense in this case, because the integral b∫t²*exp(-at)dt diverges for a < 0.

Edit again: Nope, I lost track of the 0.1 factor in 2^(0.1Y). Disregard previous statements about what constitutes slower subjective time; it's not really relevant anyway.

Comment author: PhilGoetz 18 May 2011 04:37:58PM *  1 point [-]

(2^100)^t = exp(t*ln(2^100)), so the factor you call 'c' is not a constant multiplier for the integral; in fact, that combination of constants doesn't even show up. The (approximated) integral is actually b∫t²*exp(-at)dt, where a = 100*ln(2) and b = 120. Evaluating this from 0 to T produces the expression: (2b/a³)*(1 - exp(-aT)*(1 + aT + ½(aT)²)).

You are correct! However, you note yourself that "For a ~ O(5) or higher, then, the entire total utility is found in 10 years, within double precision." So the result does depend on subjective time, which is what I had expected.

This is important - but it still doesn't change the conclusion, that rational expected utility-maximizers operating in this framework don't care about the future.

I'm grateful to you for finding the flaw in my math, and am upvoting you on that basis. But I don't think you should say, "I found an error, therefore I downvote your whole post" instead of "Here is a correction to your work on this interesting and important topic; your conclusion is now only valid under these conditions."

A general comment to everybody: Most of you are in the habit of downvoting a post if you find a single flaw in it. This is stupid. You should upvote a post if it illuminates an important topic or makes you realize something important. Einstein didn't say, "Newton's law is inaccurate. Downvoted." Voting that way discourages people from ever posting on difficult topics.

Also, you defined either 'x' or your linear density function incorrectly. If you want x to be stars/ly^2, the density function should be ρ = x(1 - r/50000). If you do all of the calculations symbolically and don't plug in values until the end, the equation for total utility as a function of time (before discounting) is exact, not approximate.

No; x is the density as a function of r, and it varies linearly from a maximum at r=0, to zero at r=50,000. The way I wrote it is the only possible function satisfying that.

Comment author: Dreaded_Anomaly 18 May 2011 07:19:08PM 3 points [-]

I'm grateful to you for finding the flaw in my math, and am upvoting you on that basis. But I don't think you should say, "I found an error, therefore I downvote your whole post" instead of "Here is a correction to your work on this interesting and important topic; your conclusion is now only valid under these conditions."

A general comment to everybody: Most of you are in the habit of downvoting a post if you find a single flaw in it. This is stupid. You should upvote a post if it illuminates an important topic or makes you realize something important. Einstein didn't say, "Newton's law is inaccurate. Downvoted." Voting that way discourages people from ever posting on difficult topics.

I downvoted not simply because there was a math error, but because of your aggressive comments about downvoting this topic without having arguments against your math, which was in fact faulty. When you spend a significant portion of the post chastising people about downvoting, you should take a little more time to make sure your arguments are as ironclad as you think they are.

The point is not to discourage people from posting on difficult topics; it's to discourage unwarranted arrogance.

No; x is the density as a function of r, and it varies linearly from a maximum at r=0, to zero at r=50,000. The way I wrote it is the only possible function satisfying that.

You define x as having units of stars/ly^2. Because you're approximating the galaxy in two dimensions, the general density function should also have units of stars/ly^2. You wrote this density function: x(50000-r), which has units of stars/ly. I wrote the density function ρ = x(1 - r/50000) which has the correct units and fits the boundary conditions you described: ρ(r=0) = x, and ρ(r=50000) = 0.

In your post, you end up redefining x when you solve for it from your density function, so it does not affect the final result. However, in a model that's supposed to be physically motivated, this is sloppy.

Comment author: XiXiDu 15 May 2011 09:59:27AM *  6 points [-]

I think this post (Evolution and irrationality) is interesting but don't know what to make of it due to a lack of general expertise:

Sozou’s idea is that uncertainty as to the nature of any underlying hazards can explain time inconsistent preferences. Suppose there is a hazard that may prevent the pay-off from being realised. This would provide a basis (beyond impatience) for discounting a pay-off in the future. But suppose further that you do not know what the specific probability of that hazard being realised is (although you know the probability distribution). What is the proper discount rate?

Sozou shows that as time passes, one can update one's estimate of the probability of the underlying hazard. If after a week the hazard has not occurred, this suggests that the probability of the hazard is not very high, which allows the person to reduce the rate at which they discount the pay-off. When offered a choice between one bottle of wine 30 days into the future and two bottles 31 days into the future, the person applies a lower discount rate than for the equivalent short-delay choice, because they know that as each day passes without the hazard preventing the pay-off, their estimate of the hazard's probability will drop.
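Sozou's mechanism can be sketched numerically (the prior and parameter values are illustrative assumptions): if the unknown hazard rate has an exponential prior with mean 1/k, the expected survival probability E[exp(-λt)] comes out to the hyperbolic form 1/(1 + t/k), so effective patience rationally increases with delay.

```python
import math
import random

random.seed(0)
k = 5.0        # prior: hazard rate lambda ~ Exponential(rate=k), so mean hazard is 1/k
N = 200_000    # Monte Carlo sample size

def expected_survival(t):
    # E[exp(-lambda * t)] under the exponential prior on the hazard rate.
    return sum(math.exp(-random.expovariate(k) * t) for _ in range(N)) / N

for t in (1.0, 10.0, 100.0):
    hyperbolic = 1.0 / (1.0 + t / k)  # closed-form hyperbolic discount factor
    print(t, round(expected_survival(t), 3), round(hyperbolic, 3))
```

The Monte Carlo column should match the hyperbolic column at each delay.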

Comments would be appreciated.

Comment author: nazgulnarsil 16 May 2011 04:28:07AM 1 point [-]

This was my initial reaction to the OP, stated more rigorously. Our risk assessment seems to be hardwired into several of our heuristics. Those risk assessments are no longer appropriate, because our environment has become much less dangerous.

Comment author: sark 15 May 2011 04:49:12PM 0 points [-]

It seems to me that utility functions are not equivalent only up to affine transformations. Both utility functions and subjective probability distributions take some relevant real-world factor into account, and it seems you can move these representations between your utility function and your probability distribution while still producing exactly the same choices over all possible decisions.

In the case of discounting, you could for example represent uncertainty in a time-discounted utility function, or you could do it with your probability distribution. You could even throw away your probability distribution and have your utility function take into account all subjective uncertainty.

At least I think that's possible. Have there been any formal analyses of this idea?
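A minimal sketch of this equivalence (the payoffs, delays, and rate here are invented for illustration): an agent that discounts utility exponentially at rate r, and an undiscounting agent that instead believes each payoff survives to be collected with probability exp(-r*delay), compute identical expected values and so make identical choices.

```python
import math

r = 0.1
options = {"soon": (5.0, 1.0), "later": (20.0, 12.0)}  # payoff, delay

def discounted_utility(payoff, delay):
    # Time preference lives in the utility function.
    return payoff * math.exp(-r * delay)

def risk_weighted_utility(payoff, delay):
    # The same factor moved into the probability distribution: the payoff
    # survives to be collected with probability exp(-r * delay), else 0.
    p_survive = math.exp(-r * delay)
    return p_survive * payoff + (1 - p_survive) * 0.0

choice_a = max(options, key=lambda o: discounted_utility(*options[o]))
choice_b = max(options, key=lambda o: risk_weighted_utility(*options[o]))
print(choice_a == choice_b)  # True: the two representations agree on every decision
```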

Comment author: Oscar_Cunningham 15 May 2011 04:58:26PM 2 points [-]

There's this post by Vladimir Nesov.

Comment author: endoself 15 May 2011 12:36:09PM 8 points [-]

Among other things, this means we might not want to turn the Universe over to a rational expected-utility maximizer.

So this is just a really long way of saying that your utility function doesn't actually include temporal discounting.

Comment author: AlexMennen 16 May 2011 09:56:08PM 3 points [-]

I feel like my concern for the well being of people I don't know does not change at all with time, but my concern for people I do know is discounted, and for myself, I discount more heavily. This seems to imply that we do not discount with increasing time but instead with decreasing association. As in, we care more about minds more similar to our own, or with whom we interact more, and our minds become more different farther in the future.

Comment author: Thomas 16 May 2011 12:52:46PM 3 points [-]

This would explain Fermi's paradox. Would.

Comment author: XiXiDu 15 May 2011 05:56:08PM *  3 points [-]

Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.

Indeed! I am still waiting for this problem to be tackled. At what point is an expected utility maximizer (without time preferences) going to satisfy its utility function, or is the whole purpose of expected utility maximization to maximize expected utility rather than actual utility?

People here talk about the possibility of a positive Singularity as if it was some sort of payoff. I don't see that. If you think it is rational to donate money to the SIAI to enable it to create a galactic civilisation then it would be as rational, once you reached the post-Singularitarian paradise, to donate any computational resources to the ruling FAI to enable it to overcome the heat-death of the universe. Just as the current risks from AI comprise vast amounts of disutility, so does the heat-death of the universe.

At what point are we going to enjoy life? If you can't answer that basic question, what does it mean to win?

Comment author: Perplexed 17 May 2011 02:44:59PM *  3 points [-]

Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.

Indeed! I am still waiting for this problem to be tackled. ... At what point are we going to enjoy life? If you can't answer that basic question, what does it mean to win?

This is the problem of balance. It is easy enough to solve, if you are willing to discard some locally cherished assumptions.

First discard the assumption that every agent ought to follow the same utility function (assumed because it seems to be required by universalist, consequentialist approaches to ethics).

Second, discard the assumption that decision making is to be done by a unified (singleton) agent which seeks to maximize expected utility.

Replace the first with the more realistic and standard assumption that we are dealing with a population of interacting egoistic agents, each with its own personal utility function. A population whose agent membership changes over time with agent births (comissionings) and deaths (decommissionings).

Replace the second with the assumption that collective action is described by something like a Nash bargaining solution - that is, it cannot be described by just a composite utility function. You need a multi-dimensional composite utility (to designate the Pareto frontier) and "fairness" constraints (to pick out the solution point on the Pareto surface).

Simple example: (to illustrate how one kind of balance is achieved). Alice prefers the arts to the outdoors; Bob is a conservationist. Left to herself, rational Alice would donate all of her charity budget to the municipal ballet company; Bob would donate to the Audubon Society. Bob and Alice marry. How do they make joint charitable contributions?

Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.
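The split can be made precise with a Nash bargaining computation. A hedged sketch (the log utility functions and the $100 budget are invented for illustration, not from the comment): maximizing the product of the spouses' utility gains over the disagreement point picks an interior split rather than either corner.

```python
import math

BUDGET = 100  # total charity budget, an illustrative figure

def alice_u(ballet):
    # Alice only values the ballet donation (log utility, an assumption).
    return math.log(1 + ballet)

def bob_u(ballet):
    # Bob only values the remainder, which goes to the Audubon Society.
    return math.log(1 + BUDGET - ballet)

# Nash bargaining solution: maximize the product of utility gains over the
# disagreement point (taken here as zero utility for both: no joint donation).
best_product, best_split = max(
    (alice_u(b) * bob_u(b), b) for b in range(BUDGET + 1)
)
print(best_split)  # 50: an even split, which neither spouse would pick alone
```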

More pertinent example: generation X is in a society with generation Y and (expected, not-yet-born) generation Z. GenX has the power to preserve some object which will be very important to GenZ. But it has very little direct incentive to undertake the preservation, because it discounts the future. However, GenZ has some bargaining power over GenY (GenZ's production will pay GenY's pensions) and GenY has bargaining power over GenX. Hence a Nash bargain is struck in which GenX acts as if it cared about GenZ's welfare, even though it doesn't.

But, even though GenZ's welfare has some instrumental importance to GenX, it cannot come to have so much importance that it overwhelms GenX's hedonism. A balance must be achieved precisely because a bargain is being struck. The instrumental value (to GenX) of the preservationist behavior exists specifically because it yields hedonistic utility to GenX (in trade).

Comment author: XiXiDu 17 May 2011 03:38:45PM 0 points [-]

Nicely put, very interesting.

Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.

What about Aumann's agreement theorem? Doesn't this assume that contributions to a charity are based upon genuinely subjective considerations that are only "right" from the inside perspective of certain algorithms? Not to say that I disagree.

Also, if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how much of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?

Comment author: Perplexed 17 May 2011 11:21:54PM 2 points [-]

Bob comes to agree that Alice likes ballet - likes it a lot. Alice comes to agree that Bob prefers nature to art. They don't come to agree that art is better than nature, nor that nature is better than art. Because neither is true! "Better than" is a three-place predicate (taking an agent id as an argument). And the two agree on the propositions Better(Alice, ballet, Audubon) and Better(Bob, Audubon, ballet).

...if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how much of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?

Assume that individual humans are compounds? That is not what I am suggesting in the above comment. I'm talking about real compound agents created either by bargaining among humans or by FAI engineers.

But the notion that the well-known less-than-perfect rationality of real humans might be usefully modeled by assuming they have a bunch of competing and collaborating agents within their heads is an interesting one which has not escaped my attention. And, if pressed, I can even provide an evolutionary psychology just-so-story explaining why natural selection might prefer to place multiple agents into a single head.

Comment author: steven0461 15 May 2011 06:10:28PM 3 points [-]

Would you accept "at some currently unknown point" as an answer? Or is the issue that you think enjoyment of life will be put off infinitely? But whatever the right way to deal with possible infinities is (if such a way is needed), that policy is obviously irrational.

Comment author: timtyler 17 May 2011 05:09:22PM 0 points [-]

Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.

Indeed! I am still waiting for this problem to be tackled.

It doesn't seem to be much of a problem to me - because of instrumental discounting.

Comment author: nazgulnarsil 16 May 2011 04:30:41AM 0 points [-]

Your risk-of-dying function determines the frontier between units devoted to hedonism and units devoted to continuation of experience.

Comment author: Perplexed 17 May 2011 02:03:07PM 0 points [-]

Ok, but which side of the frontier is which?

I have seen people argue that we discount the future since we fear dying, and therefore are devoted to instant hedonism. But if there were no reason to fear death, we would be willing to delay gratification and look to the glorious future.

Comment author: loqi 16 May 2011 01:05:37AM 0 points [-]

Enjoying life and securing the future are not mutually exclusive.

Comment author: Document 16 May 2011 04:03:04AM 1 point [-]

Optimizing for enjoyment of life or for security of the future superficially is mutually exclusive, if resources are finite and fungible between the two goals.

Comment author: loqi 16 May 2011 05:55:00PM 0 points [-]

Agreed. I don't see significant fungibility here.

Comment author: benelliott 17 May 2011 05:53:55PM -1 points [-]

Indeed! I am still waiting for this problem to be tackled.

Why not try tackling it yourself?

Comment author: gjm 15 May 2011 11:38:12AM *  14 points [-]

Downvoted for (1) being an extraordinarily laborious way of saying "decaying exponential times modest-degree polynomial is rapidly decaying", (2) only doing the laborious calculations and not mentioning why the result was pretty obvious from the outset, (3) purporting to list ways around the problem (if it is one) and not so much as mentioning "don't discount exponentially", (4) conflating "rationalist" with "exponentially discounting expected-utility maximizer", and most of all (5) the horrible, horrible I-know-I'm-going-to-be-downvoted-for-this-and-you're-all-so-stupid sympathy-fishing.

[EDITED to fix a typo: I'd numbered my points 1,2,3,5,5. Oops.]

Comment author: luminosity 15 May 2011 11:11:00PM 4 points [-]

I would have downvoted it for 5 alone, if I had enough karma to.

Comment author: wedrifid 18 May 2011 06:50:26PM 3 points [-]

I would have downvoted it for 5 alone, if I had enough karma to.

Unless the reference is obsolete we can make 4 downvotes per karma point. If so you must be really laying down the quality control. Bravo!

Comment author: Oscar_Cunningham 15 May 2011 10:31:09AM 6 points [-]

Utility functions are calculated from your preferences, not vice-versa. (To a first approximation.)

Comment author: Dreaded_Anomaly 17 May 2011 08:45:32PM *  2 points [-]

After spending some time thinking about the result from the correct math, here are my conclusions:

You claimed that the percentage of total utility attained in the first 10 years was independent of the level of time discounting. This is clearly not the case, as the percentage of total utility attained in the first T years with time discounting factor a is given by (1 - exp(-aT)*(1 + aT + ½(aT)²)). The expression -exp(-aT)*(1 + aT + ½(aT)²) (the difference between the previous expression and 1) goes to zero within double precision when the combined factor aT ≈ 745.13322.

For any T<∞, then, we can find a level of exponential time discounting such that we should care about the future (at least out to that time T). You provided no real justification for why we should choose an especially high level, e.g. a = 100*ln(2). This model, when calculated correctly, does not support your assertions in the general case. Getting to a more specific case which would support your assertions requires motivating a specific level of time discounting, which you did not accomplish with an arbitrary decision about "Planck-tech years."
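To make the dependence on the discount rate concrete, here is a quick sketch (Python, stdlib only) of the closed-form fraction quoted above. The two example rates are the ones discussed in this thread, not endorsements of either:

```python
import math

def utility_fraction(a, T):
    """Fraction of total discounted utility accrued by time T, from the
    closed form 1 - exp(-aT) * (1 + aT + (aT)**2 / 2)."""
    x = a * T
    return 1.0 - math.exp(-x) * (1.0 + x + 0.5 * x * x)

# Gentle discounting (utility halves every 10 years): only ~3% of
# total utility arrives in the first decade.
gentle = utility_fraction(0.1 * math.log(2), 10)

# Aggressive discounting (a = 100*ln(2)): essentially everything
# arrives immediately.
aggressive = utility_fraction(100 * math.log(2), 10)

print(gentle, aggressive)   # ~0.0333, ~1.0
```

Everything hinges on the product aT, which is the parent comment's point: without motivating a particular a, the model says nothing about how much of the future matters.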

Comment author: PhilGoetz 18 May 2011 04:46:14PM *  0 points [-]

I agree with the math; I disagree that my time-discounting constant is arbitrary.

I take my empirical doubling time from the average returns on investment in the Western world; that is the correct time-discounting to use in our current environment, as computed by the collective intelligence of all the investors on Earth. Anticipating that human-level software will eventually operate at 1000 times the speed of a human is a conservative figure that I do not believe it is necessary to make any arguments to defend. If I said 1 billion instead of 1000, I might be on shaky ground.

Also, note that with the new improved math, if I say there is no difference in subjective time, I still get 97% of my utility in 10 years. If I say there is a speedup of 100, I get all but 2×10^-27 of it in 10 years. This is worse than before!

(I'm upvoting this comment because it enlightened me, even though I take issue with part of it.)

Comment author: timtyler 21 May 2011 07:25:48PM *  1 point [-]

I take my empirical doubling time from the average returns on investment in the Western world; that is the correct time-discounting to use in our current environment, as computed by the collective intelligence of all the investors on Earth.

I already observed:

The prevailing interest rate is normally not much of a factor - since money is only instrumentally valuable.

I do not think that interest rates are really a reflection of human temporal discounting. Why would anyone think that they were?

Comment author: Dreaded_Anomaly 18 May 2011 07:55:19PM 0 points [-]

Anticipating that human-level software will eventually operate at 1000 times the speed of a human is a conservative figure that I do not believe it is necessary to make any arguments to defend.

The revised math shows that the percentage of total utility within T years depends on the level of time discounting. Because your conclusion comes directly from that result, I think it's important to spend some time motivating your chosen level of time discounting.

Also, note that with the new improved math, if I say there is no difference in subjective time, I still get 97% of my utility in 10 years.

For a = 0.1*ln(2), the value of the integral from t=0..10 is ~24008. The value of the integral from t=0..∞ is ~720667. There is an order of magnitude difference between those two values. 97% of the utility comes after the first 10 years if there's no difference in subjective time.
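These two figures can be cross-checked numerically (a sketch assuming the integrand 120·t²·e^(−at) with a = 0.1·ln 2, which is the function whose antiderivative is being argued over in this thread):

```python
import math

a = 0.1 * math.log(2)   # utility halves every 10 years

def integrand(t):
    # discounted utility rate: polynomial growth times exponential decay
    return 120 * t**2 * math.exp(-a * t)

def simpson(f, lo, hi, n=100_000):
    """Composite Simpson's rule; n must be even."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

first_decade = simpson(integrand, 0, 10)   # ~24008
total = 240 / a**3                         # closed form for t = 0..infinity
print(first_decade, total, first_decade / total)
```

The ratio comes out around 0.033, i.e. roughly 97% of the utility lies beyond the first ten years, as the parent says.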

Comment author: PhilGoetz 18 May 2011 08:30:35PM *  0 points [-]

The revised math shows that the percentage of total utility within T years depends on the level of time discounting. Because your conclusion comes directly from that result, I think it's important to spend some time motivating your chosen level of time discounting.

Yes, I agree, and I just did.

97% of the utility comes after the first 10 years if there's no difference in subjective time.

We must be evaluating different integrals. I wrote my calculations up in the main post. I'm evaluating -120e^(-at)(t^2/a + 2t/a^2 + 2/a^3) from t=0 to whatever, where a=.1ln2. For t=0..10 this is 5805; for t=0..infinity it is 6006. What are you evaluating?

You know that with a halving time of 10 years, if you evaluate the function once every 10 years, half of the total utility would come at 10 years; so the 97% after 10 years figure doesn't pass the sanity check.

Comment author: Dreaded_Anomaly 18 May 2011 09:17:18PM *  0 points [-]

I just plugged your expression directly into Matlab, in case there was a typo in the form that I was using, and I get the same result that I was getting before. I agree with your calculation for Y=100, though.

Edit: Wolfram Alpha's results for t=0..10 and t=0..∞.

Comment author: PhilGoetz 19 May 2011 03:08:59AM *  0 points [-]

Neat! I enter it differently, but still get the same result. It seems either my math is wrong, or Wolfram alpha is wrong. Since Wolfram Alpha agrees with me for Y=100 while disagreeing with me for Y=1, I think my math is probably right, and something funny happens with Wolfram Alpha for the Y=1 case.

But I'm not going to take the time to figure it out for a post with 3 votes. This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.

Comment author: Steve_Rayhawk 22 May 2011 06:02:54AM *  9 points [-]

This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.

For the record, I dispute your causal model of the audience's response.

In particular, I dispute your model of the audience's moral reasoning as to what is inevitably being approved of or disapproved of by expressions of approval or disapproval of your actions relating to the post.

I also dispute your model of the audience's factual and moral reasoning about the gravity of the problem you suggest. I dispute specifically your model of the audience's process of choosing to suppose that non-exponential weighting functions could be considered sufficiently indicative of potential solutions as to justify relative unconcern. (This is because I dispute your model of the utility function structures initially familiar to the audience. As part of this, I dispute your model of their descriptions of discounting functions, according to which it apparently would be impossible for them to intend to refer to a function which was to be applied on a prespecified absolute timescale, without being translated to start at an agent's present time. If that was not your model, then I dispute your confusing apparent claim that such functions, if non-exponential, must be dynamically inconsistent.)

I am concerned that the errors in your model of the audience, if left unchallenged, will only serve to reinforce in you the apparent resentful, passive-aggressive self-righteousness which would have largely been itself the cause of the misinterpretations which led to those errors originally. This self-reinforcing effect might create needless mutual epistemic alienation.

Comment author: Dreaded_Anomaly 19 May 2011 03:51:26AM 1 point [-]

Neat! I enter it differently, but still get the same result. It seems either my math is wrong, or Wolfram alpha is wrong. Since Wolfram Alpha agrees with me for Y=100 while disagreeing with me for Y=1, I think my math is probably right, and something funny happens with Wolfram Alpha for the Y=1 case.

I get my result with Matlab, Wolfram Alpha/Mathematica, Maple, Google calculator, and my TI-84+ graphing calculator. The more likely conclusion is that your math is off for the Y=1 case.

This is a critical topic, but LessWrong hates it.

I think you have neglected the presentation of the topic as a confounding variable in that analysis.

Comment author: [deleted] 19 May 2011 03:35:30AM 1 point [-]

This is a critical topic, but not as critical as how much karma you get on LessWrong? Please care about karma less.

Comment author: Thomas 22 May 2011 08:05:40AM 0 points [-]

Where could your mistake be? If it is nowhere to be seen, it is possible that there isn't one. In that case, we have quite a crisis here.

Comment author: Eliezer_Yudkowsky 15 May 2011 09:53:05AM 12 points [-]

Rational expected-utility-maximizing agents get to care about whatever the hell they want. Downvoted.

Comment author: wedrifid 15 May 2011 10:41:38AM *  0 points [-]

Rational expected utility maximizing agents get to care about whatever the hell they want.

Most inspirational philosophical quote I've seen in a long time! Up there as a motivational quote too.

Comment author: PhilGoetz 17 May 2011 02:54:51AM *  0 points [-]

If an agent explicitly says, "My values are such that I care more about the state of the universe a thousand years from now than the state of the universe tomorrow", I have no firm basis for saying that's not rational. So, yes, I can construct a "rational" agent for which the concern in this post does not apply.

If I am determined simply to be perverse, that is, rather than to be concerned with preventing the destruction of the universe by the sort of agents anyone is likely to actually construct.

An agent like that doesn't have a time-discounting function. It only makes sense to talk about a time discounting function when your agent - like every single rational expectation-maximizing agent ever discussed, AFAIK, anywhere, ever, except in the above comment - has a utility function that evaluates states of the world at a given moment, and whose utility function for possible timelines specifies some function (possibly a constant function) describing their level of concern for the world state as a function of time.

When your agent is like that, it runs into the problem described in this post. And, if you are staying within the framework of temporal discounting, you have only a few choices:

  • Don't care about the future. Eventually, accidentally destroy all life, or fail to preserve it from black swans.
  • Use hyperbolic discounting, or some other irrational discounting scheme, even though this may be like adding a contradiction into a system that uses resolution. (I think the problems with hyperbolic discounting may go beyond its irrationality, but that would take another post.)
  • Use a constant function weighting points in time (don't use temporal discounting). Probably end up killing lots of humans.

If you downvoted the topic as unimportant because rational expectation-maximizers can take any attitude towards time-discounting they want, why did you write a post about how they should do time-discounting?

Comment author: PhilGoetz 17 May 2011 07:02:20AM 1 point [-]

BTW, genes are an example of an agent that arguably has a reversed time-discounting function. Genes "care" about their eventual, "equilibrium" level in the population. This is a tricky example, though, because genes only "care" about the future retrospectively; the more-numerous genes that "didn't care", disappeared. But the body as a whole can be seen as maximizing the proportion of the population that will contain its genes in the distant future. (This belief is relevant to theories of aging that attempt to explain the Gompertz curve.)

Comment author: timtyler 17 May 2011 06:22:44PM 0 points [-]

Kinda - but genes are not in practice looking a million years ahead - they are lucky if they can see or influence two generations ahead - so: instrumental discounting applies here too.

Comment author: timtyler 15 May 2011 10:28:26AM *  3 points [-]

Didn't like this post much either (sorry!). Yes, if you assume a substantial level of temporal discounting that makes the future matter less. If you don't like that, perhaps do not apply so much temporal discounting.

The dense maths hinders the reader here. I don't really approve of the dissing of expected utility maximizers at the end either.

Comment author: [deleted] 16 May 2011 04:01:16PM *  1 point [-]

"The dense maths hinders the reader here."

This is an argument against the reader, not the post. Anyone interested in these matters should be able to handle basic calculus, or else should withhold voting on such matters.

Comment author: Vaniver 17 May 2011 12:27:39PM 3 points [-]

This is an argument against the reader, not the post. Anyone interested in these matters should be able to handle basic calculus, or else should withhold voting on such matters.

I would agree, if the post treated the reader that way. When you multiply a polynomial by an exponential, the exponential wins. That's all the author needed to get to his point; instead we have dense paragraphs poorly explaining why that is the case.

Comment author: [deleted] 17 May 2011 01:21:09PM *  0 points [-]

"Dense paragraphs" and poor/unclear wording is not the same thing as "dense maths". So I disagree with timtyler's point, but not with yours.

Presumably, even if the exposition were phrased more clearly, timtyler would still have a problem with the "dense maths".

Comment author: CarlShulman 15 May 2011 02:15:42PM *  2 points [-]

I second Manfred and gjm's comments.

One additional point regarding subjective time. You say:

Strange but true. (If subjective time is slower, the fact that t=20 matters more to us is balanced out by the fact that t=2 and t=.2 also matter more to us.)

But even if I temporally discount by my subjective sense of time, if I can halt subjective time (e.g. by going into digital or cryonic storage) then the thing to do on your analysis is to freeze up as long as possible while the colonization wave proceeds (via other agents, e.g. Von Neumann probes or the rest of society).

Now, in fact, I wouldn't care for this strategy at all. If we're talking about distant galaxies being colonized with happy people that I do not then interact with, I don't care if they are 5 years in the future or a billion. I don't care additively and unboundedly about them, but temporal discounting is a bad way to represent my bounded concern. For instance, the possibility that physics might surprisingly turn out to allow indefinite exponential growth (maybe by creating baby universes, or it turning out that we are simulations in a universe with different physics than we see) isn't unboundedly motivating to me.

Nick Bostrom, in his "Infinite Ethics" and "Astronomical Waste" papers, discusses this general phenomenon: time discounting is proposed as a patch to create a framework for cost-benefit analysis that does not recommend big current sacrifices for future people (better representing folks' behavioral preferences), but in fact fails to do so because of uncertainty about the growth possibilities. [Edited per Steven's request].

Comment author: steven0461 15 May 2011 06:06:42PM 0 points [-]

Is there a better word for what you call "fanaticism"? Too many connotations.

Comment author: jimrandomh 15 May 2011 12:59:36PM 2 points [-]

If you compute the implications of a utility function and they do not actually agree with observed preferences, then that is an argument that the utility function you started with was wrong. In this case, you seem to have an argument that our utility function should not have time discounting that's stronger than polynomial.

Comment author: paulfchristiano 15 May 2011 04:30:59PM 1 point [-]

Could you perhaps give some plausible argument for exponential discounting, or some record of anyone who has seriously considered applying it universally? I appear to discount approximately exponentially in the near term, but really it's a reflection of my uncertainty about the future. I value future humans about as much as present humans, I just doubt my ability to understand my influence on them (in most but not all cases).

Even if you accept exponential discounting, your physical arguments seem pretty weak. How confident are you that faster than light travel is impossible, or that there is a fundamental bound on entropy / unit space? Are you 95% confident? 99% confident? How confident are you that reality looks anything like you think it does, at the bottom of the abstraction stack? Minimally, you should care about the future on the off chance you are wrong. You can still get 1/100 the utility of a civilization growing much faster than light, if you are only 99% sure it is impossible.

Comment author: JohnH 15 May 2011 05:12:25PM *  0 points [-]

Assuming people who have children are rational, either the time-discounting factor is not what you claim, or (perhaps more likely), since people love and expect to love their children and grandchildren (and future descendants), their descendants' expected utility is not time-discounted even while their own is. I likewise imagine that some people will treat long-lived versions of themselves in a similar fashion, discounting their own near-term expected utility while not discounting by the same amount the expected utility of the version of themselves that will be alive some 200 years hence.

Those individuals and families that behave in this manner should be expected to outlive or outbreed those that do not. History also suggests they out-colonize people who do not hold the expected long-term utility of their descendants as a high concern.

Therefore, while society at large may never place a high priority on colonizing space or on other proposals with high short-term cost and risk but potential long-term gain, we should expect that individuals who care about potential future descendants will. In your colonization scheme there will be individuals who give up a life of current pleasure for the potential to provide an equal or greater amount of pleasure to themselves and their descendants elsewhere.

Comment author: paulfchristiano 17 May 2011 04:53:18PM 1 point [-]

Could someone please explain any possible justification for exponential discounting in this situation? I asked earlier, but got voted below the threshold. If this is a sign of disagreement, then I would like to understand why there is disagreement.

Robin Hanson's argument for exponential discounting derives from an exponential interest rate. Our current understanding of physics implies there won't be an exponential interest rate forever (in fact this is the point of the present article). So Robin Hanson's argument doesn't apply at all to the situation in this article, and I strongly suspect Robin Hanson himself would agree that exponential discounting in this situation is ridiculous.

The other reason I see to discount exponentially is exponentially decreasing confidence in our predictions about the future. While this is a good approximation in some cases, it's not built into your utility function, and arguments like the one in this article don't make any sense if this is the only basis for exponential discounting.

I have not seen any other arguments in this thread.

Comment author: Wei_Dai 17 May 2011 09:31:20PM *  1 point [-]

Not sure why your earlier comment got voted down. I voted it up to -1.

Could someone please explain any possible justification for exponential discounting in this situation?

I think exponential discounting has gotten ingrained in our thinking mostly for historical reasons. Quoting from the book Time and Decision: economic and psychological perspectives on intertemporal choice ("DU model" here being the 1937 model from Paul Samuelson that first suggested exponential discounting):

Samuelson did not endorse the DU model as a normative model of intertemporal choice, noting that "any connection between utility as discussed here and any welfare concept is disavowed" (1937, 161). He also made no claims on behalf of its descriptive validity, stressing, "It is completely arbitrary to assume that the individual behaves so as to maximize an integral of the form envisaged in [the DU model]" (1937, 159). Yet despite Samuelson's manifest reservations, the simplicity and elegance of this formulation was irresistible, and the DU model was rapidly adopted as the framework of choice for analyzing inter-temporal decisions.

The DU model received a scarcely needed further boost to its dominance as the standard model of intertemporal choice when Tjalling C. Koopmans (1960) showed that the model could be derived from a superficially plausible set of axioms. Koopmans, like Samuelson, did not argue that the DU model was psychologically or normatively plausible; his goal was only to show that under some well-specified (though arguably unrealistic) circumstances, individuals were logically compelled to possess positive time preference. Producers of a product, however, cannot dictate how the product will be used, and Koopmans's central technical message was largely lost while his axiomatization of the DU model helped to cement its popularity and bolster its perceived legitimacy.

I notice that axiomatizations in economics/theory of rationality seem to possess much more persuasive power than they should. (See also vNM's axiom of independence.) People seem to be really impressed that something is backed up by axioms and forget to check whether those axioms actually make sense for the situation.

Comment author: timtyler 17 May 2011 05:14:01PM *  1 point [-]

Could someone please explain any possible justification for exponential discounting in this situation?

To quote from: http://en.wikipedia.org/wiki/Dynamically_inconsistent

Exponential discounting yields time-consistent preferences. Exponential discounting and, more generally, time-consistent preferences are often assumed in rational choice theory, since they imply that all of a decision-maker's selves will agree with the choices made by each self.

Comment author: paulfchristiano 17 May 2011 05:23:37PM 0 points [-]

At best, this is an argument not to use non-exponential, translation invariant discounting.

You can discount in a way that depends on time (for example, Robin Hanson would probably recommend discounting by current interest rate, which changes over time; the UDASSA recommends discounting in a way that depends on absolute time) or you can not discount at all. I know of plausible justifications for these approaches to discounting. I know of no such justification for exponential discounting. The wikipedia article does not provide one.

Comment author: timtyler 17 May 2011 05:42:54PM *  1 point [-]

At best, this is an argument not to use non-exponential, translation invariant discounting.

It is an argument not to use non-exponential discounting.

You can discount in a way that depends on time [...]

Exponential discounting depends on time. It is exponential temporal discounting being discussed. So: values are scaled by k·e^(-ct), where the t is for "time".

The prevailing interest rate is normally not much of a factor - since money is only instrumentally valuable.

or you can not discount at all.

That is the trivial kind of exponential discounting, where the exponent is zero.

I know of no such justification for exponential discounting. The wikipedia article does not provide one.

The bit I quoted was a justification. Exponential discounting yields time-consistent preferences. Only exponential discounting does that.
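The consistency claim can be illustrated with a toy comparison (hypothetical rewards and rates: the standard "100 units at t=10 vs. 110 units at t=11" setup). An exponential discounter ranks the two rewards the same way no matter how close they are; a hyperbolic discounter reverses its own earlier choice:

```python
def exp_discount(delay, r=0.9):
    # exponential: constant per-period factor
    return r ** delay

def hyp_discount(delay, k=1.0):
    # hyperbolic: 1 / (1 + k * delay)
    return 1.0 / (1.0 + k * delay)

def preferred(discount, now):
    # Reward A: 100 units at t=10; reward B: 110 units at t=11,
    # both evaluated from vantage point `now`.
    a = 100 * discount(10 - now)
    b = 110 * discount(11 - now)
    return "A" if a > b else "B"

# Exponential: the choice never flips as the rewards draw nearer.
print(preferred(exp_discount, 0), preferred(exp_discount, 9))   # A A
# Hyperbolic: prefers B from afar, then switches to A at the last minute.
print(preferred(hyp_discount, 0), preferred(hyp_discount, 9))   # B A
```

This is the dynamic-inconsistency property the Wikipedia quote refers to, and it is specific to non-exponential curves.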

Comment author: nazgulnarsil 16 May 2011 04:23:31AM 1 point [-]

People pay an exponential amount of future utility for utility now because we die. We inappropriately discount the future because our current environment has a much longer life expectancy than the primitive one. One should discount according to actual risk, and I plan on self modifying to do this when the opportunity arises.
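The survival-risk story does reduce to exponential discounting when the hazard rate is constant (a sketch with a hypothetical 1%/year risk of dying):

```python
import math

p = 0.01   # hypothetical constant 1%/year chance of dying

def survival_weight(t):
    # probability of still being around to enjoy utility at year t
    return (1 - p) ** t

# This is exactly exponential discounting with continuous rate -ln(1 - p):
rate = -math.log(1 - p)
print(survival_weight(50), math.exp(-rate * 50))   # identical
```

A hazard rate that changes over time (as when moving from the primitive environment to the modern one) would instead give a non-exponential discount curve.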

Comment author: Dorikka 16 May 2011 03:33:50AM 1 point [-]

Rewrite 2^(100t) as (2^100)^t = ln(2^100)e^t.

Plugging in for t=2 is giving me 2^(100t)=1.6*10^60 and ln(2^100)e^t = 512.17

Is this an error or did I read it wrong?

There are strong reasons for believing that time-discounting is exponential.

For all utility functions a human may have? What are these reasons?

Comment author: Dreaded_Anomaly 16 May 2011 07:16:48AM 2 points [-]

Plugging in for t=2 is giving me 2^(100t)=1.6*10^60 and ln(2^100)e^t = 512.17

Is this an error or did I read it wrong?

As I described below, his math is wrong.

Comment author: PhilGoetz 18 May 2011 04:57:02PM *  0 points [-]

Yep. Sorry. Fixing it now. The impact on the results is that your time horizon depends on your discount rate.

Comment author: DanielLC 16 May 2011 01:33:50AM *  1 point [-]

If we use time discounting, we should care about the future, because it's possible that time machines can be made, but are difficult. If so, we'd need a lot of people to work it out. A time machine would be valuable normally, but under time discounting, it gets insane. I don't know what half-life you're using, but let's use 1000 years, just for simplicity. Let's say that we bring a single person back to the beginning of the universe, for one year. This would effectively create about 8.7*10^4,154,213 QALYs. Any chance of time travel would make this worthwhile.
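The magnitude checks out, for what it's worth (a sketch assuming a ~13.8-billion-year-old universe and a 1000-year half-life; working in log10 keeps the number representable in double precision):

```python
import math

halflife_years = 1000.0
age_of_universe_years = 13.8e9   # assumed age of the universe

# Discounting with a 1000-year half-life, run backward: one QALY at the
# beginning of the universe is worth 2**(age/halflife) present QALYs.
log10_value = (age_of_universe_years / halflife_years) * math.log10(2)
mantissa = 10 ** (log10_value - int(log10_value))
print(f"{mantissa:.1f}e{int(log10_value)}")   # 8.7e4154213
```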

I've considered this as a reason why attempting to use time discounting for an FAI could fail.

Also, if time-independent quantum physics is to be believed, time is not an inherent property of the universe, so you can't really discount by it.

Comment author: CarlShulman 16 May 2011 02:01:48AM 1 point [-]

This has been previously discussed on Less Wrong.

Comment author: PhilGoetz 18 May 2011 04:59:23PM 0 points [-]

That's interesting. I don't think you should extend your utility function into the past that way. You have to go back to the application, and ask why you're doing discounting in the first place. It would be more reasonable to discount for distance from the present, whether forwards or backwards in time.

Comment author: benelliott 18 May 2011 05:12:18PM *  1 point [-]

Elsewhere in this thread, you have criticised hyperbolic discounting for being 'irrational', by which I presume you mean the fact that it is inconsistent under reflection, while exponentials are not.

It would be more reasonable to discount for distance from the present, whether forwards or backwards in time.

Your new function is also inconsistent under reflection.

Maybe this is an argument for not discounting, since that is the only possible way to have a past-future symmetric, reflexively consistent utility function.

Just a thought.

Comment author: timtyler 21 May 2011 07:36:02PM *  0 points [-]

Elsewhere in this thread, you have criticised hyperbolic discounting for being 'irrational', by which I presume you mean the fact that it is inconsistent under reflection, while exponentials are not.

Not really - this is the problem.

Comment author: benelliott 21 May 2011 08:12:23PM 0 points [-]

We are referring to the same fact. Reflective inconsistency is a trivial consequence of dynamic inconsistency.

Comment author: CuSithBell 15 May 2011 03:01:26PM 1 point [-]

So, this calculation motivates non-expansion, but an agent with an identical utility function that is expansionist anyway attains greater utility and for a longer time... is that right?

Comment author: PhilGoetz 15 May 2011 07:12:20PM 0 points [-]

No, because maximizing utility involves tradeoffs. Being expansionist means expanding instead of doing something else.

Comment author: CuSithBell 15 May 2011 07:21:53PM 0 points [-]

By my reading, that contradicts your assumption that our utility is a linear function of the number of stars we have consumed. (And, moreover, you seem to say that never running out of starstuff is a conservative assumption which makes it less likely we will go seek out more starstuff.)

Comment author: roystgnr 20 May 2011 12:55:38PM 0 points [-]

If we assume that our time-discounting function happens to be perfectly adjusted to match our rate of economic growth now, is it wise to assume that eventually the latter will change drastically but the former will remain fixed?

Comment author: PhilGoetz 18 May 2011 04:19:15PM *  -1 points [-]

Discussion with Jeff Medina made me realize that I can't even buy into the model needed to ask how to time-discount. That model supposes you compute expected utility out into the infinite future. That means that, for every action k, you compute the sum, over every timestep t and every possible world w, of p(w(t))·U(w(t), k), discounted at time t.

If any of these things - possible worlds, possible actions, or timesteps - is countably infinite, then the decision process is uncomputable: it can't terminate after finitely many steps. This is a fatal flaw with the standard approach to computing utility forward to the infinite future.

Comment author: lessdazed 17 May 2011 11:47:12AM *  0 points [-]

ADDED: Downvoting this is saying, "This is not a problem". And yet, most of those giving their reasons for downvoting have no arguments against the math.

A major problem with simple voting systems like that used on LW is that people impute meanings to voters more confidently than they should. I've seen this several times here.

If people give a reason for downvoting, they're probably not being deceptive and may even be right about their motives. But most who vote will not explain why in a comment, and by generalizing from the statements of the few who do comment, you're overstepping the bounds of what you can reasonably infer about the individuals who vote without commenting.

If you agree with this comment, please downvote it.

(Standard LW site mechanics still apply, of course. If this post gets enough downvotes, it will be hidden to many users who otherwise would have seen it. If you want to spread the meme expressed in the post, downvoting it into oblivion has certain obvious negative consequences for that meme. On the other hand, for people who do see it, downvotes may be interpreted positively. On the writhing blue tentacle I woke up with this morning, if you disagree with it and don't want others to see it, downvoting will hide it from many. On the other writhing blue tentacle, perhaps this particular post isn't up to the normal standard of quality you expect from me and think downvoting it will send me that message. On the ponderous green claw, perhaps you agree with the content but not the tone and...)