
Eliezer_Yudkowsky comments on Rationalists don't care about the future - Less Wrong Discussion

Post author: PhilGoetz 15 May 2011 07:48AM (3 points)

Comments (143)

Comment author: Eliezer_Yudkowsky 15 May 2011 09:53:05AM 12 points

Rational expected-utility-maximizing agents get to care about whatever the hell they want. Downvoted.

Comment author: wedrifid 15 May 2011 10:41:38AM 0 points

Rational expected utility maximizing agents get to care about whatever the hell they want.

Most inspirational philosophical quote I've seen in a long time! Up there as a motivational quote too.

Comment author: PhilGoetz 17 May 2011 02:54:51AM 0 points

If an agent explicitly says, "My values are such that I care more about the state of the universe a thousand years from now than the state of the universe tomorrow", I have no firm basis for saying that's not rational. So, yes, I can construct a "rational" agent for which the concern in this post does not apply.

That is, if I am determined simply to be perverse, rather than concerned with preventing the destruction of the universe by the sort of agents anyone is actually likely to construct.

An agent like that doesn't have a time-discounting function. It only makes sense to talk about a time-discounting function when your agent (like every rational expectation-maximizing agent ever discussed, AFAIK, anywhere, ever, except in the above comment) has a utility function that evaluates states of the world at a given moment, and whose utility function over possible timelines applies some weighting function (possibly a constant one) describing its level of concern for the world state as a function of time.
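To make that framework concrete, here is a minimal sketch (my notation and parameter values, not anything from the thread): a momentary utility function u over world states, a time-weighting function w, and a timeline utility that is the weighted sum of momentary utilities.

```python
def timeline_utility(states, u, w):
    """Utility of a timeline: sum over t of w(t) * u(state at time t)."""
    return sum(w(t) * u(s) for t, s in enumerate(states))

# Example weightings corresponding to the options discussed below:
def exponential(t, gamma=0.9):
    return gamma ** t      # standard exponential discounting

def constant(t):
    return 1.0             # no discounting: all times weighted equally
```

With a constant weighting, a timeline of three equally good states is worth three times one state; with exponential discounting, later states count for less.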

When your agent is like that, it runs into the problem described in this post. And, if you are staying within the framework of temporal discounting, you have only a few choices:

  • Don't care about the future. Eventually, accidentally destroy all life, or fail to preserve it from black swans.
  • Use hyperbolic discounting, or some other irrational discounting scheme, even though this may be like adding a contradiction into a system that uses resolution. (I think the problems with hyperbolic discounting may go beyond its irrationality, but that would take another post.)
  • Use a constant function weighting points in time (don't use temporal discounting). Probably end up killing lots of humans.
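For concreteness on the second option, a small sketch (assumed reward sizes and discount parameters, not from the post) of why hyperbolic discounting is usually called irrational: it produces preference reversals as a fixed pair of rewards draws nearer in time, while exponential discounting never does.

```python
def exponential(t, gamma=0.9):
    return gamma ** t

def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

# A small reward at time t versus a large reward at time t + 5.
SMALL, LARGE = 10.0, 30.0

def prefers_large(w, t):
    """Does an agent with weighting w prefer the later, larger reward?"""
    return LARGE * w(t + 5) > SMALL * w(t)

# Exponential: w(t+5)/w(t) = gamma**5 is constant, so the preference
# never flips as t changes. Hyperbolic: from far away the large reward
# wins, but close up the small one does -- a preference reversal, i.e.
# the agent's plans are not stable over time.
```

Running `prefers_large(hyperbolic, 20)` and `prefers_large(hyperbolic, 0)` gives opposite answers, while the exponential agent answers the same way at every t.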

If you downvoted the topic as unimportant because rational expectation-maximizers can take any attitude towards time-discounting they want, why did you write a post about how they should do time-discounting?

Comment author: PhilGoetz 17 May 2011 07:02:20AM 1 point

BTW, genes are an example of an agent that arguably has a reversed time-discounting function. Genes "care" about their eventual, "equilibrium" level in the population. This is a tricky example, though, because genes only "care" about the future retrospectively: the more-numerous genes that "didn't care" disappeared. But the body as a whole can be seen as maximizing the proportion of the population that will contain its genes in the distant future. (I believe this is relevant to theories of aging that attempt to explain the Gompertz curve.)

Comment author: timtyler 17 May 2011 06:22:44PM 0 points

Kinda - but genes are not, in practice, capable of looking a million years ahead; they are lucky if they can see or influence two generations ahead - so instrumental discounting applies here too.