XiXiDu comments on Rationalists don't care about the future - Less Wrong

Post author: PhilGoetz 15 May 2011 07:48AM


Comment author: XiXiDu 15 May 2011 05:56:08PM 3 points

Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.

Indeed! I am still waiting for this problem to be tackled. At what point is an expected utility maximizer (without time preferences) going to satisfy its utility function, or is the whole purpose of expected utility maximization to maximize expected utility rather than actual utility?

People here talk about the possibility of a positive Singularity as if it were some sort of payoff. I don't see that. If you think it is rational to donate money to the SIAI to enable it to create a galactic civilisation, then it would be equally rational, once you reached the post-Singularitarian paradise, to donate all your computational resources to the ruling FAI to enable it to overcome the heat-death of the universe. Just as the current risks from AI represent vast amounts of disutility, so does the heat-death of the universe.

At what point are we going to enjoy life? If you can't answer that basic question, what does it mean to win?

Comment author: Perplexed 17 May 2011 02:44:59PM 3 points

Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.

Indeed! I am still waiting for this problem to be tackled. ... At what point are we going to enjoy life? If you can't answer that basic question, what does it mean to win?

This is the problem of balance. It is easy enough to solve, if you are willing to discard some locally cherished assumptions.

First discard the assumption that every agent ought to follow the same utility function (assumed because it seems to be required by universalist, consequentialist approaches to ethics).

Second, discard the assumption that decision making is to be done by a unified (singleton) agent which seeks to maximize expected utility.

Replace the first with the more realistic and standard assumption that we are dealing with a population of interacting egoistic agents, each with its own personal utility function. A population whose agent membership changes over time with agent births (commissionings) and deaths (decommissionings).

Replace the second with the assumption that collective action is described by something like a Nash bargaining solution - that is, it cannot be described by just a composite utility function. You need a multi-dimensional composite utility (to designate the Pareto frontier) and "fairness" constraints (to pick out the solution point on the Pareto surface).

Simple example (to illustrate how one kind of balance is achieved): Alice prefers the arts to the outdoors; Bob is a conservationist. Left to herself, rational Alice would donate all of her charity budget to the municipal ballet company; Bob would donate to the Audubon Society. Bob and Alice marry. How do they make joint charitable contributions?

Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.
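The even split above can be read as a Nash bargaining solution. A minimal sketch, with illustrative assumptions not from the comment (linear utilities in each spouse's share, and a disagreement point of zero for both), shows that maximizing the Nash product over all ways of dividing the budget picks out the 50/50 split:

```python
# Hypothetical model of the Alice/Bob donation split as a Nash
# bargaining problem. The utility functions and disagreement point
# below are illustrative assumptions, not taken from the comment.

def nash_product(x, u_a, u_b, d_a=0.0, d_b=0.0):
    """Nash product for a split x, where x is the fraction of the
    joint budget going to Alice's cause (the ballet)."""
    return (u_a(x) - d_a) * (u_b(x) - d_b)

# Assume each spouse's utility is simply the share going to their cause.
u_alice = lambda x: x         # Alice values the ballet's share
u_bob = lambda x: 1.0 - x     # Bob values the Audubon Society's share

# Every split is Pareto-optimal here; the "fairness" constraint is what
# the Nash product supplies. Grid-search over candidate splits:
splits = [i / 1000 for i in range(1001)]
best = max(splits, key=lambda x: nash_product(x, u_alice, u_bob))
print(best)  # 0.5: the even split maximizes the Nash product
```

Note that no single composite utility function is being maximized: the solution depends on both utility dimensions and on the disagreement point, which is the "multi-dimensional composite utility plus fairness constraints" structure described above.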

More pertinent example: generation X is in a society with generation Y and (expected, not-yet-born) generation Z. GenX has the power to preserve some object which will be very important to GenZ. But it has very little direct incentive to undertake the preservation, because it discounts the future. However, GenZ has some bargaining power over GenY (GenZ's production will pay GenY's pensions) and GenY has bargaining power over GenX. Hence a Nash bargain is struck in which GenX acts as if it cared about GenZ's welfare, even though it doesn't.

But, even though GenZ's welfare has some instrumental importance to GenX, it cannot come to have so much importance that it overwhelms GenX's hedonism. A balance must be achieved specifically because a bargain is being struck. The instrumental value (to GenX) of the preservationist behavior exists specifically because it yields hedonistic utility to GenX (in trade).

Comment author: XiXiDu 17 May 2011 03:38:45PM 0 points

Nicely put, very interesting.

Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.

What about Aumann's agreement theorem? Doesn't this assume that contributions to a charity are based upon genuinely subjective considerations that are only "right" from the inside perspective of certain algorithms? Not to say that I disagree.

Also, if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how much of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?

Comment author: Perplexed 17 May 2011 11:21:54PM 2 points

Bob comes to agree that Alice likes ballet - likes it a lot. Alice comes to agree that Bob prefers nature to art. They don't come to agree that art is better than nature, nor that nature is better than art. Because neither is true! "Better than" is a three-place predicate (taking an agent id as an argument). And the two agree on the propositions Better(Alice, ballet, Audubon) and Better(Bob, Audubon, ballet).
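The point about "better than" being a three-place predicate can be made concrete. A minimal sketch, with illustrative preference data (the agent names and orderings are assumptions for the example, not claims from the thread):

```python
# "Better than" as a three-place predicate Better(agent, a, b),
# rather than a two-place predicate Better(a, b). The preference
# orderings below are illustrative assumptions.

preferences = {
    "Alice": ["ballet", "Audubon"],   # each list is ordered best-first
    "Bob": ["Audubon", "ballet"],
}

def better(agent, a, b):
    """True iff `agent` ranks option `a` strictly above option `b`."""
    order = preferences[agent]
    return order.index(a) < order.index(b)

# Both agents can agree on all three-place facts, with no contradiction:
print(better("Alice", "ballet", "Audubon"))  # True
print(better("Bob", "Audubon", "ballet"))    # True
print(better("Bob", "ballet", "Audubon"))    # False
```

Aumann agreement then poses no problem: the two-place claims "ballet is better" and "Audubon is better" never appear, so there is no proposition on which the agents' posteriors must converge yet conflict.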

...if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how much of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?

Assume that individual humans are compounds? That is not what I am suggesting in the above comment. I'm talking about real compound agents created either by bargaining among humans or by FAI engineers.

But the notion that the well-known less-than-perfect rationality of real humans might be usefully modeled by assuming they have a bunch of competing and collaborating agents within their heads is an interesting one which has not escaped my attention. And, if pressed, I can even provide an evolutionary psychology just-so-story explaining why natural selection might prefer to place multiple agents into a single head.

Comment author: steven0461 15 May 2011 06:10:28PM 3 points

Would you accept "at some currently unknown point" as an answer? Or is the issue that you think enjoyment of life will be put off indefinitely? But whatever the right way to deal with possible infinities may be (if such a way is needed), a policy of deferring enjoyment forever is obviously irrational.

Comment author: timtyler 17 May 2011 05:09:22PM 0 points

Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.

Indeed! I am still waiting for this problem to be tackled.

It doesn't seem to be much of a problem to me - because of instrumental discounting.

Comment author: nazgulnarsil 16 May 2011 04:30:41AM 0 points

Your risk-of-dying function determines the frontier between units devoted to hedonism and units devoted to continuation of experience.

Comment author: Perplexed 17 May 2011 02:03:07PM 0 points

Ok, but which side of the frontier is which?

I have seen people argue that we discount the future because we fear dying, and therefore are devoted to instant hedonism. But if there were no reason to fear death, we would be willing to delay gratification and look to the glorious future.

Comment author: loqi 16 May 2011 01:05:37AM 0 points

Enjoying life and securing the future are not mutually exclusive.

Comment author: Document 16 May 2011 04:03:04AM 1 point

Optimizing for either enjoyment of life or security of the future superficially is, if resources are finite and fungible between the two goals.

Comment author: loqi 16 May 2011 05:55:00PM 0 points

Agreed. I don't see significant fungibility here.

Comment author: benelliott 17 May 2011 05:53:55PM -1 points

Indeed! I am still waiting for this problem to be tackled.

Why not try tackling it yourself?