PhilGoetz comments on Real-Life Anthropic Weirdness - Less Wrong

24 Post author: Eliezer_Yudkowsky 05 April 2009 10:26PM


Comment author: PhilGoetz 06 April 2009 01:41:57PM *  1 point [-]

From a statistical standpoint, lottery winners don't exist - you would never encounter one in your lifetime, if it weren't for the selective reporting.

When you said that, it seemed to me that you were saying that you shouldn't play the lottery even if the expected payoff - or even the expected utility - were positive, because the payoff would happen so rarely.

Does that mean you have a formulation for rational behavior that maximizes something other than expected utility? Some nonlinear way of summing the utility from all possible worlds?

If someone suggested that everyone in the world should pool their money together, and give it to one person selected at random (pretend for the sake of argument that utility = money), people would think that was crazy. Yet the idea of maximizing expected utility over all possible worlds assumes that an uneven distribution of utility to all your possible future selves is as good as an equitable distribution among them. So there's something wrong with maximizing expected utility.

Comment author: Nick_Tarleton 06 April 2009 04:54:14PM 3 points [-]

Broken intuition pump. The fact that money isn't utility (has diminishing returns) is actually very important here. I, for one, don't think I can envision pooling and redistributing actual utility, at least not well enough to draw any conclusions whatsoever.
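(Illustrative sketch of the diminishing-returns point, not anything the commenters wrote: take a concave utility function, say u(w) = √w, and compare everyone keeping their wealth against pooling it and awarding the pot to one person at random. Expected *money* per person is identical in both schemes, but expected *utility* collapses under pooling.)

```python
import math

def expected_utility_equal(n, w, u=math.sqrt):
    # Everyone keeps their own wealth w: each person's utility is u(w).
    return u(w)

def expected_utility_pooled(n, w, u=math.sqrt):
    # Pool n*w and give it all to one person chosen uniformly at random.
    # Each person wins with probability 1/n (utility u(n*w)), else gets u(0).
    return (1 / n) * u(n * w) + ((n - 1) / n) * u(0)

n, w = 1_000_000, 100  # a million people, $100 each (hypothetical numbers)
print(expected_utility_equal(n, w))   # 10.0
print(expected_utility_pooled(n, w))  # 0.01
```

With √-utility the pooled scheme cuts expected utility by a factor of √n, even though expected wealth per person is unchanged, which is why the money-pooling intuition pump doesn't transfer to pooling utility itself.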

Also, a utility function might not be defined over selves at particular times, but over 4D universal histories, or even over the entire multiverse. (This is also relevant to your happiness vs. utility distinction, I think.)

Comment author: PhilGoetz 06 April 2009 05:05:36PM 0 points [-]

What I'm getting at is that the decision society makes about how to distribute utility across different people is very similar to the decision you make about how to distribute utility across your possible future selves.

Why do we think it's reasonable to say that we should maximize average utility across all our possible future selves, when no one I know would say that we should maximize average utility across all living people?

Comment author: ciphergoth 06 April 2009 02:20:59PM *  0 points [-]

The winning payoff would have to be truly enormous for the expected utility to be positive.