PhilGoetz comments on Real-Life Anthropic Weirdness - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
When you said that, it seemed to me that you were saying that you shouldn't play the lottery even if the expected payoff - or even the expected utility - were positive, because the payoff would happen so rarely.
Does that mean you have a formulation for rational behavior that maximizes something other than expected utility? Some nonlinear way of summing the utility from all possible worlds?
If someone suggested that everyone in the world pool their money and give it all to one person selected at random (pretend, for the sake of argument, that utility = money), people would think that was crazy. Yet maximizing expected utility over all possible worlds assumes that an uneven distribution of utility among your possible future selves is just as good as an equitable distribution among them. So there's something wrong with maximizing expected utility.
Broken intuition pump. The fact that money isn't utility (has diminishing returns) is actually very important here. I, for one, don't think I can envision pooling and redistributing actual utility, at least not well enough to draw any conclusions whatsoever.
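To make the parent's point concrete, here's a quick sketch with invented numbers (the wealth and population figures are mine, not from the thread): under a logarithmic, diminishing-returns utility function, pooling everyone's money and handing the pot to one random winner leaves expected *money* unchanged but wrecks expected *utility*.

```python
import math

n = 1_000_000          # number of people, each with the same starting wealth
wealth = 100.0         # dollars per person (arbitrary)

def utility(m):
    """Log utility: each extra dollar is worth less than the last."""
    return math.log(m)

# Scenario 1: everyone keeps their own money.
eu_keep = utility(wealth)

# Scenario 2: everyone pools; with probability 1/n you get n * wealth,
# otherwise you get (almost) nothing -- a tiny epsilon keeps log defined.
epsilon = 1e-9
eu_pool = (1 / n) * utility(n * wealth) + (1 - 1 / n) * utility(epsilon)

print(f"Expected money, either way: {wealth:.2f}")
print(f"Expected utility, keep: {eu_keep:.3f}")
print(f"Expected utility, pool: {eu_pool:.3f}")
```

Expected utility collapses under pooling because the one winner's log-wealth gain is minuscule next to everyone else's log-wealth ruin. This is exactly why "utility = money" breaks the intuition pump: with actual utility (whatever it is), no such diminishing-returns argument is available.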
Also, a utility function might not be defined over selves at particular times, but over 4D universal histories, or even over the entire multiverse. (This is also relevant to your happiness vs. utility distinction, I think.)
What I'm getting at is that the decision society makes about how to distribute utility across different people is very similar to the decision you make about how to distribute utility across your possible future selves.
Why do we think it's reasonable to say that we should maximize average utility across all our possible future selves, when no one I know would say that we should maximize average utility across all living people?
The winning payoff would have to be truly enormous for the expected utility to be positive.
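A rough back-of-the-envelope calculation (the wealth, odds, and ticket price below are invented for illustration) shows just how enormous: with log utility, the jackpot needed to make a $1 ticket worth buying is astronomically large.

```python
import math

W = 10_000.0    # current wealth in dollars (assumed)
price = 1.0     # ticket price (assumed)
p = 1e-8        # chance of winning (roughly real lottery-jackpot odds)

baseline = math.log(W)
lose = math.log(W - price)

# A positive expected-utility change requires:
#   p * log(W - price + J) + (1 - p) * log(W - price) > log(W)
# Solving for the log of the winning-state wealth:
needed_log_win = (baseline - (1 - p) * lose) / p

print(f"log(winning wealth) must exceed {needed_log_win:.1f}")
print(f"i.e. the jackpot must be on the order of e^{needed_log_win:.0f} dollars")
```

The required jackpot comes out around e^10000 dollars, vastly more money than exists, because the tiny per-ticket loss in log-wealth has to be repaid through a 1-in-10^8 event.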