V_V comments on Why you must maximize expected utility - Less Wrong

20 Post author: Benja 13 December 2012 01:11AM


Comments (75)


Comment author: PhilGoetz 14 December 2012 12:04:47AM *  0 points [-]

I appreciate the hard work here, but all the math sidesteps the real problems, which are in the axioms, particularly the axiom of independence. See this sequence of comments on my post arguing that saying expectation maximization is correct is equivalent to saying that average utilitarianism is correct.

People object to average utilitarianism because of certain "repugnant" scenarios, such as the utility monster (a single individual who enjoys torturing everyone else so much that it's right to let him or her do so). Some of these scenarios can be transformed into repugnant scenarios for expectation maximization over your own utility function, where instead of "one person" you have "one possible future you". Suppose the world has one billion people. Do you think it's better to give one billion and one utilons to one person than to give one utilon to everyone? If so, why would you believe it's better to take an action that gives you one billion and one utilons one-one-billionth of the time, and nothing all other times, than an action that reliably gives you one utilon?
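The arithmetic behind that comparison can be sketched as follows (a minimal illustration, assuming the payoffs and probability exactly as stated above):

```python
# Expected-utility comparison for the gamble described above.
N = 10**9  # one billion (people, or "possible future yous")

# Option A: one chance in a billion of winning N + 1 utilons, else nothing.
p_win = 1 / N
ev_gamble = p_win * (N + 1)  # = 1 + 1/N

# Option B: 1 utilon with certainty.
ev_sure = 1.0

print(ev_gamble)            # 1.000000001
print(ev_gamble > ev_sure)  # True
```

An expected-utility maximizer must prefer Option A, since its expectation is (marginally) higher, even though almost every "future you" ends up with nothing; this is the lumpy distribution the comment objects to.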

The way people think about the lottery suggests that most people prefer to distribute utilons equally among different people, but, when distributing among their own possible future selves, to lump the utilons together and give them all to a few winners. This is a case where we reliably violate the Golden Rule, and call ourselves virtuous for doing so.

Comment author: V_V 18 December 2012 05:15:37PM 2 points [-]

von Neumann–Morgenstern decision theory only deals with instantaneous decision making.