timtyler comments on Why you must maximize expected utility - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I appreciate the hard work here, but all the math sidesteps the real problems, which are in the axioms, particularly the axiom of independence. See this sequence of comments on my post arguing that saying expectation maximization is correct is equivalent to saying that average utilitarianism is correct.
People object to average utilitarianism because of certain "repugnant" scenarios, such as the utility monster (a single individual who enjoys torturing everyone else so much that it's right to let him or her do so). Some of these scenarios can be transformed into a repugnant scenario for expectation maximization over your own utility function, where instead of "one person" you have "one possible future you". Suppose the world has one billion people. Do you think it's better to give one billion and one utilons to one person than to give one utilon to everyone? If so, why would you believe it's better to take an action that results in you having one billion and one utilons one-one-billionth of the time, and nothing all other times, than an action that reliably gives you one utilon?
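The arithmetic behind that comparison can be made explicit. A minimal sketch (the variable names and the billion-person setup are just the numbers from the example above; exact rational arithmetic is used to avoid floating-point noise):

```python
from fractions import Fraction

n = 10**9  # one billion people, or one billion possible future yous

# Gamble A: one billion and one utilons, one-one-billionth of the time.
ev_lottery = Fraction(1, n) * (n + 1)

# Gamble B: one utilon, reliably.
ev_sure = Fraction(1)

# Expectation maximization strictly prefers the lottery,
# by a margin of exactly 1/n of a utilon.
print(ev_lottery > ev_sure)        # True
print(ev_lottery - ev_sure)        # 1/1000000000
```

So the expected-utility maximizer takes gamble A for a one-billionth-of-a-utilon edge, just as the average utilitarian prefers concentrating the utilons in one person; that parallel is the point of the example.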
The way people think about the lottery suggests that most people prefer to distribute utilons equally among different people, but to lump them together and give them to a few winners in distributions among their possible future selves. This is a case where we reliably violate the Golden Rule, and call ourselves virtuous for doing so.
That thesis seems obviously wrong: the term "utilitarianism" refers not to maximising, but to maximising something pretty specific - namely: the happiness of all people.