loqi comments on Average utilitarianism must be correct? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
We had the happiness discussion already. I'm using the same utility-happiness distinction now as then.
(You're doing that "speaking for everyone" thing again. Also, what you would call "speaking for me", and misinterpreting me. But that's okay. I expect that to happen in conversations.)
<EDITED TO USE STANDARD TERMINOLOGY>
The little-u u(situation) can include terms for inequity. The big-U U(lottery of situations) can't, if you're an expected utility maximizer. You are constrained to aggregate over different outcomes by averaging.
Since the von Neumann-Morgenstern theorem shows that averaging is necessary to avoid violating its reasonable-seeming axioms of utility, my question is whether it is inconsistent to use expected utility over possible outcomes, and NOT use expected utility across people.
Since you do both, that's perfectly consistent. The question is whether anything else makes sense in light of the von Neumann-Morgenstern theorem. </EDIT>
<part below left as is because someone responded to it> If you maximize expected utility, an action that results in utility 101 for one future you in one possible world, and utility 0 for 9 future yous in 9 equally likely possible worlds, is preferable to an action that results in utility 10 for all 10 future yous. That is very similar to saying that you would rather give utility 101 to 1 person and utility 0 to 9 other people than utility 10 to 10 people.
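The comparison above can be made concrete with a small sketch (the numbers are the ones in the comment; the helper name is mine):

```python
# Expected-utility comparison from the comment above.
# Action A: utility 101 in 1 of 10 equally likely worlds, 0 in the other 9.
# Action B: utility 10 in all 10 worlds.

def expected_utility(outcomes):
    """Average utility over equally likely outcomes."""
    return sum(outcomes) / len(outcomes)

action_a = [101] + [0] * 9   # one lucky future self, nine with nothing
action_b = [10] * 10         # every future self gets 10

print(expected_utility(action_a))  # 10.1
print(expected_utility(action_b))  # 10.0
```

An expected-utility maximizer prefers the first action (10.1 > 10), which is exactly the preference being questioned here.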
This doesn't sound right to me. Assuming "world" means "world at time t", a utility function at the very least has type (World -> Utilons). It maps a single world to a single utility measure, but it's still defined over all worlds, the same way that (+3) is defined over all integers. If it were only defined for a single world, it wouldn't really be much of a function; it'd be a constant.
We use expected utility due to uncertainty. If we had perfect information, we could maximize utility by searching over all action sequences, computing utility for each resulting world, and returning the sequence with the highest total utility.
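That perfect-information procedure can be sketched as a brute-force search; the toy world model, function names, and parameters below are stand-ins of my own, not anything from the discussion:

```python
from itertools import product

# Toy sketch of the perfect-information case: enumerate every action
# sequence of a fixed length, simulate the resulting world, and keep
# the sequence whose final world has the highest utility.

def best_plan(actions, horizon, step, utility, start):
    """Exhaustively search action sequences of length `horizon`."""
    best_seq, best_u = None, float("-inf")
    for seq in product(actions, repeat=horizon):
        world = start
        for a in seq:
            world = step(world, a)   # deterministic: no uncertainty
        u = utility(world)
        if u > best_u:
            best_seq, best_u = seq, u
    return best_seq, best_u

# Example: the "world" is just a number; actions increment or double it.
step = lambda w, a: w + 1 if a == "inc" else w * 2
plan, u = best_plan(["inc", "dbl"], horizon=3, step=step,
                    utility=lambda w: w, start=1)
print(plan, u)  # ('inc', 'dbl', 'dbl') 8
```

No expectation is taken anywhere: with perfect information each action sequence leads to exactly one world, so total utility, not expected utility, is what gets maximized.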
I think this illustrates the problem with your definition. The utility you're maximizing is not the same as the "utility 101 for one future you". You first have to map future you's utility to just plain utility for any of this to make sense.
I meant "the domain of a utility function is a single world."
However, it turns out that the standard terminology includes both utility functions over a single world ("outcome"), and a big utility function over all possible worlds ("lottery").
My question/observation is still the same as it was, but my misuse of the terminology has mangled this whole thread.