Emile comments on Average utilitarianism must be correct? - Less Wrong
We had the happiness discussion already. I'm using the same utility-happiness distinction now as then.
(You're doing that "speaking for everyone" thing again. Also, what you would call "speaking for me", and misinterpreting me. But that's okay. I expect that to happen in conversations.)
<EDITED TO USE STANDARD TERMINOLOGY>
The little-u u(situation) can include terms for inequity. The big-U U(lottery of situations) can't, if you're an expected utility maximizer. You are constrained to aggregate over different outcomes by averaging.
Since the von Neumann-Morgenstern theorem indicates that averaging is necessary to avoid violating their reasonable-seeming axioms of utility, my question, then, is whether it is inconsistent to use expected utility over possible outcomes and NOT use expected utility across people.
Since you do both, that's perfectly consistent. The question is whether anything else makes sense in light of the von Neumann-Morgenstern theorem. </EDIT>
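A minimal sketch of the little-u/big-U distinction, with an invented inequity penalty and made-up numbers (the 0.5 weight and the income lists are purely illustrative, not from the discussion):

    # Little-u: the utility of one concrete situation. It may freely
    # include a term penalizing inequity among the people in it.
    def u(incomes):
        mean = sum(incomes) / len(incomes)
        inequity = sum(abs(x - mean) for x in incomes) / len(incomes)
        return mean - 0.5 * inequity  # 0.5 is an arbitrary inequity weight

    # Big-U: the utility of a lottery over situations. If you are an
    # expected utility maximizer, the vNM axioms force this to be a
    # probability-weighted average -- no inequity-like term across
    # outcomes is allowed here.
    def U(lottery):
        return sum(p * u(situation) for p, situation in lottery)

    # A 50/50 lottery between an equal world and an unequal one.
    lottery = [(0.5, [10, 10, 10]), (0.5, [30, 0, 0])]
    print(U(lottery))  # 0.5 * 10.0 + 0.5 * (10.0 - 20/3) ~= 6.67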
<part below left as is because someone responded to it> If you maximize expected utility, that means that an action that yields utility 101 for one future you in one possible world, and utility 0 for nine future yous in nine equally likely possible worlds, is preferable to an action that yields utility 10 for all ten future yous. That is very similar to saying that you would rather give utility 101 to 1 person and utility 0 to 9 other people than utility 10 to 10 people.
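To spell out the arithmetic behind that comparison (a worked check using the numbers above):

    # Ten equally likely worlds, as in the example above.
    EU_action_1 = (1 * 101 + 9 * 0) / 10   # 10.1
    EU_action_2 = (10 * 10) / 10           # 10.0
    # An expected utility maximizer prefers action 1, since 10.1 > 10.0 --
    # the analogue of giving 101 to one person and 0 to nine others
    # rather than 10 to each of ten people.
    print(EU_action_1, EU_action_2)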
I disagree: that's only the case if you have perfect knowledge.
Case A: I'm wondering whether to flip the switch of my machine. The machine causes a chrono-synclastic infundibulum, which is a physical phenomenon that has a 50% chance of causing a lot of awesomeness (+100 utility), and a 50% chance of blowing up my town (-50 utility).
Case B: I'm wondering whether to flip the switch of my machine, a friendly AI I just programmed. I don't know whether I programmed it right: if I did, it will bring forth an awesome future (+100 utility); if I didn't, it will try to enslave mankind (-50 utility). I estimate that my program has a 50% chance of being right.
The two cases are different, and if you have a utility function that's defined over all possible future worlds (one that just takes the average), you could say that flipping the switch in the first case has utility +25, and in the second case, expected utility +25 (actually, a utility of either +100 or -50, but you don't know which).
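The bookkeeping, spelled out (a sketch; the case labels and variable names are mine):

    # Case A: the 50/50 split is genuine physical chance in the machine.
    EU_case_A = 0.5 * 100 + 0.5 * (-50)   # +25, and arguably the actual
                                          # utility of flipping the switch

    # Case B: the outcome is already fixed by the code I wrote; the 50/50
    # split is only my credence about which program I actually have.
    EU_case_B = 0.5 * 100 + 0.5 * (-50)   # +25 in expectation, but the
                                          # actual utility is either +100
                                          # or -50 -- I just don't know which
    print(EU_case_A, EU_case_B)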