Jonathan_Graehl comments on Average utilitarianism must be correct? - Less Wrong

Post author: PhilGoetz 06 April 2009 05:10PM


Comment author: PhilGoetz 06 April 2009 07:28:35PM *  1 point

You don't interpret "utility" the same way others here do, just as with the word "happiness". Our utility inherently includes terms for things like inequity. What you are using the word "utility" for here would be better described as "happiness".

We had the happiness discussion already. I'm using the same utility-happiness distinction now as then.

(You're doing that "speaking for everyone" thing again. Also, what you would call "speaking for me", and misinterpreting me. But that's okay. I expect that to happen in conversations.)

<EDITED TO USE STANDARD TERMINOLOGY>

Our utility inherently includes terms for things like inequity.

The little-u u(situation) can include terms for inequity. The big-U U(lottery of situations) can't, if you're an expected utility maximizer. You are constrained to aggregate over different outcomes by averaging.

Since the von Neumann-Morgenstern theorem indicates that averaging is necessary to avoid violating their reasonable-seeming axioms of utility, my question is whether it is inconsistent to use expected utility over possible outcomes, and NOT use expected utility across people.

Since you do both, your position is perfectly consistent. The question is whether anything else makes sense in light of the von Neumann-Morgenstern theorem. </EDIT>

<part below left as is because someone responded to it> If you maximize expected utility, then an action that results in utility 101 for one future you in one possible world, and utility 0 for 9 future yous in 9 equally-likely possible worlds (expected utility 10.1), is preferable to an action that results in utility 10 for all 10 future yous (expected utility 10). That is very similar to saying that you would rather give utility 101 to 1 person and utility 0 to 9 other people, than utility 10 to each of 10 people.
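The arithmetic behind this comparison can be sketched in a few lines of Python. The function and variable names here are my own illustration, not anything from the thread; the point is only that the expectation ranks the gamble above the sure thing, 10.1 to 10.

```python
def expected_utility(outcomes):
    """Probability-weighted average of utilities.
    outcomes: list of (probability, utility) pairs summing to probability 1."""
    return sum(p * u for p, u in outcomes)

# Action A: utility 101 in one of ten equally likely worlds, 0 in the rest.
action_a = [(0.1, 101)] + [(0.1, 0)] * 9
# Action B: utility 10 in every world.
action_b = [(0.1, 10)] * 10

# Expected utilities: 10.1 for A versus 10 for B, so an expected
# utility maximizer must prefer the gamble.
print(expected_utility(action_a), expected_utility(action_b))
```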

Comment author: Jonathan_Graehl 06 April 2009 10:43:42PM *  2 points

If you don't prefer 10% chance of 101 utilons to 100% chance of 10, then you can rescale your utility function (in a non-affine manner). I bet you're thinking of 101 as "barely more than 10 times as much" of something that faces diminishing returns. Such diminishing returns should already be accounted for in your utility function.
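A hypothetical sketch of this point: if the numbers 101 and 10 measure some resource with diminishing returns rather than utility itself, then applying a concave rescaling (square root is an arbitrary choice here, not anything Graehl specifies) flips the preference, which is why such returns must already be baked into the utility function before taking expectations.

```python
import math

def expected(outcomes):
    """Probability-weighted average; outcomes is a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

gamble = [(0.1, 101), (0.9, 0)]  # 10% chance of 101 "utilons"
sure = [(1.0, 10)]               # certainty of 10

# Taken as utilities, the gamble wins: 10.1 > 10.
print(expected(gamble) > expected(sure))

# Taken as a resource with diminishing returns (sqrt as an arbitrary
# concave rescaling), the sure thing wins: ~1.00 < ~3.16.
rescaled = lambda outcomes: [(p, math.sqrt(v)) for p, v in outcomes]
print(expected(rescaled(gamble)) < expected(rescaled(sure)))
```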

Comment author: PhilGoetz 07 April 2009 03:17:03AM *  1 point

I bet you're thinking of 101 as "barely more than 10 times as much" of something that faces diminishing returns.

No. I've explained this in several of the other comments. That's why I used the term "utility function", to indicate that diminishing returns are already taken into account.