loqi comments on Average utilitarianism must be correct? - Less Wrong

Post author: PhilGoetz 06 April 2009 05:10PM


Comment author: PhilGoetz 06 April 2009 07:28:35PM * 1 point

> You don't interpret "utility" the same way others here do, just like the word "happiness". Our utility inherently includes terms for things like inequity. What you are using the word "utility" here for would be better described as "happiness".

We had the happiness discussion already. I'm using the same utility-happiness distinction now as then.

(You're doing that "speaking for everyone" thing again. Also, what you would call "speaking for me", and misinterpreting me. But that's okay. I expect that to happen in conversations.)

<EDITED TO USE STANDARD TERMINOLOGY>

> Our utility inherently includes terms for things like inequity.

The little-u u(situation) can include terms for inequity. The big-U U(lottery of situations) can't, if you're an expected utility maximizer. You are constrained to aggregate over different outcomes by averaging.

Since the von Neumann-Morgenstern theorem indicates that averaging is necessary in order to avoid violating their reasonable-seeming axioms of utility, my question is then whether it is inconsistent to use expected utility over possible outcomes, and NOT use expected utility across people.

Since you do both, that's perfectly consistent. The question is whether anything else makes sense in light of the von Neumann-Morgenstern theorem. </EDIT>
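For concreteness, the constraint the theorem imposes: a lottery L that assigns probability $p_i$ to outcome $o_i$ must be valued as

$$U(L) = \sum_i p_i \, u(o_i),$$

which is linear in the probabilities. So big-U cannot contain, say, a variance-of-u term across outcomes; any inequity aversion has to live inside little-u.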

<part below left as is because someone responded to it> If you maximize expected utility, that means that an action that results in utility 101 for one future you in one possible world, and utility 0 for 9 future yous in 9 equally likely possible worlds, is preferable to an action that results in utility 10 for all 10 future yous. That is very similar to saying that you would rather give utility 101 to 1 person and utility 0 to 9 other people, than utility 10 to 10 people.
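A quick sketch of that comparison (illustrative Haskell; the `Lottery` type and the names are mine, not anything from the thread):

```haskell
-- Illustrative only: a lottery as (probability, utility) pairs.
type Lottery = [(Double, Double)]

expectedUtility :: Lottery -> Double
expectedUtility l = sum [p * u | (p, u) <- l]

-- One future self gets utility 101, nine get 0, all ten worlds equally likely.
gamble :: Lottery
gamble = (0.1, 101) : replicate 9 (0.1, 0)

-- All ten future selves get utility 10.
sureThing :: Lottery
sureThing = replicate 10 (0.1, 10)

main :: IO ()
main = mapM_ (print . expectedUtility) [gamble, sureThing]
-- 10.1 vs. 10.0, so the expected utility maximizer takes the gamble
```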

Comment author: loqi 06 April 2009 09:03:43PM 2 points

> It can't, because utility functions are defined over a single world, not over the set of all possible worlds. If your utility function were defined over all possible worlds, you would just say "maximize utility" instead of "maximize expected utility".

This doesn't sound right to me. Assuming "world" means "world at time t", a utility function at the very least has type (World -> Utilons). It maps a single world to a single utility measure, but it's still defined over all worlds, the same way that (+3) is defined over all integers. If it were only defined for a single world it wouldn't really be much of a function, it'd be a constant.
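A toy rendering of that type point, with hypothetical `World` and `Utilons` stand-ins:

```haskell
type Utilons = Double

-- Hypothetical record standing in for a fully specified world state.
data World = World
  { happiness :: Double
  , inequity  :: Double
  }

-- Total over all Worlds, applied to one World at a time, just as
-- (+3) is defined over all integers but applied to one at a time.
u :: World -> Utilons
u w = happiness w - inequity w  -- little-u may penalize inequity
```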

We use expected utility due to uncertainty. If we had perfect information, we could maximize utility by searching over all action sequences, computing utility for each resulting world, and returning the sequence with the highest total utility.
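Under toy assumptions (worlds and actions collapsed to single numbers; `bestPlan` is a made-up name), that perfect-information search might look like:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

type Utilons = Double
type World   = Double  -- toy: a world summarized by one number
type Action  = Double  -- toy: an action just adds to the world state

-- With perfect information there is no lottery to average over:
-- score the final world of each candidate plan and keep the best.
bestPlan :: (World -> Utilons) -> World -> [[Action]] -> [Action]
bestPlan utility start = maximumBy (comparing (utility . foldl (+) start))

main :: IO ()
main = print (bestPlan id 0 [[1, 2], [5], [0, 0, 0]])  -- [5.0]
```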

> If you maximize expected utility, that means that an action that results in utility 101 for one future you in one possible world, and utility 0 for 9 future yous in 9 equally likely possible worlds

I think this illustrates the problem with your definition. The utility you're maximizing is not the same as the "utility 101 for one future you". You first have to map future you's utility to just plain utility for any of this to make sense.
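One way to picture that mapping step: a hypothetical aggregator from per-self utilities to plain utility, which nothing forces to be a straight sum or average:

```haskell
type Utilons = Double

-- Hypothetical aggregator; assumes a non-empty list of future selves.
plainUtility :: [Utilons] -> Utilons
plainUtility selves = sum selves - spreadPenalty selves
  where
    -- Toy inequity term across selves, of the kind little-u may contain.
    spreadPenalty us = maximum us - minimum us

-- plainUtility (101 : replicate 9 0) == 0.0
-- plainUtility (replicate 10 10)    == 100.0
```

Under this (made-up) aggregator, the even distribution wins even before any expectation over possible worlds is taken.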

Comment author: PhilGoetz 07 April 2009 03:26:00AM 1 point

> It maps a single world to a single utility measure, but it's still defined over all worlds,

I meant "the domain of a utility function is a single world."

However, it turns out that the standard terminology includes both utility functions over a single world (an "outcome") and a big utility function over probability mixtures of possible worlds (a "lottery").

My question/observation is still the same as it was, but my misuse of the terminology has mangled this whole thread.