PhilGoetz comments on Average utilitarianism must be correct? - Less Wrong

Post author: PhilGoetz 06 April 2009 05:10PM




Comment author: PhilGoetz 06 April 2009 09:16:59PM 0 points [-]

I don't understand the question. Did I define a preference order? I thought I was just pointing out an unspoken assumption. What is the difference between what I have described as maximizing expected utility, and the standard view?

Comment author: Vladimir_Nesov 06 April 2009 09:25:18PM 1 point [-]

The following passage is very strange; it shows either a lack of understanding or some twisted terminology.

A utility measure discounts for inequities within any single possible outcome. It does not discount for utilities across the different possible outcomes. It can't, because utility functions are defined over a single world, not over the set of all possible worlds. If your utility function were defined over all possible worlds, you would just say "maximize utility" instead of "maximize expected utility".

Comment author: PhilGoetz 06 April 2009 11:39:05PM 1 point [-]

It shows twisted terminology. I rewrote the main post to try to fix it.

I'd like to delete the whole post in shame, but I'm still confused as to whether we can be expected utility maximizers without being average utilitarians.

Comment author: loqi 07 April 2009 02:44:36AM 1 point [-]

I've thought about this a bit more, and I'm back to the intuition that you're mixing up different concepts of "utility" somewhere, but I can't make that notion any more precise. You seem to be suggesting that certain seemingly plausible preferences cannot be properly expressed as utility functions. Can you give a stripped-down, "single-player" example of this that doesn't involve other people or selves?

Comment author: PhilGoetz 07 April 2009 03:33:58AM *  2 points [-]

You seem to be suggesting that certain seemingly plausible preferences cannot be properly expressed as utility functions.

Here's a restatement:

  • We have a utility function u(outcome) that gives a utility for one possible outcome.
  • We have a utility function U(lottery) that gives a utility for a probability distribution over all possible outcomes.
  • The von Neumann-Morgenstern theorem indicates that the only reasonable form for U is to calculate the expected value of u(outcome) over all possible outcomes.
  • This means that your utility function U is indifferent with regard to whether the distribution of utility is equitable among your future selves. Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5.
  • This is the same sort of ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population.
  • Therefore, I think that the von Neumann-Morgenstern theorem does not prove, but provides very strong reasons for thinking, that average utilitarianism is correct.
  • And yet, average utilitarianism asserts that equity of utility, even among equals, has no utility. This is shocking.

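The indifference claimed in the bullets above can be checked numerically (the utility values here are hypothetical, just mirroring the u=10/u=0 vs. u=5/u=5 example): under vNM, U ranks lotteries by expected u, so any two lotteries with the same expected u are ranked equally.

```python
# Minimal sketch of vNM expected-utility indifference between
# an inequitable and an equitable distribution of u over future selves.

def U(lottery):
    # lottery: list of (probability, u) pairs; U is the expectation of u
    return sum(p * u for p, u in lottery)

# "One future self gets u=10, the other u=0", as a 50/50 lottery:
skewed = [(0.5, 10), (0.5, 0)]
# "Each future self gets u=5":
even = [(0.5, 5), (0.5, 5)]

# An expected-u maximizer is exactly indifferent between the two:
assert U(skewed) == U(even) == 5.0
```
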
Comment author: Peter_de_Blanc 07 April 2009 03:55:54AM 3 points [-]

If you want a more equitable distribution of utility among future selves, then your utility function u(outcome) may be a different function than you thought it was; e.g. the log of the function you thought it was.

More generally, if u is the function that you thought was your utility function, and f is any monotonically increasing function on the reals with f'' < 0, then by Jensen's inequality, an expected f''(u)-maximizer would prefer to distribute u-utility equitably among its future selves.
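Peter's Jensen's-inequality point can be sketched numerically (the utilities below are illustrative, and f = log is just one concave choice; u=2 is used instead of u=0 to keep log defined): a plain expected-u maximizer is indifferent between the two distributions, while an expected f(u)-maximizer with concave f strictly prefers the equitable one.

```python
import math

def expected(values):
    # expectation under an equal-probability lottery over outcomes
    return sum(values) / len(values)

f = math.log  # monotonically increasing, with f'' < 0

unequal = [10, 2]  # one future self gets u=10, the other u=2
equal = [6, 6]     # same total u, distributed equitably

# A plain expected-u maximizer is indifferent:
assert expected(unequal) == expected(equal)

# An expected f(u)-maximizer prefers the equitable split (Jensen):
assert expected([f(u) for u in equal]) > expected([f(u) for u in unequal])
```
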

Comment author: conchis 07 April 2009 11:22:15AM *  1 point [-]

Exactly. (I didn't realize the comments were continuing down here and made essentially the same point here after Phil amended the post.)

The interesting point that Phil raises is whether there's any reason to have a particular risk preference with respect to u. I'm not sure that the analogy between being inequality averse amongst possible "me"s and inequality averse amongst actual others gets much traction once we remember that probability is in the mind. But it's an interesting question nonetheless.

Allais, in particular, argued that any form of risk preference over u should be allowable, and Broome finds this view "very plausible". All of which seems to make rational decision-making under uncertainty much more difficult, particularly as it's far from obvious that we have intuitive access to these risk preferences. (I certainly don't have intuitive access to mine.)

P.S. I assume you mean f(u)-maximizer rather than f''(u)-maximizer?

Comment author: Peter_de_Blanc 07 April 2009 03:49:33PM 1 point [-]

Yes, I did mean an f(u)-maximizer.

Comment author: PhilGoetz 17 April 2009 09:04:02PM *  0 points [-]

Yes - and then the f(u)-maximizer is not maximizing expected utility! Maximizing expected utility requires not wanting equitable distribution of utility among future selves.
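One way to make this exchange concrete (numbers hypothetical, f = log as before): an expected f(u)-maximizer is indeed not maximizing expected u, but it is still a vNM expected-utility maximizer whose utility function is f(u) rather than u, which is where the two framings talk past each other.

```python
import math

f = math.log  # a concave, monotonically increasing transform of u

def expected_u(lottery):
    # lottery: list of (probability, u) pairs
    return sum(p * u for p, u in lottery)

def expected_fu(lottery):
    # the same agent, described as an expected-utility maximizer over f(u)
    return sum(p * f(u) for p, u in lottery)

skewed = [(0.5, 10), (0.5, 2)]  # expected u = 6, inequitably distributed
even = [(1.0, 6)]               # u = 6 for sure

# Measured in u, the agent violates expected-u maximization:
assert expected_u(skewed) == expected_u(even)

# Measured in f(u), the same agent maximizes expected utility as usual:
assert expected_fu(even) > expected_fu(skewed)
```
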

Comment author: Vladimir_Nesov 07 April 2009 11:04:47AM 1 point [-]

This is the same sort of ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population.

Nope. You can have u(10 people alive) = -10 and u(only 1 person is alive) = 100, or u(1 person is OK and another suffers) = 100 and u(2 people are OK) = -10.

Comment author: PhilGoetz 07 April 2009 02:42:49PM 2 points [-]

Not unless you mean something very different than I do by average utilitarianism.

Comment author: Vladimir_Nesov 07 April 2009 05:12:25PM *  3 points [-]

I objected to drawing the analogy, and gave the examples that show where the analogy breaks. Utility over specific outcomes values the whole world, with all the people in it, together. Alternative possibilities for the whole world figuring into the expected utility calculation are not at all the same as different people. The people that average utilitarianism talks about are not from alternative worlds, and they do not each constitute the whole world, the whole outcome. This is a completely separate argument, having only surface similarity to the expected utility computation.

Comment author: thomblake 07 April 2009 02:35:45PM 2 points [-]

Maybe I'm missing the brackets between your conjunctions/disjunctions, but I'm not sure how you're making a statement about Average Utilitarianism.

Comment author: loqi 07 April 2009 04:01:02AM 1 point [-]

  • We have a utility function u(outcome) that gives a utility for one possible outcome.
  • We have a utility function U(lottery) that gives a utility for a probability distribution over all possible outcomes.
  • The von Neumann-Morgenstern theorem indicates that the only reasonable form for U is to calculate the expected value of u(outcome) over all possible outcomes.

I'm with you so far.

  • This means that your utility function U is indifferent with regard to whether the distribution of utility is equitable among your future selves. Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5.

What do you mean by "distribute utility to your future selves"? You can value certain circumstances involving future selves higher than others, but when you speak of "their utility" you're talking about a completely different thing than the term u in your current calculation. u already completely accounts for how much they value their situation and how much you care whether or not they value it.

  • This is the same sort of ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population.

I don't see how this at all makes the case for adopting average utilitarianism as a value framework, but I think I'm missing the connection you're trying to draw.

Comment author: loqi 07 April 2009 01:04:24AM 1 point [-]

I'd hate to see it go. I think you've raised a really interesting point, despite not communicating it clearly (not that I can probably even verbalize it yet). Once I got your drift it confused the hell out of me, in a good way.

Assuming I'm correct that it was basically unrelated, I think your previous talk of "happiness vs utility" might have primed a few folks to assume the worst here.