Among the four axioms used to derive the von Neumann-Morgenstern theorem, one stands out as not being axiomatic when applied to the aggregation of individual utilities into a social utility:
Axiom (Independence): Let A and B be two lotteries with A > B, and let t ∈ (0, 1]. Then for any lottery C, tA + (1 − t)C > tB + (1 − t)C.
In terms of preferences over social outcomes, this axiom means that if you prefer A to B, then you must prefer A+C to B+C for all C, where A+C means adding another group of people with outcome C to outcome A.
It's the social version of this axiom that implies "equity of utility, even among equals, has no utility". To see that considerations of equity violate the social Axiom of Independence, suppose my u(outcome) is determined by the difference between the highest and lowest individual utilities in the outcome: I prefer A to B as long as A has a smaller range of individual utilities than B, regardless of their averages. It should be easy to see that adding a person C to both A and B can cause A's range to increase more than B's, thereby reversing my preference between them.
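A quick numerical sketch of that reversal (the welfare numbers below are made up purely for illustration):

```python
# Sketch: a range-minimizing (equity-based) social preference violates the
# social Independence axiom. The welfare numbers are made up for illustration.

def social_u(outcome):
    """Prefer outcomes with a smaller spread of individual utilities."""
    return -(max(outcome) - min(outcome))  # higher is better

A = [4, 5]   # range 1
B = [0, 3]   # range 3
C = [0]      # the extra group of people added to both outcomes

print(social_u(A) > social_u(B))          # True:  A preferred to B
print(social_u(A + C) > social_u(B + C))  # False: the preference reverses
```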
Oh! I realized only now that this isn't about average utilitarianism vs. total utilitarianism, but about utilitarianism vs. egalitarianism. As far as I understand the word, utilitarianism means summing people's welfare; if you place any intrinsic value on equality, you aren't any kind of utilitarian. The terminology is sort of confusing: most expected utility maximizers are not utilitarians. (edit: though I guess this would mean only total utilitarianism counts, so there's a case that if average utilitarianism can be called utilitarianism, then egalitarian...
We should maximize average utility across all living people.
(Actually all people, but dead people are hard to help.)
Given diminishing returns on some valuable quantity X, equal distribution of X is preferable anyway.
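For what it's worth, that last point is just Jensen's inequality; here is a tiny sketch with a square-root utility standing in for diminishing returns (my choice of function, not anything from the post):

```python
import math

# Sketch: with diminishing returns (a concave individual utility such as sqrt),
# an equal split of a fixed amount of X yields more total (and hence average)
# utility than an unequal split of the same amount.

def u(x):
    return math.sqrt(x)  # concave: each additional unit of X is worth less

equal   = [5, 5]   # 10 units of X split equally
unequal = [9, 1]   # the same 10 units split unequally

print(sum(u(x) for x in equal))    # ~4.47
print(sum(u(x) for x in unequal))  # ~4.00
```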
I think you're quite confused about the constraints imposed by von Neumann-Morgenstern theorem.
In particular, it doesn't in any way imply that if you slice a large region of space into smaller regions of space, the utility of the large region has to equal the sum of the utilities of the smaller regions considered independently, by whatever function gives you the utility within a region of space. Space being the whole universe, the smaller regions of space being, say, spheres fitted around people's brains. You get the idea.
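One way to make this concrete: here is an invented utility function over an entire world-state that is perfectly consistent with expected utility maximization but is not a sum of per-sphere utilities:

```python
# Sketch: a utility function defined over an entire world-state that cannot be
# written as a sum of independent per-person (per-sphere) utilities. The
# functional form is invented purely to show that nothing in the theorem
# forces additivity across sub-regions.

def world_utility(welfares):
    # cares about the total AND about how evenly it is distributed
    return sum(welfares) - 2 * (max(welfares) - min(welfares))

print(world_utility([5, 5]))   # 10
print(world_utility([9, 1]))   # -6: same total, penalized for inequality
```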
This post seems incoherent to me :-( It starts out talking about personal utilities, and then draws conclusions about the social utilities used in utilitarianism. Needless to say, the argument is not a logical one.
Proving that average utilitarianism is correct seems like a silly goal to me. What does it even mean to prove an ethical theory correct? It doesn't mean anything. In reality, evolved creatures exhibit a diverse range of ethical theories that help them attain their mutually conflicting goals.
...[Average utilitarianism] implies that for any population consisting of very good lives there is a better population consisting of just one person leading a life at a slightly higher level of well-being (Parfit 1984 chapter 19). More dramatically, the principle also implies that for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being (Parfit
I may be misunderstanding here, but I think there's a distinction you're failing to make:
Maximizing expected utility is over possible future states (only one of which turns out to be real, so I guess it's maximizing utility over expected future properties of the amplitude field over configuration space, rather than properties of individual configurations, if one wants to get nitpicky...), while average/total/whatever utilitarianism has to do with how you sum up the good experienced/received among the people that would exist in the various modeled states.
At least that's my understanding.
This is back to the original argument, and not about the definition of expected utility functions or the status of utilitarianism in general.
PhilGoetz's argument appears to contain a contradiction similar to the one Moore discusses in Principia Ethica, where he argues that the principle of egoism does not entail utilitarianism.
Egoism: X ought to do what maximizes X's happiness.
Utilitarianism: X ought to do what maximizes EVERYONE's happiness
(or put X_0 for X, and X_x for Everyone).
X's happiness is not logically equivalent to Everyone's happiness. The im...
I haven't done the math, so take the following with a grain of salt.
We humans care about what will happen in the future. We care about how things will turn out. Call each possible future an "outcome". We humans prefer some outcomes over another. We ought to steer the future towards the outcomes we prefer. Mathematically, we have a (perhaps partial) order on the set of outcomes, and if we had perfect knowledge of how our actions affected the future, our decision procedure would just be "pick the best outcome".
So far I don't think ...
Average utilitarianism is actually a common position.
"Utility", as Eliezer says, is just the thing that an agent maximizes. As I pointed out before, a utility function need not be defined over persons or timeslices of persons (before aggregation or averaging); its domain could be 4D histories of the entire universe, or other large structures. In fact, since you are not indifferent between any two distributions of what you call "utility" with the same total and the same average, your actual preferences must have this form. This makes que...
The von Neumann-Morgenstern theorem indicates that the only reasonable form for U is to calculate the expected value of u(outcome) over all possible outcomes
I'm afraid that's not what it says. It says that any consistent set of choices over gambles can be represented as the maximization of some utility function. It does not say that that utility function has to be u. In fact, it can be any positive monotonic transform of u. Call such a transform u*.
This means that your utility function U is indifferent with regard to whether the distribution of utility is equitable among your future selves. Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5.
I'm afraid there's still some confusion here, because this isn't right. To take an example, suppose U = ln(u).
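Spelling that example out (a minimal sketch; it treats the two future selves as equally likely and ln(0) as negative infinity):

```python
import math

# Minimal sketch of the U = ln(u) point: an agent maximizing expected ln(u)
# is NOT indifferent between giving its future selves (10, 0) and (5, 5).
# The two future selves are treated as equally likely; ln(0) is taken as -inf.

def U(u):
    return math.log(u) if u > 0 else float('-inf')

def expected_U(future_selves):
    return sum(U(u) for u in future_selves) / len(future_selves)

print(expected_U([10, 0]))  # -inf: the u = 0 self is infinitely bad under ln
print(expected_U([5, 5]))   # ~1.61, so the equal split is strictly preferred
```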
Because if your choices under uncertainty do not maximize the expected value of some utility function, you are behaving inconsistently (in a particular sense, axiomatized by Savage, and others - there's a decent introduction here).
These axioms are contestable, but the reasons for contesting them have little to do with population ethics.
Also, as I've said before, the utility function that consistent agents maximize the expectation of need not be identical with an experienced utility function, though it will usually need to be a positive monotonic transform ...
Why do we think it's reasonable to say that we should maximize average utility across all our possible future selves
Because that's what we want, even if our future selves don't. If I know I have a 50/50 chance of becoming a werewolf (permanently, to make things simple) and eating a bunch of tasty campers on the next full moon, then I can increase loqi's expected utility by passing out silver bullets at the campsite ahead of time, at the expense of wereloqi's utility.
In other words, one can attempt to improve one's expected utility, as defined by one's current utility function, by anticipating situations in which one will no longer implement that function.
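With some invented payoffs (none of these numbers are in the original comment), the silver-bullet calculation looks roughly like this:

```python
# Sketch of the werewolf example with invented payoffs: loqi values campers
# surviving, wereloqi values campers eaten, and the chance of turning is 0.5.

p_werewolf = 0.5

def loqi_u(campers_eaten):
    return -campers_eaten   # current loqi wants no one eaten

def wereloqi_u(campers_eaten):
    return campers_eaten    # the werewolf self wants the opposite

eaten_without_bullets = 5   # campers eaten in the werewolf branch, no bullets
eaten_with_bullets = 0      # armed campers stop wereloqi before anyone is eaten

def loqi_expected_u(eaten_if_werewolf):
    # in the human branch no one gets eaten either way
    return (1 - p_werewolf) * loqi_u(0) + p_werewolf * loqi_u(eaten_if_werewolf)

print(loqi_expected_u(eaten_without_bullets))  # -2.5
print(loqi_expected_u(eaten_with_bullets))     #  0.0: bullets raise loqi's expected utility
print(wereloqi_u(eaten_with_bullets) < wereloqi_u(eaten_without_bullets))  # True: at wereloqi's expense
```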
I figured out what the problem is. Axiom 4 (Independence) implies average utilitarianism is correct.
Suppose you have two apple pies, and two friends, Betty and Veronica. Let B denote the number of pies you give to Betty, and V the number you give to Veronica. Let v(n) denote the outcome that Veronica gets n apple pies, and similarly define b(n). Let u_v(S) denote Veronica's utility in situation S, and u_b(S) denote Betty's utility.
Betty likes apple pies, but Veronica loves them, so much so that u_v(v(2), b(0)) > u_b(v(1), b(1)) + u_v(v(1), b(1)). We wan...
I tend to think that utilitarianism is a pretty naive and outdated ethical philosophy - but I also think that total utilitarianism is a bit less silly than average utilitarianism. Having read this post, my opinion on the topic is unchanged. I don't see why I should update towards Phil's position.
I started reading the Weymark article that conchis linked to. We have 4 possible functions:
I was imagining a set of dependencies like this:
Weymark describes it like this:
I said this in a comment on Real-life entropic weirdness, but it's getting off-topic there, so I'm posting it here.
My original writeup was confusing, because I used some non-standard terminology, and because I wasn't familiar with the crucial theorem. We cleared up the terminological confusion (thanks esp. to conchis and Vladimir Nesov), but the question remains. I rewrote the title yet again, and have here a restatement that I hope is clearer.
Some problems with average utilitarianism from the Stanford Encyclopedia of Philosophy:
(If you assign different weights to the utilities of different people, we could probably get the same result by considering a person with weight W to be equivalent to W copies of a person with weight 1.)
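That equivalence is easy to check for integer weights; a quick sketch with made-up numbers:

```python
# Sketch: a weighted average of utilities equals the plain average over a
# population where a person with (integer) weight W appears as W copies.
# The utilities and weights below are made up for illustration.

utilities = [10, 4, 7]
weights   = [3, 1, 2]

weighted_avg = sum(w * u for w, u in zip(weights, utilities)) / sum(weights)

expanded  = [u for w, u in zip(weights, utilities) for _ in range(w)]
plain_avg = sum(expanded) / len(expanded)

print(weighted_avg, plain_avg)  # both 8.0
```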