The examples given in the article are bad examples - any decent concept of utility could deal with them pretty easily - but there are good examples he could've used that really do show some underlying ambiguity in the concept around the edges. I think most of those are solvable with enough creativity and enough willingness not to go "Oh, look, something that appears to be a minor surface-level problem, let's immediately give up and throw out the whole edifice!".
But that sort of thing doesn't really matter as regards whether we should use utility for moral judgments. It doesn't have to be perfect; it just has to be good enough. It doesn't take any kind of complicated distinction between hedonism and preference to solve the trolley problem; it just takes the understanding that five lives are, all things being equal, more important than four lives.
This sort of thing is one reason I've tried to stop using the word "utilitarianism" and started using the word "consequentialism". It doesn't set off the same defenses as "utility", and if people agree to judge actions by how well they turn out, general human preference similarity can probably make them agree on the best action even without complete agreement on a rigorous definition of "well".
it just takes the understanding that five lives are, all things being equal, more important than four lives.
Your examples rely too heavily on "intuitively right" and ceteris paribus conditioning. It is not always the case that five are more important than four, and the mere idea has been debunked several times.
if people agree to judge actions by how well they turn out, general human preference
What is the method you use to determine how things will turn out?
similarity can probably make them agree on the best action even without complete agreement on a rigorous definition of "well"
Does consensus make decisions correct?
You know the Nirvana fallacy and the fallacy of needing infinite certainty before accepting something as probably true? How the solution is to accept that a claim with 75% probability is pretty likely to be true, and that if you need to make a choice, you should choose based on the 75% claim rather than the alternative? You know how if you refuse to accept the 75% claim because you're virtuously "waiting for more evidence", you'll very likely end up just accepting some claim with even less evidence, one you're personally biased towards?
Morality works the same way. Even if you can't prove that one situation will always have higher utility than another, you've still got to go on the balance of probabilities, because that's all you've got.
The last time I used consequentialism in a moral discussion was (thinks back) on health care. I was arguing that when you have limited health care resources, it's sometimes okay to deny care to a "hopeless" case if it can be proven the resources that would be spent on that care could be used to save more people later. So you may refuse to treat one person with a "hopeless" disease that costs $500,000 to treat in order to be able to treat ten people with diseases that cost $50,000.
Now, yes, one of the people involved could be a utility monster. One of the people involved could grow up to be Hitler, or Gandhi, or Ray Kurzweil. Everyone in the example might really be a brain in a vat, or a p-zombie, or Omega, or an Ebborian with constantly splitting quantum mind-sheets. But if you were an actual health care administrator in an actual hospital, would you take the decision that probably fails to save one person, or the decision that probably saves ten people? Or would you say "I have no evidence to make the decision either way", wash your hands of it, and flip a coin?
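To make that concrete, here is a minimal sketch of the expected-value comparison. The dollar figures come from the example above; the survival probabilities are invented purely for illustration, not taken from anywhere:

```python
# Toy expected-value comparison for the hospital example above.
# The dollar figures are from the example; the survival probabilities
# are invented purely for illustration.
budget = 500_000

p_survive_hopeless = 0.05   # assumed chance the $500,000 treatment works
p_survive_curable = 0.90    # assumed chance each $50,000 treatment works
cost_curable = 50_000

# Option A: spend the whole budget on the one "hopeless" case.
expected_lives_a = 1 * p_survive_hopeless

# Option B: treat as many curable patients as the budget allows.
n_curable = budget // cost_curable            # 10 patients
expected_lives_b = n_curable * p_survive_curable

print(expected_lives_a, expected_lives_b)     # 0.05 vs 9.0
```

The exact probabilities don't matter much; any plausible numbers leave the ordering the same, which is the point.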
In this case, it doesn't matter how you define utility; for any person who prefers life to death, there's only one way to proceed. Yet there are many people in the real world, both hospital administrators and especially voters, who would support the other decision - the one where we give one person useless care now but let ten potentially curable people die later - with all their hearts. Our first job is to spread enough consequentialism to get people to stop doing this sort of thing. After that, we can argue about the technical details all we want. We can stop shooting ourselves in the foot even before we have a complete theory of ballistics.
Yet there are many people in the real world, both hospital administrators and especially voters, who would support the other decision - the one where we give one person useless care now but let ten potentially curable people die later - with all their hearts. Our first job is to spread enough consequentialism to get people to stop doing this sort of thing. After that, we can argue about the technical details all we want. We can stop shooting ourselves in the foot even before we have a complete theory of ballistics.
There should be a top-level post to this effect. It belongs as part of the standard introduction to rationality.
Here is a related post: http://lesswrong.com/lw/65/money_the_unit_of_caring/ I'm sure there are others.
I can see how it's related, but that's not quite what I was thinking of. The main points that drew me out were "spread consequentialism" and "first, stop shooting ourselves in the foot."
I don't know. It's gone.
Your examples rely too heavily on "intuitively right" and ceteris paribus conditioning. It is not always the case that five are more important than four
If there is literally nothing distinguishing the two scenarios except for the number of people--you have no information regarding who those people are, how their life or death will affect others in the future (including the population issues you cite), their quality of life or anything else--then it matters not whether it's 5 vs. 4 or a million vs. 4. Adding a million people at quality of life C or preventing their deaths is better than the same with four, and any consequentialist system of morality that suggests otherwise contains either a contradiction or an arbitrary inflection point in the value of a human life.
and the mere idea has been debunked several times.
The utility monster citation is fascinating because of a) how widely it diverges from all available evidence about human psychology, both on diminishing returns and on the similarity of human valences, b) how much improved the thought experiment is by substituting "human" (a thing whose utility I care about) for "monster" (for which I do not), and c) how straightforward it really seems: if there really were something 100 times more valuable than my life, I certainly ought to sacrifice my life for it, if I am a consequentialist.
I'll ignore the assumption made by the second article that human population growth is truly exponential rather than logistic. It further assumes--contrary to the utility monster, I note--that we ought to be using average utilitarianism. Even then, if all things were equal, which the article stipulates they are not, more humans would still be better. The article is simply arguing that that state of affairs does not hold, which may be true. Consequentialism is, after all, about the real world, not only about ceteris paribus situations.
and any consequentialist system of morality that suggests otherwise contains either a contradiction or an arbitrary inflection point in the value of a human life.
(Or a constant value for human life but with positive utility assigned to the probability of extinction from independent, chance deaths that follows an even more arbitrary, somewhat bizarre function.)
What is the method you use to determine how things will turn out?
Bayes' rule.
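For concreteness, a toy sketch of what a single update by Bayes' rule looks like; the hypothesis, the evidence, and all of the numbers are made up for illustration:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), with invented numbers.
# H = "this policy turns out well"; E = some piece of observed evidence.
prior = 0.5               # P(H) before seeing the evidence
p_e_given_h = 0.8         # P(E | H)
p_e_given_not_h = 0.2     # P(E | not H)

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(posterior)          # 0.8
```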
Does consensus make decisions correct?
Of course not, don't make straw men. Consensus is simply the best indicator of rightness we know of so far.
The economist's utility function is not the same as the ethicist's utility function. The goal of the economist is to describe and predict human behavior, so naturally, the economist's utility function is ill-suited for normative conclusions.
The ethicist's utility function, on the other hand, summarizes what you actually want, should you have the opportunity to sit down and really think about all of the possibilities. Utility in the ethicist's sense and happiness are not the same thing. Happiness is an emotion, a feeling. Utility (in the ethicist's sense) represents what you want, whether or not it is going to make you happy.
If this isn't entirely clear, consider that both happiness and the economist's utility function (they aren't the same thing either, mind you!) summarize a specific set of adaptations which would lead the actor to maximize his or her genetic fitness in some ancestral environment. The ethicist's utility function summarizes all of your values. Sometimes - many times - these values and adaptations come into conflict. For example, one adaptation for men is to treat a stepchild worse than a biological child, up to and including (if getting caught is unlikely) murder. This will not be in the ethicist's utility function.
side note: Nozick's experience machine is no problem for the ethicist's utility function. Do you see why?
p.s.: you might want to reformat your link
Agreed, and as a further illustration:
For economists, it is common to use a monotone transformation of a utility function in order to make it more tractable in a particular case. Such a transformation preserves the ordering of choices, though not the absolute relationships between them, so if an outcome were preferred in the transformed case it would also be preferred in the original case, and consumption decisions are unchanged.
This would be a problem for ethicists, because there is a serious difference between, say, U(x,y) = e^x * y and U(x,y) = x + log y when deciding the outcome of an action. Economists would note that, given prices, consumption behavior is essentially fixed either way, and be unsurprised. Ethicists would have to see the e^x and conclude that humanity should essentially spend the rest of its waking days creating xs; not so in the second function. Of course, the latter function is merely the log-transformation of the former.
ETA: Well, the economist would be a little surprised at the first utility function, because they don't tend to see or postulate things quite that extreme. But it wouldn't be problematic.
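For what it's worth, here is a small sketch of that contrast in code, using the two utility functions above; the consumption bundles are arbitrary, made-up numbers:

```python
import math

# The two utility functions from the comment above; u2 is the
# log-transformation of u1.
def u1(x, y):
    return math.exp(x) * y

def u2(x, y):
    return x + math.log(y)

# Arbitrary, made-up consumption bundles (x, y) with y > 0.
bundles = [(1, 4), (2, 1), (3, 2), (0.5, 6)]

# Both functions rank the bundles identically, so a consumer maximizing
# either one makes the same choices -- all the economist needs.
rank1 = sorted(bundles, key=lambda b: u1(*b))
rank2 = sorted(bundles, key=lambda b: u2(*b))
print(rank1 == rank2)  # True

# But the cardinal values differ wildly, which matters once an ethicist
# starts comparing or summing utilities across outcomes or people.
print([round(u1(*b), 2) for b in bundles])  # [10.87, 7.39, 40.17, 9.89]
print([round(u2(*b), 2) for b in bundles])  # [2.39, 2.0, 3.69, 2.29]
```

The ordering is identical because the second function is just the log of the first, but the cardinal magnitudes - the thing an aggregating ethicist would sum or average - are completely different.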
Ethicists would have to see the e^x and conclude that humanity should essentially spend the rest of its waking days creating xs; not so in the second function.
Why not so in the second function?
I was unclear in the setup: the utility function isn't supposed to reflect a representative agent for all humanity, but one individual or proper subset of individuals within humanity (if it were meant to be "the human utility function," then you are certainly right that only xs would be produced after everybody in humanity had 1 y, for either U-function).
Imagine we make 100 more units of x. With the second function, it doesn't matter whether we spread these out over 100 people or give them all to one, ethically--they produce the same quantity of utility. In particular, the additional utility produced in the second function per x is always 1.
In the first function, there is a serious difference between distributing the xs and concentrating them in one person--a difference brought out by sum utilitarianism vs. average utilitarianism vs. Rawlsian theory.
I use e^x as an example, but it would be superseded by somebody with e^e^x or x! or x^^x, etc.
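A toy version of that calculation, summing utility over a hypothetical population of 100 people who each start with x = 0 and y = 1; all the numbers are invented for illustration, and the two functions are the ones from the comments above:

```python
import math

n = 100          # hypothetical population size
extra_x = 100    # the 100 new units of x from the example above

def u_exp(x, y):   # first function: e^x * y
    return math.exp(x) * y

def u_log(x, y):   # second function: x + log y
    return x + math.log(y)

def total(u, allocation):
    # Everyone starts from y = 1 and no x; `allocation` is extra x per person.
    return sum(u(x, 1) for x in allocation)

spread = [extra_x / n] * n                 # one unit of x each
concentrated = [extra_x] + [0] * (n - 1)   # all 100 units to one person

# With x + log y, both allocations add exactly 100 utils in total.
print(total(u_log, spread), total(u_log, concentrated))  # 100.0  100.0

# With e^x * y, concentrating the x in one person dwarfs spreading it out.
print(total(u_exp, spread), total(u_exp, concentrated))  # ~271.8 vs ~2.7e43
```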
You seem to be assuming that your U(x) is per-person, so that each person a would have a separate Uₐ(x) = xₐ + log yₐ (or whatever), where xₐ is how much x that person has and yₐ is how much y that person has.
You then imply a universal or societal "overall" utility function of the form V(x) = ∑( Uₐ(x) ) over all a.
Your fallacy is in applying the log transform to the individual Uₐ(x) functions rather than to the top-level function V(x) as a whole.
You then imply a universal or societal "overall" utility function of the form V(x) = ∑( Uₐ(x) ) over all a.
I wasn't intending to imply that the society had homogeneous or even transferable utility functions--that was the substance of my clarification from the previous post.
Your fallacy is in applying the log transform to the individual Uₐ(x) functions rather than to the top-level function V(x) as a whole.
Insofar as there is no decision-maker at the top level, it wouldn't make much sense to do so. The transform is just used (by economists) to compute individuals' decisions in a mathematically simpler format, typically by separating a Cobb-Douglas function into two terms.
The point is that for economists, the two functions produce the same results--people buy the same things, make the same decisions, etc. You cannot aggregate economists' utility functions outside of using a proxy like money. For ethicists, the exact form of the utility function is important, and aggregation is possible--and that's the problem I'm trying to identify.
I don't see how aggregating utility functions is possible without some unjustifiable assumptions.
Agreed--that's related to what I'm arguing. In particular, utility would have to be transferable, and we'd have to know the form of the function in some detail. Not clear that either of those can be resolved.
How does:
Imagine we make 100 more units of x. With the second function, it doesn't matter whether we spread these out over 100 people or give them all to one, ethically--they produce the same quantity of utility. In particular, the additional utility produced in the second function per x is always 1.
not imply V(x) = ∑( Uₐ(x) ) ?
That is precisely the ethical aggregation of utility I am arguing against. You're right--an ethicist trying to use utility will have to aggregate. Thus, the form of the individual utility functions matters a great deal, if we believe we can do that.
We can't apply log-transforms, in the ethical sense against which I am arguing, because the form of the function matters.
I agree that correct aggregation is nontrivial.
If I'm still following the thread of this conversation correctly, the major alternative on the table is the behavior of the hypothetical economist, who presumably chooses to aggregate individual utilities via free-market interactions.
By what standard -- that is to say, by what utility function -- are we judging whether the economist or the naive-ethicist-who-aggregates-by-addition is right (or if both are totally wrong)?
Ah yes, there's the key.
I'm not sure there is anything (yet?) available for the naive-ethicist to sum. The economist's argument, generally construed, may be that we do not know how to, and possibly cannot, construct a consistent function for individuals; the best we can do is to allow those individuals to search for local maxima under conditions that mostly keep them from inhibiting the searches of others.
In some sense, the economist is advocating a distributed computation of the global maximum utility.
It's not clear that we can talk meaningfully about a meta-utility function for choosing between the economist's and the ethicist's aggregative functions. Wouldn't determining that meta-function be the same question as determining the correct aggregative function directly?
In short, absent better options, I think there's not much to do other than structure the system as best we can to allow that computation--and at most, institute targeted programs to eliminate the most obvious disutilities with minimal impact on others' utilities.
the best we can do is to allow
the economist is advocating
These constructions deal in should-judgments, implying that the economist, the ethicist, and we ourselves are at least attempting to discuss a meta-utility function, even if we don't or can't know what it is.
Wouldn't determining that meta-function be the same question as determining the correct aggregative function directly?
Yes.
Just because the question is very, very hard doesn't mean there's no answer.
Just because the question is very, very hard doesn't mean there's no answer.
Definitely true. That's why I said "yet?" It may be possible in the future to develop something like a general individual utility function, but we certainly do not have that now.
Perhaps I'm confused. The meta-utility function--isn't that literally identical to the social utility function? Beyond the social function, utilitarianism/consequentialism isn't making tradeoffs--the goal of the whole philosophy is to maximize the utility of some group, and once we've defined that group (a task for which we cannot use a utility function without infinite regress), the rest is a matter of the specific form.
The meta-utility function--isn't that literally identical to the social utility function?
Yes. The problem is that we can't actually calculate with it because the only information we have about it is vague intuitions, some of which may be wrong.
You seem to be assuming that your U(x) is per-person, so that each person a would have a separate Uₐ(x) = xₐ + log yₐ (or whatever), where xₐ is how much x that person has and yₐ is how much y that person has.
You then imply a universal or societal "overall" utility function of the form V(x) = ∑( Uₐ(x) ) over all a.
Your fallacy is in applying the log transform to the individual Uₐ(x) functions rather than to the top-level function V(x) as a whole.
I was going to say that the second function punishes you if you don't provide at least a little y, but that's true of the first function too.
The economist's utility function is not the same as the ethicist's utility function
According to who? Are we just redefining terms now?
As far as I can tell, your definition is the same as Bentham's, only implying rules that bind the practitioner more weakly.
I think someone started (incorrectly) using the term and it has taken hold. Now a bunch of cognitive dissonance is fancied up to make it seem unique because people don't know where the term originated.
According to who? Are we just redefining terms now?
See my reply and the following comments for the distinction. The economist's utility function is ordinal; the ethicist's is cardinal.
According to who? Are we just redefining terms now?
The economist wants to predict human behavior. This being the case, the economist is only interested in values that someone actually acts on. The 'best' utility function for an economist is the one that completely predicts all actions of the agent of interest. Capturing the agent's true values is subservient to predicting actions.
The ethicist wants to come up with the proper course of action, and thus doesn't care about prediction.
The difference between the two is normativity. Human psychology is complicated. Buried deep inside is some set of values that we truly want to maximize. When it comes to everyday actions, this set of values need not be relevant for predicting our actual behavior.
Utility = Happiness is one philosophical view, typically called hedonism, but many philosophers disagree with it. The more general normative definition is that utility is well-being (what's good for an individual). Hedonistic utilitarians like Bentham claim that happiness is the only good thing and thus that utility is happiness. But preference utilitarians claim that the fulfillment of preferences is what's good and consider utility a measure of preference satisfaction. (Then there are debates over just what a preference is; some give a definition close to Matt Simpson's, what you would want if you were fully informed & rational.) Others have different definitions of well-being/utility.
I just came across an essay David Friedman posted last Monday, "The Ambiguity of Utility," that presents one of the problems I have with using utilities as the foundation of some "rational" morality.