So here's a question for anyone who thinks the concept of a utility monster is coherent and/or plausible:
The utility monster allegedly derives more utility from any given thing than anyone else does, or experiences no diminishing returns, and so on.
Those are all facts about the utility monster's utility function.
But why should that affect the value of the utility monster's term in my utility function?
In other words: granting that the utility monster experiences arbitrarily large amounts of utility (and granting the even more problematic thesis that experienced utility is intersubjectively comparable)... why should I care?
I always automatically interpret the utility monster as an entity that somehow can be in a state that is more highly valued under my utility function than, say, a billion other humans put together.
But then the monster isn't a problem, because if there were in fact such an entity, I would indeed actually want to sacrifice a billion other humans to make the monster happy. This is true by definition.
I always automatically interpret the utility monster as an entity that somehow can be in a state that is more highly valued under my utility function than, say, a billion other humans put together.
That's easy. For most people (in general; I don't mean here on LessWrong), this just describes one's family (and/or close friends)... not to mention themselves!
I mean, I don't know exactly how many random people's lives, in e.g. Indonesia, would have to be at stake for me to sacrifice my mother's life to save them, but it'd be more than one. Maybe a lot more.
A billion? I don't know that I'd go that far. But some people might.
To continue the argument: It could be a problem if you'd want to protect the utility monster once it exists, but would prefer that the utility monster not exist. For example it could be an innocent being who experiences unimaginable suffering when not given five dollars.
Our oldest utility monster is eight years old. (Did you have this example specifically in mind? Seems to fit the description very well.)
...sometimes I wonder about the people who find it unintuitive to consider that "Killing X, once X is alive and asking not to be killed" and "Preferring that X not be born, if we have that option in advance" could have widely different utility to me. The converse perspective implies that either (1) we should be spawning as many babies as possible, as fast as possible, or (2) anyone who disagrees with (1) should go on a murder spree, or at best consider such murder sprees ethically unimportant. After all, not spawning babies as fast as possible is as bad as murdering that many existent adults, apparently.
The crucial question is how we want to value the creation of new sentience (aka population ethics). It has been proven impossible to come up with intuitive solutions to it, i.e. solutions that fit some seemingly very conservative adequacy conditions.
The view you outline as an alternative to total hedonistic utilitarianism is often left underdetermined, which hides some underlying difficulties.
In Practical Ethics, Peter Singer advocated a position he called "prior-existence preference utilitarianism". He considered it wrong to kill existing people, but not wrong to decline to create new people as long as their lives would be worth living. This position is awkward because it leaves you no way of saying that a very happy life (one where almost all preferences are going to be fulfilled) is better than a merely decent life that is worth living. If the very happy life is better, and the merely decent life is equal in value to non-creation, then denying that creating the very happy life is preferable to non-creation yields an intransitive ranking.
If I prefer, but only to a very tiny degree, having a child with a decent life over having one with an awesome life, would it be better if I had the child with the dece...
This is just the (intended) critique of utilitarianism itself, which says that the utility functions of others are (in aggregate) exactly what you should care about.
If you're unsure of a question of philosophy, the Stanford Encyclopedia of Philosophy is usually the best place to consult first. Its history of utilitarianism article says that
...Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good. There are many ways to spell out this general claim. One thing to note is that the theory is a form of consequentialism: the right action is understood entirely in terms of consequences produced. What distinguishes utilitarianism from egoism has to do with the scope of the relevant consequences. On the utilitarian view one ought to maximize the overall good — that is, consider the good of others as well as one's own good.
The Classical Utilitarians, Jeremy Bentham and John Stuart Mill, identified the good with pleasure, so, like Epicurus, were hedonists about value. They also held that we ought to maximize the good, that is, bring about ‘the greatest amount of good for the greatest number’.
Utilitarianism is also distinguished by impartiality and agent-neutrality. Everyone's happiness counts the same. When one maximizes the good, it is the good impartially considered.
In this post, I wrote: "The standard view ... obliterates distinctions between the ethics of that person, the ethics of society, and "true" ethics (whatever they may be). I will call these "personal ethics", "social ethics", and "normative ethics"."
Using that terminology, you're objecting to the more general point that social utility functions shouldn't be confused with personal utility functions. All mainstream discussion of utilitarianism has failed to make this distinction, including the literature on the utility monster.
However, it's still perfectly valid to talk about using utilitarianism to construct social utility functions (e.g., those to encode into a set of community laws), and in that context the utility monster makes sense.
Utilitarianism, and all ethical systems, are usually discussed with the flawed assumption that there is one single proper ethical algorithm, which, once discovered, should be chosen by society and implemented by every individual. (CEV is based on the converse of this assumption: that you can use a personal utility function, or the average of many personal utility functions, as a social utility function.)
I don't know. Patterns of upvotes and downvotes on LessWrong still mystify me.
You are right; I was, when I wrote the grandparent, confused about what utilitarianism is. Having read the other comment threads on this post, I think the reason is that popular usage of the term "utilitarianism" on this site does not match its usage elsewhere. What I thought utilitarianism was before I started commenting on LessWrong, and what I think utilitarianism is now that I've gotten unconfused, are the same thing (the same silly thing, imo); my interim confusion is more or less described in this thread.
My primary objections to utilitarianism remain the same: intersubjective comparability of utility (I am highly dubious about whether it's possible), disagreement about what sorts of things experience utility in a relevant way (animals? nematodes? thermostats?) and thus ought to be considered in the calculation, divergence of utilitarian conclusions from foundational moral intuitions in non-edge cases, various repugnant conclusions.
As far as the utility monster goes, I think the main issue is that I am really not inclined to grant intersubjective comparability of experienced utility. It jus...
Most people in time and space have considered it strange to take the well-being of non-humans into account.
I think this is wrong in an interesting way: it's an Industrial Age blind spot. Only people who've never hunted or herded and buy their meat wrapped in plastic have never thought about animal welfare. Many indigenous hunting cultures ask forgiveness when taking food animals. Countless cultures have taboos about killing certain animals. Many animal species' names translate to "people of the __." As far as I can tell, all major religions consider wanton cruelty to animals a sin, and have for thousands of years, though obviously, people dispute the definition of cruelty.
I kinda think the opposite is true. It's people who live in cities who join PETA. Country folk get acclimatized to commoditizing animals.
I'd like to see a summary of the evidence that many Native Americans actually prayed for forgiveness to animal spirits. There's been a lot of retrospective "reframing" of Native American culture in the past 100 years--go to a pow-wow today and an earnest Native American elder may tell you stories about their great respect for the Earth, but I don't find these stories in 17th- through 19th-century accounts. Praying for forgiveness makes a great story, but you usually hear about it from somebody like James Fenimore Cooper rather than in an ethnographic account. Do contemporary accounts from the Amazon say that tribespeople there do that?
(Regarding the reliability of contemporary Native American accounts: Once I was researching the Cree Indians, and I read an account, circa 1900, by a Cree, boasting that their written language was their own invention and went back generations before the white man came. The next thing I read was an account from around 1860 of a white missionary who had recently learned Cree and invented the written script for i...
I kinda think the opposite is true. It's people who live in cities who join PETA. Country folk get acclimatized to commoditizing animals.
This sounds right to me. After all, you don't find plantation owners agitating for the rights of slaves. No, it's people who live off far away from actual slaves, meeting the occasional lucky black guy who managed to make it in the city and noting that he seems morally worthy.
The holy books do not support laws about animal cruelty in the same way that they support "thou shalt not commit adultery".
IIRC, the requirements for humane slaughter are spelled out in great detail in the Mishnah.
Actual reality does not contain high-level objects such as nematodes or humans.
Before one can even consider the utility of a human's (or a nematode's) existence, one has to have a function that somehow processes the laws of physics and the state of a region of space, and tells us how happy or unhappy that region of space feels, what its value is, and so on.
What would be the properties of that function? Well, for one thing, the utility of a region of space would not generally equal the sum of the utilities of its parts, for the obvious reason that your head has greater utility when it hasn't been diced into perfect cubic blocks and then rearranged like a Rubik's cube.
This function could then be applied to a larger region of space containing nematodes and humans, and would process it in some way that clearly differs from any variety of arithmetic utilitarianism that adds or averages the utilities of nematodes and humans, because, as established above, the function is not additive over regions of spacetime, and nematodes and humans are just regions of spacetime with specific stuff inside.
What I imagine that function would do, is identify existence of particular computationa...
Finding out that the chunks will die (given the laws of physics as they are) is something the function in question has to do. Likewise, finding out that they won't die given some magic, but would die if they weren't rearranged and the magic were applied (portal-ing the blood all over the place).
You just keep jumping to a utility that is computed from the labels you have already assigned to the world.
edit: one could also subdivide it into very small regions of space, and note that you can't compute any kind of utility of the whole by evaluating every piece in isolation and then summing.
edit2: to be exact, I am giving a counterexample to f(ab) = f(a) + f(b) (where "ab" is a concatenated with b): since f(ab) != f(ba) for the diced-head reason above, while f(a) + f(b) = f(b) + f(a) always, f cannot be additive over regions.
More broadly, mathematics1 has been very useful in science, and so ethicists try to use mathematics2, where mathematics1 is a serious discipline in which one states assumptions and proceeds formally, and mathematics2 is "there must be arithmetical operations involved" or even "it is some kind of Elvish". (Meanwhile, mathematics1 doesn't get you very far here, because we can't make many assumptions.)
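To make the non-additivity point concrete, here is a minimal toy sketch (my own construction, purely illustrative; the function f and the string encoding are assumptions, not anything from the thread): a "utility" over regions modeled as strings of cells that rewards intact adjacent structure, and therefore cannot be a sum over its parts.

```python
# Toy illustration (hypothetical): a "utility" over regions of space,
# modeled as strings of cells. It rewards intact structure (adjacent
# cells in ascending order), so it depends on arrangement and is not
# additive over sub-regions.

def f(region: str) -> int:
    """Count adjacent cell pairs that are in 'intact' order."""
    return sum(1 for x, y in zip(region, region[1:]) if x <= y)

a, b = "abcd", "efgh"            # two sub-regions, each internally intact
print(f(a + b), f(a) + f(b))     # 7 vs 6: f(ab) != f(a) + f(b)
print(f(a + b), f(b + a))        # 7 vs 6: f(ab) != f(ba), though sums commute
```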
I broadly agree - it seems to me a plausible and desirable outcome of FAI that most of the utility of the future comes from a single super-mind made of all the material it can possibly gather in the Universe, rather than from a community of human-sized individuals.
The sort of utility monster I worry about is one that we might weigh more not because it is actually more sophisticated or otherwise of greater intrinsic moral weight, but simply one that feels more strongly.
Well, nematodes might already feel more strongly. If you have a total of 302 neurons, and 15 of them signal "YUM!" when you bite into a really tasty protozoan, that might be pure bliss.
Most people in time and space have considered it strange to take the well-being of non-humans into account.
I don't think this is true. As gwern's The Narrowing Circle argues, major historical exceptions to this include gods and dead ancestors.
Same for most gods, given the degree to which they were anthropomorphized. (In fact, the Bhagavad-Gita talks about how Hindus need to anthropomorphize in order to give "personal loving devotion to Lord Krishna". [Quote from a commentary])
Sure, but there's a fact of the matter: It's not that we don't value the experiences or well-being of dead ancestors; it's that we hold that they do not have any experiences or well-being — or, at least, none that we can affect with the consequences of our actions. (For instance, Christians who believe in heaven consider their dead ancestors to be beyond suffering and mortal concerns; that's kind of the point of heaven.)
The "expanding circle" thesis notices the increasing concern in Western societies for the experiences had by, e.g., black people. The "narrowing circle" thesis notices the decreasing concern for experiences had by dead ancestors and gods.
The former is a difference of sentiment or values, whereas the latter is a difference of factual belief.
The former is a matter of "ought"; the latter of "is".
Slaveholders did not hold the propositional beliefs, "People's experiences are morally significant, but slaves do not have experiences." They did not value the experiences of all people. Their moral upbringing specifically instructed them to not value the experiences of slaves; or to regard the suffering of slaves as the appointed (and thus morally correct) lot in life of slaves; or to regard the experiences of slaves as less important than the continuity of the social order and economy which were supported by slavery.
I've always believed that having an issue with utility monsters reflects either a lack of imagination or a bad definition of utility (if your definition of utility is "happiness" then a utility monster seems grotesque, but that's because your definition of utility is narrow and lousy).
We don't even need to stretch to create a utility monster. Imagine a spacecraft that's been damaged in deep space. There are four survivors: three are badly wounded and one is relatively unharmed. There's enough air for four humans to survive one day, or one human to survive four days. The closest rescue ship is three days away. After assessing the situation and verifying the air supply, the three wounded crewmembers sacrifice themselves so the one can be rescued.
To quote Nozick via Wikipedia: "Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility." That is exactly what happens on the spaceship, but most people here would find it pretty reasonable. A real utility monster would look more like that than some super-happy alien.
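For concreteness, the spaceship arithmetic can be written out as a small sketch (the numbers are from the scenario above; the code itself is just my illustration):

```python
# Air supply: 4 person-days. Rescue arrives on day 3.
AIR_PERSON_DAYS = 4
RESCUE_DAY = 3

def survivors(people_breathing: int) -> int:
    """Survivors under a policy that keeps `people_breathing` crew alive."""
    days_of_air = AIR_PERSON_DAYS / people_breathing
    return people_breathing if days_of_air >= RESCUE_DAY else 0

print(survivors(4))  # 0 -- air runs out after 1 day, rescue comes on day 3
print(survivors(2))  # 0 -- air lasts 2 days, still short
print(survivors(1))  # 1 -- air lasts 4 days, the one survivor is rescued
```

Every allocation other than "everything to one person" yields zero survivors, so the calculation singles out the unharmed crewmember exactly the way it would single out a utility monster.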
When you're talking about the utility of squirrels, what exactly are you calculating? How much you personally value squirrels? How do you measure that? If it is just a thought experiment ("I would pay $1 per squirrel to prevent their deaths"), how do you know you aren't just lying to yourself, and that if it really came down to it, you wouldn't pay? Maybe we can only really calculate utility after the fact, by looking at what people do rather than what they say.
I am mildly consequentialist, but not a utilitarian (and not in the closet about it, unlike many pretend-utilitarians here), precisely because any utilitarianism runs into a repugnant conclusion of one form or another. That said, it seems that the utility-monster type RC is addressed by negative utilitarians, who emphasize reduction in suffering over maximizing pleasure.
Well, isn't the central end of humanity (nay all sentient life) contentment and ease?
Seems like a strange assumption. Indeed, the reverse is often argued, that the central end of life is to be constantly facing challenges, to never be content, that we should seek out not ease but difficulty.
"How dull it is to pause, to make an end, To rust unburnished, not to shine in use!"
Moreover, even if your assertion were true for humans, and even all mammals, we can imagine non-mammalian sentient life.
Saying that a utility monster is a "creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined" is vague, because it doesn't mean a creature that's merely more capable; it's a creature that's more capable in a specific way. Just because human beings can experience more utility from the same actions than nematodes can doesn't make humans into utility monsters, because that's the wrong kind of "more capable". According to your own link, a utility monster is not susceptible to diminishing marginal returns, which doesn't seem to describe humans and certainly isn't a distinction between humans and nematodes.
The qualification that a utility monster is not susceptible to diminishing marginal returns is made only because they're still assuming utility is measured in something like dollars, which has diminishing marginal returns, rather than units of utility, which do not. Removing that qualification doesn't banish the utility monster. The important point is that the utility monster's utility is much larger than anybody else's.
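A minimal sketch of that point (my own toy model; the budget size, the log form for the human's utility, and all names are assumptions for illustration): once one agent's utility in resources is linear while everyone else's has diminishing returns, a total-utility maximizer hands that agent the entire budget.

```python
import math

BUDGET = 100.0

def total_utility(monster_share: float) -> float:
    u_monster = monster_share                         # linear: u(x) = x
    u_human = math.log(1.0 + BUDGET - monster_share)  # diminishing: u(x) = ln(1+x)
    return u_monster + u_human

# The monster's marginal utility (1 per unit) never drops below the human's
# (1/(1+x) <= 1), so the sum is maximized by giving the monster everything:
for share in (0.0, 50.0, 99.0, 100.0):
    print(share, round(total_utility(share), 3))
# 0.0 4.615 | 50.0 53.932 | 99.0 99.693 | 100.0 100.0
```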
The "utility monster" has ceased to be a utility monster because it no longer gets everything. It still gets more, of course, but that's the equivalent of deciding that the starving person gets the food before the full person.
This sounds like it could be almost as repugnant as a utility monster that gets literally everything, depending on precisely how much "more" we're talking about.
Edit: if I were the kind of person who found utility monsters repugnant, that is. I'd already dissolved the "OMG what if utility monsters??" problem in my own mind by reasoning that the repugnant feeling comes from representing utility monsters as black boxes, stripping away all of the features of theirs that make it intuitively obvious why they generate more utility from the same inputs. Put another way, the things that make real-life utility monsters "utility monsters" are exactly the things that make us fail to recognize them as utility monsters. When a parent values their child's continued existence far more than their own, we don't call the child a "utility monster" if the parent sacrifices themselves to save their child, even though that's exactly the child's role in that situation.
I discussed this recently elsewhere: https://utilitarian.quora.com/Utility-monsters-arent-we-all

I'm glad I'm not the only one who's thought of this.
Nice post.
I disagree with the premise that humans are utility monsters, but I see what you are getting at.
I'm a little wary of the concept of a utility monster, as it is easy to imagine and debate but I don't think it is immediately realistic.
I want my considerations of utility to be aware of possible future outcomes. If we imagine a concrete scenario like Zach's fantastic slave pyramid builders for an increasingly happy man, it seems obvious that there is something psychotic about an individual who could be made more happy by the senseless toil of other...
One man's utility monster is another man's neighbour down the street named Bob, whom you see when you go for walks sometimes.
I do not see a contradiction in claiming that a) utility monsters do not exist and b) under utilitarianism, it is correct to kill an arbitrarily large number of nematodes to save one human.
The solution to this issue is to reject the idea of a continuous scale of "utility capability", under which nematodes can feel a tiny amount of utility, humans can feel a moderate amount, and some superhuman utility monster can feel a tremendous amount. Rather, we can (and, I believe, should) reduce it to two classes: agents and objects.
An agent, such as a hum...
When someone complains that utilitarianism[1] leads to the dust speck paradox or the trolley-car problem, I tell them that's a feature, not a bug. I'm not ready to say that respecting the utility monster is also a feature of utilitarianism, but it is what most people everywhere have always done. A model that doesn't allow for utility monsters can't model human behavior, and certainly shouldn't provoke indignant responses from philosophers who keep right on respecting their own utility monsters.
The utility monster is a creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined. Most people consider sacrificing everyone else's small utilities for the benefits of this monster to be repugnant.
Let's suppose the utility monster is a utility monster because it has a more highly-developed brain capable of making finer discriminations, higher-level abstractions, and more associations than all the lesser minds around it. Does that make it less repugnant? (If so, I lose you here. I invite you to post a comment explaining why utility-monster-by-smartness is an exception.) Suppose we have one utility monster and one million others. Everything we do, we do for the one utility monster. Repugnant?
Multiply by nine billion. We now have nine billion utility monsters and 9×10^15 others. Still repugnant?
Yet these same enlightened, democratic societies whose philosophers decry the utility monster give approximately zero weight to the well-being of non-humans. We might try not to drive a species extinct, but when contemplating a new hydroelectric dam, nobody adds up the disutility to all the squirrels in the valley to be flooded.
If you believe the utility monster is a problem with utilitarianism, how do you take into account the well-being of squirrels? How about ants? Worms? Bacteria? You've gone to 10^15 others just with ants.[2] Maybe 10^20 with nematodes.
"But humans are different!" our anti-utilitarian complains. "They're so much more intelligent and emotionally complex than nematodes that it would be repugnant to wipe out all humans to save any number of nematodes."
Well, that's what a real utility monster looks like.
The same people who believe this then turn around and say there's a problem with utilitarianism because (when unpacked into a plausible real-life example) it might kill all the nematodes to save one human. Given their beliefs, they should complain about the opposite "problem": For a sufficient number of nematodes, an instantiation of utilitarianism might say not to kill all the nematodes to save one human.
[1] I use the term in a very general way, meaning any action selection system that uses a utility function—which in practice means any rational, deterministic action selection system in which action preferences are well-ordered.
[2] This recent attempt to estimate the number of different living beings of different kinds gives some numbers. The web has many pages claiming there are 10^15 ants, but I haven't found a citation of any original source.
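As a minimal sketch of the general sense in footnote [1] (all names here are my own illustrative assumptions, not anything from the post): any deterministic selector that ranks actions by a real-valued function counts.

```python
from typing import Callable, Iterable, TypeVar

A = TypeVar("A")

def select_action(actions: Iterable[A], utility: Callable[[A], float]) -> A:
    """Deterministic selection: preferences are well-ordered by `utility`."""
    return max(actions, key=utility)

# e.g.: select_action(["save_human", "save_nematodes"],
#                     lambda a: 1.0 if a == "save_human" else 0.5)
```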