When someone complains that utilitarianism[1] leads to the dust speck paradox or the trolley-car problem, I tell them that's a feature, not a bug. I'm not ready to say that respecting the utility monster is also a feature of utilitarianism, but it is what most people everywhere have always done. A model that doesn't allow for utility monsters can't model human behavior, and certainly shouldn't provoke indignant responses from philosophers who keep right on respecting their own utility monsters.
The utility monster is a creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined. Most people consider sacrificing everyone else's small utilities for the benefit of this monster to be repugnant.
Let's suppose the utility monster is a utility monster because it has a more highly developed brain capable of making finer discriminations, higher-level abstractions, and more associations than all the lesser minds around it. Does that make it less repugnant? (If so, I lose you here. I invite you to post a comment explaining why utility-monster-by-smartness is an exception.) Suppose we have one utility monster and one million others. Everything we do, we do for the one utility monster. Repugnant?
Multiply by nine billion. We now have nine billion utility monsters and 9×10^15 others. Still repugnant?
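For concreteness, a quick check of the scaling; the group sizes are just the thought experiment's stipulations:

```python
# One utility monster served by a million others, times nine billion groups.
monsters_per_group = 1
others_per_group = 10**6
groups = 9 * 10**9

total_monsters = monsters_per_group * groups
total_others = others_per_group * groups

print(f"{total_monsters:.1e} monsters, {total_others:.1e} others")
# -> 9.0e+09 monsters, 9.0e+15 others
```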
Yet these same enlightened, democratic societies whose philosophers decry the utility monster give approximately zero weight to the well-being of non-humans. We might try not to drive a species extinct, but when contemplating a new hydroelectric dam, nobody adds up the disutility to all the squirrels in the valley to be flooded.
If you believe the utility monster is a problem with utilitarianism, how do you take into account the well-being of squirrels? How about ants? Worms? Bacteria? You've gone to 10^15 others just with ants.[2] Maybe 10^20 with nematodes.
"But humans are different!" our anti-utilitarian complains. "They're so much more intelligent and emotionally complex than nematodes that it would be repugnant to wipe out all humans to save any number of nematodes."
Well, that's what a real utility monster looks like.
The same people who believe this then turn around and say there's a problem with utilitarianism because (when unpacked into a plausible real-life example) it might kill all the nematodes to save one human. Given their beliefs, they should complain about the opposite "problem": For a sufficient number of nematodes, an instantiation of utilitarianism might say not to kill all the nematodes to save one human.
1. I use the term in a very general way, meaning any action selection system that uses a utility function—which in practice means any rational, deterministic action selection system in which action preferences are well-ordered.
2. This recent attempt to estimate the number of living beings of different kinds gives some numbers. The web has many pages claiming there are 10^15 ants, but I haven't found a citation of any original source.
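To make footnote 1's definition concrete, here is a minimal sketch of an action selection system driven by a utility function. The names and the toy utilities are illustrative assumptions, not anything from the post:

```python
from typing import Callable, Iterable, TypeVar

Action = TypeVar("Action")

def select_action(actions: Iterable[Action],
                  utility: Callable[[Action], float]) -> Action:
    # Preferences induced by a real-valued utility function are
    # well-ordered, so "act rationally" reduces to taking the max.
    return max(actions, key=utility)

# Illustrative toy utilities only.
toy_utilities = {"save_human": 10.0, "save_nematodes": 1e-6}
print(select_action(toy_utilities, toy_utilities.get))  # -> save_human
```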
Actual reality does not contain high-level objects such as nematodes or humans.
Before one can even consider the utility of a human's (or a nematode's) existence, one has to have a function that somehow processes the laws of physics and the state of a region of space, and tells us how happy or unhappy that region of space feels, what its value is, and so on.
What would be the properties of that function? For one thing, the utility of a region of space would not generally equal the sum of the utilities of its parts, for the obvious reason that your head has greater utility when it hasn't been diced into perfect cubic blocks and then rearranged like a Rubik's cube.
This function could then be applied to a larger region of space containing nematodes and humans, and it would process that region in a way that clearly differs from any variety of arithmetic utilitarianism that adds or averages the utilities of nematodes and humans: as established above, the function is not distributive over regions of spacetime, and nematodes and humans are just regions of spacetime with specific stuff inside.
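To make the non-additivity concrete, here is a toy sketch. Everything in it is a hypothetical illustration, not a proposal for the actual function: a "utility" that depends on the arrangement of a region's parts cannot be distributive over those parts.

```python
# Toy utility: a region is valuable only when its parts are in their
# working arrangement; dicing and rearranging destroys the value.
def region_utility(atoms: tuple) -> float:
    return 1.0 if atoms == tuple(sorted(atoms)) else 0.0

intact_head = (1, 2, 3, 4)   # parts in their original arrangement
diced_head = (3, 1, 4, 2)    # same parts, shuffled like a Rubik's cube

print(region_utility(intact_head))  # 1.0
print(region_utility(diced_head))   # 0.0 -- same parts, no utility
# Summing the utilities of single-atom sub-regions gives 4.0 either way,
# so U(whole) is not the sum of U(parts).
print(sum(region_utility((a,)) for a in diced_head))  # 4.0
```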
What I imagine that function would do is identify particular computational structures of interest in the region of space. There are many such structures inside a human head that do not exist in any region of space occupied by nematodes, while nematodes have a much smaller set of structures, and extra nematodes do not add any new ones (unlike humans, who, thanks to distinct memories and differently arranged brains, do add new structures, roughly linearly up to a fairly large number).
So even a very large region of spacetime full of nematodes and one human can have its utility decreased far more by random rearrangements of the atoms (quarks, whatever the bottom level is; it does not matter) constituting the human than by random rearrangements of the atoms constituting the nematodes.
Edit: that is, as long as there are enough nematodes to cover the entire nematode experience space (which is quite small), increases in their number won't add to the computational structure of the whole region. The same is not true for people, up to a very large number of people.
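A toy sketch of that saturation claim; the structure counts and names are invented for illustration, nothing here is measured:

```python
# Score a region by how many *distinct* computational structures it
# contains. Nematode structures saturate; human structures keep adding.
NEMATODE_STRUCTURES = {"n1", "n2", "n3"}  # small, fixed experience space

def human_structures(i: int) -> set:
    # Each human contributes novel structures (distinct memories,
    # differently arranged brains), up to some very large population.
    return {f"h{i}_{k}" for k in range(1000)}

def region_score(n_nematodes: int, n_humans: int) -> int:
    structures = set()
    for _ in range(n_nematodes):
        structures |= NEMATODE_STRUCTURES  # every union after the first adds nothing
    for i in range(n_humans):
        structures |= human_structures(i)  # keeps growing
    return len(structures)

print(region_score(10, 0))      # 3
print(region_score(10**6, 0))   # still 3: extra nematodes add nothing
print(region_score(10**6, 2))   # 2003: each human adds new structures
```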
Um... yes, it does. "Reality" doesn't conceptualize them, but I, the agent analyzing the situation, do. I will have some function that looks at the underlying reality and partitions it into objects, and some other function that computes utility over those objects. These functions could be composed to give one big function from physics to utility. But that would be, epistemologically, backwards.
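A sketch of that composition, with hypothetical stand-ins for both functions; neither the partitioning nor the valuation here is anyone's real proposal:

```python
from typing import Callable, List

Physics = dict  # hypothetical stand-in for the raw state of a region
Obj = str       # a high-level object the agent carves out of that state

def compose(partition: Callable[[Physics], List[Obj]],
            value: Callable[[List[Obj]], float]) -> Callable[[Physics], float]:
    # "Carve reality into objects", then "value the objects": composing
    # the two yields one big function from physics to utility, while
    # keeping the two steps epistemologically separate.
    return lambda state: value(partition(state))

# Illustrative stand-ins only.
partition = lambda state: state.get("objects", [])
value = lambda objs: objs.count("human") + 1e-9 * objs.count("nematode")

physics_to_utility = compose(partition, value)
print(physics_to_utility({"objects": ["human", "nematode", "nematode"]}))
# -> 1.000000002
```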