NOTE: This post contains LaTeX; it is recommended that you install “TeX the World” (for Chromium users), “TeX All the Things”, or another TeX/LaTeX extension to view the post properly.
 

Destroying the Utility Monster—An Alternative Formulation of Utility

I am a rational egoist, but only because there is no existing political system or social construct I identify with; if there were one, I would be strongly utilitarian. In all moral thought experiments I err on the side of utilitarianism, and I’m faithful in my devotion to its tenets. There are several criticisms of utilitarianism, and one of the most common—and most powerful—is the utility monster, which allegedly proves that “utilitarianism is not egalitarian”. [1]
 
For those who may not understand the terms, I shall define them below:

Utilitarianism is an ethical theory that states that the best action is the one that maximizes utility. "Utility" is defined in various ways, usually in terms of the well-being of sentient entities. Jeremy Bentham, the founder of utilitarianism, described utility as the sum of all pleasure that results from an action, minus the suffering of anyone involved in the action. Utilitarianism is a version of consequentialism, which states that the consequences of any action are the only standard of right and wrong. Unlike other forms of consequentialism, such as egoism, utilitarianism considers all interests equally.

[2]

The utility monster is a thought experiment in the study of ethics created by philosopher Robert Nozick in 1974 as a criticism of utilitarianism.
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.[1] Nozick writes:
“Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.”
 
This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.

[1]  
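For concreteness, here is a minimal sketch of what naive total-utility maximisation does once a monster is present (the greedy allocation routine and numbers are my own illustration, not Nozick's):

```python
# Give each unit of a resource to whoever derives the most utility
# from it; with a utility monster present, it receives everything.

def naive_allocation(units, marginal_utils):
    counts = {name: 0 for name in marginal_utils}
    for _ in range(units):
        best = max(marginal_utils, key=marginal_utils.get)
        counts[best] += 1
    return counts

marginal_utils = {"ordinary person": 1, "utility monster": 100}
print(naive_allocation(10, marginal_utils))
# {'ordinary person': 0, 'utility monster': 10}
```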
I first found out about the utility monster a few months ago and pondered it for a while before filing it away. Today, I formalised a system for reasoning about utility that would not only defeat the utility monster but also make utilitarianism more egalitarian. I shall state my system, and then explain each of its points in more detail below.
 

Dragon’s System:

  1. All individuals have the same utility system.
  2. $-1 \le U \le 1$.
  3. The sum of the utility of an event and its negation is $0$.
  4. Specifically, the sum total of all positive utilities an individual can derive (for unique events without double counting) is $1$.
  5. Specifically, the sum total of all negative utilities an individual can derive (for unique events without double counting) is $-1$.
  6. At any given time, the sum total of an individual's potential utility space is $0$.
  7. To increase the utility of an event, you have to decrease the utility of its negation.
  8. To decrease the utility of an event you have to increase the utility of its negation.
  9. An event and its negation cannot have the same utility unless both are $0$.
  10. If two events are independent then the utility of both events occurring is the sum of their individual utilities.

Explanation:

  1. The same system for appropriating utility is applied to all individuals. This is for the purposes of consistency and to be more egalitarian.
  2. The utility an individual can get from an event is between $-1$ and $1$. To derive the utility an individual gains from any event $E_i$, let the utility of $E_i$ under more traditional systems be $W_i$. Then $U_i = \frac{W_i}{\sum_{k=1}^{n} W_k} \; \forall E_i : W_i > 0$, where the sum runs over all $n$ events with positive utility. In English:

    Express the utility of each event as a fraction of the individual’s total positive utility across all possible events (without double counting any utility). A code sketch of this normalisation appears after the concise version of the system below.

  3. For every event that can occur, there is a corresponding event, called its negation, that represents the event not occurring; every event has a negation. If an individual gains positive utility from an event happening, then they must gain equivalent negative utility from the event not happening. The utility they derive from an event and its negation must sum to $0$. Such is only logical: the positive utility you gain from an event happening is equal in magnitude to the negative utility you gain from its negation occurring.

  4. This follows from the method of deriving “2” explained above.

  5. This follows from the method of deriving “2” explained above.

  6. This follows from “2” and “3”.

  7. This follows from “3”.

  8. This follows from “3”.

  9. This follows from “3”.

  10. This is via intuition. Two events $A$ and $B$ are independent if the utility of $A$ does not depend on the occurrence of $B$, and vice versa. If so, then to calculate the utility of $A$ and $B$ both occurring, we need only sum the individual utilities of $A$ and $B$.
     
    It can be seen that my system can be reduced to postulates “1”, “2”, “3”, “6” and “10”. The ten-point system is for the sake of clarity, which always supersedes brevity and eloquence.
     
    If any desire the concise version:

  1. All individuals have the same utility system.

  2. $-1 \le U \le 1$.

  3. The sum of the utility of an event and its negation is $0$.

  4. At any given time, the sum total of an individual's potential utility space is $0$.

  5. If two events are independent then the utility of both events occurring is the sum of their individual utilities.
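To make the normalisation in point 2 concrete, here is a minimal sketch in Python. It assumes a finite utility space given as raw (“traditional”) positive utilities $W_i$; the event names and numbers are illustrative only.

```python
# A sketch of the normalisation U_i = W_i / sum(W_k) (point 2).
# The positive utilities then sum to 1 (point 4 of the full system),
# and each negation is assigned -U_i (point 3), so the negative
# utilities sum to -1 (point 5).

def normalise(raw_weights):
    total = sum(raw_weights.values())
    utilities = {event: w / total for event, w in raw_weights.items()}
    negations = {"not " + event: -u for event, u in utilities.items()}
    return utilities, negations

utilities, negations = normalise({"eat cookie": 1, "win lottery": 3})
print(utilities)  # {'eat cookie': 0.25, 'win lottery': 0.75} -- sums to 1
print(negations)  # {'not eat cookie': -0.25, 'not win lottery': -0.75}

# Point 10: if two events are independent, the utility of both
# occurring is the sum of their individual utilities.
print(utilities["eat cookie"] + utilities["win lottery"])  # 1.0
```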

 

Glossary

Individual: This refers to any sapient entity; generally, this is restricted to humans, but if another conscious life-form (being aware of their own awareness, and capable of conceiving “dubito, ergo cogito, ergo sum—res cogitans”) decided to adopt this system, then it applies to them as well.
Event: Any well-defined outcome from which an individual can derive utility—positive or negative.
Negation: The negation of an event refers to the event not occurring. If event $A$ is the event that I die, then $\neg A$ is the event that I don’t die (i.e. live). If $B$ is the event that I win the lottery, then $\neg B$ is the event that I don’t win the lottery.
Utility Space: The set containing all events from which an individual can possibly derive utility. This set is finite.
Utility Preferences: The mapping of each event in an individual’s utility space to the fractional utility they derive from the event, and the implicit ordering of events according to it.
 

Assumptions:

Each individual’s utility preferences are unique. No two individuals have the same utility space with the same values for all events therein.
 
We deal only with the utility space of an individual at a given point in time. For example, an immortal who values their continued existence does not value their existence for eternity at ~1.0 utility, but only their existence for the next time period; as such, an immortal and a mortal may derive the same utility from their continued existence. Once an individual receives units of a resource, their utility space is re-evaluated in light of that; the utility space is re-evaluated after each event.
The capacity to derive utility (CDU) of any individual is finite. No one is allowed to have infinite CDU. (An individual’s CDU may be vastly greater than that of several other individuals, as with the utility monster, but utility is normalised specifically to deal with such existences.) No one has the right to a greater capacity to derive utility than other individuals. We normalise the utility of every individual, such that the maximum utility any individual can derive is $1$. This makes the system egalitarian, as every individual is given equal maximum (and minimum) utility regardless of their CDU.
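A minimal sketch of this normalisation assumption (the raw numbers are illustrative): each individual’s raw utilities are rescaled by their own total, so a monster’s enormous CDU confers no extra claim.

```python
def normalised(raw, total_raw):
    # U_i = W_i / sum(W_k): an event's share of the individual's
    # total positive raw utility.
    return raw / total_raw

# An ordinary person: a cookie worth 1 raw util out of 2 in total.
person = normalised(1, 2)        # 0.5
# A utility monster: a cookie worth 100 raw utils out of 200 in total.
monster = normalised(100, 200)   # 0.5

print(person == monster)  # True -- after normalisation both value the
                          # cookie identically, despite the 100x CDU
```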
 
The utility space of an individual is finite. There are only so many events from which you can possibly derive utility. The death of an individual you do not know about is not an event you can derive utility from (assuming you never find out about their death). Individuals can only be affected (positively or negatively) by a finite number of events.
 

Some Inferences:

A change in an individual’s CDU does not produce a change in normalised utility, unless there’s also a change in their utility preferences.
A change in an individual’s utility preferences is necessary and sufficient to produce a change in their normalised utility.
 

Conclusion

Any utility system that conforms to these five axioms destroys the utility monster. I think the main problem of traditional utility systems was unbounded utility, and as such they were indeed not egalitarian. My system destroys the concept of unbounded utility by considering the utility of an event to an individual as a fraction of the total utility of their utility space. This means no individual can have their total (positive or negative) utility space sum to more than any other’s; the sum total of the utility space for all individuals is equal. I believe this makes for a utility system in which every individual is equally represented, and which is truly egalitarian.
This is a concept still in its infancy, so do critique, comment and make suggestions; I will listen to all feedback and use it to develop the system. This only intends to provide a different paradigm for reasoning about utility, especially in the context of egalitarianism. I did not attempt to formalise a full mathematical system for calculating utility, as I lack the mathematical acumen to do so. I would especially welcome suggestions for calculating the utility of dependent events, and for other scenarios. This is not a system of utilitarianism and does not pretend to be such; it is only a paradigm for reasoning about utility. It can, however, be applied to existing utilitarian systems.
 

References

[1] https://en.wikipedia.org/wiki/Utility_monster
[2] https://en.wikipedia.org/wiki/Utilitarianism

Comments

This is the ultimate example of... there should be a name for this.

You figure out that something is true, like utilitarianism. Then you find a result that seems counter intuitive. Rather than going "huh, I guess my intuition was wrong, interesting" you go "LET ME FIX THAT" and change the system so that it does what you want...

man, if you trust your intuition more than the system, then there is no reason to have a system in the first place. Just do what is intuitive.

The whole point of having a system like utilitarianism is that we can figure out the correct answers in an abstract, general way, but not necessarily for each particular situation. Having a system tells us what is correct in each situation, not vice versa.

The utility monster is nothing to be fixed. It's a natural consequence of doing the right thing, that just happens to make some people uncomfortable. It's hardly the only uncomfortable consequence of utilitarianism, either.

You figure out that something is true, like utilitarianism.

That looks like a category error. What does it mean for utilitarianism to be "true"? It's not a feature of the territory.

if you trust your intuition more than the system, then there is no reason to have a system in the first place

Trust is not all-or-nothing. Putting ALL your trust into the system -- no sanity checks, no nothing -- seems likely to lead to regular epic fails.

The term you're looking for is "apologist".

This is the ultimate example of... there should be a name for this.

I think the name you are looking for is ad hoc hypothesis.

Sometimes when explicit reasoning and intuition conflict, intuition turns out to be right, and there is a flaw in the reasoning. There's nothing wrong with using intuition to guide yourself in questioning a conclusion you reached through explicit reasoning. That said, DragonGod did an exceptionally terrible job of this.

Yeah, you're of course right. In the back of my mind I realized that the point I was making was flawed even as I was writing it. A much weaker version of the same would have been correct: "you should at least question whether your intuition is wrong." In this case it's just very obvious to me that there is nothing to be fixed about utilitarianism.

Anyway, yeah, it wasn't a good reply.

No. I am improving the existing system. All individuals have the same capacity for desire.

All individuals have the same capacity for desire.

This seems very unlikely, if "capacity for desire" corresponds to anything in the real universe.

Do cats or bacteria have the same range of utility as people? Or are we utility monsters compared to bacteria, raising the possibility that something else can be a utility monster compared to us? I think both options are uncomfortable, no matter what math you use.

"Individuals" refers only to humans and other sapient entities considered by the system.

There is a continuum on this scale. Is there a hard cutoff, or is there any scaling? And what about very similar forks of AIs?

Our system considers only humans; another sapient alien race may implement this system, and consider only themselves.

Restricting attention to humans means assuming that utility monsters can't exist. Most people who are interested in the utility monster problem won't accept such a strong assumption without justification.

A) what cousin_it said.

B) consider, then, successively more and more severely mentally nonfunctioning humans. There is some level of incapability at which we stop caring (e.g. head crushed), and I would be somewhat surprised at a choice of values that puts a 100% abrupt turn-on at some threshold; and if it did, I expect some human could be found or made that would flicker across that boundary regularly.

There is some level of incapability at which we stop caring (e.g. head crushed), ... I expect some human could be found or made that would flicker across that boundary regularly.

This is wrong, at least for typical humans such as myself. In other words, we do not stop caring about the one with the crushed head just because they are on the wrong side of a boundary, but because we have no way to bring them back across that boundary. If we had a way to bring them back, we would care. So if someone is flickering back and forth across the so-called boundary, we will still care about them, since by stipulation they can come back.

Good point; how about, someone who is stupider than the average dog.

I don't think this is a good illustration, at least for me, since I would never stop caring about someone as long as it was clear that they were biologically human, and not brain dead.

I think a better illustration would be this: take your historical ancestors one by one. If you go back far enough in time, one of them will be a fish, which we would at least not care about in any human way. But in that way I agree with what you said about values. We will care less and less in a gradual way as we go back -- there will not be any boundary where we suddenly stop caring.

Clarification required - what does it mean for everyone to have the "same" utility system? The obvious answer is "every situation gives everyone the same utility", but if I like chocolate, I should gain utility from eating chocolate. If my brother doesn't like eating chocolate, he shouldn't gain utility from it. So if it's not the seemingly obvious answer, how are we defining it?

Also, you've mentioned that the negation of an event is it "not happening", and it has the opposite utility of the original. There are two main objections here:

1) A coworker unexpectedly brings in cookies and hands them out to everyone. This should be a positive utility boost. But am I really getting negative utility every day that doesn't happen? Conversely, am I really getting just as much utility from having my friends alive each and every moment as I lose when they die and I'm stricken with grief?

2) There are an infinite number of things Not Happening at any given time, all of which would in theory play into the utility value. How do we even remotely consider the idea of negations given this?

One way to address this would be to do things like considering probability - we're not terribly happy/sad about the non-occurrence of wildly improbable events - but that's just a start.

I tried to rewrite the article for clarification—please reread. I'll reply to any points you have after a re-read.

The objections to your concept of negation still stand, I think - there are an infinite number of possible events, an infinite number of which don't happen. Only finitely many things happen, but the utility of each is similar to the utility of the things that didn't happen, since things that don't happen have the same absolute value as they would if they did. We can't just say that they cancel out, because they eat up the available utility space, so every individual event has to have an infinitesimal value...

I'm not sure that this is really a fixable system, because it has to partition out a bounded amount of utility among an infinite number of events, since every possible event factors in to the result, because it either A) happens or B) doesn't, and either way has a utility value. It would need to completely rebuild some of the axioms to overcome this, and you only really have normalizing to the -1 to 1 utility values and the use of negations as axioms.

Interpersonal utility comparisons are not a natural part of utility theory the same way individual utility functions are. I think of them as being important for two reasons:

1: Our own personal moral reasoning does something like interpersonal utility comparisons, and we want to try to formalize that.

2: we want to cooperate with other people to achieve some goal that will benefit all, but in order to define our collective goals we need some form of interpersonal utility comparison.

2.5: We're about to build an AI and want to program in by hand how it should weigh human values (warning: don't do 2.5).

None of that made any sense because utility is not the sum of components from independent events. You can have bounded utility functions without any of that.

I didn't say that. Is there any part of the post you want me to clarify?

The sum of the utility of an event and its negation is 0.

If two events are independent then the utility of both events occurring is the sum of their individual utilities.

Utilities are defined over outcomes, which don't have negations or independence relations with other outcomes. There is no such thing as the utility of an event in standard expected utility theory, and no need for such a concept.

An event is any outcome from which an individual can derive utility.
 
The negation of an event is the event not happening.

Given an outcome X, there are many outcomes other than X, which generally have different utilities. Thus there isn't one utility value for X not happening.

By bounding utility, you also enforce diminishing marginal utility to a much greater degree than most people claim to experience it. If one good thing is utility 0.5, a second good thing must be less than 0.5, and a third good thing is pretty much worthless.

Personally, my objection to utilitarianism is more fundamental than this. I don't believe utility is an objective scalar measure that can be compared across persons (or even across independent decisions for a person). It's just a convenient mathematical formalism for a decision theory.

By bounding utility, you also enforce diminishing marginal utility to a much greater degree than most people claim to experience it. If one good thing is utility 0.5, a second good thing must be less than 0.5, and a third good thing is pretty much worthless.

If utility is bounded between -1 and 1, then 0.5 is an extremely large amount of utility, not just some generic good thing. Bounded utility functions do not contradict common sense beliefs about how diminishing marginal returns works.
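For a concrete illustration (numbers mine, not the commenter's): take $u(n) = 1 - 2^{-n}$ as the utility of receiving $n$ good things, so $u(1) = 0.5$, $u(2) = 0.75$, $u(3) = 0.875$. Each additional good thing is worth half as much as the previous one, yet the total never reaches the bound of $1$; a bound forces marginal utility to diminish eventually, but says nothing about how fast.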

No. We look at Utility at points in time. One good thing is 0.5. We then calculate the utility of another good thing after receiving the first one; you re-evaluate the utility again after the first occurrence of the event.

We look at Utility at points in time.

You shouldn't. That's not how utility works.

So, reset to 0 at every 50ms, or some other time unit? And this applies to instantaneous utility as well - do you really mean to say that there can exist no experience that is twice as good as a 0.5 utility experience?

Reset to 0 after each event.
 
You may have a total utility of $10^{10}X$, where $X$ is the maximum utility of the average human.
All your utility values are expressed as a fraction of $10^{10}X$. A normalised utility value of 0.5 then corresponds to $5 \times 10^{9}X$ raw utility for you, and to $0.5X$ for an average human.
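A quick sketch checking that arithmetic, taking $X = 1$ for illustration:

```python
X = 1.0                     # maximum raw utility of the average human
monster_total = 1e10 * X    # the monster's total raw utility, 10^10 * X

# The same normalised value (0.5) stands for very different raw amounts:
print(0.5 * monster_total)  # 5000000000.0 -- behind the monster's 0.5
print(0.5 * X)              # 0.5 -- behind the average human's 0.5
```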

What's an "event"? What if multiple streams of qualia are happening simultaneously - is each instant (I chose 50ms as a guess at minimum experience unit) an event, or the time between sleep periods (and do people not have experiences while sleeping)?

Why do you claim there is a maximum utility for an "average human", and why use that rather than the maximum utility of the maximally-satisfied human? And is this a linear scaling (if so, why not just use the number rather than a constant fraction) or some logarithmic or other transform (and if so, why)?

Maybe your utility system works, but I don't feel like it matches our world.

Plus, what does the "negation" of an event even mean? If someone that I care about dies, I feel sad. If they then come back, I don't feel not-sad, rather I'd be pretty disturbed (and of course happy) because what the hell just happened.

That is to say, if you stab me, but then use a magic wand to make it go away, I don't go back to normal, I become really scared of you instead.

You could say that "negating" an event turns it into "it never happened". But then I don't know what it means or how you could steer actions with it. You can't "negate" events that already happened, so, best you can do with the model is "yeah, I guess we shouldn't have done that"?

If an event happens, then the negation of the event is that event not happening.

Someone you like dying is A.

Negation A is the person living.

Does it "not happen" or does it "unhappen" or does it "get fixed"?

Hmmm, I've received counterexamples where the utility of an event + its negation isn't zero.
 
E.g. receiving $10,000 vs not receiving $10,000, or getting a cookie vs not getting a cookie.
 
I could redefine the negation of an event, with regard to gaining material possessions, as losing those possessions, but would the negative utility be equal to the positive utility?
So, I've decided to knock off one axiom: "utility of an event + its negation = 0".
 
Sum total utility of positive events = 1.
Sum total utility of negative events = -1.
 
The system is preserved.
I'll edit it when I'm on laptop.

If I get a cookie, then I'm happy because I got a cookie. The negation of this event is that I do not get a cookie. However, I am still happy, because now I feel healthier, having not eaten a cookie today. So both the event and its negation give me positive utility.

The negation of the event is that you did not get a cookie, not that you do not get a cookie. The negation of an event is that it did not happen. Either an event occurs or does not—it goes without saying that both an event and its negation cannot occur.

Even changing "do" to "did", my counterexample holds.

Event A: At 1pm I get a cookie and I'm happy. At 10pm, I reflect on my day and am happy for the cookie I ate.

Event (not) A: At 1pm I do not get a cookie. I am not sad, because I did not expect a cookie. At 10pm, I reflect on my day and I'm happy for having eaten so healthy the entire day.

In either case, I end up happy. Not getting a cookie doesn't make me unhappy. Happiness is not a zero-sum game.