(Abstract: We have the notion that people can have a "total utility" value, defined perhaps as the sum of all their changes in utility over time.  This is usually not a useful concept, because utility functions can change.  In many cases the less-confusing approach is to look only at the utility from each individual decision, and not attempt to consider the total over time.  This leads to insights about utilitarianism.)

 

Let's consider the utility of a fellow named Bob.  Bob likes to track his total utility; he writes it down in a logbook every night.

Bob is a stamp collector; he gets +1 utilon every time he adds a stamp to his collection, and he gets -1 utilon every time he removes a stamp from his collection.  Bob's utility was zero when his collection was empty, so we can say that Bob's total utility is the number of stamps in his collection.

One day a movie theater opens, and Bob learns that he likes going to movies.  Bob counts +10 utilons every time he sees a movie. Now we can say that Bob's total utility is the number of stamps in his collection, plus ten times the number of movies he has seen.

(A note on terminology: I'm saying that Bob's utility function is the thing that emits +1 or -1 or +10, and his total utility is the sum of all the values it has emitted over time. I'm not sure whether this is standard terminology.)
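To make the bookkeeping concrete, here's a minimal sketch of that distinction (the function and variable names are my own, purely for illustration):

```python
# A minimal sketch of Bob's bookkeeping (names are my own illustration).

events = []  # everything that has happened to Bob so far

def utility_delta(event):
    """Bob's utility function: emits a delta for each individual event."""
    if event == "add_stamp":
        return +1
    if event == "remove_stamp":
        return -1
    if event == "watch_movie":
        return +10
    return 0

def total_utility(events):
    """The 'logbook' number: the sum of all emitted deltas over time."""
    return sum(utility_delta(e) for e in events)

events += ["add_stamp", "add_stamp", "watch_movie", "remove_stamp"]
print(total_utility(events))  # 1 + 1 + 10 - 1 = 11
```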

This should strike us as a little bit strange: Bob now has a term in his total utility which is mostly based on history, and mostly independent of the present state of the world.  Technically, we might handwave and say that Bob places value on his memories of watching those movies.  But Bob knows that's not actually true: it's the act of watching the movies that he enjoys, and he rarely thinks about them once they're over.

If a hypnotist convinced Bob that he had watched ten billion movies, Bob would write down in his logbook that he had a hundred billion utilons.  (Plus the number of stamps in his stamp collection.)

Let's talk some more about that stamp collection. Bob wakes up on June 14 and decides that he doesn't like stamps any more. Now, Bob gets -1 utilon every time he adds a stamp to his collection, and +1 utilon every time he removes one.  What can we say about his total utility?  We might say that Bob's total utility is the number of stamps in his collection at the start of June 14, plus ten times the number of movies he's watched, plus the number of stamps he removed from his collection after June 14.  Or we might say that all Bob's utility from his stamp collection prior to June 14 was false utility, and we should strike it from the record books. Which answer is better?
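To make the ambiguity concrete, here's a sketch of the two bookkeeping conventions (my own illustration). Notice that they disagree about the total while agreeing about how to score anything Bob does after June 14:

```python
# Two bookkeeping conventions for Bob's stamp utility after his
# preferences flip on June 14 (both conventions are my own illustration).

# (event, day_of_june) pairs: five stamps added early, two removed later.
history = [("add", 1), ("add", 2), ("add", 3), ("add", 5), ("add", 9),
           ("remove", 20), ("remove", 25)]

FLIP_DAY = 14  # Bob stops liking stamps on June 14

def grandfathered_total(history):
    """Keep the utility emitted under whichever function was in force
    at the time of each event."""
    total = 0
    for event, day in history:
        if day < FLIP_DAY:
            total += +1 if event == "add" else -1
        else:
            total += -1 if event == "add" else +1
    return total

def retroactive_total(history):
    """Strike the old stamp utility from the record books: rescore
    everything with the new function."""
    return sum(-1 if event == "add" else +1 for event, day in history)

print(grandfathered_total(history))  # 5 (old gains) + 2 (removals) = 7
print(retroactive_total(history))    # -5 + 2 = -3
# The totals disagree, but both conventions score Bob's *next* action
# (add vs. remove a stamp) identically after June 14.
```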

...Really, neither answer is better, because the "total utility" number we're discussing just isn't very useful.  Bob has a very clear utility function which emits numbers like +1 and +10 and -1; he doesn't gain anything by keeping track of the total separately. His total utility doesn't seem to track how happy he actually feels, either.  It's not clear what Bob gains from thinking about this total utility number.

 

I think some of the confusion might be coming from Less Wrong's focus on AI design.

When you're writing a utility function for an AI, one thing you might try is to specify the total utility first: you might say "your total utility is the number of balls you have placed in this bucket" and then let the AI work out the implementation details of how happy each individual action makes it.
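For instance, a rough sketch of that "specify the total first" approach might look like this (the bucket example is from above; the code itself is just my illustration, not any particular AI framework):

```python
# Sketch of deriving per-action deltas from a designer-specified total
# (my own illustration).

def total_utility(world_state):
    """Designer-specified total: number of balls in the bucket."""
    return world_state["balls_in_bucket"]

def delta_utility(world_state, action):
    """The AI works out how much each action is 'worth' as the change
    in the specified total."""
    return total_utility(action(world_state)) - total_utility(world_state)

def place_ball(state):
    new_state = dict(state)
    new_state["balls_in_bucket"] += 1
    return new_state

state = {"balls_in_bucket": 3}
print(delta_utility(state, place_ball))  # +1 per ball placed
```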

However, if you're looking at utility functions for actual people, you might encounter something weird like "I get +10 utility every time I watch a movie", or "I woke up today and my utility function changed", and then if you try to compute the total utility for that person, you can get confused.

 

Let's now talk about utilitarianism.  For simplicity, let's assume we're talking about a utilitarian government which is making decisions on behalf of its constituency.  (In other words, we're not talking about utilitarianism as a moral theory.)

We have the notion of total utilitarianism, in which the government tries to maximize the sum of the utility values of each of its constituents.  This leads to "repugnant conclusion" issues in which the government generates new constituents at a high rate until all of them are miserable.

We also have the notion of average utilitarianism, in which the government tries to maximize the average of the utility values of each of its constituents.  This leads to issues -- I'm not sure if there's a snappy name -- where the government tries to kill off the least happy constituents so as to bring the average up.

The problem with both of these notions is that they're taking the notion of "total utility of all constituents" as an input, and then they're changing the number of constituents, which changes the underlying utility function.

I think the right way to do utilitarianism is to ignore the "total utility" thing; that's not a real number anyway.  Instead, every time you arrive at a decision point, evaluate what action to take by checking the utility of your constituents from each action.  I propose that we call this "delta utilitarianism", because it isn't looking at the total or the average, just at the delta in utility from each action.

This solves the "repugnant conclusion" issue because, when you're considering whether to add more people, you evaluate that decision using the utility of your constituents at that time, which does not include the potential new people.
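Here's a minimal sketch of the decision rule I have in mind (the data structures are just illustrative, not a definitive implementation):

```python
# A minimal sketch of the "delta utilitarianism" decision rule
# (data structures are my own illustration).
from dataclasses import dataclass
from typing import Dict

@dataclass
class Person:
    name: str
    preferences: Dict[str, float]  # action label -> utility delta for this person

    def utility_delta(self, action: str) -> float:
        return self.preferences.get(action, 0.0)

def delta_utilitarian_choice(current_constituents, actions):
    """Pick the action with the largest summed utility delta, counted only
    over the people who are constituents at decision time.  People whom an
    action would create are deliberately not consulted."""
    def score(action):
        return sum(p.utility_delta(action) for p in current_constituents)
    return max(actions, key=score)

# The existing population mildly dislikes adding a crowd of new people;
# the hypothetical new people's enthusiasm is not counted at all.
alice = Person("Alice", {"add_a_billion_people": -1.0, "status_quo": 0.0})
bob = Person("Bob", {"add_a_billion_people": -0.5, "status_quo": 0.0})
print(delta_utilitarian_choice([alice, bob],
                               ["add_a_billion_people", "status_quo"]))
# -> "status_quo"
```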

Comments (21)

Instead, every time you arrive at a decision point, evaluate what action to take by checking the utility of your constituents from each action. I propose that we call this "delta utilitarianism", because it isn't looking at the total or the average, just at the delta in utility from each action.

If you look at the sum of utility over all actions given that you choose option A, minus the sum over all actions given that you choose option B, then everything that happened before the decision cancels out, and you get just the difference in utility between option A and option B. They're equivalent.

Technically, delta utilitarianism is slightly more resistant to infinities. As long as any two actions have a finite difference, you can calculate it, even if the total utility is infinite. I don't think that would be very helpful.

I think the key difference is that delta utilitarianism handles it better when the group's utility function changes. For example, if I create a new person and add it to the group, that changes the group's utility function. Under delta utilitarianism, I explicitly don't count the preferences of the new person when making that decision. Under total utilitarianism, [most people would say that] I do count the preferences of that new person.

Under total utilitarianism, [most people would say that] I do count the preferences of that new person.

You only count their preferences under preference utilitarianism. I never really understood that form.

If you like having more happy people, then your utility function is higher for worlds with lots of happy people, and creating happy people makes the counter go up. If you like having happier people, but don't care how many there are, then having more people doesn't do anything.

gjm:

If, as you propose, you completely ignore the welfare of people who don't exist yet, then it seems to me you will give rather odd answers to questions like this: You and your partner are going to have a baby. There is some spiffy new technology that enables you to ensure that the baby will not have nasty genetic conditions like Down's syndrome, cystic fibrosis, spina bifida, etc. For some reason the risk of these is otherwise rather high. How much are you willing to pay for the new technology to be applied?

Your system will presumably not make the answer zero, because the parents will probably be happier with a healthier child. But it seems like the numbers it produces will be underestimates.

(There are other things I find confusing and possibly-wrong in this proposal, but that may simply indicate that I haven't understood it. But I'll list some of them anyway. I don't think anyone -- "total utilitarian" or not -- wants to do utility calculations in much like the way your hypothetical Bob does; your proposal still needs a way of aggregating utilities across people, and that's at least as problematic as aggregating utilities over time; the argument for the "repugnant conclusion" doesn't in fact depend on aggregating utilities over time anyway; your system is plainly incomplete since it says nothing about how it does aggregate the utilities of "your constituents" when making a decision.)

I can't figure out what you're trying to say.

When people talk about utility, they often end up plagued by ambiguity in their definitions. In the context of decision theory, the domain of a utility function has to take into account everything that the agent cares about and that their decisions affect (including events that happen in the future, so it doesn't make sense to talk about the utility an agent is experiencing at a particular time), and the agent prefers probability distributions over outcomes that have higher expected utility. In the context of classical utilitarianism, utility is basically just a synonym for happiness. How are you trying to use the term here?

Edit: From your clarification, it sounds like you're actually talking about aggregated group utility, rather than individual utility, and just claiming that the utilities being aggregated should consist of the utility functions of the people who currently exist, not the utility functions of the people who will exist in the outcome being considered. But I'm still confused because your original example only referred to a single person.

It's not obvious that you've gained anything here. We can reduce to total utilitarianism -- just assume that everyone's utility is zero at the decision point. You still have the repugnant conclusion issue where you're trying to decide whether to create more people or not based on summing utilities across populations.

I think there's a definite difference. As soon as you treat utility as part of decision-making, rather than just an abstract thing-to-maximize, you are allowed to break the symmetry between existing people and nonexisting people.

If I want to take the action with the highest total delta-U, and some actions create new people, the most straightforward way to do it actually only takes the action with the highest delta-U according to currently-existing people. This is actually my preferred solution.

The second most straightforward way is to take the action with the highest delta-U according to the people who exist after you take the action. This is bad because it leads straight to killing off all humans and replacing them with easily satisfied homunculi. Or the not-as-repugnant repugnant conclusion, if all you're allowed to do is create additional people.
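To make the contrast concrete, here's a small sketch of the two scoring rules (people represented as plain dicts of action → delta; my own illustration):

```python
# Sketch of the two scoring rules described above (my own illustration).
# Each person is a dict mapping an action to the utility delta they get.

def score_by_current_people(existing, action):
    """Delta-U summed over the people who exist before the action."""
    return sum(person.get(action, 0) for person in existing)

def score_by_resulting_people(existing, created_by, action):
    """Delta-U summed over the people who exist after the action,
    including anyone the action itself creates."""
    new_people = created_by.get(action, [])
    return sum(person.get(action, 0) for person in existing + new_people)

current = [{"create_homunculi": -5}]          # the one existing person
homunculi = [{"create_homunculi": +100}] * 3  # would exist only afterwards
print(score_by_current_people(current, "create_homunculi"))             # -5
print(score_by_resulting_people(current, {"create_homunculi": homunculi},
                                "create_homunculi"))                    # 295
# The second rule is the one that rewards swamping or replacing the
# current population with easily satisfied new people.
```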

Wouldn't the highest delta-U be to modify yourself so that you maximize the utility of people as they are right now, and ignore future people even after they're born?

Nope.

Why not?

Let me try making this more explicit.

Alice has utility function A. Bob will have utility function B, but he hasn't been born yet.

You can make choices u or v, then once Bob is born, you get another choice between x and y.

A(u) = 1, A(v) = 0, A(x) = 1, A(y) = 0

B(u) = 0, B(v) = 2, B(x) = 0, B(y) = 2

If you can't precommit, you'll do u the first time, for 1 util under A, and y the second, for 2 util under A+B (compared to 1 util for x).

If you can precommit, then you know if you don't, you'll pick uy. Precommitting to ux gives you +1 util under A, and since you're still operating under A, that's what you'll do.

While I'm at it, you can also get into prisoner's dilemma with your future self, as follows:

A(u) = 1, A(v) = 0, A(x) = 2, A(y) = 0

B(u) = -1, B(v) = 2, B(x) = -2, B(y) = 1

Note that this gives:

A+B(u) = 0, A+B(v) = 2, A+B(x) = 0, A+B(y) = 1

Now, under A, you'd want u for 1 util, and once Bob is born, under A+B you'd want y for 1 util.

But if you instead took vx, that would be worth 2 util for A and 2 util for A+B. So vx is better than uy both from Alice's perspective and from Alice+Bob's perspective. Certainly that would be a better option.
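To check the arithmetic, here's a small script working through the second example (the helper names are mine):

```python
# Verifying the numbers in the prisoner's-dilemma example above
# (helper names are my own).

A = {"u": 1, "v": 0, "x": 2, "y": 0}    # Alice's utility function
B = {"u": -1, "v": 2, "x": -2, "y": 1}  # Bob's, once he exists

def score_A(first, second):
    return A[first] + A[second]

def score_A_plus_B(first, second):
    return (A[first] + B[first]) + (A[second] + B[second])

# Without precommitment: pick the first choice under A alone,
# the second under A+B.
first = max("uv", key=lambda c: A[c])          # -> "u"
second = max("xy", key=lambda c: A[c] + B[c])  # -> "y"
print(first + second, score_A(first, second), score_A_plus_B(first, second))
# uy: 1 util under A, 1 util under A+B

# Precommitting to vx instead:
print("vx", score_A("v", "x"), score_A_plus_B("v", "x"))
# vx: 2 util under A, 2 util under A+B -- better on both counts.
```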

Suppose we build a robot that takes a census of currently existing people, and a list of possible actions, and then takes the action that causes the biggest increase in utility of currently existing people.

You come to this robot before your example starts, and ask "Do you want to precommit to action vx, since that results in higher total utility?"

And the robot replies, "Does taking this action of precommitment cause the biggest increase in utility of currently existing people?"

"No, but you see, in one time step there's this Bob guy who'll pop into being, and if you add in his utilities from the beginning, by the end you'll wish you'd precommitted."

"Will wishing that I'd precommitted be the action that causes the biggest increase in utility of currently existing people?"

You shake your head. "No..."

"Then I can't really see why I'd do such a thing."

And the robot replies, "Does taking this action of precommitment cause the biggest increase in utility of currently existing people?"

I'd say yes. It gives an additional 1 utility to currently existing people, since it ensures that the robot will make a choice that people like later on.

Are you only counting the amount they value the world as it currently is? For example, if someone wants to be buried when they die, the robot wouldn't arrange it, because by the time it happens they won't be in a state to appreciate it?

Ooooh. Okay, I see what you mean now - for some reason I'd interpreted you as saying almost the opposite.

Yup, I was wrong.

My intended solution was that, if you check the utility of your constituents from creating more people, you're explicitly not taking the utility of the new people into account. I'll add a few sentences at the end of the article to try to clarify this.

Another thing I can say is that, if you assume that everyone's utility is zero at the decision point, it's not clear why you would see a utility gain from adding more people.

Isn't this equivalent to total utilitarianism that only takes into account the utility of already extant people? Also, isn't this inconsistent over time (someone who used this as their ethical framework could predict specific discontinuities in their future values)?

I suppose you could say that it's equivalent to "total utilitarianism that only takes into account the utility of already extant people, and only takes into account their current utility function [at the time the decision is made] and not their future utility function".

(Under mere "total utilitarianism that only takes into account the utility of already extant people", the government could wirehead its constituency.)


Yes, this is explicitly inconsistent over time. I actually would argue that the utility function for any group of people will be inconsistent over time (as preferences evolve, new people join, and old people leave), and any decision-making framework needs to be able to handle that inconsistency intelligently. Failure to handle that inconsistency intelligently is what leads to the Repugnant Conclusion.

[anonymous]:

Sorry for posting a comment despite not really thinking about the matter very strenuously.

It seems to me that the post is about the formalisms of temporary utility and constant utility.

It would seem to me that the stamps, as used in the text, actually provide temporary utility: they make the person collecting them happy for a limited period of time, and the same goes for watching movies.

So if you want to do this total utility thing, then perhaps it simply needs to be formulated in the manner of utility over time. And total utility would be expected or average utility over time.

[This comment is no longer endorsed by its author]

We have the notion of total utilitarianism, in which the government tries to maximize the sum of the utility values of each of its constituents. This leads to "repugnant conclusion" issues in which the government generates new constituents at a high rate until all of them are miserable.

We also have the notion of average utilitarianism, in which the government tries to maximize the average of the utility values of each of its constituents. This leads to issues -- I'm not sure if there's a snappy name -- where the government tries to kill off the least happy constituents so as to bring the average up.

Not quite. Suppose our societal utility function is S(n) = n x U(n), where n is the number of people in the society, and U(n) is the average utility gain per year per person (which decreases as n increases, for high n, because of overcrowding and resource scarcity). Then you don't maximise S(n) by just increasing n until U(n) reaches 0. There will be an optimum n, beyond which the utility from yet one more citizen, U(n+1), is less than the loss of utility to the other n citizens from adding that person, n x (U(n) - U(n+1)).
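For instance, with a made-up U(n) (my own example, not anything implied by the argument above), the optimum lands well before U(n) hits zero:

```python
# A toy numeric check; the specific U(n) is a made-up example.

def U(n):
    """Average utility per person per year; declines with crowding."""
    return 100 - 0.01 * n

def S(n):
    return n * U(n)

# Increasing n is only worth it while the newcomer's utility U(n+1)
# exceeds the loss imposed on the existing n people, n * (U(n) - U(n+1)).
best = max(range(1, 20001), key=S)
print(best, U(best))  # optimum at n = 5000, where U(n) = 50, not U(n) = 0
```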

It might be useful to distinguish between the actual total utility experienced so far, and the estimates of that which can be worked out from various view points.

Suppose we break it down by week. If, during the first week of March 2014, Bob finds utility (e.g. pleasure) in watching movies, in collecting stamps, in owning a stamp collection, and in having watched movies (four different things), then you'd multiply the duration (one week) by the rate at which those things add to his experienced utility, to get how much that week adds to his total lifetime utility experienced.

If, during the second week of March, a fire destroys his stamp collection, that wouldn't reduce his lifetime total. What it would do is reduce the rate at which he added to that total during the following weeks.
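A small sketch of this rate-times-duration bookkeeping (the numbers and categories are made up for illustration):

```python
# Rate-times-duration accounting of lifetime utility (illustrative numbers).

lifetime_total = 0.0

def utility_rate(week):
    """Utilons per week Bob experiences, given what's true that week."""
    rate = 0.0
    if week["watches_movies"]:
        rate += 10
    if week["owns_collection"]:
        rate += 2
    return rate

week1 = {"watches_movies": True, "owns_collection": True}
week2 = {"watches_movies": True, "owns_collection": False}  # after the fire

lifetime_total += utility_rate(week1) * 1  # +12 for the first week
lifetime_total += utility_rate(week2) * 1  # +10: the fire lowers the rate,
                                           # but the +12 already banked stays
print(lifetime_total)  # 22.0
```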

Now let's take a different example. Suppose there is a painter whose only concern is their reputation upon their death, as measured by the monetary value of the paintings they put up for one final auction. Painting gives them no joy. Finishing a painting doesn't increase their utility, only the expected amount of utility that they will reap at some future date.

If, before they died, a fire destroyed the warehouse holding the paintings they were about to auction off, then they would account the net utility experienced during their life as zero. Having spent years owning lots of paintings, and having had a high expectation of gaining future utility during that time, wouldn't have added anything to their actual total utility over those years.

How is that affected by the possibility of the painter changing their utility function?

If they later decide that there is utility to be experienced by weeks spent improving their skill at painting (by means of painting pictures, even if those pictures are destroyed before ever being seen or sold), does that retroactively change the total utility added during the previous years of their life?

I'd say no.

Either utility experienced is real, or it is not. If it is real, then a change in the future cannot affect the past. It can affect the estimate you are making now of the quantity in the past, just as an improvement in telescope technology might affect the estimate a modern day scientist might make about the quantity of explosive force of a nova that happened 1 million years ago, but it can't affect the quantity itself, just as a change to modern telescopes can't actually go back in time to alter the nova itself.

"Instead, every time you arrive at a decision point, evaluate what action to take by checking the utility of your constituents from each action. I propose that we call this "delta utilitarianism", because it isn't looking at the total or the average, just at the delta in utility from each action."

Perhaps we could call it "marginal utility."