Infinity is big. You just won't believe how vastly, hugely, mindbogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to infinity.

And there are a lot of paradoxes connected with infinity. Here we'll be looking at a small selection of them, connected with infinite ethics.

Suppose that you had some ethical principles that you would want to spread to infinitely many different agents - maybe through acausal decision making, maybe through some sort of Kantian categorical imperative. So even if the universe is infinite, filled with infinitely many agents, you have potentially infinite influence (which is more than most of us have most days). What would you do with this influence - what kind of decisions would you like to impose across the universe(s)'s population? What would count as an improvement?

There are many different ethical theories you could use - but one thing you'd want is for your improvements to be actual improvements. You wouldn't want to implement improvements that turn out to be illusory. And you certainly wouldn't want to implement improvements that could be undone by relabeling people.

How so? Well, imagine that you have a countable infinity of agents, with utilities (..., -3, -2, -1, 0, 1, 2, 3, ...). Then suppose everyone gets +1 utility. You'd think that giving an infinity of agents one extra utility each would be fabulous - but the utilities are exactly the same as before. The current -1 utility belongs to the person who had -2 before, but there's still currently someone with -1, just as there was someone with -1 before the change. And this holds for every utility value: an infinity of improvements has accomplished... nothing. As soon as you relabel who is who, you're in exactly the same position as before.
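
If you'd like the relabeling spelled out, here is a minimal Python sketch (the integer indexing and the function names are just illustrative, nothing canonical):

```python
# A sketch of the relabeling argument: agents indexed by the integers,
# with utility u(n) = n. After giving everyone +1, the relabeling
# sigma(n) = n + 1 makes the new situation look identical to the old.

def u_before(n):
    return n

def u_after(n):
    return u_before(n) + 1  # everyone gains one unit of utility

def sigma(n):
    return n + 1  # relabeling: new agent n is matched with old agent n+1

# The new utilities are exactly the old utilities of the relabeled
# agents, so the two situations are indistinguishable as a whole.
for n in range(-5, 6):
    assert u_after(n) == u_before(sigma(n))
print("+1 to everyone relabels onto the original situation")
```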

But things can get worse. Subtracting one utility from everyone also leaves the outcome the same, after relabeling everyone. So this universal improvement is completely indistinguishable from a universal regression.

Conditions for improvement

So the question is, under what conditions can we be sure that an improvement is genuine?

We'll assume that we have a countable infinity of agents, and we'll make a few highly non-trivial assumptions (the only real justification being that these assumptions are traditional). First, we'll assume that everyone's personal preferences/welfare/hedonistic levels (or whatever we're using) are expressed in terms of a utility function (unrealistic). Secondly, we'll assume that each of these utility functions has a defined zero point (dubious). Finally, we'll assume that these utility functions can be put on a common scale, so they can be compared with each other (extremely dubious).

So, what counts as an improvement? We've already seen that adding +1 to everyone's utility is not an improvement. What about multiplication? If everyone's utility is above 0, then surely multiplying everyone's utility by 2 must make things better?

Not so. Assume the utilities are (..., 1/8, 1/4, 1/2, 1, 2, 4, 8, ...). Then multiplying by 2 (or dividing by 2) simply shifts every utility one place along the sequence, so it has no impact on the overall situation.
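
Here is the same trick as a sketch (again with my own illustrative indexing): doubling every utility in this geometric sequence is literally the shift n -> n + 1.

```python
# Sketch: utilities u(n) = 2**n over the integers. Doubling every
# utility is the same as shifting labels by one place, so the overall
# collection of utilities is unchanged.
from fractions import Fraction

def u(n):
    return Fraction(2) ** n  # ..., 1/8, 1/4, 1/2, 1, 2, 4, 8, ...

for n in range(-5, 6):
    assert 2 * u(n) == u(n + 1)  # doubling agent n = relabeling to n + 1
print("doubling everyone's utility is undone by the shift n -> n + 1")
```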

Since addition and multiplication are out, what about increasing the number of happy people? It is even easier to see that this can have no impact: simply assume that everyone's utility is some constant c>0. Then if we get everyone to construct a copy of themselves with the same utility, we end up with twice as many people with utility c - but since we started with infinitely many people and ended up with infinitely many people, we've accomplished nothing.
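
If you want the pairing-off made explicit, here is a sketch using the standard interleaving trick for matching two countable sets against one (the set-up is mine, purely for illustration):

```python
# Sketch: a countable population where everyone has utility c > 0.
# After each agent n builds a copy, index the new population by pairs
# (n, b): b = 0 for the original, b = 1 for the copy. The interleaving
# map (n, b) -> 2*n + b is a bijection back onto the original index
# set, and every utility is still c, so nothing has changed overall.
c = 1.0

def u_old(m):
    return c

def u_new(n, b):
    return c  # originals (b = 0) and copies (b = 1) are equally happy

def interleave(n, b):
    return 2 * n + b  # a bijection from pairs to the naturals

seen = set()
for n in range(10):
    for b in (0, 1):
        m = interleave(n, b)
        assert m not in seen            # injective on this window...
        seen.add(m)
        assert u_new(n, b) == u_old(m)  # ...and utility-preserving
assert seen == set(range(20))           # ...and onto an initial segment
print("the doubled population relabels onto the original one")
```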


Bounding the individual

To avoid being completely ineffective, we need to make some stronger assumptions. For instance, we could bound the personal utilities. If the utilities are bounded above (or, indeed, below), then adding +1 will have a definite impact: we can't undo that effect by relabeling people. This is because the set of utilities now has a supremum (a generalisation of maximum) or an infimum (a generalisation of minimum), and any relabeling, being a bijection, leaves the set of utilities - and hence its supremum or infimum - unchanged. When we add +1, we increase the supremum or infimum by +1, so no relabeling can turn the new collection of utilities back into the old one.
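
A quick sketch of the invariant at work (a finite truncation can only gesture at the supremum, but the argument itself is exact; the particular bounded family is my own choice):

```python
# Sketch: bounded utilities u(n) = 1 - 2**(-n) for n = 0, 1, 2, ...
# The supremum of the old utilities is 1; after +1 it is 2. A
# relabeling preserves the set of utilities and hence its supremum,
# so no relabeling can turn one situation into the other.

def u(n):
    return 1 - 2 ** (-n)  # 0, 1/2, 3/4, 7/8, ... with supremum 1

N = 50  # finite truncation: these maxima approach the true suprema
sup_before = max(u(n) for n in range(N))
sup_after = max(u(n) + 1 for n in range(N))
print(sup_before, sup_after)  # close to 1 vs close to 2
```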

Bounding people's utilities on one side is not enough to ensure multiplication has an impact, as we saw in the example above, where everyone's utility was above 0. But if people's utilities are bounded above and below, then we can ensure that multiplying will change the overall situation (unless everyone's utility is at zero). The supremum and infimum argument works just as above, as long as at least one of them is non-zero.

OK, so adding +1 utility to everyone is now a definite improvement (or at the least a definite change, and it certainly feels like an improvement). What about adding different amounts to different people? Is this an improvement?

Not so. Assume people's utilities are (..., 1/8, 1/4, 1/2, 1, 3/2, 7/4, 15/8, ...). This is bounded below (by 0) and bounded above (by 2). Yet if you move everyone's utility up to that of the person just above them, you will have increased everyone's utility and changed... absolutely nothing.
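
Spelled out as a sketch (with my own illustrative parametrisation of that sequence):

```python
# Sketch: utilities approaching 0 on the left and 2 on the right.
# Promoting everyone to their neighbour's utility is a strict gain
# for every single agent, yet it is exactly the shift n -> n + 1.
from fractions import Fraction

def u(n):
    if n <= 0:
        return Fraction(2) ** n        # ..., 1/8, 1/4, 1/2, 1
    return 2 - Fraction(2) ** (-n)     # 3/2, 7/4, 15/8, ...

for n in range(-5, 6):
    assert u(n + 1) > u(n)  # everyone strictly gains, and yet the new
                            # assignment is just u shifted by one place
print("a strict improvement for everyone that relabels to the original")
```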

To make sure that your improvement is genuine, you need to increase everyone's utility by at least ε, for some fixed ε>0. But you need not increase absolutely everyone's utility - for instance, you can skip a finite number of people and still get a clear improvement.
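
You can see why the fixed ε matters by looking back at the previous sketch: the individual gains there shrink to zero, so no single ε>0 works. A quick check, reusing the same illustrative utilities:

```python
# Sketch: in the example above the gain of agent n (for n >= 0) is
# u(n+1) - u(n) = 2**-(n+1), which shrinks to zero, so there is no
# fixed eps > 0 that everyone gains - and the relabeling goes through.
# With a uniform eps and an upper bound, the supremum argument bites:
# the supremum rises by at least eps, which no relabeling can undo.
from fractions import Fraction

def u(n):
    if n <= 0:
        return Fraction(2) ** n
    return 2 - Fraction(2) ** (-n)

gains = [u(n + 1) - u(n) for n in range(10)]
print([str(g) for g in gains])  # 1/2, 1/4, 1/8, ...: no uniform eps
```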

What about skipping an infinite number of people? This won't work in general. Assume you have infinitely many people at utility 1, and infinitely many at utility 2. Then if you move infinitely many people from 1 to 2 (while still leaving infinitely many at 1), you will have accomplished nothing.
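
A sketch of the pairing-off (the particular set-up is mine, just to make the bookkeeping concrete):

```python
# Sketch: evens start at utility 1, odds at utility 2. Promote every
# multiple of 4 from 1 to 2: there are still infinitely many agents
# at each level, so enumerating each level on each side and pairing
# the enumerations off gives a relabeling that matches utilities.
from itertools import count, islice

def u_before(n):
    return 1 if n % 2 == 0 else 2

def u_after(n):
    return 2 if (n % 2 == 1 or n % 4 == 0) else 1  # multiples of 4 promoted

def agents_at(u, level):
    return (n for n in count() if u(n) == level)  # an infinite enumeration

for level in (1, 2):
    pairing = zip(agents_at(u_before, level), agents_at(u_after, level))
    print(level, list(islice(pairing, 5)))  # the start of the bijection
```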

This leads to a rather curious form of egalitarianism: the only way of ensuring that you've got an improvement overall is to ensure that (almost) everyone shares in the improvement - to at least a small extent.

Duplicating happy people is still ineffective in the bounded cases - the interleaving relabeling from above undoes the copies just as before.


Bounding the collective

What if not only the individual utilities are bounded, but the sum of all utilities converges - just as 1+1/2+1/4+1/8+... sums to 2? This is a very unlikely situation (almost everyone's utility would have to be arbitrarily close to zero). But, if it were to occur, everything becomes easy. Any time you increase anyone's utility by any amount, the overall situation is no longer equivalent to the initial one. The same goes for increasing everyone's utility by any amount, whether or not the increases stay above some ε>0. We can slightly generalise this situation by changing the zero point of everyone's utility: if the sum of everyone's utilities remains bounded for any choice of the zero point, then any increase of utility changes the situation. Relabeling cannot undo these improvements.
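
A toy check (assuming, as in the geometric example, that the series converges absolutely - rearranging an absolutely convergent series cannot change its value, so the total is a relabeling-proof invariant):

```python
# Sketch: utilities u(n) = (1/2)**n, which sum to 2. The total is
# invariant under any relabeling (absolute convergence), so bumping
# even one agent by any positive amount is a detectable improvement.
from fractions import Fraction

def u(n):
    return Fraction(1, 2) ** n  # 1 + 1/2 + 1/4 + ... -> 2

N = 40  # finite truncation; the neglected tail is below 2**-39
total_before = sum(u(n) for n in range(N))
total_after = total_before + Fraction(1, 10)  # one agent gains 1/10
print(float(total_before), float(total_after))  # the invariant differs
```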

Similarly, making finitely many extra copies of happy people is now finally an inarguable change. Unlike above, however, this no longer holds true if we move the zero point.


More versus happier people

It is interesting that improving overall happiness is successful in more situations than duplicating happy people. Infinity, it seems, often wants happier people, not more of them.

Comments
[anonymous]:

You can't sum an infinite series that doesn't converge. If you pretend you can, you get nonsense.

You prove that total utilitarianism doesn't work with an infinite population, but I completely fail to see the point of going on to try to draw conclusions from the precise kinds of nonsense that various operations can lead to.

> You can't sum an infinite series that doesn't converge. If you pretend you can, you get nonsense.

Almost. If you are careful about which series you pretend you can sum, you can get meaningful results. Cesàro summation is the most obvious one. But this is to some extent a nitpick, and your essential point is sound.

> Then suppose everyone gets +1 utility. You'd think that giving an infinity of agents one extra utility each would be fabulous - but the utilities are exactly the same as before. The current -1 utility belongs to the person who had -2 before, but there's still currently someone with -1, just as there was someone with -1 before the change. And this holds for every utility value: an infinity of improvements has accomplished... nothing. As soon as you relabel who is who, you're in exactly the same position as before.

Something seems wrong with this argument. The relabeling, in particular, jars my intuition. You have an infinity of, presumably, conscious beings with subjective experience, each of whom would tell you that they are better off than they were before. Are you sure that "relabel" is a sensible operation in this context? You do not seem to be changing anything external to your own mind, and that seems like it ought not to affect your judgement of whether a thing is good or not.

Would finding out that we live in an infinite universe be the best possible news a utilitarian could receive?

It depends on the type of utilitarian, doesn't it? Infinite universe means infinite utility, yes, but also infinite disutility.

Shmi:

> Would finding out that we live in an infinite universe

I cannot imagine an experiment that would show that.

> best possible news a utilitarian could receive?

"Best" in what sense?

If, given a choice, you would rather this be true than anything else.

No, because "we live in an infinite universe and you can have this chocolate bar" is trivially better. And "We live in an infinite universe and everyone not on earth is in hell" isn't really good news.

This seems like a pretty good reason not to insist that the aggregation is invariant under relabeling of the individuals.

Also, Harsanyi's social aggregation theorem for a finite number of individuals states that whenever individual preferences and aggregate preferences can all be stated as utility functions, if the aggregation is indifferent whenever each individual is (or alternatively: if the aggregation prefers A to B whenever each individual does, and if this condition is non-vacuous), then the aggregation is a linear combination of the individual utility functions. It looks to me like it should be possible to generalize this to an infinite population, though I haven't checked the details. If this is true, it would be inconsistent with invariance under relabeling.

Link to Nick Bostrom's infinite ethics.

Naïve intuitions about what's "greater" break with infinity, and I'm fine with that. I wouldn't say Hilbert's fully booked infinite hotel accommodates more patrons after an infinite number of unexpected guests show up than it accommodates before, and I don't treat them differently as a utilitarian either.

[anonymous]:

A chapter of a book I wrote, now under consideration by a publisher, addresses this. I am avoiding the temptation to spill the beans early. But here are a few words.

In an infinite universe all possibilities happen - odd combinations of possibilities that make previously impossible things possible. The situation you describe (and any other situation) is true at some point in some part of an infinite universe.

Just travel a lot and be patient.

It doesn't work for all problems, but the provided problems become much more manageable when you look at the magnitude and number of utility changes, rather than the magnitude and number of utilities. I could be horribly wrong, but looking at the set of utility changes rather than the set of utilities seems like it could be a productive line of inquiry.

[anonymous]:

> (or at the least a definite change, and it certainly feels like an improvement)

Let's say you have infinitely many people with utility 0 and infinitely many people with utility 1. Then changing one person's utility to 0.5 is a definite change, but does it feel like an improvement?

> the only way of ensuring that you've got an improvement overall is to ensure that (almost) everyone shares in the improvement - to at least a small extent.

Let's say you have infinitely many people with utility 0 and infinitely many people with utility 1. Then changing one person's utility to 2 is a definite change, but does it feel like an improvement?

> Duplicating happy people is still ineffective in the bounded cases.

Let's say you have infinitely many people with utility 0, infinitely many people with utility 1, and one person with utility 2. Then duplicating that person is a definite change, but does it feel like an improvement?

[This comment is no longer endorsed by its author]