(Crossposted from the EA forum.)

Summary: The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists; for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness. This would imply that all forms of altruism are equally ineffective.

Like everything in life, the canonical reference in philosophy about this problem was written by Nick Bostrom. However, I found that an area of economics known as "sustainable development" has actually made much further progress on this subject than the philosophy world. In this post I go over some of what I consider to be the most interesting results.

NB: This assumes a lot of mathematical literacy and familiarity with the subject matter, and hence isn't targeted to a general audience. Most people will probably prefer to read my other posts.


1. Summary of the most interesting results

  1. There’s no ethical system which incorporates all the things we might want.
  2. Even if we have pretty minimal requirements, satisfactory ethical systems might exist, but we can’t prove their existence, much less actually construct them.
  3. Discounted utilitarianism, whereby we value people less just because they are further away in time, is actually a pretty reasonable thing despite philosophers considering it ridiculous.
    1. (I consider this to be the first reasonable argument for locavorism I've ever heard)

2. Definitions

In general, we consider a population to consist of an infinite utility vector (u_0, u_1, …) where u_i is the aggregate utility of the generation alive at time i. Utility is a bounded real number (the fact that economists assume utility to be bounded confused me for a long time!). Our goal is to find a preference ordering over the set of all utility vectors which is in some sense “reasonable”. While philosophers have understood for a long time that finding such an ordering is difficult, I will present several theorems which show that it is in fact impossible.
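
To make these objects concrete, here is a minimal Python sketch of how one might represent a (truncation of a) utility stream and a deliberately incomplete comparison between two streams. The encoding and function names are my own and don't come from any of the papers below.

```python
# A toy sketch (my own encoding, not from the papers): a population is an
# infinite stream u_0, u_1, ...; here we only ever look at a finite
# truncation of it, stored as a list of bounded utilities.

from typing import List, Optional

def pointwise_at_least(x: List[float], y: List[float]) -> bool:
    """True if every generation in x is at least as well off as in y."""
    return all(xt >= yt for xt, yt in zip(x, y))

def dominance_compare(x: List[float], y: List[float]) -> Optional[str]:
    """A deliberately incomplete ordering: 'x', 'y', or None (no verdict).

    The point is only that a preference relation over utility streams need
    not rank every pair, which is exactly the trouble the rest of the post
    is about.
    """
    if pointwise_at_least(x, y) and x != y:
        return "x"
    if pointwise_at_least(y, x) and x != y:
        return "y"
    return None

if __name__ == "__main__":
    print(dominance_compare([1.0, 2.0, 3.0], [1.0, 1.0, 3.0]))  # 'x'
    print(dominance_compare([1.0, 0.0], [0.0, 1.0]))            # None
```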

Due to a lack of latex support I’m going to give English-language definitions and results instead of math-ey ones; interested people should look at the papers themselves anyway.

3. Impossibility Results

3.1 Definitions

  • Strong Pareto: if you can make a generation better off, and none worse off, you should. (A toy check of these conditions appears after this list.)
  • Weak Pareto: if you can make every generation better off, you should.
  • Intergenerational equity: utility vectors are unchanged in value by any permutation of their components.
    • There is an important distinction here between allowing a finite number of elements to be permuted and an infinite number; I will refer to the former as “finite intergenerational equity” and the latter as just “intergenerational equity”
  • Ethical relation: one which obeys both weak Pareto and intergenerational equity
  • Social welfare function: an order-preserving function from the set of populations (utility vectors) to the real numbers
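
Here is a toy illustration of what the Pareto conditions and finite anonymity ask of a comparison, run on finite truncations of streams. The function names are my own, and a real ordering would of course have to be defined on whole infinite streams.

```python
# A toy illustration (my own naming, not from the papers) of the axioms,
# checked on truncated streams.

from itertools import permutations
from typing import List

def strong_pareto_prefers(x: List[float], y: List[float]) -> bool:
    """x improves some generation and worsens none."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def weak_pareto_prefers(x: List[float], y: List[float]) -> bool:
    """x improves every generation."""
    return all(a > b for a, b in zip(x, y))

def finitely_anonymous(value, stream: List[float]) -> bool:
    """Finite intergenerational equity: permuting finitely many
    generations should leave the stream's value unchanged."""
    return all(abs(value(list(p)) - value(stream)) < 1e-12
               for p in permutations(stream))

if __name__ == "__main__":
    x, y = [2.0, 1.0, 1.0], [1.0, 1.0, 1.0]
    print(strong_pareto_prefers(x, y))   # True
    print(weak_pareto_prefers(x, y))     # False: only one generation gains
    # Total utility of a finite prefix is finitely anonymous:
    print(finitely_anonymous(sum, x))    # True
```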

3.2 Diamond-Basu-Mitra Impossibility Result [1]

  1. There is no social welfare function which obeys Strong Pareto and finite intergenerational equity. This means that any sort of utilitarianism won’t work, unless we look outside the real numbers.

3.3 Zame's Impossibility Result [2]

  1. If an ordering obeys intergenerational equity over [0,1]^N, then almost always we can’t tell which of two populations is better
    1. (i.e. the set of populations {X,Y: neither X<Y nor X>Y} has outer measure one)
  2. The existence of an ethical preference relation on [0,1]^N is independent of ZF plus the axiom of choice.

4. Possibility Results

We’ve just shown that it’s impossible to construct or even prove the existence of any useful ethical system. But not all hope is lost!

The important idea here is that of a “subrelation”: < is a subrelation to <’ if x<y implies x<’y.

Our arguments will work like this:

Suppose we could extend utilitarianism to the infinite case. (We don't, of course, know that we can extend utilitarianism to the infinite case. But suppose we could.) Then A, B and C must follow.

Technically: suppose utilitarianism is a subrelation of <. Then < must have properties A, B and C.

Everything in this section comes from [3], which is a great review of the literature.

4.1 Definition

  • Utilitarianism: we extend the standard total utilitarianism ordering to infinite populations in the following way: suppose there is some time T after which each generation in X is at least as well off as the corresponding generation in Y, and that the total utility in X before T is at least as large as the total utility in Y before T. Then X is at least as good as Y. (A sketch of this comparison appears after this list.)
    • Note that this is not a complete ordering! In fact, as per Zame’s result above, the set of populations it can meaningfully speak about has measure zero.
  • Partial translation scale invariance: suppose after some time T, X and Y become the same. Then we can add any arbitrary utility vector A to both X and Y without changing the ordering (i.e. X > Y ⇔ X+A > Y+A).
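
The following sketch (my own encoding, not from [3]) implements the utilitarian comparison above for streams that are specified up to some horizon and assumed to coincide afterwards. Genuine incomparability only arises for streams that keep differing forever, which a finite example cannot exhibit.

```python
# A sketch in my own encoding.  We represent two streams by finite prefixes
# and assume they coincide beyond the horizon, so comparing the prefixes
# is enough.

from typing import List

def utilitarian_at_least(x: List[float], y: List[float]) -> bool:
    """True if for some cutoff T: total utility of x before T is at least
    that of y before T, and x is pointwise >= y from T onward."""
    n = len(x)
    return any(
        sum(x[:T]) >= sum(y[:T]) and all(x[t] >= y[t] for t in range(T, n))
        for T in range(n + 1)
    )

def utilitarian_compare(x: List[float], y: List[float]) -> str:
    xy, yx = utilitarian_at_least(x, y), utilitarian_at_least(y, x)
    if xy and yx:
        return "indifferent"
    return "x better" if xy else ("y better" if yx else "incomparable")

if __name__ == "__main__":
    print(utilitarian_compare([3.0, 0.0, 2.0], [1.0, 1.0, 2.0]))  # x better
    print(utilitarian_compare([1.0, 0.0], [0.0, 1.0]))            # indifferent
```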

4.2 Theorem

  1. Utilitarianism is a subrelation of < if and only if < satisfies strong Pareto, finite intergenerational equity, and partial translation scale invariance.
    1. This means that if we want to extend utilitarianism to the infinite case, we can’t use a social welfare function, as per the above Basu-Mitra result

4.3 Definition

  • Overtaking utilitarianism: suppose there is some point T such that, for every N > T, the total utility of the first N generations in X is greater than the total utility of the first N generations in Y. Then X is better than Y. (A sketch follows this list.)
    • Note that utilitarianism is a subrelation of overtaking utilitarianism
  • Weak limiting preference: suppose that for any time T, X truncated at time T is better than Y truncated at time T. Then X is better than Y.
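
Here is a sketch of the overtaking comparison under the same finite-horizon assumption as before (the streams are taken to coincide beyond the listed prefix); the encoding is mine.

```python
# Overtaking on finite prefixes, assuming the two streams coincide beyond
# the horizon, so the final partial-sum difference persists for all later N.

from itertools import accumulate
from typing import List

def overtakes(x: List[float], y: List[float]) -> bool:
    """True if there is a T such that, for every N > T, the total utility of
    the first N generations of x exceeds that of the first N of y."""
    diffs = [a - b for a, b in zip(accumulate(x), accumulate(y))]
    return any(all(d > 0 for d in diffs[T:]) for T in range(len(diffs)))

if __name__ == "__main__":
    x = [0.0, 3.0, 1.0, 1.0]   # falls behind at first, then pulls ahead
    y = [1.0, 1.0, 1.0, 1.0]
    print(overtakes(x, y))  # True: x's partial sums exceed y's from N = 2 on
    print(overtakes(y, x))  # False
```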

4.4 Theorem

  1. Overtaking utilitarianism is a subrelation of < if and only if < satisfies strong Pareto, finite intergenerational equity, partial translation scale invariance, and weak limiting preference.

4.5 Definition

  • Discounted utilitarianism: the utility of a population is the sum of its components, discounted by how far away in time they are (a sketch follows this list)
  • Separability:
    • Separable present: if you can improve the first T generations without affecting the rest, you should
    • Separable future: if you can improve everything after the first T generations without affecting the rest, you should
  • Stationarity: preferences are time invariant
  • Weak sensitivity: for any utility vector, we can modify its first generation somehow to make it better
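
A minimal sketch of discounted utilitarianism follows; the discount factor of 0.95 is an arbitrary choice of mine for illustration.

```python
# A minimal sketch of discounted utilitarianism on a (truncated) stream.

from typing import Iterable

def discounted_utility(stream: Iterable[float], beta: float = 0.95) -> float:
    """Sum of generation utilities weighted by beta**t, with 0 < beta < 1.
    Because per-generation utility is bounded, the infinite series converges,
    so this is well defined even for infinite streams."""
    return sum((beta ** t) * u for t, u in enumerate(stream))

if __name__ == "__main__":
    flat = [1.0] * 200
    improving = [0.5 if t < 10 else 1.5 for t in range(200)]
    print(discounted_utility(flat))       # close to 1 / (1 - 0.95) = 20
    print(discounted_utility(improving))  # higher: the later gain outweighs the early sacrifice
```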

4.6 Theorem

  1. The only continuous, monotonic relation which obeys weak sensitivity, stationarity, and separability is discounted utilitarianism.

4.7 Definition

  • Dictatorship of the present: there’s some time T after which changing the utilities of generations doesn’t affect the ordering at all

4.8 Theorem

  1. Discounted utilitarianism results in a dictatorship of the present. (Remember that each generation’s utility is assumed to be bounded!)
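
A numeric illustration of why this happens (my own example, with utilities bounded in [0,1] and a discount factor of 0.95): everything after a large enough T is worth less than any fixed early advantage.

```python
# My own numeric example of the "dictatorship of the present": with bounded
# utilities and discounting, the whole tail after a large T is negligible.

def discounted_utility(stream, beta=0.95):
    return sum((beta ** t) * u for t, u in enumerate(stream))

beta, T, horizon = 0.95, 500, 2000
# x is slightly better than y early on; y is maximally better after T.
x = [0.6] * T + [0.0] * (horizon - T)
y = [0.5] * T + [1.0] * (horizon - T)

tail_bound = beta ** T / (1 - beta)  # max possible discounted value of everything after T
print(tail_bound)                                                 # ~ 1.5e-10: negligible
print(discounted_utility(x, beta) > discounted_utility(y, beta))  # True: the early advantage wins
```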

4.9 Definition

  • Sustainable preference: a continuous ordering which doesn’t have a dictatorship of the present but follows strong Pareto and separability.

4.10 Theorem

  1. The only sustainable orderings are obtained by taking discounted utilitarianism and adding an “asymptotic” part which ensures that changes affecting infinitely many generations matter. (Changes to only finitely many generations still won’t register in that asymptotic part, of course.)
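
Here is a sketch of the kind of criterion this describes: a discounted part plus an asymptotic part that depends only on the long-run behaviour of the stream. The 50/50 weighting, the 0.95 discount factor, and the use of a tail average as a stand-in for the limiting utility are my own illustrative choices, not the theorem's exact statement.

```python
# A sketch of a "discounted part + asymptotic part" criterion; the weights
# and the tail-average proxy for the limit are my own illustrative choices.

def discounted_part(stream, beta=0.95):
    return (1 - beta) * sum((beta ** t) * u for t, u in enumerate(stream))

def asymptotic_part(stream, tail_fraction=0.1):
    """Proxy for the limiting utility: the average of the last chunk of a
    long truncation (a true asymptotic term would use the limit itself)."""
    k = max(1, int(len(stream) * tail_fraction))
    tail = stream[-k:]
    return sum(tail) / len(tail)

def sustainable_value(stream, weight=0.5):
    return (1 - weight) * discounted_part(stream) + weight * asymptotic_part(stream)

if __name__ == "__main__":
    horizon = 5000
    business_as_usual = [1.0] * 100 + [0.2] * (horizon - 100)
    sustainable_path = [0.8] * horizon
    # Pure discounting prefers the short boom; the asymptotic term does not.
    print(discounted_part(business_as_usual) > discounted_part(sustainable_path))      # True
    print(sustainable_value(sustainable_path) > sustainable_value(business_as_usual))  # True
```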

5. Conclusion

I hope I've convinced you that there's a "there" there: infinite ethics is something that people can make progress on, and it seems that most of the progress is being made in the field of sustainable development.

Fun fact: the author of the last theorem (the one which defined "sustainable") was one of the lead economists on the Kyoto protocol. Who says infinite ethics is impractical?

6. References

  1. Basu, Kaushik, and Tapan Mitra. "Aggregating infinite utility streams with intergenerational equity: the impossibility of being Paretian." Econometrica 71.5 (2003): 1557-1563. http://folk.uio.no/gasheim/zB%26M2003.pdf
  2. Zame, William R. "Can intergenerational equity be operationalized?" (2007). https://tspace.library.utoronto.ca/bitstream/1807/9745/1/1204.pdf
  3. Asheim, Geir B. "Intergenerational equity." Annu. Rev. Econ. 2.1 (2010): 197-222. http://folk.uio.no/gasheim/A-ARE10.pdf
Comments

I haven't read the entire post but I believe I solved "infinite ethics" in http://lesswrong.com/lw/jub/updateless_intelligence_metrics_in_the_multiverse/ (by sticking to a bounded utility function with discounts of particular asymptotics resulting from summing over a Solomonoff ensemble).

[This comment is no longer endorsed by its author]

Thanks. As per theorem 3.2 above you can't have both Pareto and an anonymity constraint. Finite anonymity would add a constant factor to the complexity of the utility vector and hence shouldn't affect the prior, so I assume your method follows the finite anonymity constraint.

As a result, you must be disobeying Pareto? It's not obvious to me why your solution results in this, so I'm bringing it up in case it wasn't obvious to you either. (Or it could be that I'm completely misunderstanding what you are trying to do. Or maybe that you don't think Pareto is actually a reasonable requirement. In any case I think at least one of us is misunderstanding what's going on.)

It seems to me there is another principle that needs to be considered in practical ethics.

When we are confronted with a situation with two mutually exclusive options, standard utility calculation normally allows that they may be of equal value. Standard economics agrees, allowing two goods to be of equal value. But an agent always chooses one or the other. Even when dealing with fungible commodities — say, two identical five-pound bags of rice on a shelf — we always do, in practice, end up choosing one or the other. Even if we flip a coin or use some other random decision procedure, ultimately one good ends up in the shopping cart and the other good ends up staying on the shelf.

To avoid Buridan's Ass situations, we must always end up ranking one good above the other. Otherwise we end up with neither good, which is worse than choosing arbitrarily.

"Should two courses be judged equal, then the will cannot break the deadlock, all it can do is to suspend judgement until the circumstances change, and the right course of action is clear." — Jean Buridan

If choice A has utility 1, choice B also has utility 1, and remaining in a state of indecision has utility 0, then we can't allow the equality between A and B to result in us choosing the lower utility option.

This is especially important when time comes into play. Consider choosing between two mutually exclusive activities, each of which produces 1 util per unit time. The longer you spend trying to decide, the less time you have to do either one.

One solution to this seems to be to deny equality. Any time we perform a comparison between two utilities, it always returns < or >, never =. Any two options are ranked, not numerically evaluated.

In this system, utilities do not behave like real numbers; they are ordered but do not have equality.
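
A minimal sketch of the tie-breaking idea described here: a comparison that always returns a strict verdict, so the agent never stalls. The particular tie-break (a fixed, arbitrary ordering on option labels) is just one possible choice, not a claim about the right one.

```python
# A minimal sketch of a comparison that never returns "equal", so a choice
# is always made.  The tie-break rule is arbitrary but fixed.

from typing import Tuple

def strictly_prefer(a: Tuple[str, float], b: Tuple[str, float]) -> Tuple[str, float]:
    """Each option is a (label, utility) pair.  Higher utility wins; exact
    ties are broken by the label, so Buridan's ass never starves."""
    (la, ua), (lb, ub) = a, b
    if ua != ub:
        return a if ua > ub else b
    return a if la < lb else b   # arbitrary but fixed tie-break

if __name__ == "__main__":
    rice_left = ("bag on the left", 1.0)
    rice_right = ("bag on the right", 1.0)
    print(strictly_prefer(rice_left, rice_right))  # always picks the left bag
```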