
Population ethics in practice

Post author: ericyu3 08 August 2014 10:40PM

There are many different ideas about how utilitarians should value the number of future people. Unfortunately, it is difficult to take all of them into account when deciding among public policies, charities, etc. Arguments about principles like total utilitarianism, average utilitarianism, critical-level utilitarianism, etc. often come from a "global" perspective:

  • Does the principle imply that we should have a very large population with a very low quality of life? (Repugnant Conclusion)
  • If average utility is negative, does the principle imply that it's good to add additional people with slightly less negative utility? (Sadistic Conclusion)
  • Is adding additional people valuable when the population is small, but less valuable when it is large? If so, how large does a population have to be to be considered "large"? ("diminishing marginal value" of people)
What these thought experiments have in common is that they aren't very good for making decisions. For instance, simply adding the condition "avoid the Repugnant Conclusion" to a cost-benefit analysis isn't very useful, since it doesn't give any concrete estimate of the value of additional lives. In this post, I'll give a heuristic that lets total, average, and critical-level utilitarianism be analyzed the same way for most decisions. For simplicity, I'll assume that everyone is identical; if people aren't identical, you need to explicitly normalize utility functions before comparing them, but as long as you do that, the heuristic is still valid.

Suppose you have N people with utilities u1, ..., uN, and average utility uavg. Total utilitarianism (TU) would maximize the objective function wTU(N, uavg) = N*uavg. Average utilitarianism (AU) would maximize wAU(N, uavg) = uavg, and critical-level utilitarianism (CLU) would maximize wCLU(N, uavg) = N*(uavg - u0) for some "critical utility" u0. The interpretation is that only lives with utility above u0 are worth living.
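
If it helps to see these side by side, here is a minimal sketch of the three objective functions in Python (the function names are mine; the formulas are the ones above):

```python
# The three objective functions from above, in the post's notation:
# N people, average utility u_avg, critical level u0.

def w_TU(N, u_avg):
    """Total utilitarianism: total utility."""
    return N * u_avg

def w_AU(N, u_avg):
    """Average utilitarianism: average utility alone."""
    return u_avg

def w_CLU(N, u_avg, u0):
    """Critical-level utilitarianism: total utility in excess of u0."""
    return N * (u_avg - u0)

# TU is the special case of CLU with u0 = 0:
assert w_CLU(100, 5.0, 0.0) == w_TU(100, 5.0)
```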

It is easy to use CLU in a cost-benefit analysis: creating an additional person with utility u is exactly as valuable as raising the utility of an existing person from u0 to u. For example, if utility is estimated using income, and $1000/year is the income level corresponding to u0, then creating a person with an income of $2000/year is about as good as doubling the income of someone making $1000/year. TU is the special case of CLU with u0 = 0, but if there is disagreement about what "zero utility" means, you can translate each proposed zero point into an income level to gauge the magnitude of the disagreement - a disagreement between $400 and $500/year is a lot less serious than one between $400 and $40000/year.
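
To make the income example concrete, here is a small sketch assuming logarithmic utility of income, u(y) = log(y) - the log form is an assumption for illustration, not something the argument depends on:

```python
import math

# Illustration of the income example, assuming u(y) = log(y).
# (The log form is an assumption; the post only says utility is
# "estimated using income".)

def u(income):
    return math.log(income)

u0 = u(1000)  # critical level set at $1000/year

# CLU value of creating a new person earning $2000/year:
value_new_person = u(2000) - u0

# Value of doubling an existing $1000/year earner to $2000/year:
value_doubling = u(2000) - u(1000)

print(value_new_person, value_doubling)  # both log(2) ≈ 0.693: equally good
```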

In general, AU is not a special case of CLU: CLU's objective function is affected by pure changes in population, while AU's is not (∂wCLU/∂N != 0 unless uavg = u0). However, for small changes in N and uavg, AU is equivalent to CLU with u0 = uavg. So although AU and CLU are very different "globally", they are equivalent "locally" with the right choice of u0.
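
A quick numerical sanity check of this local equivalence (the numbers are made up): with u0 set to the current average, a small change moves wCLU by roughly N times as much as wAU, a positive constant factor, so the two rank any pair of small changes the same way.

```python
# Numerical check: CLU with u0 = current u_avg tracks AU for small changes.
# Illustrative numbers only.

N, u_avg = 1000.0, 2.0
u0 = u_avg  # the critical level that makes CLU mimic AU locally

def w_AU(n, u):
    return u

def w_CLU(n, u):
    return n * (u - u0)

dN, du = 1.0, 0.001  # a small change in population and in average utility

delta_AU = w_AU(N + dN, u_avg + du) - w_AU(N, u_avg)     # 0.001
delta_CLU = w_CLU(N + dN, u_avg + du) - w_CLU(N, u_avg)  # ≈ N * 0.001

print(delta_CLU / delta_AU)  # ≈ N, a positive constant: same rankings
```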

How small is a small change? Define the relative value of two choices as r = (change in w under Choice 1)/(change in w under Choice 2). If r > 1, Choice 1 is better, and if r < 1, Choice 2 is better. Then the discrepancy between AU and CLU is indicated by rAU / rCLU: if AU favors Choice 1 more than CLU does, this ratio will be larger. As it turns out, rAU / rCLU ≈ 1 - (ΔN / N) to first order in ΔN/N, where ΔN is the difference in resulting population between the two choices. If the population is 1% higher under Choice 1 than under Choice 2, the discrepancy is only 1%, and as long as r is not extremely close to 1, AU and CLU will agree on which one is better.
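
Here is a hedged numerical check of that first-order formula, with invented numbers for the two choices:

```python
# Sanity check of r_AU / r_CLU ≈ 1 - (ΔN / N), where ΔN is the
# difference in resulting population between the two choices.
# All numbers are invented for illustration.

N, u_avg = 1_000_000.0, 2.0
u0 = u_avg  # CLU tuned to mimic AU locally

def delta_AU(dN, du):
    return du  # change in w_AU

def delta_CLU(dN, du):
    return (N + dN) * (u_avg + du - u0) - N * (u_avg - u0)

choice1 = (10_000.0, 0.005)  # adds 1% of N, raises u_avg by 0.005
choice2 = (0.0, 0.004)       # adds nobody, raises u_avg by 0.004

r_AU = delta_AU(*choice1) / delta_AU(*choice2)     # 1.25
r_CLU = delta_CLU(*choice1) / delta_CLU(*choice2)  # ≈ 1.2625

dN = choice1[0] - choice2[0]
print(r_AU / r_CLU, 1 - dN / N)  # ≈ 0.9901 vs 0.99: agreement to first order
```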

But 1% of the world population is 70 million people, and virtually no policy will have that large of an effect. So when applying population ethics to real decisions, I think it's best to act as if CLU is true, and frame disagreements as disagreements about the right value of u0, and which income level corresponds to it. That way, it's much easier to see the practical implications of your viewpoint, and people who disagree in principle may find that they agree in practice about what u0 should be, and therefore about how to choose the best policy/charity/cause/etc. The main exception is existential risk prevention, where success will change the population by a very large amount.

PDF with detailed derivations (uses slightly different notation): https://drive.google.com/file/d/0B-zh2f7_qtukMFhNYkR4alRsSFk/edit?usp=sharing

Comments (3)

Comment author: AlexMennen 09 August 2014 08:01:18AM 2 points

Critical level utilitarianism is isomorphic to total utilitarianism. Utilities are invariant under adding constants but sums of utilities are not, so to use total utilitarianism, you need to pick what level of utility to call 0, which is effectively the same as picking a level of utility to call u0 in critical level utilitarianism.
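
As a minimal sketch of this isomorphism: shifting every utility down by u0 turns CLU into plain TU.

```python
# CLU with critical level u0 is TU applied to utilities shifted down by u0.

def w_TU(utilities):
    return sum(utilities)

def w_CLU(utilities, u0):
    return sum(u - u0 for u in utilities)

us = [1.5, 2.0, 3.5]
assert w_CLU(us, 1.0) == w_TU([u - 1.0 for u in us])  # identical objectives
```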

If you have some canonical way of picking a 0 point for the utility functions which is not the critical level, then it might be more convenient to use CLU so you don't have to change the 0 point, but the difference is purely notational. Your utility=income suggestion doesn't work as such a canonical method in humans because utility isn't proportional to income.

If r > 1, Choice 1 is better, and if r < 1, Choice 2 is better.

Nitpick: only if the change in w under Choice 2 is positive.

Comment author: ericyu3 09 August 2014 11:45:46PM 0 points

Your utility=income suggestion doesn't work as such a canonical method in humans because utility isn't proportional to income.

I just meant that picking a value of u0 is equivalent to picking a value of income ("y0") such that u(y0)=u0.

Comment author: AlexMennen 10 August 2014 06:22:48AM 1 point

Which is in turn equivalent to picking a value of income y_0 such that u(y_0)=0 for total utilitarianism.

(btw, to get an _ instead of italics, put a \ in front of it.)