This article is a stub. Alas, you can't help Wikipedia (or LessWrong) by expanding it. Except through good comments.
Here I'll present an old idea for a theory of population ethics. This post exists mainly so that I can have something to point to when I need this example.
Given a total population of n individuals, each with total individual utility u_i over the whole of their lives, order them from lowest utility to the highest, so that i ≤ j implies u_i ≤ u_j. These utilities are assumed to have a natural zero point (the "life worth living" standard, or similar).
Then pick some discount factor 0 < γ < 1, and define the total utility of the world w (whose population is the total population of the world across all time) as

- U(w) = Σ_{i=1}^{n} γ^i u_i = γu_1 + γ²u_2 + … + γⁿu_n.
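As a concrete illustration of the formula above, here is a minimal sketch in Python (the function name is mine, and the exact form of the sum is reconstructed from the definition):

```python
def prioritarian_score(utilities, gamma):
    """Score a world: sort lifetime utilities ascending, then discount
    the i-th lowest by gamma**i (i starting at 1), so that the worst-off
    receive the largest weights whenever gamma < 1."""
    ordered = sorted(utilities)  # lowest utility first
    return sum(gamma ** i * u for i, u in enumerate(ordered, start=1))

# The worst-off individual (utility -1) gets weight 0.5, while the
# best-off (utility 4) gets only 0.5**3 = 0.125.
print(prioritarian_score([4, -1, 2], gamma=0.5))  # → 0.5
```

Sorting before discounting is what makes this prioritarian rather than a person-indexed discount: the weights attach to rank positions, not to particular individuals.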
This is a prioritarian utility that gives greater weight to those least well off. It is not average utilitarianism: it would advocate creating a human with utility larger than that of all other humans (as long as it was positive), and would advocate against creating a human with negative utility (for a utility in between, it depends on the details). In the limit γ → 1, it's total utilitarianism. Increasing someone's individual utility always improves the score. It (sometimes) accepts the "sadistic conclusion", but I've argued that that conclusion is misnamed: it is a choice between two negative outcomes, so calling it "sadistic" is a poor choice - the preferred outcome is not a good one, just a less bad one. Killing people won't help, unless their future lifetime utility would be negative (as everyone who ever lived is included in the sum). Note that this sets up a minor asymmetry between not creating people and killing them.
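The claims in the paragraph above can be checked numerically. A sketch, assuming the Σ γ^i u_i form reconstructed earlier (the helper name is my own):

```python
def score(utilities, gamma):
    # Sum of gamma**i * u_i over utilities sorted ascending, i from 1.
    return sum(gamma ** i * u
               for i, u in enumerate(sorted(utilities), start=1))

base = [1.0, 3.0, 5.0]
g = 0.9

# Creating someone happier than everyone else (with positive utility)
# always raises the score: they take the last index and leave all the
# other weights unchanged.
assert score(base + [6.0], g) > score(base, g)

# Creating someone with negative utility lowers it: their own term is
# negative, and they push the (positive) others to higher indices.
assert score(base + [-1.0], g) < score(base, g)

# As gamma approaches 1, the score approaches total utilitarianism.
assert abs(score(base, 0.999999) - sum(base)) < 1e-3
```

The "in between" case really does depend on details: a new person with small positive utility adds a positive term but demotes everyone above them by one factor of γ, and either effect can dominate.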
Do I endorse this? No; I think a genuine population ethics will be more complicated, and needs a greater asymmetry between life and death. But it's good enough for an example in many situations that come up.
None of the standard population ethics approaches extend decently to infinite populations. I have a separate idea for infinite populations here. I suppose the extension of this method to infinite populations would use the same method as in that post, but use (γs(w)+i(w))/(1+γ) instead of (s(w)+i(w))/2 (where s(w) and i(w) are the limsup and liminf of utilities, respectively).
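A sketch of that suggested combination, taking the limsup s(w) and liminf i(w) as given inputs (they cannot in general be computed from a finite prefix of the utility sequence; the function name is mine):

```python
def infinite_score(s, i, gamma):
    """Proposed blend for infinite populations: weight the liminf i(w)
    more heavily than the limsup s(w) whenever gamma < 1, mirroring the
    prioritarian discounting of the finite case."""
    return (gamma * s + i) / (1 + gamma)

# Utilities oscillating between 0 and 1 give s = 1, i = 0; with
# gamma = 0.5 the score is 0.5/1.5 = 1/3, closer to the worst case
# than the (s+i)/2 = 1/2 of the undiscounted version.
print(infinite_score(1.0, 0.0, gamma=0.5))  # → 0.3333...
```

Note that γ → 1 recovers (s(w)+i(w))/2, just as the finite formula recovers total utilitarianism in that limit.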
You can always zero out those utilities by decree, and only consider utilities that you can change. There are other patches you can apply. By talking this way, I'm revealing the principle I'm most willing to sacrifice: elegance.
If A is the repugnant world and C is the current one, you can get from C to A by doing improvements (by the standard of total utilitarianism) every step of the way. Similarly, if B is worse than A by that standard, there is a hypothetical path from B to A that is an "improvement" at each step (most population ethics have this property, but not all - you need some form of "continuity").
It's possible that the total-utility-maximizing distribution of matter in the universe is a repugnant one; in that case, a sufficiently powerful AI may find a way to reach it.
a) I don't think it makes sense in any strongly principled way, b) I'm trying to build one anyway :-)