This article is a stub. Alas, you can't help Wikipedia (or LessWrong) by expanding it. Except through good comments.
Here I'll present an old idea for a theory of population ethics. This post exists mainly so that I can have something to point to when I need this example.
Given a total population of $N$ people, each with total individual utility $u_i$ over the whole of their lives, order them from lowest utility to highest, so that $i < j$ implies $u_i \le u_j$. These utilities are assumed to have a natural zero point (the "life worth living" standard, or similar).
Then pick some discount factor $0 < \gamma < 1$, and define the total utility of the world with population $N$ (which is the total population of the world across all time) as
- $U = \sum_{i=1}^{N} \gamma^{i} u_i$.
This is a prioritarian utility that gives greater weight to those least well off. It is not average utilitarianism: it would advocate creating a human with utility larger than that of all other humans (as long as it was positive), and would advocate against creating a human with negative utility (for a utility in between, it depends on the details). In the limit $\gamma \to 1$, it's total utilitarianism. Increasing someone's individual utility always improves the score. It (sometimes) accepts the "sadistic conclusion", but I've argued that that conclusion is misnamed (it is a choice between two negative outcomes, so calling it "sadistic" is a poor choice: the preferred outcome is not a good one, just a less bad one). Killing people won't help, unless their future lifetime utility would be negative (since everyone who ever lived is included in the sum). Note that this sets up a minor asymmetry between not creating people and killing them.
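To make this concrete, here is a minimal Python sketch of the measure, assuming the formula above; the function name, the default $\gamma$, and the example utilities are purely illustrative:

```python
def world_utility(utilities, gamma=0.9):
    """Prioritarian world utility: sort lifetime utilities from worst off to
    best off, then weight the i-th person (1-indexed) by gamma**i."""
    ordered = sorted(utilities)  # lowest lifetime utility first
    return sum(gamma ** i * u for i, u in enumerate(ordered, start=1))

base = [2.0, 5.0, 10.0]

# Adding someone better off than everyone else (with positive utility) raises
# the score: they only append a new positive term at the end of the sum.
print(world_utility(base), world_utility(base + [12.0]))

# Adding someone with negative lifetime utility lowers the score here: they
# take the most heavily weighted slot and shift everyone else to smaller weights.
print(world_utility(base), world_utility(base + [-3.0]))

# As gamma approaches 1, the measure approaches plain total utilitarianism.
print(world_utility(base, gamma=0.999), sum(base))
```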
Do I endorse this? No; I think a genuine population ethics will be more complicated, and needs a greater asymmetry between life and death. But it's good enough for an example in many situations that come up.
I have that idea as my "line of retreat." My issue with it is that it is hard to calibrate it so that it leaves as big a birth-death asymmetry as I want without degenerating into full-blown anti-natalism. There needs to be some way to say that the new happy person's happiness can't compensate for the original person's death, without saying that the original person's own happiness can't compensate for their own death, which is hard. If I calibrate it to avoid anti-natalism, the penalty for a death becomes so small that it seems like it could easily be overcome by adding more people with only a little more welfare.
There's also the two-step "kill and replace" method, where in step one you add a new life barely worth living without affecting anyone. Since the new person exists now, they count the same as everyone else, so in the second step you kill someone and transfer their resources to the new person. If the new person ends up getting the utility that the old one would otherwise have had, the process seems neutral under total utilitarianism. I suppose under total preference utilitarianism it's somewhat worse, since you now have two people dying with unsatisfied preferences instead of one, but it doesn't seem like a big enough asymmetry to me.
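To see why it comes out neutral, here's a toy calculation (all the numbers are made up, just to illustrate the two steps):

```python
# Status quo: person A would have a lifetime utility of 50 (30 lived so far, 20 to come).
status_quo_total = 30 + 20

# Step one: add person B with a life barely worth living (+1), harming nobody.
# Step two: kill A (A ends at 30) and transfer A's resources to B, so B ends up
# with the 20 units of utility that A would otherwise have had (B finishes at 21).
kill_and_replace_total = 30 + (1 + 20)

# Total utilitarianism scores the two worlds almost identically (50 vs 51),
# which is why the two-step process looks neutral (or even mildly positive).
print(status_quo_total, kill_and_replace_total)
```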
I feel like in order to reject the two-step process, and to have as big an asymmetry as I want, I need to be able to reject "mere addition" and accept the Sadistic Conclusion. But that in turn leads to "galaxy far far away" issues, where it becomes wrong to have children because of happy people in some far-off place. Or "Egyptology" issues, where it's better for the world to end than for it to decline so that future people have somewhat worse lives, and we are obligated to make sure the Ancient Egyptians didn't have way better lives than ours before we decide on having children. I just don't know. I want it to stop hurting my brain so badly, but I keep worrying that there's no solution that isn't horrible or ridiculous.
For this one, I am just willing to decree that creating creatures with a diverse variety of complex, human-like psychologies is good, and creating creatures with weird, min-maxing, unambitious psychologies is bad (or at least massively sub-optimal). To put it another way, Human Nature is morally valuable and needs to be protected.
Another resource that helped me on this was Derek Parfit's essay "What Makes Someone's Life Go Best." You might find it helpful; it parallels some of your own work on personal identity and preferences. The essay describes which of our preferences we feel count as part of our "self-interest" and which do not. It helped me understand things like why people generally feel obligated to respect people's "self-interest" preferences (e.g. being happy, not dying), but not their "moral preferences" (e.g. making the country a theocracy, executing heretics).
Parfit's "Success Theory," as he calls it, basically argues that only preferences that are "about your own life" count as "welfare" or "self interest." So that means that we would not be making the world a better place by adding lives who prefer that the speed of light stay constant, or that electrons keep having negative charges. That doesn't defuse the problem entirely, you could still imagine creating creatures with super unambitious life goals. But it gets it part of the way, the rest, again, I deal with by "defending Human Nature."
I had a question about that. It is probably a silly question, since my understanding of decision and game theory is poor. When you were working on that, you said that there was no independence of irrelevant alternatives (IIA). I've noticed that IIA is something that trips me up a lot when I think about population ethics. I want to be able to say something like "Adding more lives might be bad if there is still the option to improve existing ones instead, but might be good if the existing ones have already died and that option is foreclosed." This violates IIA because I am conditioning whether adding more lives is good on whether there is another alternative or not.
I was wondering if my brain might be doing the thing you described in your post on no IIA, where it is smashing two different values together and getting different results when there are more alternatives. It probably isn't; I am probably just being irrational, but reading that post just felt familiar.