This article is a stub. Alas, you can't help Wikipedia (or LessWrong) by expanding it. Except through good comments.
Here I'll present an old idea for a theory of population ethics. This post exists mainly so that I can have something to point to when I need this example.
Given a total population $P$, each person with total individual utility $u_i$ over the whole of their lives, order them from lowest utility to highest, so that $i < j$ implies $u_i \le u_j$. These utilities are assumed to have a natural zero point (the "life worth living" standard, or similar).
Then pick some discount factor $0 < \gamma < 1$, and define the total utility of the world with population $P$ (which is the total population of the world across all time) as

$$U(P) = \sum_{i} \gamma^i u_i.$$
This is a prioritarian utility that gives greater weight to those least well off. It is not average utilitarianism: it would advocate creating a human with utility larger than that of all other humans (as long as that utility was positive), and would advocate against creating a human with negative utility (for a utility in between, it depends on the details). In the limit $\gamma \to 1$, it's total utilitarianism. Increasing someone's individual utility always improves the score. It (sometimes) accepts the "sadistic conclusion", but I've argued that that conclusion is misnamed: the conclusion is a choice between two negative outcomes, so calling it "sadistic" is a poor choice - the preferred outcome is not a good one, just a less bad one. Killing people won't help, unless their future lifetime utility would be negative (as everyone that ever lived is included in the sum). Note that this sets up a minor asymmetry between not-creating people and killing them.
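The properties above can be checked numerically. Here is a minimal sketch of the scoring rule; note that the indexing convention (whether the worst-off person gets weight $\gamma^0$ or $\gamma^1$) isn't pinned down by the post, so this assumes the latter, and the particular utility values are made up for illustration.

```python
def world_score(utilities, gamma=0.9):
    """Ordered discounted utility: sort ascending, weight the
    i-th lowest utility by gamma**(i+1) (an assumed convention)."""
    ordered = sorted(utilities)
    return sum(gamma ** (i + 1) * u for i, u in enumerate(ordered))

base = [2.0, 5.0, 10.0]

# Creating a person happier than everyone else (with positive utility)
# raises the score: they get the smallest weight, nobody else shifts.
assert world_score(base + [12.0]) > world_score(base)

# Creating a person with negative utility lowers the score.
assert world_score(base + [-1.0]) < world_score(base)

# Increasing any individual's utility always improves the score,
# since all weights are positive.
assert world_score([3.0, 5.0, 10.0]) > world_score(base)
```

Since the sum runs over everyone who ever lived, "killing" someone in this model just truncates their future utility; it only raises the score if that future utility would have been negative.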
Do I endorse this? No; I think a genuine population ethics will be more complicated, and needs a greater asymmetry between life and death. But it's good enough for an example in many situations that come up.
It's interesting. A few points:
Is there a natural extension for infinite population? It seems harder than most approaches to adapt.
I'm always suspicious of schemes that change what they advocate massively based on events a long time ago in a galaxy far, far away - in particular when it can have catastrophic implications. If it turns out there were 3^^^3 Jedi living in a perfect state of bliss, this advocates for preventing any more births now and forever.
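This worry can be made concrete: with a large enough existing population at very high utility, adding anyone with merely positive utility lowers the score, because everyone happier than the newcomer shifts up one index and loses a factor of $\gamma$. A sketch, using a stand-in population (the helper and numbers are illustrative, assuming the worst-off person gets weight $\gamma^1$):

```python
def world_score(utilities, gamma=0.9):
    """Ordered discounted utility: sort ascending, weight the
    i-th lowest utility by gamma**(i+1) (an assumed convention)."""
    ordered = sorted(utilities)
    return sum(gamma ** (i + 1) * u for i, u in enumerate(ordered))

blissful = [100.0] * 200  # stand-in for the 3^^^3 blissful Jedi

# A new person with modest positive utility sorts below the Jedi,
# pushing every Jedi's weight down by one power of gamma. The small
# gain from the newcomer doesn't cover that loss, so the scheme
# advocates against the birth.
assert world_score(blissful + [1.0]) < world_score(blissful)
```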
Do you know a similar failure case for total utilitarianism? All the sadistic/repugnant/very-repugnant... conclusions seem to be comparing highly undesirable states - not attractor states. If we'd never want world A or B, wouldn't head towards B from A, and wouldn't head towards A from B (since there'd always be some preferable direction), does an A-vs-B comparison actually matter at all?
Total utilitarianism is an imperfect match for our intuitions when comparing arbitrary pairs of worlds, but I can't recall seeing any practical example where it'd lead to clearly bad decisions (perhaps birth-vs-death considerations?).
In general, I'd be interested to know whether you think an objective measure of per-person utility even makes sense. People's take on their own situation tends to adapt to their expectations (as you'd expect, from an evolutionary fitness point of view). A zero-utility life from our perspective would probably look positive 1000 years ago, and negative (hopefully) in 100 years. This is likely true even if the past/future people were told in detail how the present-day 'zero' life felt from the inside: they'd assume our evaluation was simply wrong.
Or if we only care about (an objective measure of) subjective experience, does that mean we'd want people who're all supremely happy/fulfilled/... with their circumstances to the point of delusion?
Measuring personal utility can be seen as an orthogonal question, but if I'm aiming to match my intuitions I need to consider both. If I consider different fixed personal-utility-metrics, it's quite possible I'd arrive at a different population ethics. [edited from "different population utilities", which isn't what I meant]
I think you're working in the dark if you try to match population ethics to intuition without fixing some measure of personal utility (perhaps you have one in mind, but I'm pretty hazy myself :)).
I have that idea as my "line of retreat." My issue with it is that it is hard to calibrate it so that it leaves as big a birth-death asymmetry as I want without degenerating into full-blown anti-natalism. There needs to be some way to say that the new happy person's happiness can't compensate for the original person's death without saying that the original person's own happiness can't compensate for their own death, which is hard....