Summary
Benign addition
Suppose there are currently one million people in the world, and you have to choose between the following 3 outcomes, as possible futures:
Huemer (2008, JSTOR) describes them along the following lines, and we will assume that the one million people in A are the same that currently exist:
A: one million people, each with very high welfare (level 100).
A+: the same one million people slightly better off (level 101), plus 99 million extra people with lives barely worth living (level 1).
Z: the same 100 million people as in A+, each with a low but positive welfare level of 3.
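To make the numbers concrete, here is a small script computing each outcome's population, total welfare and average welfare from the levels above (the same figures the calculations in the footnotes rely on). Z has the highest total welfare, while A has the highest average.

```python
# Welfare distributions for the three outcomes; counts are in millions of people.
outcomes = {
    "A":  {100: 1},         # the 1 million necessary people, very well off
    "A+": {101: 1, 1: 99},  # the same people slightly better off, plus 99 million extras
    "Z":  {3: 100},         # all 100 million people with low but positive welfare
}

for name, dist in outcomes.items():
    population = sum(dist.values())                      # millions of people
    total = sum(level * n for level, n in dist.items())  # millions of welfare units
    print(f"{name}: population {population}M, total {total}M, average {total / population:g}")
# A: population 1M, total 100M, average 100
# A+: population 100M, total 200M, average 2
# Z: population 100M, total 300M, average 3
```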
On the other hand, it seems to me that person-affecting intuitions should generally recommend against Z, or at least not make it obligatory, because A seems better than it on person-affecting intuitions. It's better for the present or necessary people.
The welfare level of each person can represent their aggregate welfare over their whole life (past, present and future), with low levels resulting from early death. In a direct comparison between Z and A, Z can therefore mean killing and replacing the people of A with the extra people, as in replacement and replaceability thought experiments (Singer, 1979a, Singer, 1979b, Jamieson, 1984, Knutsson, 2021). The extra people have lives barely worth living, but still worth living, and in far greater number.
In binary choices, i.e. if you were choosing just between two outcomes, on most person-affecting views, and in particular additive ones, I would expect:
1. A < A+: A+ is better for the present/necessary people, and the extra people have lives worth living.
2. A+ < Z: the same 100 million people exist in both, and Z is better in total, on average and for the worse off, at a relatively small cost to the best off.
3. Z < A: A is better for the present/necessary people.
This would give a cycle: A < A+, A+ < Z, Z < A. Huemer (2008, JSTOR) takes the first two together to be an argument for the repugnant conclusion,[2] and calls the step A < A+ benign addition.[3]
What would we do when all three options are available?
Two person-affecting responses
According to most person-affecting intuitions, I'd guess Z seems like the worst option.
Presentist and necessitarian views would recommend A+. A+ is best for the present people, the people alive at the time of the choice. A+ is best for necessary people, the people who will exist (or will ever exist or have existed) no matter which is chosen.
Or, if we ruled Z out first to reduce it to a binary choice, then we’d be left deciding between A and A+, and then we’d pick A+.
However, I suspect we should pick A instead. With Z available, A+ seems too unfair to the contingent people and too partial to the necessary/present people. Once the contingent people exist, Z would have been better than A+. And if Z is still an option at that point, we’d switch to it. So, anticipating this reasoning, whether or not we can make the extra people better off later, I suspect we should rule out A+ first, and then select A over Z.
I can imagine myself as one of the original necessary people in A. If we picked A+, I'd judge that to be too selfish of us and too unkind to the extra people relative to the much fairer Z. All of us together, with the extra people, would collectively judge Z to have been better. From my impartial perspective, I would then regret the choice of A+. On the other hand, if we (the original necessary people) collectively decide to stick with A to avoid Z and the unkindness of A+ relative to Z, it's no one else's business. We only hurt ourselves relative to A+. The extra people won't be around to have any claims.
Dasgupta’s view (Dasgupta, 1994, section VI; Broome, 1996, section 5) captures similar reasoning. A wide-ish version of Dasgupta’s view can be described more generally, as a two-step procedure:
1. Rule out any outcome that’s worse than another available outcome with the same number of people who will ever exist.
2. Among the remaining outcomes, choose whichever is best for the necessary people, the people who will exist no matter which outcome is chosen.
Applying this, we get A: A+ and Z have the same number of people who will ever exist (100 million each), and A+ is worse than Z (in total, on average and for the worse off), so A+ is ruled out at step 1, while A, with a different number of people, is not. At step 2, between A and Z, A is better for the necessary people, so we pick A.
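Here is a minimal sketch of this procedure in code. The representation is my own: outcomes map person IDs to welfare levels, necessary people are those who exist in every available outcome, and the "worse than" comparison in step 1 is by total welfare (with the number of people fixed, this agrees with the average). The populations are scaled down to 1 necessary and 99 contingent people.

```python
# Sketch of the wide-ish two-step procedure described above (my reconstruction).

def total(outcome, people=None):
    people = outcome if people is None else people
    return sum(outcome[p] for p in people)

def two_step(outcomes):
    # Step 1: rule out any outcome with lower total welfare than another
    # available outcome with the same number of people who ever exist.
    remaining = {
        name: o for name, o in outcomes.items()
        if not any(len(other) == len(o) and total(other) > total(o)
                   for other in outcomes.values())
    }
    # Step 2: among the remaining outcomes, pick the one that's best for the
    # necessary people (those who exist in every available outcome).
    necessary = set.intersection(*(set(o) for o in outcomes.values()))
    return max(remaining, key=lambda name: total(outcomes[name], necessary))

# One necessary person "p" and 99 contingent people, standing in for
# 1 million and 99 million.
A      = {"p": 100}
A_plus = {"p": 101, **{f"q{i}": 1 for i in range(99)}}
Z      = {"p": 3,   **{f"q{i}": 3 for i in range(99)}}

print(two_step({"A": A, "A+": A_plus, "Z": Z}))
# -> "A": A+ has the same number of people as Z but lower total welfare, so it's
#    ruled out at step 1; A then beats Z for the necessary person at step 2.
```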
Tentatively, among person-affecting views, I prefer something like Dasgupta’s view, although I would modify it to accommodate the procreation asymmetry, so that adding apparently bad lives is bad, but creating apparently good lives is not good. There are probably multiple ways to do this.[7] Presentist and necessitarian views can also be modified into asymmetric views, e.g. by including both necessary/present lives and contingent/future bad lives, or like the less antinatalist asymmetric views in Thomas, 2019 or Thomas, 2023.
However, it’s unclear how to extend this version of Dasgupta’s view, especially the first step, to uncertainty about the number of people who will exist.
Furthermore, the first step doesn't rule out enough options even in deterministic cases. For example, if we include even just one extra person in Z (not in A or A+), then step 1 does nothing, and A+ is recommended instead. Rather than requiring the same number, step 1 should rule out any option that seems pretty unambiguously worse than another option, and A+ would still seem pretty unambiguously worse than Z, even if Z had one extra person. And we'd need A to not be pretty unambiguously worse than A+.[8]
A promising approach for the first step of the two-step procedure would be to fix a set of axiologies and rule out options by unanimous agreement across the axiologies, e.g. X > Y if X beats Y according to both average utilitarianism and total utilitarianism, or X > Y if X beats Y according to every critical-level utilitarian view in a set of them.[9] The latter is essentially a critical-range theory from Chappell et al., 2023, and both are similar to the views in Thomas, 2023. This could be motivated by dissatisfaction with all (complete, transitive and independent-of-irrelevant-alternatives) welfarist/population axiologies, in light of impossibility theorems largely due to Gustaf Arrhenius (e.g. Arrhenius, 2000, Arrhenius, 2003 (pdf), Arrhenius, 2011 (pdf), Thornley, 2021, Arrhenius & Stefánsson, 2023; for another presentation of several of Arrhenius's theorems together, see Thomas, 2016).
We can generalize the two-step procedure as follows, after fixing some set S of axiologies, and defining X ≺ Y, "X is beaten by Y", to mean that X < Y for every axiology < in the set S:
1. Rule out any outcome X for which some other available outcome Y has X ≺ Y.
2. Among the remaining outcomes, choose whichever is best for the necessary people.
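As a rough illustration, here is the same sketch as before, but with step 1's "beaten by" relation defined by unanimity over a set of axiologies. The particular set S used here (total utilitarianism, average utilitarianism and a critical-level view with critical level 2) is just an assumption for the example.

```python
# Sketch of the generalized two-step procedure (my reconstruction): an outcome
# is ruled out if some other available outcome beats it under every axiology in S.

def total_util(o):
    return sum(o.values())

def average_util(o):
    return sum(o.values()) / len(o)

def critical_level_util(c):
    return lambda o: sum(w - c for w in o.values())

def generalized_two_step(outcomes, axiologies):
    def beaten(o):
        return any(all(v(other) > v(o) for v in axiologies)
                   for other in outcomes.values())
    remaining = {name: o for name, o in outcomes.items() if not beaten(o)}
    necessary = set.intersection(*(set(o) for o in outcomes.values()))
    return max(remaining,
               key=lambda name: sum(outcomes[name][p] for p in necessary))

A      = {"p": 100}
A_plus = {"p": 101, **{f"q{i}": 1 for i in range(99)}}
Z      = {"p": 3,   **{f"q{i}": 3 for i in range(99)}}
S = [total_util, average_util, critical_level_util(2)]

print(generalized_two_step({"A": A, "A+": A_plus, "Z": Z}, S))
# -> "A": Z beats A+ on all three axiologies, so A+ is ruled out; A isn't beaten
#    (it has the highest average welfare), and it's best for the necessary person.
```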
Of the most plausible axiologies, e.g. those satisfying almost all of the conditions of the impossibility theorems (or the benign addition argument), there's a near-consensus in favour of Z>A+, with the dissidents being "anti-egalitarian" in comparing Z and A+, basically the opposite of egalitarian or prioritarian.[10] Z>A+ also follows from Harsanyi's utilitarian theorem, extensions to variable population cases and other ~utilitarian theorems, e.g. McCarthy et al., 2020, Theorem 3.5; Thomas, 2022, sections 4.3 and 5; Gustafsson et al., 2023; Blackorby et al., 2002, Theorem 3.
If each of the axiologies used can be captured as utility functions over which we can take expected values or otherwise make ex ante comparisons between probability distributions of outcomes, then this also extends the two-step procedure to probability distributions over outcomes.
Responses across views
Here are the verdicts of various views when all three options are available:
The views above that recommend Z (perhaps other than negative utilitarianism) can also be made to recommend the very repugnant conclusion in a three-option choice like this one, in which the original necessary people have badly net negative lives in Z: because these views are all additive, the gains to the extra people will outweigh the harms to the original necessary people, given enough extra people.
Replacement with better off beings
In Huemer’s worlds above, the additional people in A+ and Z are worse off than the original people in A, but we can instead make the additional people better off than the originals. For example, to capture replacement by artificial minds, consider the following possible futures, A, A+ and B:
Presentist and narrow necessitarian views still recommend A+. The wide-ish and narrow[4] versions of Dasgupta’s view still recommend A, but the fully wide version[6] recommends B. The other views listed in the previous section recommend B (and negative utilitarianism can recommend B).
Indeed, additive wide views and non-person-affecting views should usually recommend B, or something suitably similar in a similar thought experiment. Just between A and B, replacing 1 trillion future humans with 1 trillion far better off artificial minds is a huge benefit between matched counterparts, not just on aggregate, but also for each of those pairs of counterparts, which should be enough to outweigh the early deaths of the 8 billion humans, unless we prioritize humanity or the worse off.
And this is also not that counterintuitive. Humans today should make some sacrifices to ensure better welfare for future generations, even if these future people are contingent and their identities will be entirely different if we do make these sacrifices. Why should we care whether these future people are humans or artificial minds?
On the other hand, maybe B is too unfair to the necessary humans. We are left too badly off and give up too much. A+ is of course even less fair by comparison with B. A prioritarian or egalitarian with a wide person-affecting view could recommend A. Or, we could go with something like the wide-ish version of Dasgupta's view to recommend A.
Non-aggregative views, views that prioritize the better off and views with positive lexical thresholds may reject this.
Assuming transitivity and the independence of irrelevant alternatives.
It's also called dominance addition, e.g. in Arrhenius, 2003 (pdf).
This is a wide-ish version. For a fully narrow version, rule out any outcome that’s worse than another with exactly the same set of people who ever exist. The narrow version would tell you to be indifferent (or take as incomparable) between a) creating someone with an amazing life and b) creating someone else with a life that would be worse than that, no matter how much worse, whether just good, marginal or bad. See the nonidentity problem (Roberts, 2019).
Or just the present people, or just the necessary moral agents, or just the present moral agents, or just the actual decision-makers.
Or best for the minimum number of people who will ever exist. For an additive view, rank outcomes by the sum of the following two terms:
1. The total welfare of the necessary people.
2. The average welfare of the contingent people in that outcome multiplied by the minimum number of contingent people across all outcomes.
This would give a fully wide version. This is inspired by Thomas, 2019.
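As a formula (my notation): write N for the set of necessary people, C(X) for the contingent people in outcome X, w_i(X) for person i's welfare in X, and n for the minimum of |C(Y)| over the available outcomes Y. Then rank outcomes by

$$V(X) = \sum_{i \in N} w_i(X) + \frac{n}{|C(X)|} \sum_{j \in C(X)} w_j(X).$$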
The second step could instead choose what’s best for
a. both necessary people and those with bad lives (necessary or contingent) together, or
b. necessary people and those with bad lives, offsetting the contingent bad lives with contingent good lives, i.e. adding the sum of the welfare of contingent people, but replacing it with 0 if positive, and then adding this to the sum of welfare for necessary people, similar to Thomas (2019)’s hard asymmetric views.
The first (a) is quite antinatalist, because contingent bad lives can’t be made up for with contingent good lives. However, I personally find this intuitive. The second (b) allows this offsetting: as long as contingent people have on aggregate net positive lives, contingent bad lives won't count against an outcome at step 2.
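For option (b), in the notation of the previous footnote (again my own), the ranking value would be

$$V_b(X) = \sum_{i \in N} w_i(X) + \min\Big(0, \sum_{j \in C(X)} w_j(X)\Big).$$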
We'd also want to avoid strict cycles in general, e.g. A1 < A2 < ... < An < A1, or else we could eliminate all options in the first step.
Or, we could define X ≥ Y, "X is at least as good as Y", to mean that X is at least as good as Y on each axiology. Then X > Y would mean that X is at least as good as Y on each axiology and strictly better on at least one.
Z>A+ follows from anonymous versions of total utilitarianism, average utilitarianism, prioritarianism, egalitarianism, rank-discounted utilitarianism, maximin/leximin, variable value theories and critical-level utilitarianism. Of anonymous, monotonic (Pareto-respecting), transitive, complete and IIA views, it's only really (partially) ~anti-egalitarian views (e.g. increasing marginal returns to additional welfare, maximax/leximax, geometrism, views with positive lexical thresholds), which sometimes ~prioritize the better off more than ~proportionately, that reject Z>A+, as far as I know. That's nearly a consensus in favour of Z>A+, and the dissidents usually have in my view more plausible counterparts that support Z>A+.
In particular, it seems that
1. increasing marginal returns to additional welfare is less plausible than decreasing marginal returns (prioritarianism),
2. maximax/leximax is less plausible than maximin/leximin,
3. geometrism is less plausible than rank-discounted utilitarianism (or maybe similarly plausible), and
4. views with positive lexical thresholds are less plausible than views without lexical thresholds or with only negative lexical thresholds.
a) Replace the welfare level 101 in A+ with 201, giving A+ total utility 300 (million) and average welfare 3, and b) replace the welfare level 3 in Z with 4, giving Z total utility 400 (million) and average welfare 4. Then,
1. Z indirectly beats A (Z>A+ by 400-300=100 and A+>A by 201-100=101; take the minimum of the two, 100) by more than A directly beats Z (A>Z by 100-4=96).
2. Z directly beats A+ (Z>A+ by 400-300=100) by more than A+ indirectly beats Z (A+>A by 201-100=101 and A>Z by 100-4=96; take the minimum of the two, 96).
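A quick check of this arithmetic, under my reading of the margins: each pairwise margin is the total-welfare difference (in millions) over the people who exist in both outcomes, and an indirect path's strength is its minimum margin.

```python
# Margins between the modified outcomes, in millions of welfare units.
margin = {
    ("A+", "A"): 201 - 100,  # 101: only the 1 million necessary people are in both
    ("Z", "A+"): 400 - 300,  # 100: the same 100 million people are in both
    ("A", "Z"):  100 - 4,    #  96: only the 1 million necessary people are in both
}

z_over_a_indirect = min(margin[("Z", "A+")], margin[("A+", "A")])      # 100
a_plus_over_z_indirect = min(margin[("A+", "A")], margin[("A", "Z")])  # 96

print(z_over_a_indirect > margin[("A", "Z")])        # True: 100 > 96, Z beats A
print(margin[("Z", "A+")] > a_plus_over_z_indirect)  # True: 100 > 96, Z beats A+
```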
See also this comment by Stijn arguing that the view can recommend the Very Repugnant Conclusion.