Summary
Many views, including even some person-affecting views, endorse the repugnant conclusion (and very repugnant conclusion) when set up as a choice between three options, with a benign addition option.
Many consequentialist(-ish) views, including many person-affecting views, in principle endorse the involuntary killing and replacement of humans with far better off beings, even if the humans would have had excellent lives. Presentism, narrow necessitarian views and some versions of Dasgupta’s view don’t.
Variants of Dasgupta's view seem understudied. I'd like to see more variants, extensions and criticism.
Benign addition
Suppose there are currently one million people in the world, and you have to choose between the following 3 outcomes, as possible futures:
Huemer (2008, JSTOR) describes them as follows, and we will assume that the one million people in A are the same that currently exist:
World A: One million very happy people (welfare level 100).
World A+: The same one million people, slightly happier (welfare level 101), plus 99 million new people with lives barely worth living (welfare level 1).
World Z: The same 100 million people as in A+, but all with lives slightly better than the worse-off group in A+ (welfare level 3).
On the other hand, it seems to me that person-affecting intuitions should generally recommend against Z, or at least not make it obligatory, because A seems better than Z on those intuitions: A is better for the present or necessary people.
The welfare level of each person can represent their aggregate welfare over their whole life (past, present and future), with low levels resulting from early death. In a direct comparison between Z and A, Z can therefore mean killing and replacing the people of A with the extra people, as in replacement and replaceability thought experiments (Singer, 1979a, Singer, 1979b, Jamieson, 1984, Knutsson, 2021). The extra people have lives barely worth living, but still worth living, and in far greater number.
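To make the later comparisons easy to check, here's a small sketch in Python of the three worlds and their total and average welfare. The encoding of each world as (welfare level, head count) pairs is mine, not Huemer's; the numbers are just the ones above.

```python
# Huemer's three worlds, encoded as lists of (welfare level, head count) pairs.
# The one million people in A are the necessary/present people.
worlds = {
    "A":  [(100, 1_000_000)],                   # 1M people at welfare 100
    "A+": [(101, 1_000_000), (1, 99_000_000)],  # the same 1M at 101, plus 99M at 1
    "Z":  [(3, 100_000_000)],                   # the same 100M people as in A+, all at 3
}

def total_welfare(world):
    return sum(level * count for level, count in world)

def average_welfare(world):
    return total_welfare(world) / sum(count for _, count in world)

for name, world in worlds.items():
    print(name, total_welfare(world), average_welfare(world))
# A:  total 100 million, average 100
# A+: total 200 million, average 2
# Z:  total 300 million, average 3
```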
In binary choices, i.e. if you were choosing just between two outcomes, on most person-affecting views, and in particular additive ones, I would expect:
A < A+, because A+ is better for the necessary/present people (the people who exist in both outcomes), and no worse for the others, who still have positive welfare overall in A+, as opposed to not existing in A.
A+ < Z, because they have the same set of people, but Z has higher average, total and minimum welfare, and a fairer distribution of welfare.[1]
A > Z, because Z is worse for the necessary/present people, and A is not worse for the contingent/future people (who don't exist in A), on person-affecting intuitions.
This would give a cycle: A < A+, A+ < Z, Z < A. Huemer (2008, JSTOR) takes the first two together to be an argument for the repugnant conclusion,[2] and calls the step A < A+ benign addition.[3]
What would we do when all three options are available?
Two person-affecting responses
According to most person-affecting intuitions, I'd guess Z seems like the worst option.
Presentist and necessitarian views would recommend A+. A+ is best for the present people, the people alive at the time of the choice. A+ is best for necessary people, the people who will exist (or will ever exist or have existed) no matter which is chosen.
Or, if we ruled Z out first to reduce it to a binary choice, then we’d be left deciding between A and A+, and then we’d pick A+.
However, I suspect we should pick A instead. With Z available, A+ seems too unfair to the contingent people and too partial to the necessary/present people. Once the contingent people exist, Z would have been better than A+. And if Z is still an option at that point, we’d switch to it. So, anticipating this reasoning, whether or not we can later make the extra people better off, I suspect we should rule out A+ first, and then select A over Z.
I can imagine myself as one of the original necessary people in A. If we picked A+, I'd judge that to be too selfish of us and too unkind to the extra people relative to the much fairer Z. All of us together, with the extra people, would collectively judge Z to have been better. From my impartial perspective, I would then regret the choice of A+. On the other hand, if we (the original necessary people) collectively decide to stick with A to avoid Z and the unkindness of A+ relative to Z, it's no one else's business. We only hurt ourselves relative to A+. The extra people won't be around to have any claims.
Dasgupta’s view (Dasgupta, 1994, section VI and Broome, 1996, section 5) captures similar reasoning. A wide-ish version of Dasgupta’s view can be described more generally, as a two-step procedure:
1. Rule out any outcome that’s worse than another with exactly the same number of people who ever exist.[4]
2. Of the remaining outcomes, pick any which is best for the necessary people.[5][6] A person is necessary if they ever exist in every outcome.
Applying this, we get A:
A+ is ruled out because Z is better than it in a binary choice, for the same set (or number) of people. No other option is ruled out for being worse in a binary choice over the same set (or number) of people at this step.
We're left with A and Z, and A is better for the necessary people.
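Here's a rough sketch of this two-step procedure in code. The group-based encoding of worlds is mine, and I read "worse, with the same number of people" simply as "lower total welfare, with the same total head count", which is a simplification of the same-number comparison; it's enough to reproduce the verdict above.

```python
# Worlds as {group: (welfare level, head count)}; a group exists in a world iff it's a key.
worlds = {
    "A":  {"originals": (100, 1_000_000)},
    "A+": {"originals": (101, 1_000_000), "extras": (1, 99_000_000)},
    "Z":  {"originals": (3, 1_000_000), "extras": (3, 99_000_000)},
}

def population(world):
    return sum(n for _, n in world.values())

def total_welfare(world):
    return sum(w * n for w, n in world.values())

def step1(options):
    """Rule out any option with lower total welfare than another option of the same size."""
    return {name: world for name, world in options.items()
            if not any(population(other) == population(world)
                       and total_welfare(other) > total_welfare(world)
                       for other in options.values())}

def step2(options):
    """Of the remaining options, pick one that's best (here: highest total) for the necessary people."""
    necessary = set.intersection(*(set(world) for world in options.values()))
    def necessary_welfare(world):
        return sum(w * n for group, (w, n) in world.items() if group in necessary)
    return max(options, key=lambda name: necessary_welfare(options[name]))

remaining = step1(worlds)   # A+ is ruled out: Z has the same 100M people and a higher total
print(sorted(remaining))    # ['A', 'Z']
print(step2(remaining))     # 'A': best for the necessary people (the original 1M)
```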
Tentatively, among person-affecting views, I prefer something like Dasgupta’s view, although I would modify it to accommodate the procreation asymmetry, so that adding apparently bad lives is bad, but creating apparently good lives is not good. There are probably multiple ways to do this.[7] Presentist and necessitarian views can also be modified into asymmetric views, e.g. by including both necessary/present lives and contingent/future bad lives, or like the less antinatalist asymmetric views in Thomas, 2019 or Thomas, 2023.
However, it’s unclear how to extend this version of Dasgupta’s view, especially the first step, to uncertainty about the number of people who will exist.
Furthermore, the first step doesn't rule out enough options even in deterministic cases. For example, if we include even just one extra person in Z (not in A or A+), then step 1 does nothing, and A+ is recommended instead. Rather than requiring the same number, step 1 should rule out any option that seems pretty unambiguously worse than another option, and A+ would still seem pretty unambiguously worse than Z, even if Z had one extra person. And we'd need A to not be pretty unambiguously worse than A+.[8]
A promising approach for the first step of the two-step procedure would be to fix a set of axiologies and rule out options by unanimous agreement across the axiologies, e.g. X > Y if X beats Y according to both average utilitarianism and total utilitarianism, or X > Y if X beats Y according to every critical-level utilitarian view in a set of them.[9] The latter is essentially a critical-range theory from Chappell et al., 2023, and both are similar to the views in Thomas, 2023. This could be motivated by dissatisfaction with all (complete, transitive and independent of irrelevant alternatives) welfarist/population axiologies, in light of impossibility theorems, largely due to Gustaf Arrhenius (e.g. Arrhenius, 2000, Arrhenius, 2003 (pdf), Arrhenius, 2011 (pdf), Thornley, 2021, Arrhenius & Stefánsson, 2023; for another presentation of several of Arrhenius's theorems together, see Thomas, 2016).
We can generalize the two-step procedure as follows, after fixing some set S of axiologies, and defining X ≺ Y, "X is beaten by Y", to hold if X < Y for every axiology < in the set S:
1. Rule out any option beaten by an option in your original set of available options.
2. Of the remaining available options, pick any which is best for the necessary people.
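As a sketch, the same procedure with step 1 replaced by unanimous defeat across a set of axiologies could look like the following; the choice of S = {total utilitarianism, average utilitarianism} is only an illustration, not something the post commits to.

```python
# Worlds as {group: (welfare level, head count)}, as before.
worlds = {
    "A":  {"originals": (100, 1_000_000)},
    "A+": {"originals": (101, 1_000_000), "extras": (1, 99_000_000)},
    "Z":  {"originals": (3, 1_000_000), "extras": (3, 99_000_000)},
}

def total(world):
    return sum(w * n for w, n in world.values())

def average(world):
    return total(world) / sum(n for _, n in world.values())

S = [total, average]  # the fixed set of axiologies (illustrative only)

def beaten(x, y):
    """X ≺ Y: every axiology in S strictly prefers y to x."""
    return all(axiology(y) > axiology(x) for axiology in S)

def choose(options):
    # Step 1: rule out anything unanimously beaten by an available option.
    remaining = {name: world for name, world in options.items()
                 if not any(beaten(world, other) for other in options.values())}
    # Step 2: of the rest, pick what's best for the necessary people
    # (the groups present in every remaining option).
    necessary = set.intersection(*(set(world) for world in remaining.values()))
    def necessary_welfare(world):
        return sum(w * n for group, (w, n) in world.items() if group in necessary)
    return max(remaining, key=lambda name: necessary_welfare(remaining[name]))

print(choose(worlds))  # 'A': A+ is beaten by Z on both axiologies, and A then beats Z for the necessary people
```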
Among the most plausible axiologies, e.g. those satisfying almost all of the conditions of the impossibility theorems (or the benign addition argument), there's a near-consensus in favour of Z>A+, with the dissidents being "anti-egalitarian" in comparing Z and A+, basically the opposite of egalitarian or prioritarian.[10] Z>A+ also follows from Harsanyi's utilitarian theorem, extensions to variable population cases and other ~utilitarian theorems, e.g. McCarthy et al., 2020, Theorem 3.5; Thomas, 2022, sections 4.3 and 5; Gustafsson et al., 2023; Blackorby et al., 2002, Theorem 3.
If each of the axiologies used can be captured as utility functions over which we can take expected values or otherwise make ex ante comparisons between probability distributions of outcomes, then this also extends the two-step procedure to probability distributions over outcomes.
Responses across views
Here are the verdicts of various views when all three options are available (a small numerical check of several of these verdicts is sketched after the list):
Presentist and necessitarian person-affecting views recommend A+, because A+ is best for the necessary/present people.
Dasgupta’s view (Dasgupta, 1994, section VI and Broome, 1996, section 5, or a version of it), which is person-affecting, recommends A.
Meacham (2012)’s harm-minimization view, a person-affecting view, recommends A, because the people who exist in it are harmed the least in total, where the harm to an individual in an outcome is measured by the difference between their welfare in it and their maximum welfare across outcomes.
However, this view ends up implausibly antinatalist, even when future people would experience nothing negative in their lives, only positive, just not as positive as it could have been (pointed out by Michelle Hutchinson, in Koehler, 2021).
Weak actualism (Hare, 2007, Spencer, 2021, section 6), a person-affecting view, recommends Z. An option/outcome is permissible if and only if it's no worse than any other for the people who (ever) exist in that outcome.
It rules out A, because A+ is better for the people who exist in A.
It rules out A+, because Z is better for the people who exist in A+.
It does not rule out Z, because Z is better than both A and A+ for the people who exist in Z.
Thomas (2023)’s asymmetric person-affecting views recommend Z:
The views hold that exactly the undominated options — those not worse than any other in a binary comparison — are permissible, and X>Y if two conditions hold simultaneously:
i. The total harm to people in X is less than the total harm to the people in Y, where the harm to a person in one outcome compared to another is the difference between their maximum welfare across the two outcomes and their welfare in that outcome, but 0 if they don’t exist in that outcome.
ii. X has higher total welfare than Y, i.e. X is better than Y according to total utilitarianism.
A+>A and Z>A+ hold, while Z and A are incomparable. Only Z is undominated.
Condition ii is transitive, so any option with maximum total welfare will be undominated and permissible. Condition i can have cycles, and it does for A, A+ and Z. In general, if you define X>Y as X beating Y in all binary choices across a fixed set of views, and permit only undominated options, you will privilege the acyclic (and transitive) views in your set of views. In general, the repugnant conclusion will be permissible if one of the conditions you rank with is total welfare.
Thomas (2019)’s asymmetric person-affecting views, which extend necessitarian binary choices using Schulze’s beatpath voting method, can be made to recommend something like Z, even with A available, by adjusting the numbers somewhat.[11]
Totalism/the total view recommends Z, because it has the highest total utility.
Averagism/the average view recommends A, because it has the highest average utility.
Critical-level utilitarianism recommends A if the critical level is at least ~2: subtract the critical level from each person’s welfare level, and then rank based on the sum of these (or, equivalently, take the total welfare, subtract critical level × number of people, and rank based on this).
What negative (total) utilitarianism recommends depends only on the total negative value (e.g. total suffering, total preference or desire frustration), which I have not specified separately above. It could recommend any of the three, depending on the details of the thought experiment. It could also deny the framing, by denying the possibility of positive welfare.
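Below is a rough numerical check of several of these verdicts. The encodings and conventions are mine: non-existence contributes zero harm in the harm-minimization check, weak actualism's "no worse for the people who (ever) exist in the outcome" is read as a comparison of that population's total welfare (with non-existence counted as 0), and Thomas (2023)'s conditions i and ii are used as stated above.

```python
# Worlds as {group: (welfare level, head count)}; a group exists in a world iff it's a key.
worlds = {
    "A":  {"originals": (100, 1_000_000)},
    "A+": {"originals": (101, 1_000_000), "extras": (1, 99_000_000)},
    "Z":  {"originals": (3, 1_000_000), "extras": (3, 99_000_000)},
}
groups = {"originals": 1_000_000, "extras": 99_000_000}

def welfare(world, group):  # per-person welfare of a group, or None if the group doesn't exist there
    return world[group][0] if group in world else None

def total(name):
    return sum(w * n for w, n in worlds[name].values())

def size(name):
    return sum(n for _, n in worlds[name].values())

# Harm minimization (as glossed above): harm = maximum welfare across outcomes minus welfare
# here, summed over the people who exist here; no harm from non-existence (my assumption).
def total_harm(name):
    harm = 0
    for group, count in groups.items():
        w = welfare(worlds[name], group)
        if w is None:
            continue
        best = max(welfare(world, group) for world in worlds.values() if welfare(world, group) is not None)
        harm += (best - w) * count
    return harm

print(min(worlds, key=total_harm))  # 'A': total harms are A: 1M, A+: 198M, Z: 98M

# Weak actualism (as glossed above): X is permissible iff X's own population is no worse
# off in total in X than in any alternative, counting non-existence as welfare 0.
def welfare_of_xs_people_in(x, y):
    return sum((welfare(worlds[y], g) or 0) * n for g, n in groups.items() if g in worlds[x])

def permissible(x):
    return all(welfare_of_xs_people_in(x, x) >= welfare_of_xs_people_in(x, y) for y in worlds)

print([x for x in worlds if permissible(x)])  # ['Z']

# Totalism, averagism and critical-level utilitarianism.
print(max(worlds, key=total))                               # 'Z'
print(max(worlds, key=lambda x: total(x) / size(x)))        # 'A'
print(max(worlds, key=lambda x: total(x) - 2.5 * size(x)))  # 'A', with an illustrative critical level of 2.5

# Thomas (2023)-style dominance: X > Y iff X has both less total pairwise harm and
# higher total welfare than Y; only undominated options are permissible.
def pairwise_harm(x, y):
    """Total harm in x when comparing x and y only; 0 for people who don't exist in x."""
    harm = 0
    for g, n in groups.items():
        wx, wy = welfare(worlds[x], g), welfare(worlds[y], g)
        if wx is None:
            continue
        best = wx if wy is None else max(wx, wy)
        harm += (best - wx) * n
    return harm

def dominates(x, y):
    return pairwise_harm(x, y) < pairwise_harm(y, x) and total(x) > total(y)

print([x for x in worlds if not any(dominates(y, x) for y in worlds)])  # ['Z']
```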
The views above that recommend Z (perhaps other than negative utilitarianism) can also be made to recommend the very repugnant conclusion in a three-option version in which the original necessary people have badly net negative lives in Z: because these views are all additive, the gains to the extra people will outweigh the harms to the original necessary people, given enough extra people.
Replacement with better off beings
In Huemer’s worlds above, the additional people are worse off than the original people in A in both A+ and Z, but we can instead make them better off in the third world. For example, to capture replacement by artificial minds, consider the following possible futures, A, A+ and B:
World A
8 billion current humans with very good welfare and very long lives (welfare level 1,000 each), plus
1 trillion future humans with similarly very good welfare and very long lives (welfare level 1,000 each), but no artificial minds. (Possibly still with advanced artificial intelligence, just not conscious.)
World A+
the same as A, but all 1.008 trillion humans are even better off (welfare level 2,000 each), plus
an additional 10 trillion artificial minds serving humans, all with lives barely worth living, but still positive overall (welfare level 1 each).
World B
The 8 billion humans are killed early (welfare level 10 each),
the 10 trillion artificial minds have far better lives than even the humans would have had in A+ (welfare level 10,000 each), because they’re much more efficient at generating positive welfare and better at avoiding negative welfare when in control, and
the 1 trillion future humans are never born, and instead there are 1 trillion more artificial minds (also welfare level 10,000 each).
(The artificial minds are not mind uploads of the humans. The 8 billion humans are dead and gone forever.)
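For reference, here's a quick tally (mine) of the population sizes and welfare totals implied by these descriptions; note in particular that A+ and B contain the same number of beings, which matters for the first step of the wide-ish version of Dasgupta's view.

```python
# The three worlds of the replacement thought experiment, as (welfare level, head count) pairs.
worlds = {
    "A":  [(1_000, 8e9), (1_000, 1e12)],              # current humans, future humans
    "A+": [(2_000, 8e9), (2_000, 1e12), (1, 10e12)],  # current humans, future humans, artificial minds
    "B":  [(10, 8e9), (10_000, 11e12)],               # current humans (killed early), artificial minds
}

for name, world in worlds.items():
    people = sum(n for _, n in world)
    total = sum(w * n for w, n in world)
    print(f"{name}: population {people:.4g}, total welfare {total:.4g}, average {total / people:.4g}")
# A:  population ~1.008e12, total welfare ~1.008e15, average 1000
# A+: population ~1.101e13, total welfare ~2.026e15, average ~184
# B:  population ~1.101e13, total welfare ~1.100e17, average ~9994
```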
Presentist and narrow necessitarian views still recommend A+. The wide-ish and narrow[4] versions of Dasgupta’s view still recommend A, but the fully wide version[6] recommends B. The other views listed in the previous section recommend B (and negative utilitarianism can recommend B).
Indeed, additive wide views and non-person-affecting views should usually recommend B, or something suitably similar in a similar thought experiment. Just between A and B, replacing 1 trillion future humans with 1 trillion far better off artificial minds is a huge benefit between matched counterparts, not just on aggregate, but also for each of those pairs of counterparts, which should be enough to outweigh the early deaths of the 8 billion humans, unless we prioritize humanity or the worse off.
And this is also not that counterintuitive. Humans today should make some sacrifices to ensure better welfare for future generations, even if these future people are contingent and their identities will be entirely different if we do make these sacrifices. Why should we care that these future people are humans or artificial minds?
On the other hand, maybe B is too unfair to the necessary humans. We are left too badly off and give up too much. A+ is of course even less fair by comparison with B. A prioritarian or egalitarian with a wide person-affecting view could recommend A. Or, we could go with something like the wide-ish version of Dasgupta's view to recommend A.
Footnotes
[1] Non-aggregative views, views that prioritize the better off and views with positive lexical thresholds may reject this.
[2] Assuming transitivity and the independence of irrelevant alternatives.
[3] It's also called dominance addition, e.g. in Arrhenius, 2003 (pdf).
[4] This is a wide-ish version. For a fully narrow version, rule out any outcome that’s worse than another with exactly the same set of people who ever exist. The narrow version would tell you to be indifferent (or take as incomparable) between a) creating someone with an amazing life and b) creating someone else with a life that would be worse than that, no matter how much worse, whether just good, marginal or bad. See the nonidentity problem (Roberts, 2019).
[5] Or just the present people, or just the necessary moral agents, or just the present moral agents, or just the actual decision-makers.
[6] Or best for the minimum number of people who will ever exist. For an additive view, rank outcomes by the sum of the following two terms:
1. The total welfare of the necessary people.
2. The average welfare of the contingent people in that outcome multiplied by the minimum number of contingent people across all outcomes.
This would give a fully wide version. This is inspired by Thomas, 2019.
[7] The second step could instead choose what’s best for:
a. both necessary people and those with bad lives (necessary or contingent) together, or
b. necessary people and those with bad lives, offsetting the contingent bad lives with contingent good lives, i.e. adding the sum of the welfare of contingent people, but replacing it with 0 if positive, and then adding this to the sum of welfare for necessary people, similar to Thomas (2019)’s hard asymmetric views.
The first (a) is quite antinatalist, because contingent bad lives can’t be made up for with contingent good lives. However, I personally find this intuitive. The second (b) allows this offsetting: as long as contingent people have on aggregate net positive lives, contingent bad lives won't count against an outcome at step 2.
[8] We'd also want to avoid strict cycles in general, e.g. A1 < A2 < ... < An < A1, or else we could eliminate all options in the first step.
[9] Or, we could define X ≥ Y, "X is at least as good as Y", to hold if X is at least as good as Y on each axiology; then X > Y would mean X is at least as good as Y on each axiology, and strictly better on at least one.
[10] Z>A+ follows from anonymous versions of total utilitarianism, average utilitarianism, prioritarianism, egalitarianism, rank-discounted utilitarianism, maximin/leximin, variable value theories and critical-level utilitarianism. Of anonymous, monotonic (Pareto-respecting), transitive, complete and IIA views, it's only really (partially) ~anti-egalitarian views (e.g. increasing marginal returns to additional welfare, maximax/leximax, geometrism, views with positive lexical thresholds), which sometimes ~prioritize the better off more than ~proportionately, that reject Z>A+, as far as I know. That's nearly a consensus in favour of Z>A+, and the dissidents usually have, in my view, more plausible counterparts that support Z>A+.
In particular, it seems
1. increasing marginal returns to additional welfare is less plausible than decreasing marginal returns (prioritarianism),
2. maximax/leximax is less plausible than maximin/leximin,
3. geometrism is less plausible than rank-discounted utilitarianism (or maybe similarly plausible), and
4. views with positive lexical thresholds are less plausible than views without lexical thresholds or with only negative lexical thresholds.
[11] a) Replace the welfare level 101 in A+ with 201, so giving A+ total utility 300 (million) and average 3, and b) replace the welfare level 3 in Z with 4, so giving Z total utility 400 (million) and average 4. Then,
1. Z indirectly beats A (Z>A+ by 400-300=100, A+>A by 201-100=101; take the minimum of the two as 100) more than A directly beats Z (A>Z by 100-4=96).
2. Z directly beats A+ (Z>A+ by 400-300=100) more than A+ indirectly beats Z (A+>A by 201-100=101, and A>Z by 100-4=96; take the minimum of the two as 96).
See also this comment by Stijn arguing that the view can recommend the Very Repugnant Conclusion.
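As a check on the beatpath numbers above, here's a small sketch of a margins-based, Schulze-style calculation over the three modified options; it's my reconstruction of the arithmetic in this footnote, not Thomas (2019)'s own formulation.

```python
# Pairwise necessitarian margins (in millions) from the modified numbers above:
# A+ beats A by 101, Z beats A+ by 100, A beats Z by 96.
options = ["A", "A+", "Z"]
margins = {("A+", "A"): 101, ("A", "A+"): -101,
           ("Z", "A+"): 100, ("A+", "Z"): -100,
           ("A", "Z"): 96,   ("Z", "A"): -96}

# Strongest path strengths (widest paths), Floyd-Warshall style: a path is as strong
# as its weakest link; start from the direct margins.
strength = dict(margins)
for k in options:
    for x in options:
        for y in options:
            if len({x, y, k}) == 3:
                strength[(x, y)] = max(strength[(x, y)], min(strength[(x, k)], strength[(k, y)]))

# An option wins if no other option has a strictly stronger path to it than it has back.
winners = [x for x in options
           if all(strength[(x, y)] >= strength[(y, x)] for y in options if y != x)]
print(winners)  # ['Z']: the path Z -> A+ -> A has strength 100, beating the direct A -> Z strength of 96
```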