I haven't seen the very repugnant conclusion mentioned much here, so I thought I'd add it, as I need it as an example in a subsequent post.

Basically, the repugnant conclusion says:

  • Let W1 be a world filled with very happy people leading meaningful lives. Then, according to total utilitarianism, there is a world W2 which is better than W1, where everyone has lives barely worth living - but the population is huge.

Some people come to accept the repugnant conclusion, sometimes reluctantly. More difficult to accept is the very repugnant conclusion:

  • Let W1 be a world filled with very happy people leading meaningful lives. Then, according to total utilitarianism, there is a world W2 which is better than W1, where there is a population of suffering people much larger than the total population of W1, and everyone else has lives barely worth living - but the population is very huge.
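
For concreteness, with made-up numbers of my own: if W1 contains a million people at welfare 100, its total welfare is 100 million. A W2 containing ten million people suffering at welfare -10, plus a trillion people at welfare 0.001, has total welfare -100 million + 1 billion = 900 million, so total utilitarianism ranks W2 above W1 despite the vast amount of suffering it contains.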

This one feels more negative than the standard repugnant conclusion, maybe because it strikes at our egalitarian and prioritarian instincts, or maybe because of the nature of suffering.

Anyway, my motto on these things is generally:

  • When you find morally wrong outcomes that contradict your moral theory, then enrich your moral theory rather than twisting your moral judgements.
Comments:

I think the conclusion is 'repugnant' because people don't want to admit the extent to which they are anti-natalists.

A big proportion of current lives are such that, if I could instantiate an infinity of them, I wouldn't think it a good thing. And the 'repugnant conclusion' is just people realizing they actually feel this way when asked whether they would like to shut up and multiply a negative number by infinity.

Btw, when it comes to any practical implications, both of these repugnant conclusions depend on a likely incorrect aggregation of utilities. If we aggregate utilities with logarithms/exponentiation in the right places, and assume that resources are limited, the answer to the question "what is the best population given the limited resources" is not repugnant.
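
A minimal sketch of how the aggregation choice matters, under assumptions of my own (a fixed resource pool R split equally among n people, with per-person utility either linear or logarithmic in the share); none of this is specified in the comment above:

```python
import math

R = 1000.0  # total resources to divide (arbitrary units)

def total_welfare(n, utility_of_share):
    """Total welfare of n people who each receive an equal share R / n."""
    return n * utility_of_share(R / n)

def linear(share):        # utility proportional to resources
    return share

def logarithmic(share):   # diminishing returns in resources
    return math.log(share)

for n in (10, 100, 368, 1000, 10_000):
    print(n, round(total_welfare(n, linear), 1), round(total_welfare(n, logarithmic), 1))
```

With linear utility the total is flat in n, so nothing resists piling on ever more people with ever smaller shares. With logarithmic utility, n·log(R/n) peaks near n = R/e (about 368 here), where each person still gets a comfortably positive utility of 1 rather than a life barely worth living.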

If your theory leads you to an obviously stupid conclusion, you need a better theory.

Total utilitarianism is boringly wrong for this reason, yes.

What you need is non-stupid utilitarianism.

First, utility is not a scalar number, even for one person. Utility and disutility are not the same axis: if I hug a plushie, that is utility without any disutility; if I kick a bedpost, that is disutility without utility; and if I do both at the same time, neither compensates for the other. They are not the same dimension with the sign reversed. This is before going into the details where, for example, preference utilitarianism is a model in which each preference is its own axis, and so is each dispreference. Those axes are sometimes orthogonal and sometimes trade off against each other, a little or a lot.

The numbers are fuzzy and imprecise, and the weighting of the needs/preferences/goals/values also changes over time: for example, it is impossible to maximize for sleep, because if you sleep all the time you starve and die, and if you maximize for food you die of eating too much or of never sleeping, or whatever. We are not maximizers, we are satisficers, and trying to maximize any one need/goal/value by trading it off against all the others leads to a very stupid death. We are more like feedback-based control systems that need to keep a lot of parameters within good boundaries.
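
A minimal sketch of that picture, purely as my own illustration (the need names, bands, and numbers are invented, not the commenter's formalism): welfare as a vector of needs with acceptable bands, checked by a satisficer rather than collapsed into one scalar to maximize.

```python
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    level: float   # current satisfaction of this need
    low: float     # below this, the need is in trouble
    high: float    # above this, over-satisfying starts to hurt the other needs

def satisficed(needs):
    """True if every need sits inside its acceptable band; no single axis
    can be traded down to zero in order to pump up another."""
    return all(n.low <= n.level <= n.high for n in needs)

person = [
    Need("sleep", level=7.0, low=6.0, high=9.0),
    Need("food", level=2.5, low=2.0, high=4.0),
    Need("plushie_hugs", level=1.0, low=0.0, high=float("inf")),
]

print(satisficed(person))  # True: all parameters are within bounds
```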

Second, interpersonal comparisons are somewhere between hazardous and impossible. Going back to the example of preference utilitarianism, people have different levels of enjoyment of the same things (in addition to those degrees also changing over time intrapersonally).

Third, there are limits to the disutility a person will endure before it rounds off to infinite disutility. Under sufficient torture, people will prefer to die rather than bear it for any longer; at this point it can be called subjectively infinite disutility (simplifying, so as not to get bogged down in discussing discount rates and limited lifespans).

Third and a halfth, it is impossible to get so much utility that it can be rounded off to positive infinity, short of maybe FiO or some other form of hedonium/orgasmium/eudaimonium/whatever of the sort. It is not infinite, but it is "whatever is the limit for a sapient mind" (which is something like "all the thermostat variables satisfied, including those that require making a modicum of effort to satisfy the others", because minds are a tool for doing that and seem to require doing it to some intra- and interpersonally varying extent).

Fourth, and the most important point for refuting total utilitarianism: you need to account for the entire distribution. Even assuming, very wrongly as explained above, that you can actually measure the utility that one person gets and compare it to the utility that another person gets, you can still have the bottom of your distribution of utility low enough that the bottom whatever-% of the population would prefer to die immediately, which is (simplified) infinite disutility and cannot be traded for the limit of positive utility. (Torture and dust specks: no finite number of dust specks can trade off against the infinite disutility of a degree of torture sufficient to make even one single victim prefer to die.) (This still works even if the victim dies in a day, because you need to measure over all of history from the beginning of when your moral theory takes effect.) (For the smartasses in the back row: no, that doesn't mean that there having been that level of torture in the past absolves you from not doing it in the future under the pretext that the disutility over all of history already sums to infinity. Yes, it already does, and don't you make it worse.)

But alright. Assuming you can measure utility Correctly, let's say you have the floor of the distribution at least epsilon above the minimum viable level. What then? Job done? No. You also want to maximize the entire area under the curve, raising it as high as possible, which is the point that total utilitarianism actually got right. And, in a condition of scarcity, that may require not having too many people - or at least having the number of people rise more slowly than the distributable utility.
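
A minimal sketch of that decision rule as I read it (the floor value and the utilities are placeholders of mine): first screen out any outcome where someone sits at or below the floor, and only then rank the acceptable outcomes by total utility.

```python
FLOOR = 0.0  # "minimum viable" level; must be strictly exceeded by everyone

def evaluate(utilities, floor=FLOOR):
    """Return None for unacceptable outcomes, otherwise the total to maximize."""
    if min(utilities) <= floor:
        return None          # someone is at or below the floor: not tradeable away
    return sum(utilities)    # among acceptable outcomes, raise the whole area

print(evaluate([5.0, 6.0, 7.0]))       # 18.0: acceptable, compared by total
print(evaluate([100.0, 100.0, -1.0]))  # None: no total can buy off the bottom
```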


This is pretty easily fixed with declining marginal moral weight as quantity increases. And this matches my intuitions pretty well. Basically, accept and make use of scope insensitivity (though probably at higher numbers than evolved into you).

An additional happy person carries less weight going from 1,000,000,000 happy people to 1,000,000,001 than it does when going from 100 to 101. Same for suffering, whether you think suffering is comparable with happiness or think it's another dimension: one more sufferer is worse when it's rare and there are few, and less bad (but not actively good) when there are already so many that one more changes nothing materially.
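
A minimal sketch of the marginal-weight idea (the 1 / (1 + decay·n) form and the decay constant are arbitrary choices of mine, just to show the shape):

```python
def marginal_weight(n, decay=1e-8):
    """Moral weight of one additional person when n similar people already exist."""
    return 1.0 / (1.0 + decay * n)

print(marginal_weight(100))            # ~1.0: the 101st person counts almost fully
print(marginal_weight(1_000_000_000))  # ~0.09: the billionth-and-first counts far less
```

With this particular choice the cumulative total still grows without bound, only more slowly, which is essentially the objection raised in the replies below; a weight that shrinks geometrically instead would give the total an asymptote.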

When you find morally wrong outcomes that contradict your moral theory, then enrich your moral theory rather than twisting your moral judgements.

I don't think this generalizes. If your moral intuition (snap judgements) contradicts your moral theory (considered judgements), you need to expend effort to figure out which one is more likely to apply.

That doesn't fix it, it just means you need bigger numbers before you run into the problem.

Maybe if you have an asymptote, but I fully expect that you still run into problems then.

Geometric discounting could fix this, as the sum of the series converges.

I once had a (prioritarian) idea where you order people's utility from lowest to highest, and apply geometric discounting starting at the lowest. It's not particularly elegant or theoretically grounded, but it does avoid the repugnant conclusion (indeed I think geometric discounting, applied in any order, removes the RC).
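
A minimal sketch of that prioritarian idea (the value of beta and the example numbers are mine, for illustration only):

```python
def rank_discounted_value(utilities, beta=0.9):
    """Sort utilities from lowest to highest and weight the i-th lowest by beta**i."""
    return sum((beta ** i) * u for i, u in enumerate(sorted(utilities)))

# The weights sum to at most 1 / (1 - beta), so adding ever more lives at a tiny
# positive utility epsilon can contribute at most epsilon / (1 - beta) in total,
# which is why this blocks the Repugnant Conclusion.
happy_world = [100.0] * 1_000
huge_barely_positive_world = [0.001] * 1_000_000
print(rank_discounted_value(happy_world) >
      rank_discounted_value(huge_barely_positive_world))  # True
```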

Erik Carlson called this the Moderate Trade-off Theory. See also Sider's Geometrism and Carlson's discussion of it here.

One concern I have with this approach is that similar interests do not receive similar weight, i.e. if the utility of one individual approaches another's, then the weight we give to their interests should also approach each other. I would be pretty happy if we could replace the geometric discounting with a more continuous discounting without introducing any other significant problems. The weights could each depend on all of the utilities in a continuous way.

Something like $\sum_i f(u_i)\,u_i$ or $\sum_i f(u_i)\,u_i / \sum_i f(u_i)$ or in general $\sum_i w_i(u_1, \dots, u_n)\,u_i$ (for decreasing, continuous $f$) could work, I think.

$\sum_i f(u_i)\,u_i$ won't converge as more people (with good lives or not) are added, so it doesn't avoid the Repugnant Conclusion or Very Repugnant Conclusion, and it will allow dust specks to outweigh torture.

Normalizing by the sum of weights will give less weight to the worst off as more people are added. If the weighted average is already negative, then adding people with negative but better than average lives will improve the average. And it will still allow dust specks to outweigh torture (the population has a fixed size in the two outcomes, so normalization makes no difference).

In fact, anything of the form $\sum_i f(u_i)$ for increasing $f$ will allow dust specks to outweigh torture for a large enough population, and if $f(0) = 0$, will also lead to the Repugnant Conclusion and Very Repugnant Conclusion (and if $f(0) < 0$, it will lead to the Sadistic Conclusion, and if $f(0) > 0$, then it's good to add lives not worth living, all else equal). If we only allow $f$ to depend on the population size, $n$, as by multiplying by some factor which depends only on $n$, then (regardless of the value of $f(0)$) it will still choose torture over dust specks, with enough dust specks, because that trade-off is for a fixed population size, anyway. EDIT: If $f$ depends on $n$ in some more complicated way, I'm not sure that it would necessarily lead to torture over dust specks.

I had in mind something like weighting $u_i$ by $f(u_i - u_{\min})$, where $u_{\min}$ is the minimum utility and $f$ is decreasing with $f(0) = 1$ (so it gives weight 1 to the worst off individual), but it still leads to the Repugnant Conclusion and at some point choosing torture over dust specks.

What I might like is to weight the $i$-th lowest utility by something like $\beta^i$ for $0 < \beta < 1$, where the utilities are labelled in increasing (nondecreasing) order, but if $u_i$ and $u_{i+1}$ are close (and far from all the others, either in an absolute sense or in a relative sense), they should each receive weight close to $(\beta^i + \beta^{i+1})/2$. Similarly, if there are clustered utilities, they should each receive weight close to the average of the weights we'd give them in the original Moderate Trade-off Theory.
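
A rough sketch of one way to get that clustering behaviour, purely my own construction (the gap threshold and beta are arbitrary, and a simple threshold is not yet the fully continuous discounting asked for above): compute the rank-based geometric weights, then let near-equal utilities share the average of their ranks' weights.

```python
def smoothed_weights(utilities, beta=0.5, gap=0.1):
    """Rank-based geometric weights, with clustered (near-equal) utilities
    sharing the average of their ranks' weights."""
    order = sorted(range(len(utilities)), key=lambda i: utilities[i])
    raw = [beta ** r for r in range(len(order))]   # weights by rank, worst-off first
    weights = [0.0] * len(utilities)
    start = 0
    for r in range(1, len(order) + 1):
        cluster_ends = (r == len(order) or
                        utilities[order[r]] - utilities[order[r - 1]] > gap)
        if cluster_ends:
            avg = sum(raw[start:r]) / (r - start)  # average weight within the cluster
            for k in range(start, r):
                weights[order[k]] = avg
            start = r
    return weights

us = [0.0, 0.05, 3.0]        # two near-equal worst-off people, one much better off
print(smoothed_weights(us))  # [0.75, 0.75, 0.25]: the near-equal pair share (1 + 0.5) / 2
```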

The utility of the universe should not depend on the order that we assign to the population. We could say that there is a space of lives one could live, and each person covers some portion of that space, and identical people are either completely redundant or only reinforce coverage of their region, and our aim should be to cover some swath of this space.

The world W2 could be our contemporary world, with 7.5 billion people and a lot of suffering, and W1 is the world of just one tribe of happy "primitive" people, like the Sentinel Island people. I prefer W2, as it is much more interesting and diverse.

What would you think of W3, much bigger, sadder, and blander?

Anyway the point of the repugnant conclusion is that any world W1, no matter how ideal, has a corresponding W2.

This is a central argument of Phil Torres' paper against space colonisation: there will be space wars!

Probably, we should include in the calculation not only the averaged wellbeing of individuals, which is a "goodharted" measure of social wellbeing, but also the properties of the whole world, to which any individual may have access.

Why is average wellbeing a goodharted measure?

Offing those with low wellbeing increases average wellbeing.

That in itself can be solved (if you break the symmetry between killing / not allowing to live), but it still remains that a tiny super-happy population (of one person in the limit) is what's aimed at.

It ignores many important aspects of human wellbeing:

1) the preference to stay alive, even if life is unpleasant, plus some other preferences

2) the time relation between observer-moments: e.g. a short intense pain is not very important

3) non-linear preferences about the intensity of pleasure and pain: people will work a long time for a short pleasure

4) social values (family), intellectual values (knowledge, diversity of experiences)

Declining marginal moral weight answers this with a built-in preference for diversity: more reference classes are good, and more quantity isn't worth much trade-off in intensity.

This is an example of the more-maximum fallacy. Utilitarianism actually calls for the maximum happiness, not merely more happiness. The argument presents a false dichotomy between two non-utilitarian worlds; the actual utilitarian world is the unwritten third option, one in which the greater number of people is happy. Notice that this third, utilitarian world is also more palatable to followers of other moral theories.