I want to thank Irgy for this idea.

As people generally know, total utilitarianism leads to the repugnant conclusion - the idea that no matter how great a universe X would be, filled with trillions of supremely happy people leading supremely meaningful lives filled with adventure and joy, there is another universe Y which is better - and that is filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure. But since the second universe is much bigger than the first, it comes out on top. Not only in the sense that, if we had Y, it would be immoral to move to X (which is perfectly respectable, as doing so might involve killing a lot of people, or at least allowing a lot of people to die). But in the sense that, if we were planning our future world now, we would desperately want to bring Y into existence rather than X - and should be willing to incur great costs or run great risks to do so. And if we were in world X, we would have to move to Y at all costs, making all current people much more miserable as we did so.

The repugnant conclusion is the main reason I reject total utilitarianism (the other one being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also gave birth to someone else of equal happiness). But the repugnant conclusion can emerge from many other population ethics as well. If adding more people of slightly less happiness than the average is always a bonus ("mere addition"), and if equalising happiness is never a penalty, then you get the repugnant conclusion (caveat: there are some subtleties to do with infinite series).

But repugnant conclusions reached in that way may not be so repugnant, in practice. Let S be a system of population ethics that accepts the repugnant conclusion, due to the argument above. S may indeed conclude that the big world Y is better than the super-human world X. But S need not conclude that Y is the best world we can build, given any fixed and finite amount of resources. Total utilitarianism is indifferent to having a world with half the population and twice the happiness. But S need not be indifferent to that - it may much prefer the twice-happiness world. Instead of the world Y, it may prefer to reallocate resources to instead achieve the world X', which has the same average happiness as X but is slightly larger.

Of course, since it accepts the repugnant conclusion, there will be a barely-worth-living world Y' which it prefers to X'. But then it might prefer reallocating the resources of Y' to the happy world X'', and so on.

This is not just a point about the efficiency of resource allocation: even if it's four times as hard to make people twice as happy, S can still prefer to do so. You can accept the repugnant conclusion and still want to reallocate any fixed amount of resources towards low population and extreme happiness.

It's always best to have some examples, so here is one: an S whose value is the product of average agent happiness and the logarithm of population size.
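Here is a quick numerical sketch of how that S behaves. This is a toy model I'm adding purely for illustration: a world is just a uniform happiness level and a population size, "resources" are taken to be population times happiness, and the function name value_S and all the numbers are made up.

```python
import math

def value_S(avg_happiness, population):
    # Toy version of the example S: average happiness * log(population size).
    return avg_happiness * math.log(population)

# Total utilitarianism is indifferent between these two worlds (same total
# happiness in the toy model); S prefers the smaller, happier one:
big   = value_S(1.0, 1_000_000)   # ~13.8
small = value_S(2.0, 500_000)     # ~26.2
print(small > big)                # True

# Yet S still accepts the repugnant conclusion in the unconstrained sense:
# for any fixed happy world, a large enough barely-happy world scores higher.
utopia = 100.0 * math.log(10_000)          # ~921
barely = 0.01 * (100_000 * math.log(10))   # = 0.01 * log(10**100000), ~2303
print(barely > utopia)                     # True
```

In this toy model, the logarithm grows too slowly to reward sheer numbers under a fixed resource budget, so S reallocates towards fewer, happier people; only an unboundedly larger population can drag it into the repugnant trade.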

pallas:

I generally don't see why the conclusion is considered repugnant, not only as a gut reaction but also upon reflection, since we simply deal with another case of "dust speck vs torture", an example that illustrates how our limbic system is not adapted in a way that would let it scale up emotions linearly and avoid intransitive dispositions.

We can imagine a world in which evolutionary mechanisms brought forth human brains that, by some limbic limitation, simply cannot imagine the integer "17", whereas all the other numbers from 1 to 20 can be imagined just as we would expect. In such a world, a repugnant conclusion against total utilitarianism could sound somewhat like: "Following total utilitarianism, you would have to prefer a world A where 5 people are being tortured to a world B where 'only' 17 people are being tortured. This seems absurd." In both cases we are dealing with intransitive dispositions. In the first case, people tend to adjust downward the disutility of a single dust speck, so that when we incrementally examine possible trades between dust specks and torture, people find that A<B<C<D<E and yet E<A. The same goes for the second case: people think that 5 people being tortured is less bad than 10 people, and 10 people less bad than 15 people, but 15 is worse than 17, as the last outcome cannot be imagined as vividly as the others.

I don't want to make the case that some moral theory seems to be "true". I don't know what that could even mean. Though I think one can descriptively say that, structurally, refusing total utilitarianism because of the repugnant conclusion is equal to refusing total utilitarianism in another world where we are bad at imagining "17" and where we find it absurd that 17 people being tortured could be considered as worse than 5 people being tortured.

since we simply deal with another case of "dust speck vs torture"

There is no contradiction in rejecting total utilitarianism and choosing torture. Choosing torture and becoming a total utilitarian both involve bullet-biting in ways that feel similar. But choosing torture is the natural consequence of almost all preferences, once you make them consistent and accept that the choice must be made. Becoming a total utilitarian is not (for instance, average utilitarians would also choose torture, but would obviously feel no compulsion to change their population ethics).

Though I think one can descriptively say that, structurally, refusing total utilitarianism because of the repugnant conclusion is equal to refusing total utilitarianism in another world where we are bad at imagining "17" and where we find it absurd that 17 people being tortured could be considered as worse than 5 people being tortured.

You can also descriptively say that, structurally, refusing total utilitarianism because of the repugnant conclusion is equal to refusing deontology because we've realised that two deontological absolutes can contradict each other. Or, more simply, refusing X because of A is structurally the same as refusing X' because of A'.

Just because one can reject total utilitarianism (or anything) for erroneous reasons, does not mean that every reason for rejecting total utilitarianism must be an error.

There is no contradiction in rejecting total utilitarianism and choosing torture.

For one thing, I compared choosing torture with the repugnant conclusion, not with total utilitarianism. For another, I wasn't suggesting there was any contradiction. However, agents with intransitive dispositions are exploitable.

You can also descriptively say that, structurally, refusing total utilitarianism because of the repugnant conclusion is equal to refusing deontology because we've realised that two deontological absolutes can contradict each other. Or, more simply, refusing X because of A is structurally the same as refusing X' because of A'.

My fault, I should have been more precise. I wanted to say that the two repugnant conclusions (one based on dust specks, the other based on "17") are similar because quite a few people would, upon reflection, reject any kind of scope neglect that renders their preferences intransitive.

Just because one can reject total utilitarianism (or anything) for erroneous reasons, does not mean that every reason for rejecting total utilitarianism must be an error.

I agree. Again, I didn't claim the contrary. I didn't argue against the rejection of total utilitarianism. However, I argued against the repugnant conclusion, since it simply reflects the fact that evolution brought about limbic systems that make human brains choose in intransitive ways. If we considered this a bias in the dust speck example, the same would apply to the repugnant conclusion.

There is no contradiction in rejecting total utilitarianism and choosing torture.

However, agents with intransitive dispositions are exploitable.

Transitive agents (eg average utilitarians) can reject the repugnant conclusion and choose torture. These things are not the same - many consistent, unexploitable agents reach different conclusions on them. Rejection of the repugnant conclusion does not come from scope neglect.

I have tested the theory that scope insensitivity is what makes the RC repugnant, and I have found it wanting. This is because the basic moral principles that produce the RC still produce repugnant conclusions in situations where the population is very small (only two people in the case of killing one person and replacing them with someone else). My reasoning in full is here.

There's a simple way to kill the RC once and for all: Reject the Mere Addition Principle. That's what I did, and what I think most people do intuitively.

To elaborate, there is an argument, which is basically sound, that if you accept the following two principles, you must accept the RC:

  1. Mere Addition: Adding a new life of positive welfare without impacting the welfare of other people always makes the world better.

  2. Nonantiegalitarianism: Redistributing welfare so that those with the least amount of welfare get more than they had before is a good thing, provided their gains outweigh the losses of those it was redistributed from.

These two principles get you to the RC by enabling the Mere Addition Paradox. You add some people with very low welfare to a high welfare world. You redistribute welfare to the low welfare people. They get 1.01 units of welfare for every 1 unit taken from the high welfare people. Repeat until you get the RC.
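A rough numerical sketch of that iteration, following the 1.01-for-1 transfer described above; everything else here - the starting welfare levels, iteration counts, and function names - is made up for illustration:

```python
def mere_addition(world, n_new, low_welfare=0.1):
    # Step 1 (Mere Addition): add n_new people with low but positive welfare.
    return world + [low_welfare] * n_new

def redistribute(world, transfer=1.0, gain_ratio=1.01):
    # Step 2 (Nonantiegalitarianism): take `transfer` welfare from the
    # best-off person and give transfer * gain_ratio to the worst-off.
    world = sorted(world)
    world[-1] -= transfer
    world[0] += transfer * gain_ratio
    return world

world = [100.0] * 10              # start: a small, very happy world
for _ in range(50):               # repeat both steps many times
    world = mere_addition(world, n_new=10)
    for _ in range(200):
        world = redistribute(world)

print(len(world), round(sum(world), 1), round(sum(world) / len(world), 2))
# Roughly: 510 people, total welfare ~1150, average welfare ~2.25.
# No step ever lowers total welfare, yet the average slides from 100
# towards "barely worth living" - which is the repugnant conclusion.
```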

So we need to reject one of these principles to avoid the RC. Which one? Nonantiegalitarianism seems unpleasant to reject. Pretty much everyone believes in charity to some extent: that it's good to give up some of your welfare if it helps someone else who needs it more. So we should probably reject Mere Addition. We need to acknowledge that sometimes adding people to the world makes it worse.

Does rejecting Mere Addition generate any counterintuitive results? I would argue that at first it seems to, but actually it doesn't.

A philosopher named Arrhenius argued that rejecting Mere Addition leads to the Sadistic Conclusion. Basically, he argues that if adding lives of positive welfare can be bad, then it might be better to add a life of negative welfare than a huge number of lives with positive welfare.

This seems bad at first, but then I got to thinking. Arrhenius' Sadistic Conclusion is actually a special case of a larger, more general principle: namely, that if it is bad to add lives of positive welfare, it might be better to do some other bad thing to prevent lives from being added. Adding a life of negative welfare is an especially nasty example, but there are others: people could harm themselves to avoid having children, or spend money on ways to prevent having children instead of on fun stuff.

Do people in fact do that? Do they harm themselves to avoid having children? All the time! People buy condoms instead of candy, and have surgeries performed to sterilize themselves! And they don't do this for purely selfish reasons; most people seem to think that they have a moral duty not to have children unless they can provide them with a certain level of care.

The reason the Sadistic Conclusion seems counterintuitive at first is that Arrhenius used an especially nasty, vivid example. It's the equivalent of someone using the Transplant case to argue against the conclusions of the trolley problem. Let's put it another way: imagine that you could somehow compensate all the billions of people on Earth for the pain and suffering they undergo, and the opportunities for happiness they forgo, to make sure they don't have children. All you'd have to do is create one person with a total lifetime welfare of -0.01. That doesn't seem any less reasonable to me than tolerating a certain amount of car crashes so everyone benefits from transportation.

So we should accept the Sadistic Conclusion and reject Mere Addition.

What should we replace Mere Addition with? I think some general principle that a small population consisting of people with high welfare per capita is better than a large population with a low level of welfare per capita, even if the total amount of welfare in the larger population is greater. That's not very specific, I know, but it's on the right track. And it rejects both the Repugnant Conclusion and the Kill and Replace Conclusion.

Really like your phrasing, here! I may reuse similar formulations in the future, if that's ok.

Sure, it's fine. Glad I could help!

gjm:

This is a bit of an aside, but:

the other [reason Stuart rejects total utilitarianism] being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also gave birth to someone else of equal happiness

I think that's a misleading way of putting it. Conditional on having given birth to someone else of equal happiness, total utilitarianism says not to kill the other person. What it does say is that if your choice is between killing-and-giving-birth and doing nothing, those are of equal merit. (You also need to suppose that the killing-and-giving-birth option leaves others' net utility the same -- e.g., because the newly born equally happy person will contribute as much to others' happiness as the person getting killed.) I agree that that's counterintuitive, but it doesn't seem obviously insane. What would be insane would be to say that because you gave birth, you're exempt from criticism for the killing -- but that's not true at all; if you can give birth and then not kill, that's much better than either of the other options according to total utilitarianism.

What would be insane would be to say that because you gave birth, you're exempt from criticism for the killing

Well, if you gave birth to someone happier than the person you killed, then you're not as good as the non-killing-birthers, but you're certainly better than the non-killing-non-birthers, and should certainly be complimented for being better than them... Or alternately, the non-killing-non-birthers should be told to look up to you. Or serial killers reluctant to reproduce should be offered a free kill in exchange for a few babies.

gjm:

I think utilitarians should generally stay out of the business of making moral assessments of people as opposed to actions. The action of giving birth to a happier person is (for total utilitarians) a good action. The action of killing the first person is (for total utilitarians) a bad action. If these two actions are (as they would be in just about any actually-credible scenario) totally unrelated, then what a total utilitarian might do is to praise one of the actions and condemn the other; or tell non-killing-non-birthers to emulate one of those actions but not the other.

The last suggestion is an interesting one, in that it does actually describe a nasty-sounding policy that total utilitarians really might endorse. But if we're going to appeal to intuition here we'd better make sure that we're not painting an unrealistic picture (which is the sort of thing that enables the Chinese Room argument to fool some people).

For the nasty-sounding policy actually to be approved by a total utilitarian in a given case, we need to find someone who very much wants to kill people but can successfully be prevented from doing so; who could, if s/he so chose, produce children who would bring something like as much net happiness to the world as the killings remove; who currently chooses not to produce such children but would be willing to do so in exchange for being allowed to kill; and there would need not to be other people capable of producing such children at a substantially lower cost to society. Just about every part of this is (I think) very implausible.

It may be that there are weird possible worlds in which those things happen, in which case indeed a total utilitarian might endorse the policy. But "it is possible to imagine really weird possible worlds in which this ethical system leads to conclusions that we, living in the quite different actual world, find strange" is not a very strong criticism of an ethical system. I think in fact such criticisms can be applied to just about any ethical system.

I think utilitarians should generally stay out of the business of making moral assessments of people as opposed to actions.

I think the best way to do this is to "naturalize" all the events involved. Instead of having someone kill or create someone else, imagine the events happened purely because of natural forces.

As it happens, in the case of killing and replacing a person, my intuitions remain the same. If someone is struck by lightning, and a new person pops out of a rock to replace them, my sense is that, on net, a bad thing has happened, even if the new person has a somewhat better life than the first person. It would have been better if the first person hadn't been struck by lightning, even if the only way to stop that from happening would also stop the rock from creating the new person.

gjm:

Unless the new person's life is a lot better, I think most total utilitarians would and should agree with you. Much of the utility associated with a person's life happens in other people's lives. If you get struck by lightning, others might lose a spouse, a parent, a child, a friend, a colleague, a teacher, etc. Some things that have been started might never be finished. For this + replacement to be a good thing just on account of your replacement's better life, the replacement's life would need to be sufficiently better than yours to outweigh all those things. I would in general expect that to be hard.

Obviously the further we get away from familiar experiences the less reliable our intuitions are. But I think my intuition remains the same, even if the person in question is a hermit in some wilderness somewhere.

How about a more reasonable scenario, then: for fixed resources, total utilitarians (and average ones, in fact) would be in favour of killing the least happy members of society to let them be replaced with happier ones, so far as this is possible (and if they designed a government, they would do their utmost to ensure this is possible). In fact, they'd want to replace them with happier people who don't mind being killed or having their friends killed, as that makes it easier to iterate the process.

Also, total utilitarians (but not average ones) would be in favour of killing the least efficient members of society (in terms of transforming resources into happiness) to let them be replaced with more efficient ones.

Now, practical considerations may preclude being able to do this. But a genuine total utilitarian must be filled with a burning wish, if only it were possible, to kill off so many people and replace them in this ideal way. If only there were a way...

gjm:

(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you're interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)

So, for genuinely fixed resources, a total utilitarian would consider it a win to kill someone and replace them with someone else if that were a net utility gain. For this it doesn't suffice for the someone-else to be happier (even assuming for the moment that utility = happiness, which needn't be quite right); you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.

In particular, e.g., if the result of such a policy were that everyone was living in constant fear that they would be killed and replaced with someone happier, or forced to pretend to be much happier than they really were, then a consistent total utilitarian would likely oppose the policy.

Note also that although you say "killing X, to let them be replaced with Y", all a total utilitarian would actually be required to approve of is killing X and actually replacing them with Y. The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they've gradually been getting better so that they produce better and happier and nicer and more productive people.

must be filled with a burning wish

Er, no.

Also: it's only "practical considerations" that would produce the kind of situation you describe, one of fixed total resources.

(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you're interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)

I admit I have been using deliberately emotive descriptions, as I believe that total utilitarians have gradually disconnected themselves from the true consequences of their beliefs - the equivalent of those who argue that "maybe the world isn't worth saving" while never dreaming of letting people they know or even random strangers just die in front of them.

you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.

Of course! But a true total utilitarian would therefore want to mould society (if they could) so that killing-and-replacing have less negative impact.

The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they've gradually been getting better so that they produce better and happier and nicer and more productive people.

In a future where uploads and copying may be possible, this may not be as far-fetched as it seems (and total resources are likely limited). That's the only reason I care about this - there could be situations created in the medium future where the problematic aspects of total utilitarianism come to the fore. I'm not sure we can over-rely on practical considerations to keep these conclusions at bay.

I tend to think that exact duplication doesn't double utility. More of exactly the same isn't really making things better. So I don't think millions of exactly identical villages of a few thousand, isolated from one another (else their relationships would undermine the perfect identities between them; they'd be at different places in the pattern of relationships), are more valuable than just one instance of the same village, and if one village is slightly happier than any of the millions of identical villages, the one village is preferable. But between a more normal world of billions of unique, diverse, barely worth living lives, and one village of thousands of almost but not quite a million times happier lives, I guess I think the billions may be the better world if that's how the total utilitarian math works out.

Further, though, I think that while it doesn't take very much difference for me to think that an additional worthwhile life is an improvement, once you get very, very close to exact duplication, it again stops being as much of an improvement to add people. When you're talking about, say, a googol people instead of mere billions, it seems likely that some of them are going to be close enough to being exact duplicates that the decreased value of mere duplication may start affecting the outcome.

I tend to think that exact duplication doesn't double utility.

I agree.

I guess I think the billions may be the better world if that's how the total utilitarian math works out.

You don't have to resign yourself to merely following the math. Total utilitarianism is built on some intuitive ideas. If you don't like the billions of barely worth living lives, that's also an intuition. The repugnant conclusion shows some tension between these intuitions, that's all - you have to decide how to resolve the tension (and if you think that exact duplication doesn't double utility, you've already violated total utilitarian intuitions). "The math" doesn't dictate how you'll resolve this - only your choices do.

What I meant is that if the utilitarian math favors the billions, that seems intuitively reasonable enough that I have no difficulty accepting it.

That's fine - you've made your population ethics compatible with your intuitions, which is perfectly ok.

I think the repugnant conclusion is exhaustively defeated by separating out what resources are used to build something with how those resources are used.

Given a fixed set of resources R, there are a variety of things W that you can do, and you can evaluate the effectiveness of doing so with an ethics system S. There's some W that is "best" according to S, and anything better cannot be accomplished with R.

If you add more resources to R, then you can do something like adding a single person, and you wind up better off. But W + 1 person isn't necessarily the best use of (R + stuff needed to add one person), as evaluated by S.

Of course, since it accepts the repugnant conclusion, there will be a barely-worth-living world Y' which it prefers to X'. But then it might prefer reallocating the resources of Y' to the happy world X'', and so on.

I'm pretty sure we're saying the same thing here, now. But I have something to add - preferring additional resources screens off the desire for Y above X. Once you separate world-building into magnitude (resources) and direction (distribution of resources), wanting Y above X means that you prefer larger magnitudes, rather than a different direction.

It seems like there's a fairly simple solution to the problem. Instead of thinking of utilitarianism as the sum of the utility value of all sentient beings, why not think of it in terms of increasing the average amount of utility value of all sentient beings, with the caveat that it is also unethical to end the life of any currently existing sentient being.

There's no reason that thinking of it as a sum is inherently more rational than thinking of it as an average. Of course, like I said, you have to add the rule that you can't end the currently existing life of intelligent beings just to increase the average happiness, or else you get even more repugnant conclusions. But with that rule, it seems like you get overall better conclusions than if you think of utility as a sum.

For example, I don't see why we have any specific ethical mandate to bring new intelligent life into the world, and in fact I would think that it would only be ethically justified if that new intelligent being would have a happiness level at least equal to the average for the world as a whole. (I.e., you shouldn't have kids unless you think you can raise them at least as well as the average human being would.)

with the caveat that it is also unethical to end the life of any currently existing sentient being.

Is this a deontological rule, or a consequentialist rule? In either case, how easy is it to pin down the meaning of 'end the life'?

It would need to be consequentialist, of course.

In either case, how easy is it to pin down the meaning of 'end the life'?

If we've defined welfare/happiness/personal utility, we could define 'end of life' as "no longer generating any welfare, positive or negative, in any possible future". Or something to that effect, which should be good enough for our purposes.

But then, is not any method which does not prolong a life equivalent to ending it? This then makes basically any plan unethical. If unethical is just a utility cost, like you imply elsewhere, then there's still the possibility that it's ethical to kill someone to make others happier (or to replace them with multiple people), and it's not clear where that extra utility enters the utility function. If it's the prohibition of plans entirely, then the least unacceptable plan is the one that sacrifices everything possible to extend lives as long as possible - which seems like a repugnant conclusion of its own.

But then, is not any method which does not prolong a life equivalent to ending it?

Yes - but the distinction between doing something through action or inaction seems a very feeble one in the first place.

If unethical is just a utility cost, like you imply elsewhere, then there's still the possibility that it's ethical to kill someone to make others happier

Generally, you don't want to make any restriction total/deontological ("It's never good to do this"), or else it dominates everything else in your morality. You'd want to be able to kill someone for a large enough gain - just not to be able to do so continually for slight increases in total (or average) happiness. Killing people who don't want to die should carry a cost.

why not think of it in terms of increasing the average amount of utility value of all sentient beings, with the caveat that it is also unethical to end the life of any currently existing sentient being.

If a consequentialist ethic has an obvious hole in it, that usually points to a more general divergence between the ethic and the implicit values that it's trying to approximate. Applying a deontological patch over the first examples you see won't fix the underlying flaw; it'll just force people to exploit it in stranger and more convoluted ways.

For example, if we defined utility as subjective pleasure, we might be tempted to introduce an exception for, say, opiate drugs. But maximizing utility under that constraint just implies wireheading in more subtle ways. You can't actually fix the problem without addressing other aspects of human values.

deontological patch

I was never intending a deontological patch, merely a utility cost to ending a life.

That's average utilitarianism, which has its own problems in the literature.

with the caveat that it is also unethical to end the life of any currently existing sentient being.

This is a good general caveat to have.

Average utilitarianism implies the Sadistic Conclusion: if average welfare is very negative, then this rule calls for creating beings with lives of torture not worth living as long as those lives are even slightly better than the average. This helps no one and harms particular individuals.
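A quick arithmetic sketch of that claim (the welfare numbers are invented purely for illustration):

```python
# Illustrative numbers only: a hellish world of 1000 people at welfare -10.
before = [-10.0] * 1000
after = before + [-9.0]   # add one tormented life, slightly above the average

print(sum(before) / len(before))  # -10.0
print(sum(after) / len(after))    # ~ -9.999: the average rose, so naive
                                  # average utilitarianism counts the addition
                                  # as an improvement
```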

It's discussed in the SEP article.

Frankly, if the only flaw in that moral theory is that it comes to a weird answer in a world that is already a universally horrible hellscape for all sentient beings, then I don't see that as a huge problem in it.

In any case, I'm not sure that's the wrong answer anyway. If every generation is able to improve the lives of the next generation, and keep moving the average utility in a positive direction, then the species is heading in the right direction, and likely would be better off in the long run than if they just committed mass suicide (as additive utilitarian theory might suggest). For that matter, there's a subjective aspect to utility; a medieval peasant farmer might be quite happy if he is 10% better off than all of his neighbors.

I think you're on the right track. I believe that a small population with high utility per capita is better than a large one with low utility per capita, even if the total utility is larger in the large population. But I think tying that moral intuition to the average utility of the population might be the wrong way to go about it, if only because it creates problems like the one CarlShulman mentioned.

I think a better route might be to somehow attach a negative number to the addition of more people after a certain point, or something like that. Or you could add a caveat saying that the system acts like total utilitarianism while the average is negative, and like average utilitarianism when it's positive.

Btw, in your original post you mention that we'd need a caveat to stop people from killing existing people to raise the average. A simple solution to that would be to continue to count people in the average even after they are dead.

Average utilitarianism implies the Sadistic Conclusion: if average welfare is very negative, then this rule calls for creating beings with lives of torture not worth living as long as those lives are even slightly better than the average.

That's not the Sadistic Conclusion; that's something else. I think Michael Huemer called it the "Hell Conclusion." It is a valid criticism of average utilitarianism, whatever it's called. Like you, I reject the Hell Conclusion.

The Sadistic Conclusion is the conclusion that, if adding more people with positive welfare to the world is bad, it might be better to do some other bad thing than to add more people with positive welfare. Arrhenius gives as an example adding one person of negative welfare instead of a huge number of people with positive welfare. But really it could be anything. You could also harm (or refuse to help) existing people to avoid creating more people.

I accept the Sadistic Conclusion wholeheartedly. I harm myself in all sorts of ways in order to avoid adding more people to the world. For instance, I spend money on condoms instead of on candy, abstain from sex when I don't have contraceptives, and other such things. Most other people seem to accept the SC as well. I think the only reason it seems counterintuitive is that Arrhenius used a particularly nasty and vivid example of it that invoked our scope insensitivity.

SEP:

More exactly, Ng's theory implies the “Sadistic Conclusion” (Arrhenius 2000a,b): For any number of lives with any negative welfare (e.g. tormented lives), there are situations in which it would be better to add these lives rather than some number of lives with positive welfare

There are two different reasons why these population principles state that it might be preferable to add lives of negative welfare. The first, which I referred to as the "Hell Conclusion," is that a principle that values average welfare might consider it good to add lives with negative welfare in a situation where average welfare is negative, because doing so would up the average. The second, which I referred to as the "Sadistic Conclusion," states that, if adding lives with positive welfare can sometimes be bad, then adding a smaller amount of lives with negative welfare might sometimes be less bad.

I am pretty sure I have my terminology straight. I am pretty sure that the "Sadistic Conclusion" the page you linked to is referring to is the second reason, not the first. That being said, your original argument is entirely valid. Adding tormented lives to raise the average is bad, regardless of whether you refer to it as the "Sadistic Conclusion" or the "Hell Conclusion." I consider it a solid argument against naive and simple formulations of average utilitarianism.

What I refer to as the Sadistic Conclusion differs from the Hell Conclusion in a number of ways, however. Under the Hell Conclusion adding tormented lives is better than adding nobody, providing the tormented lives are slightly less tormented than average. Under the Sadistic Conclusion adding tormented lives is still a very bad thing, it just may be less bad than adding a huge amount of positive lives.

We should definitely reject the Hell Conclusion, but the Sadistic Conclusion seems correct to me. Like I said, people harm themselves all the time in order to avoid having children. All the traditional form of the SC does is concentrate all that harm into one person, instead of spreading it out among a lot of people. It still considers adding negative lives to be a bad thing, just sometimes less bad than adding vast amounts of positive lives.

Are you saying we should maximize the average utility of all humans, or of all sentient beings? The first one is incredibly parochial, but the second one implies that how many children we should have depends on the happiness of aliens on the other side of the universe, which is, at the very least, pretty weird.

Not having an ethical mandate to create new life might or might not be a good idea, but average utilitarianism doesn't get you there. It just changes the criteria in bizarre ways.

Are you saying we should maximize the average utility of all humans, or of all sentient beings?

I'm not saying anything, at this point. I believe that the best population ethics is likely to be complicated, just as standard ethics are, and I haven't fully settled on either yet.

[anonymous]:

Question: How does time fit into this algorithm?

My understanding is the repugnant conclusion is generally thought of as worse than the alternative, but not bad.

I think that implies that if someone offered you 3^^^3 years of the repugnant conclusion, or 1 year of true bliss, and the repugnant conclusion's happiness were a mere trillionth as intense as true bliss, and we were simply multiplying S by time, then 3^^^3 years of the repugnant conclusion would be better than a year of true bliss.

But I don't know if it is assumed that we need to simply multiply by time, unadjusted (for instance, in S, the logarithm of population size is used.)

This assumes that adding more people is the same as extending the lives of current people - which is the main point of contention.

lmm:

So S is not utilitarian, right? (At least in your example). So your point is that it's possible to have an agent that accepts the repugnant conclusion but agrees with our intuitions in more realistic cases? Well, sure, but that's not really a defense of total utilitarianism unless you can actually make it work in the case where S is total utilitarianism.

S is utilitarian, in the sense of maximising a utility function. S is not total utilitarian or average utilitarian, however.

I find something like average times log of total to be far more intuitive than either average or total. Is this kind of utility function discussed much in the literature?

As far as I know, I made it up. But there may be similar ones (and the idea of intermediates between average and total is discussed in the literature).

Ah, thanks!

there is another universe Y which is better - and that is filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure.

No. Lives that are just one tiny iota above being too miserable to endure are far below being worth enduring. In order for Y to be better, or even good, the lives cannot be miserable at all. Parts can be miserable, but only if they are balanced by parts that are equally good.