It occurs to me that the various utility indifference approaches might be usable in population ethics.

One challenge for non-total utilitarians is how to deal with new beings. Some theories - average utilitarianism, for instance, or some other systems that use overall population utility - have no problem dealing with this. But many non-total utilitarians would like to see creating new beings as a strictly neutral act.

One way you could do this is to start with a total utilitarian framework, but subtract a certain amount of utility every time a new being B is brought into the world. In the spirit of utility indifference, we could subtract exactly the utility we expect B to enjoy over their life.
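For concreteness, here is a minimal sketch of that correction in Python (the numbers and function names are purely illustrative):

```python
def corrected_value(total_utility_after_creation, expected_utility_of_b):
    """Total-utilitarian value, with B's creation 'zeroed out' by
    subtracting B's expected lifetime utility at the moment of creation."""
    return total_utility_after_creation - expected_utility_of_b

# Illustrative numbers: the world sits at 100 utils, and B is expected
# to enjoy 30 utils over their life.
create_b = corrected_value(100 + 30, 30)   # -> 100
dont_create_b = 100                        # no new being, no correction
assert create_b == dont_create_b           # indifference about creating B
```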

This means that we should be indifferent as to whether B is brought into the world or not, but, once B is there, we should aim to increase B's utility. There are two problems with this. The first is that, strictly interpreted, we would also be indifferent to creating people with negative utility. This can be addressed by only doing the "utility correction" if B's expected utility is positive, thus preventing us from creating beings only to have them suffer.
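A sketch of that asymmetric version, continuing the illustrative numbers above: the correction only applies when B's expected utility is positive, so creating a miserable B now scores strictly worse than not creating them.

```python
def correction(expected_utility_of_b):
    """Only zero out B's creation when B is expected to be better off
    than nonexistence; never 'cancel' an expected-to-suffer life."""
    return max(0.0, expected_utility_of_b)

def corrected_value(total_utility_after_creation, expected_utility_of_b):
    return total_utility_after_creation - correction(expected_utility_of_b)

assert corrected_value(100 + 30, 30) == 100   # happy B: indifference, as before
assert corrected_value(100 - 30, -30) == 70   # miserable B: 70 < 100, so don't create
```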

The second problem is more serious. What about all the actions that we could take, ahead of time, in order to harm or benefit the new being? For instance, it would seem perverse to argue that buying a rattle for a child after they are born (or conceived) is an act of positive utility, whereas buying it before they were born (or conceived) would be a neutral act, since the increase in the child's expected utility is cancelled out by the above process. Not only is this perverse, but it isn't timeless, and it isn't stable under self-modification.
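To put illustrative numbers on the rattle: suppose it is worth 1 util to the child, and that the correction is fixed at B's expected utility at the moment of creation.

```python
# Buying the rattle BEFORE conception: B's expected lifetime utility rises
# from 30 to 31, but so does the correction, so the purchase nets zero.
before_conception = (100 + 31 - 31) - (100 + 30 - 30)   # 0: a "neutral" act

# Buying the same rattle AFTER B exists: the correction stays at 30
# (B's expected utility at creation), so the purchase nets +1.
after_birth = (100 + 31 - 30) - (100 + 30 - 30)         # +1: a "positive" act
```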

What would be needed is a natural, timeless zero for the act of bringing a being into existence. Something that took into account things done before the being is created as well as after. A sort of Rawlsian veil of ignorance about whether the being is created at all.

This suggests another approach, vaguely derived from utility indifference and counterfactuals. What if the agents "believed" that being B wasn't going to be successfully created? If they were certain that its creation or conception would fail? Then they wouldn't be, eg, buying rattles ahead of time.

It seems this could define a natural zero. If agent A decides to bring being B into existence, the natural zero is the expected utility B would face if agent A expected that B could not be brought into existence.
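Sketching how that might look, under the (strong) assumption that we can evaluate what B would get if A had planned as though the creation would fail; the numbers are illustrative as before:

```python
# Counterfactual: A is certain B's creation will fail, so A makes no
# preparations; in that world B (if created anyway) would get 30 utils.
natural_zero = 30

def corrected_value(total_utility_after_creation):
    """Subtract the counterfactual baseline, not B's actual expected utility."""
    return total_utility_after_creation - natural_zero

baseline = 100                                  # not creating B at all
assert corrected_value(100 + 30) == baseline    # creating B 'bare': indifference
# Buying the rattle now counts the same way whether done before or after:
assert corrected_value(100 + 31) == baseline + 1
```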

Then, given that agent A actually expects that B will be brought into existence, they can freely buy them rattles, or whatever else, either before or after the birth or conception.

 

What do people think of this approach? Does it solve (some of) the issues involved? Could it be improved? (yes)

Though I hope this approach is of use, I'm personally not enamoured of it. My objection to total utilitarianism is entirely to the repugnant conclusion. I feel that bringing a very happy being into existence should be generally a positive act, and prefer systems that weight utilities to prevent repugnant conclusions, rather than ones that zero-out the creation of new beings (there are several ways of doing this).

11 comments

According to how I understand the proposed view (which might well be wrong!), there seems to be a difficulty in how your natural zero affects tradeoffs with the welfare of pre-existing beings. How would the view deal with the following cases:

Case_1: Agent A has the means to bring being B into existence, but if no further preparations are taken, B will be absolutely miserable. If agent A takes away resources from pre-existing being C in order to later give them to B, thereby causing a great deal of suffering to C, B's life-prospects can be improved to a total welfare of slightly above zero. If the natural zero is sufficiently negative, would such a transaction be permissible?

Case_2: If it's not permissible, it seems that we must penalize cases where the natural zero starts out negative. But how about a case where the natural zero is just slightly negative, and agent A only needs to invest a tiny effort in order to guarantee being B a hugely positive life? Would that always be impermissible?
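(For concreteness, one way of putting numbers on the two cases, under a naive "subtract the natural zero" reading of the proposal; all figures are made up:)

```python
# Case_1: very negative natural zero; helping B requires harming C.
natural_zero_1 = -50          # B would be absolutely miserable unaided
b_after_transfer = 1          # B ends up slightly above zero
harm_to_c = -40               # suffering imposed on pre-existing C
net_1 = (b_after_transfer - natural_zero_1) + harm_to_c    # +11: permissible?

# Case_2: natural zero only slightly negative; tiny effort, huge payoff.
natural_zero_2 = -1
b_with_effort = 100
cost_of_effort = -0.1
net_2 = (b_with_effort - natural_zero_2) + cost_of_effort  # ~+101
# A blanket ban whenever the natural zero is negative would forbid this too.
```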

This is the tricky issue of dealing with natural zeros that are below the "zero" of happy/meaningful lives (whatever that is).

As I said, this isn't my favourite setup, but I would advocate requiring the natural zero to be positive, and not bringing anyone into existence otherwise. That means I'd have to reject Case_2 - unless someone else would be made sufficiently happy by the existence of B with a hugely positive life that their happiness outweighs the tiny effort.

Total utilitarians, your own happiness can make people come into existence even in these non-total-utilitarian situations!

Not sure if directly related, but some people (e.g. Alan Carter) suggest having indifference curves. These consist of isovalue curves on a plane with average happiness and number of happy people as axes, each curve corresponding to the same amount of total utility. The Repugnant Conclusion scenario would be nearly flat on the number-of-happy-people axis, and a fully satisfied Utility Monster nearly flat on the average-happiness axis. It seems this framework produces similar results to yours. Every time you create a being slightly less happy than the average, you gain in the number of happy people but lose in average happiness, and might end up with the exact same total utility.

Yep, I've seen that idea. It's quite neat, and allows hyperbolic indifference curves, which are approximately what you want.
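One toy value function with roughly that shape (my own illustration, not Carter's): value tends towards the average happiness as the population grows, so its indifference curves avg = c * (1 + k / n) are hyperbolas with a positive asymptote, which blocks the repugnant conclusion.

```python
def value(n, avg, k=10.0):
    """Toy population axiology: value approaches the average happiness
    as n grows, so the indifference curves avg = c * (1 + k / n) are
    hyperbolas with asymptote avg = c."""
    return avg * n / (n + k)

small_happy = value(n=1_000, avg=100.0)   # ~99
huge_drab = value(n=10**12, avg=1.0)      # ~1
assert huge_drab < small_happy            # no number of drab lives beats it
```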

Or you could just follow Michael Huemer and embrace the repugnant conclusion.

I could, but see absolutely no reason to.

As ThrustVectoring pointed out in your "Skirting the mere addition paradox" thread, the suggested aggregation function there doesn't allow us to preserve the intuition that adding positive-welfare lives is always positive. Huemer calls the intuition:

The Benign Addition Principle: If worlds x and y are so related that x would be the result of increasing the well-being of everyone in y by some amount and adding some new people with worthwhile lives, then x is better than y with respect to utility.

Whereas your new utility-indifference suggestion doesn't allow us to preserve the intuition that it's OK to have kids in the real world. Most actual prospective parents face some small epistemic probability that their child would have some horrible fatal genetic disease that makes a life worse than nothing. Even for parents who do have genetic flaws, however, there is typically also a chance that a child, conceived from a healthy egg and sperm, will lead a rewarding life. The lucky child, coming from a different sperm/egg combo, would be a different child than the unlucky one. Most parents reason that the large probability of making a happy child outweighs the tiny chance of making a doomed miserable one. But if we do a "utility correction" for positive lives, and no such correction for negative lives, then the net expectation for having a child is negative.
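(Spelling out the arithmetic with made-up numbers, under the reading that the correction applies to each possible child separately, since the lucky and the unlucky child are different people:)

```python
p_disease = 0.001                   # small chance of a life worse than nothing
u_happy, u_miserable = 50.0, -50.0

# Positive life: corrected to zero.  Negative life: no correction.
expected_corrected = ((1 - p_disease) * (u_happy - u_happy)
                      + p_disease * u_miserable)    # = -0.05 < 0
# The corrected expectation of conceiving is negative, so the rule
# forbids having children under any such risk.
```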

The correction is the expected utility of the child, not the actual utility.

I believe your second use of the word "the" above is a mistake. Maybe I misunderstood the utility-correction idea, but it seemed to me it was about individual human lives, not acts by agents who might create lives. There is the act of reproduction, but there is (at the time of decision) no such thing as the child.

But there is an expected personal utility for the potential being created.

Sure, but if you interpret your principle that way, I think it loses some plausibility in the original context of average vs total utilitarianism (etc.). When B is a variable ranging over different people, it's no longer so plausible that we should be indifferent when the expected personal utility for B is zero.