Yesterday I wrote about the difficulties of ethics and potential people: namely, that whether you bring a person into existence or not changes the moral metric by which your decision is measured. At first all I had was an observation suggesting the issue was complex, but no answer to the question "Well then, what should we do?" I will now write about an answer that came to me.

All theories regarding potential people start by comparing outcomes to find which is most desirable, then moving towards it. However, I believe I have shown that there are two metrics regarding such questions, and that those metrics can disagree. What then do we do?

We are always in a particular population ourselves, and so we can ask not which outcome is preferable, but whether we should move from one situation to another. This allows us to consider the alternate metrics in series. For an initial name more attractive than "my rule," I will refer to the system as Deontological Consequentialism, or DC. I'm open to other suggestions.

Step 1: Consider your action with the metric of new people not coming to be: that is, only the welfare of the people who will exist regardless of your decision.* I will assume in this discussion that there are three possibilities: people receive higher utility, lower utility, or effectively unchanged utility. You might dispense with the third option; the results are similar.

First, if you expect reduced utility for existing people from taking an action: do not take that action. This holds regardless of how many new people might otherwise exist or how much utility they might have; if we never bring them into existence, we have wronged no one.

This is the least intuitive aspect of the system, though it is also the most critical for avoiding the paradoxes of which I am aware. I think this unintuitive flavor mostly stems from automatically considering future people as if they already exist. I'd also note that our intuitions are really not used to dealing with this sort of question, but one more approachable example exists. Suppose a couple strongly expects they will be unhappy having children, will derive little meaning from parenthood and few material benefits from their children, and we believe them; if, in short, they really don't want kids, I suspect few will tell them they really ought to have children anyway as long as the children will be happy. Nor do I think many people consider how very happy the kids or grandkids might be, even if the couple would be great parents; if the couple will be miserable, we probably advocate they stay childless. Imagining such unhappy parents we also tend to imagine unhappy children, so some effort might be required to keep from ruining the example. See also Eliezer's discussion of the fallibility of intuition in a recent post.

Second, if we expect effectively unchanged utility for existing people, it is again perfectly acceptable not to create new people, as you wrong no one by doing so. But as you don't harm yourself either, it's fine if you create them, bringing us to

Step 2a: Now we consider the future people. If they will have negative utility, i.e. roughly speaking they will wish they hadn't been born, or for their sake we will wish they hadn't been born, then we ought not to bring them into existence: we gain nothing and they suffer. If instead they would experience entirely neutral lives (somehow), having no opinion on their creation, then it really doesn't matter whether we create them or not.

If they will experience positive lives, then it would be a good thing to have created them, as it was "no skin off our backs anyway." However, I would hold it's still perfectly acceptable not to bring them into existence, since if we don't, they'll never mind that we didn't. But my neutrality towards bringing them into existence is such that I would even accept a rule I didn't agree with, one saying that I ought to create the new people.

Now, back in Step 1, there is the case where existing people will benefit by creating new people. Here we are forced to consider the new people's wellbeing in

Step 2b: If the new people will have positive utility, or zero utility in totally neutral lives, then we should go ahead and bring them into existence, as we benefit and at least they don't suffer. However, if they will have overall negative lives, then we should take how much utility existing people gain and subtract the negative utility of the new people. You might dislike the idea of this inequality (I do as well), but this is a general issue with utilitarianism separate from potential people; here we're considering the new people the same as existing people (as they then would be). If you've got a solution, such as weighting negative utility more heavily or simply forcing in egalitarian considerations, apply it here.
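To make the procedure explicit, here is a minimal sketch in Python. It assumes, as a drastic simplification of my own (not part of the theory), that the welfare effects can be collapsed into two signed numbers: the change in utility for the people who exist regardless, and the total lifetime utility of the potential new people.

```python
# A minimal sketch of the DC decision procedure, under the simplifying
# assumption (mine) that welfare effects reduce to two signed numbers.

def dc_decision(delta_existing: float, utility_new: float) -> str:
    """Decide whether to bring new people into existence.

    delta_existing: change in utility for the people who exist regardless.
    utility_new:    total lifetime utility of the potential new people.
    """
    # Step 1: judge the action only by its effect on existing people.
    if delta_existing < 0:
        # We would wrong existing people; the uncreated can't be wronged.
        return "don't create"

    if delta_existing == 0:
        # Step 2a: existing people are indifferent, so consider the new people.
        if utility_new < 0:
            return "don't create"            # we gain nothing and they suffer
        if utility_new == 0:
            return "either is fine"          # no one is affected either way
        return "permissible either way (creating is arguably good)"

    # Step 2b: existing people benefit from creating.
    if utility_new >= 0:
        return "create"                      # we benefit and at least they don't suffer
    # Negative new lives: take our gain and subtract their suffering,
    # as with any ordinary utilitarian trade-off.
    return "create" if delta_existing + utility_new > 0 else "don't create"
```

For example, dc_decision(5.0, -2.0) returns "create", while dc_decision(-1.0, 1000.0) returns "don't create", no matter how large the new people's utility would be; that is Step 1 at work.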

 

This concludes Deontological Consequentialism. This rule seems to avoid unattractive answers to all the paradoxes I've seen debated, though I won't go over them all here. There is one that threw me for a loop, the largely intractable "Mere Addition Paradox." I'll describe it briefly here, mostly to show how DC (at first) seems to fall prey to it as well.

A classic paradox is the Repugnant Conclusion, in which total utilitarianism is taken to suggest we should prefer a vast population with lives barely worth living over relatively few lives of very high utility. Using DC, if we are in the high-utility population, we note that we (the existing people) would all experience drastically lower utility by bringing about the second situation, and so we avoid it.

In the Mere Addition Paradox, you start from the high-utility situation. Then you ask: is it moral to increase the utility of existing people while bringing into being huge numbers of people with very low but positive utility? DC seems to suggest we ought to, as we benefit and they have positive lives as well. Now that we've done that, ought we to reduce our own utility if it would allow the low-utility people to have higher utility, such that the total is drastically increased but everyone still experiences utility not far above zero? Deontological Consequentialism is only about potential people, but here both average and total utilitarianism suggest we should do this, as it increases both the total and the average.

And with this we find that by taking it in these steps we have arrived at the Repugnant Conclusion! For DC this is accomplished by "slipping in" the new people so that they become existing people, at which point our consideration of their wellbeing changes. The solution here is to take account of our own future actions: we can see that if we add these new people, we will then seek to redistribute our own utility to greatly increase theirs, and in the end we actually do experience reduced utility. That is, we see that by bringing them into existence we are in reality choosing the Repugnant Conclusion. Realizing this, we do not create them, and we avoid the paradox. (In most conventional situations it seems more likely we can increase the new people's utility without such a decrease to our own.)
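To see the lookahead concretely, here is a toy calculation in Python. The numbers are mine, chosen only so that the redistribution step raises both the total and the average (as the paradox requires) while leaving the original population far worse off than it started.

```python
# Illustrative numbers (mine, not from the argument above).
existing = [100.0] * 10                          # the initial high-utility population

# Step A: mere addition looks attractive in isolation:
after_addition = [105.0] * 10 + [1.0] * 1000     # existing people gain, newcomers barely positive

# Step B: once the newcomers exist, redistribution raises both the total
# (2050 -> 3030) and the average (~2.03 -> 3.0), so both total and average
# utilitarianism endorse it:
after_redistribution = [3.0] * 1010

# DC's fix: evaluate the whole foreseeable chain against Step 1.
final_utility_of_originals = after_redistribution[0]     # 3.0
print(final_utility_of_originals < existing[0])          # True: the originals end up far
                                                         # worse off, so don't take Step A.
```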


*An interesting situation arises when we know new people will come to be regardless of our decision. I suggest here that we average the utility of all new people in each population, treat the situation with fewer total people as our "existing population", and apply DC from there. Unless people are interested in talking about this arguably unusual situation, however, I won't go into more detail.
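The footnote is deliberately brief, so the sketch below is only one possible construal of it, reusing dc_decision from the earlier sketch. In particular, the way the baseline population's members are matched up with members of the larger population is my assumption, not something the footnote specifies.

```python
from statistics import mean

def dc_forced_newcomers(option_small, option_large):
    """One construal (mine) of the footnote's rule for unavoidable newcomers.

    Each option is a pair (shared, new): utilities of the people who exist
    under either option, and utilities of that option's newcomers (assumed
    non-empty). option_small must have fewer total people; per the footnote,
    it plays the role of the "existing population".
    """
    shared_s, new_s = option_small
    shared_l, new_l = option_large
    # Average the newcomers within each option, per the footnote.
    baseline = shared_s + [mean(new_s)] * len(new_s)
    outcome = shared_l + [mean(new_l)] * len(new_l)
    # Step 1 input: how the baseline population fares under the move
    # (assuming its members map onto the first members of the outcome).
    delta_existing = sum(outcome[:len(baseline)]) - sum(baseline)
    # Step 2 input: the people who exist only in the larger option.
    utility_new = sum(outcome[len(baseline):])
    return dc_decision(delta_existing, utility_new)
```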

gjm

I don't find the Repugnant Conclusion as Repugnant as some people do. I think it gets some of its Repugnance from ambiguity about what constitutes a life that's "barely worth living". It could mean (1) a life whose bearer finds it just barely better than death, or (2) a life that in absolute terms is just barely better than if it had never been. I think a life of type 1 is probably much worse than a life of type 2, and it's type-2 lives that the Repugnant Conclusion gives us an awful lot of.

Yes, that's a viable solution: to just accept the Repugnant Conclusion. As I discussed in my first post, I think it's very hard to compare any existence to non-existence, as it depends on whether you exist to make the comparison. At least when the person already exists, the way I would do it is to consider type 1 the measure of type 2, i.e. they are the same to me.

I'm not quite convinced by the solution to the Repugnant Conclusion, since it amounts to saying, "We know it's going to happen, so let's just not let it." It only provides a comparison when there is a clear causal chain; it doesn't say which world is preferable.

I think the easy way to analyze such worlds is through a veil of ignorance. Everyone in each world exists; thus, we can take existence as a given and ask which world we would prefer to be born into. There's no probabilistic weighting based on population because there's no line of people waiting to get in, and there's obviously no experience of the alternative of not being born - there's no "you" to experience it if you're not born.

Basically, it seems odd to say that world A is "better" than world B if, given the choice, you'd choose hands down to live in B when being born to a random parent. This also works at a more micro level, except that intervention at a micro level will generally have absurdly high costs. If you keep in mind that hypothetical changes (like killing everyone below average utility, or heavy restrictions on reproduction, for example) would actually affect the utility distribution beyond their intended purpose, this approach works quite well, I think. If you define some general near-universals as to what a person is likely to prefer (as opposed to what you prefer), it should work even better.
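A toy version of this test, with numbers and code of my own: if you are born as a uniformly random member of whichever world you pick, the test reduces to comparing average utilities (which is just what the reply below observes).

```python
import random

# Two made-up worlds for the veil-of-ignorance test.
world_a = [100.0] * 10           # few people, very high utility
world_b = [2.0] * 10_000         # many people, lives barely worth living

def expected_birth_utility(world, trials=100_000):
    """Expected utility of being born as a uniformly random member."""
    return sum(random.choice(world) for _ in range(trials)) / trials

print(expected_birth_utility(world_a))   # ~100.0: the test prefers world A...
print(sum(world_b) > sum(world_a))       # ...even though B has far more total utility (True)
```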

Elaborating completely would take a full top-level post, so I'll hold off on that for the moment.

In my other post I put forward the argument that you can't coherently say which world is preferable, at least in cases where the alternative metrics disagree. I have therefore not made any such statements myself.

I think what you propose is a rational view which, lacking an alternative, I would espouse. Regardless, this seems to really just be a justification for average utilitarianism, at least if we consider the expected value of utility in a population when evaluating its worth. That faces us with what are often considered oddities of average utilitarianism, such as:

For instance, the principle implies that for any population consisting of very good lives there is a better population consisting of just one person leading a life at a slightly higher level of well-being (Parfit 1984 chapter 19). More dramatically, the principle also implies that for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being (Parfit 1984).

To be honest, I generally prefer the prescriptions given by average utilitarianism (as opposed to total), but I'd like a theory with fewer seeming paradoxes.

(In the first case considered by Parfit, DC suggests staying in whichever population you are in. In the second, it suggests both populations strive towards the single-person population.)

Ideally, an ethics should not contain the concept "person".

We already know the concept of "person" is bankrupt unless you allow degrees of personhood (unless you are content with an ethics that has no problem with torturing dogs, but forbids saving 2 people using the organs from one brain-dead death-row convict with a terminal disease that gives his vegetative torso one month to live).

But even figuring out how much personhood to give each "person" isn't the best solution. If you contemplate different worlds with different amounts and types of joys, pains, and experiences, your judgement of which worlds are preferable shouldn't change according to how someone rather arbitrarily draws the "person" boundaries within them.

Besides, it will all be irrelevant once you've been assimilated anyway.

We already know the concept of "person" is bankrupt unless you allow degrees of personhood (unless you are content with an ethics that has no problem with torturing dogs, but forbids saving 2 people using the organs from one brain-dead death-row convict with a terminal disease that gives his vegetative torso one month to live).

My ethics has the concept of personhood in it and doesn't allow degrees of personhood... but I don't think the brain-dead death-row convict with the terminal disease is a person. (Because they're brain-dead, not because they're on death row or because they're terminal.) Are you making the error that all humans must be persons for all definitions of "person"?

(I also have a problem with torturing dogs. My entire ethics isn't about persons.)

Are you making the error that all humans must be persons for all definitions of "person"?

That is the usual form that this error takes. If you choose to define people differently, but still as a binary predicate, you're just going to have some other horrible results elsewhere in your ethical system.

If you choose to define people differently, but still as a binary predicate, you're just going to have some other horrible results elsewhere in your ethical system.

I'd like to see some support for this.

It's much nicer to be able to say that a sperm+egg only gradually become a person, than to have to argue about when its personhood transitions from 0 to 1.

I don't think this eliminates much real inconvenience, although it might soothe ruffled feathers in very superficial discussions. If you can have fractional people, that creates the following problems:

  • The temptation to do math with them.
  • The same sort of icky skeeviness you get when you read history books and note that at one time, for population count purposes, blacks counted as 3/5 of a person each.
  • Disagreement about whether someone is 1/2 or 5/8 of a person could easily get as heated as disagreement about whether they are a person or not.
  • Depending on what factors increase or decrease personhood, widespread acceptance of a belief in fractional people could lead to attempts to game the system and be "personier".
Cyan

The idea of fractional people is less common than the idea that personhood is a cluster of properties in thingspace, and that various beings partake of those properties to a greater or lesser extent.

This seems to carve reality at its joints more than the idea of fractional people, but is certainly still problematic.

It may seem difficult, but we already, in practice, have many classes of fractional persons and it doesn't seem too problematic.

Juveniles, for example, or the handicapped (who have rights in varying degrees - a permanent vegetable doesn't have the right to not be killed by another person, but a low-functioning autistic most certainly does). And past examples of attempts to make someone personier haven't been too bad. (I think here of Terri Schiavo, and the attempts by right-wingers to make her seem less brain-dead than she was - remember Frist's diagnosis-via-videotape?)

(I also have a problem with torturing dogs. My entire ethics isn't about persons.)

Sure it is. You don't have an (ethical) problem with dogs torturing dogs, do you?

I don't think dogs typically do anything I'd label torture, mostly because it'd have to be more systematic than just fighting. But no, I don't think any ethically wrong act is performed if one dog hurts another dog. I'd probably still try to break them up.

I was using "person" as the most conventional reference for "a thing worth caring about." I don't draw ethical distinctions of personhood, but work off the extent of pleasure and pain felt by anything that can feel them. How you figure that out, or begin to compare it, is of course one hell of a problem.

DC seems like a bad name. "Deontic" and "Consequentialism" are both heavily-loaded words in ethics, and imply a great deal more than simply a 'rule'. Also, combining the names of two inconsistent ethical theories should be reserved for something much cleverer than this.

Point noted. I'd be grateful if you have a more appropriate suggestion.

I don't mind the knock on the theory's cleverness, but could you elaborate? I'd like to improve the theory, or see what intractable errors it has if that's not possible. I don't think anyone has commented on why the theory doesn't make sense; if someone disagrees, it would be great to hear where they think I've gone wrong. Also, are you unsatisfied with its solution to the Mere Addition Paradox?