Alicorn comments on The Mere Cable Channel Addition Paradox - Less Wrong Discussion
I've been thinking about this argument (which is formally called the Benign Addition Paradox) for a few months, and I'm no longer sure it holds up. I began to think about whether I would support doing such a thing in real life. For instance, I wondered whether I would push a button that would create a bunch of people who would be forced to be my slaves for a couple of days per week, but freed for just long enough each week that their lives could be said to be worthwhile. I realized that I would not.
Why? Because if I created those people with lower utility than mine, I would immediately be under an obligation to free them and then transfer some of my utility to them, which would make me worse off. So, on a person-affecting view, we can adopt the following rule: Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.
So A+ is worse than A because the people who previously existed in A have a moral duty to transfer some of their utility to the new people who were added. They have a duty to convert A+ into B, which would harm them.
Now, you might immediately bring up Parfit's classic variation, in which the new people are geographically separated from the existing people and are therefore incapable of being helped. In that case, has any harm been done, given that the existing people are physically incapable of fulfilling the moral obligation they have? I would argue that it has. It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have the same obligation and are capable of fulfilling it.
I think that the geographic separation argument seems plausible because it contaminates what is an essentially consequentialist argument with virtue ethics. The geographic separation is no one's fault; no one chose to cause it, so it seems morally benign. Imagine, instead, that you had the option of pushing a button that would have two effects:
1) It would create a new group of people who would be your slaves for a few days each week, but free for long enough that their lives could be said to be barely worthwhile.
2) It would create an invincible, unstoppable AI that would thwart any attempt to equalize utility between the new people and the existing people, even an attempt by you, should you change your mind.
I don't know about you, but I sure as hell wouldn't push that button, even though it does not differ from the geographic separation argument in any important way.
Of course, this argument does create some weird implications. For instance, it implies that there might be some aliens out there with a much higher standard of living than we have, and that we are inadvertently harming them by reproducing. However, it's possible that the reason this seems so counterintuitive is that when contemplating it we map it onto the real world, not the simplified world we have been using to make our arguments so far. In the real world we can raise the following practical objections:
1) We do not currently live in a world where the distribution of utility is Pareto efficient. The various addition paradox arguments assume that it is, but that is a simplifying assumption that does not reflect the real world. Generally, when we create a new person in this day and age we increase utility, both by creating new family members and friends for existing people and by allowing a greater division of labor to grow the economy. So adding new people might actually help the aliens by reducing their moral obligation.
2) We already exist, and stopping people from having children generally harms them. So even if the aliens would be better off had we never existed, now that we exist, our desire to reproduce has to be taken into account.
3) If we ever actually meet the aliens, it seems likely that through mutual trade we could make each other both better off.
Of course, as I said before, these are all practical objections that don't affect the principle of the thing. If the whole "possibly harming distant aliens by reproducing" thing still seems too counterintuitive to you, you could reject the person-affecting principle, either in favor of an impersonal type of morality, or in favor of some sort of pluralist ethics that takes both impersonal and person-affecting morality into account.
You've been one of my best critics in this, so please let me know if you think I'm onto something, or if I'm totally off-base.
Aside: Another objection to the Benign Addition Paradox I've come up with goes like this.
A: 10 human beings at wellbeing 10.
A+: 10 human beings at wellbeing 50 & 1 million sadistic demon-creatures at wellbeing 11. The demon-creatures derive 9 wellbeing each from torturing humans or watching humans being tortured.
B: 10 human beings at wellbeing -10,000 (from being tortured by the demons) & 1 million sadistic demon-creatures at wellbeing 20 (9 points of which come from torturing the 10 humans).
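For concreteness, here's a quick check of the totals and averages these numbers imply (a minimal Python sketch; the population and wellbeing figures come straight from the worlds above, everything else is just illustration):

    # Quick sanity check of the wellbeing arithmetic for worlds A, A+, and B.
    # Each world is a list of (population, wellbeing-per-person) pairs taken
    # from the descriptions above.
    worlds = {
        "A":  [(10, 10)],                        # 10 humans at wellbeing 10
        "A+": [(10, 50), (1_000_000, 11)],       # 10 humans at 50, 1M demons at 11
        "B":  [(10, -10_000), (1_000_000, 20)],  # 10 humans at -10,000, 1M demons at 20
    }

    for name, groups in worlds.items():
        population = sum(n for n, _ in groups)
        total = sum(n * w for n, w in groups)
        print(f"{name}: total = {total:,}, average = {total / population:.2f}")

    # Prints:
    #   A: total = 100, average = 10.00
    #   A+: total = 11,000,500, average = 11.00
    #   B: total = 19,900,000, average = 19.90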
Both transitions raise total utility and average utility, and the move from A to A+ even benefits everyone involved, yet B seems obviously worse than A. The most obvious solutions I could think of were:
1) The "conferring a moral obligation on someone harms them" argument I already elucidated.
2) Not counting any utility derived from sadism towards the total.
I'm interested in what you think.
It is traditionally held in ethics that "ought implies can" - that is, that you are not obligated to do anything that you cannot in fact do.
That is true, but I think the discrepancy arises from my foolishly using a deontologically-loaded word like "obligation" in a consequentialist discussion.
I'll try to recast the language in a more consequentialist style. Instead of saying, from a person-affecting perspective: "Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off."
We can instead say: "An action that adds new people to the world, from a person-affecting perspective, makes the world a worse place if, after the action is taken, the world would be made a better place if all the previously existing people did something that harmed them."
Instead of saying: "It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it."
We can instead say: "It seems to me that a world where it is physically impossible for someone to undertake an action that would improve it is worse than one where it is physically possible for someone to undertake that action."
If you accept these premises then A+ is worse than A, from a person-affecting perspective anyway. I don't think that the second premise is at all controversial, but the first one might be.
I also invite you to consider a variation on the Invincible Slaver AI version of the problem I described. Suppose you had a choice between (1) creating the slaves and the Invincible Slaver AI and (2) doing nothing. You do not get the choice to create only the slaves; it's a package deal, slaves and Slaver AI or nothing at all. Would you do it? I know I wouldn't.