EDIT: Mestroyer was the first one to find a bug that breaks this idea. It only took a couple of hours; that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked:

> Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?
People raised valid points, such as murder having generally bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim were a hermit whose death was never discovered by anyone. It occurred to me that the way to formalize this intuition would also solve more general problems with how the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions (a short sketch after the list makes the first one concrete):
- If a person is painlessly murdered and a new (equally happy) person is instantly created in their place, this is worse than if a single person had lived for the whole time.
- If a living person X is painlessly murdered at time T, this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
- If someone is physically dead but not information-theoretically dead, and a close enough replica of them could be constructed, then bringing them back is better than creating an entirely new person.
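To see why the first intuition is a problem for a plain total-utility calculation, here is a minimal sketch (my own toy construction, not anything from the original thread), assuming welfare can simply be summed over person-years:

```python
# Toy model: a pure total-utility sum over person-years cannot
# distinguish "one person lives 80 happy years" from "one person lives
# 40 years, is painlessly killed, and an equally happy replacement
# lives the remaining 40" -- even though intuition says the second
# world-history is worse.

def total_utility(lives):
    """Sum per-year welfare over every life in the world-history."""
    return sum(sum(life) for life in lives)

continuous = [[1.0] * 80]            # one person, 80 happy years
replaced = [[1.0] * 40, [1.0] * 40]  # kill-and-replace at year 40

print(total_utility(continuous))  # 80.0
print(total_utility(replaced))    # 80.0 -- the sum is blind to the murder
```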
Yes, but I would argue that the fact that they can't actually do that yet makes a difference.
Yes, if I were actually going to be addicted. But being addicted in the first place would be a bad thing, not a good thing. When I said I "do not care in the slightest", I meant that the strength of that desire was not a good reason to get addicted to heroin; I didn't mean that I wouldn't try to satisfy that desire if I had no choice but to create it.
Similarly, in the case of adding lots of people with short lives, the fact that they would have desires and experience pain and pleasure if they existed is not a good reason to create them. But it is a good reason to try to help them extend their lives and lead better ones, if you have no choice but to create them.
Thinking about it further, I realized that you were wrong in your initial assertion that "we have to introduce a fudge factor that favors people (such as us) who are or were alive." The types of "fudge factors" being discussed here do not, in fact, do that.
To illustrate this, imagine Omega presents you with the following two choices:
1. Everyone who currently exists receives a small amount of additional utility. In the future, the number of births will vastly increase, while lifespan and utility per person will vastly decrease. The end result is the Repugnant Conclusion for all future people, but existing people are not harmed; in fact, they benefit.
2. Everyone who currently exists loses a small amount of their utility. In the future, far fewer people will be born than in Option 1, but they will live immensely long lifespans full of happiness. Total utility is somewhat smaller than in Option 1, but it is concentrated in a smaller number of people.
Someone using the fudge factor Kaj proposes in the OP would choose Option 2, even though it harms every single existing person in order to benefit people who don't exist yet. It is not biased towards existing persons.
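To make the structure of that choice concrete, here is a toy calculation with magnitudes I have invented (the comment doesn't specify any), using average utility as a stand-in criterion rather than Kaj's actual fudge factor:

```python
# Hypothetical magnitudes for Omega's two options. Utilities are
# per-person lifetime totals; all numbers are invented for illustration.

existing = 10_000  # people alive at the time of the choice

# Option 1: existing people gain a little; ten billion future people
# live barely-worth-living lives (the Repugnant Conclusion).
option1 = [(existing, 101.0), (10_000_000_000, 0.011)]

# Option 2: existing people lose a little; one million future people
# live long, happy lives.
option2 = [(existing, 99.0), (1_000_000, 100.0)]

def total(groups):
    return sum(n * u for n, u in groups)

def average(groups):
    return total(groups) / sum(n for n, _ in groups)

print(total(option1), total(option2))      # Option 1 wins on total utility
print(average(option1), average(option2))  # Option 2 wins on average utility
# The average criterion picks Option 2 even though that choice costs
# every existing person utility: it is not biased toward the living.
```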
I basically view adding people to the world in the same light as I view adding desires to my brain. If a desire is ego-syntonic (e.g. a desire to read a particularly good book), I want it to be added and will pay to make sure it is. If a desire is ego-dystonic (like using heroin), I want it not to be added and will pay to make sure it isn't. Similarly, if adding a person makes the world more like my ideal world (e.g. a world full of people with long eudaemonic lives), then I want that person to be added. If it makes it less like my ideal world (e.g. the Repugnant Conclusion), I don't want that person to be added and will make sacrifices to stop it (for instance, I will spend money on contraceptives instead of candy).
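A minimal sketch of that decision rule, under my own hypothetical formalization (the "ideal world" metric here, the fraction of lives that are long and eudaemonic, is just one placeholder among many possible):

```python
# Score a world by how closely it resembles the ideal of "people with
# long eudaemonic lives", then favor creating a person only when the
# score rises. Thresholds and lifespans are illustrative.

def world_score(lifespans, threshold=60):
    """Fraction of lives that count as long/eudaemonic."""
    return sum(1 for y in lifespans if y >= threshold) / len(lifespans)

def want_created(lifespans, new_life):
    return world_score(lifespans + [new_life]) > world_score(lifespans)

current = [70, 80, 20]
print(want_created(current, 85))  # True: an "ego-syntonic" addition
print(want_created(current, 5))   # False: a Repugnant-style addition
```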
As long as the people we are considering adding are prevented from ever existing, I don't think they are harmed in the way that discriminating against an existing person for some reason like skin color or gender harms someone, and I see nothing wrong with stopping people from being created if it makes the world more ideal.
Needless to say, if we fail and these people are created anyway, we have just as much moral obligation towards them as we would towards any preexisting person.
Interesting way to view it. I guess I see a set of all possible types of sentient minds, with my goal being to make the universe as nice as possible for some weighted average over that set.
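One way to read that framing as a formula; the mind-types, weights, and "niceness" scores below are placeholders of my own choosing:

```python
# The objective as a weighted average over a (here tiny, discrete) set
# of possible mind-types; choosing the set and the weights is the hard part.

weights = {"mind_a": 0.6, "mind_b": 0.3, "mind_c": 0.1}    # sum to 1
niceness = {"mind_a": 80.0, "mind_b": 50.0, "mind_c": 90.0}

objective = sum(w * niceness[m] for m, w in weights.items())
print(objective)  # 0.6*80 + 0.3*50 + 0.1*90 = 72.0
```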