EDIT: Mestroyer was the first to find a bug that breaks this idea. It only took a couple of hours; that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked:

> Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?
People raised valid points, such as murder generally having bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim was a hermit whose death nobody ever found out about. It just occurred to me that the way to formalize this intuition would also solve more general problems with how the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions:
- If a person is painlessly murdered and a new (equally happy) person is instantly created in their place, this is worse than if a single person had lived for the whole time.
- If a living person X is painlessly murdered at time T, then this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
- If someone is physically dead but not information-theoretically dead, and a close enough replica of them can be constructed, then bringing them back is better than creating an entirely new person.
That should definitely be part of the solution. In fact, I would say that utility functions defined over individual world-states, rather than over entire future-histories, should never have been considered in the first place. The effects of your actions are not restricted to a single time-slice of the universe, so you cannot maximize expected utility if your utility function takes only a single time-slice as input. (Also because special relativity means there is no observer-independent way of even picking out a single global time-slice.)
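To make the contrast concrete, here is a toy Python sketch of the two kinds of utility function. The specific penalty term and numbers are things I made up purely for illustration, not a proposal for what the history-based function should actually look like:

```python
# Toy contrast between a utility function over a single world-state and one
# over an entire future-history. A "state" maps persons to well-being levels;
# a "history" is just a sequence of states. The penalty of 100 is arbitrary.

def state_utility(state):
    """Sees only one time-slice, so it cannot tell who existed earlier."""
    return sum(state.values())

def history_utility(history):
    """Sees the whole sequence, so it can penalize killing-and-replacing."""
    total = sum(sum(state.values()) for state in history)
    for before, after in zip(history, history[1:]):
        # Penalize every person who exists in one slice but not the next.
        total -= 100 * len(set(before) - set(after))
    return total

continuity = [{"X": 50}, {"X": 50}]   # X lives through both time-slices
replacement = [{"X": 50}, {"Y": 50}]  # X is killed, Y created in their place

# The final time-slices look identical, so a state-based function is indifferent...
assert state_utility(continuity[-1]) == state_utility(replacement[-1])
# ...but the histories differ, and the history-based function prefers continuity.
assert history_utility(continuity) > history_utility(replacement)
```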
These are kludge-y answers to special cases of a more general issue: we care about the preferences that existing people have for the future. Presumably X himself would prefer a future in which he keeps his 50 points of well-being over a future where he has 25 and Y pops into existence with 25 as well, whereas Y is not yet around to have a preference. I don't see what the peak well-being that X has ever experienced has to do with it. If we were considering whether to give X an additional 50 units of well-being (for a total of 100) or to bring into existence Y with 50 units of well-being, it seems to me that exactly the same considerations would come into play.
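As a rough illustration of that point, here is the same comparison as a toy calculation. The "only existing people get a vote" rule below is a deliberately crude stand-in for weighing existing people's preferences, and the names and numbers are just the ones from the paragraph above:

```python
# Two futures from the example above: X keeps 50, or X drops to 25 while
# Y is created with 25.

existing_people = {"X"}            # Y does not exist yet, so Y has no preference to count

future_a = {"X": 50}               # X keeps all 50 units of well-being
future_b = {"X": 25, "Y": 25}      # X drops to 25, Y pops into existence with 25

def hedonic_sum(future):
    """Total well-being, regardless of who holds it."""
    return sum(future.values())

def existing_preference_score(future):
    """Well-being accruing only to people who exist at decision time."""
    return sum(wb for person, wb in future.items() if person in existing_people)

# A plain sum over well-being cannot tell the two futures apart...
assert hedonic_sum(future_a) == hedonic_sum(future_b)
# ...but counting only already-existing people's stakes favours X keeping his 50.
assert existing_preference_score(future_a) > existing_preference_score(future_b)
```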