EDIT: Mestroyer was the first one to find a bug that breaks this idea. Only took a couple of hours, that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked:

> Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?
People raised valid points, such as murder generally having bad effects on society, but most people probably share the intuition that murdering someone is bad even if the victim was a hermit whose death was never discovered by anyone. It just occurred to me that the way to formalize this intuition would also solve more general problems with how the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions:
- If a person is painlessly murdered and a new (equally happy) person is instantly created in their place, this is worse than if there was a single person who lived for the whole time.
- If a living person X is painlessly murdered at time T, this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
- If someone is physically dead but not information-theoretically dead, and a close enough replica of them can be constructed and brought back, then bringing them back is better than creating an entirely new person.
Hmm. You need to avoid the problem where you might want to exploit the past happiness bonus infinitely. The past happiness bonus needs to scale at least linearly with the duration of the life lived, or else we'd want to create as many short happy lives as we can, so as to start as many infinite streams of past happiness bonus as we can.
Say our original plan was that for every person who's died, we would continue accruing utility at a rate equal to the average rate they caused us to accrue it over their life, forever. Making this adjustment means multiplying that average by their lifespan. Since average rate times lifespan is just the person's total lifetime utility, that is equivalent to every utility-generating event starting its own continuous stream of utility that runs forever, irrespective of the person who experienced it. But that, in turn, is equivalent to scaling everything in a utilitarianism that doesn't care about death by a factor of t, and taking the limit as t goes to infinity. Which is equivalent to ordinary utilitarianism, since a large scaling factor applied to everything at once changes nothing.
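To make the exploit concrete, here's a quick sketch (the function name and all numbers are mine, purely illustrative): under the flat forever-bonus rule, every death starts another eternal utility stream, so chopping the same span of years into many short lives wins by a huge margin over any long horizon.

```python
def flat_bonus_total(lifespans, rate, horizon):
    """Total utility accrued by time `horizon`, assuming lives are lived
    back-to-back, each at a constant happiness `rate`, and each dead person
    then keeps accruing `rate` per unit time forever after their death."""
    total = 0.0
    t = 0.0  # current time
    for L in lifespans:
        total += rate * L              # utility experienced while alive
        t += L
        total += rate * (horizon - t)  # posthumous stream until the horizon
    return total

one_long = flat_bonus_total([80], rate=1.0, horizon=1000)      # 1000.0
many_short = flat_bonus_total([1] * 80, rate=1.0, horizon=1000)  # 76840.0
# The same 80 years, split into 80 one-year lives, accrues ~77x as much,
# and the gap only grows as the horizon recedes.
```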
By the way, if one of these ideas works, we should call it WWDILEIEU (What We Do In Life Echoes In Eternity Utilitarianism). Or if that's too long, then Gladiator Utilitarianism.
Let me make an attempt of my own...
What if, after a person's death, we accumulate utility at a rate equal to the average rate at which they accumulated it over their lifetime, multiplied by the square of the duration of their lifetime?
Then we want happy lifetimes to be as long as possible, and we aren't afraid to create new people if their lives will be good. Although...
Perhaps if someone has already suffered enough, but their life is now going to become positive, and living extremely long is not a possibility for them, we'll want to kill them to keep their past suffering from accruing any more of the scaling factor.
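A quick sketch of the arithmetic behind the squared rule (function name and numbers are mine, illustrative only): since the posthumous stream dominates everything in the long run, it's enough to compare posthumous rates, and the lifespan-squared factor makes one long life beat any way of splitting the same years into shorter lives.

```python
def posthumous_rate(avg_rate, lifespan):
    """Rate of utility accrual after death under the proposed rule:
    (lifetime-average rate) * (lifespan squared)."""
    return avg_rate * lifespan ** 2

one_80_year_life = posthumous_rate(1.0, 80)        # 6400.0
two_40_year_lives = 2 * posthumous_rate(1.0, 40)   # 3200.0
eighty_1_year_lives = 80 * posthumous_rate(1.0, 1)  # 80.0
# One long life now dominates any split of the same 80 years, so the
# "create many short happy lives" exploit is gone.
```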
If there are people whose lifespans we can't change, all of them mortal, some with longer lifespans than others, and we have limited resources to distribute one-hour periods of increased happiness (much shorter than any of the lifespans), we will drastically favor those whose lifespans are longer.
If you have a limited supply of "lifespan juice", which when applied to someone increases their lifespan by a fixed time per liter, and a certain population already alive, each member of which has a fixed and equal quality of life, you'll want to give all the juice to one person. Dividing it up is as bad as "dividing a single person up": killing them partway through their otherwise-full lifespan and replacing them with a new person.
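The juice claim is just convexity of the lifespan-squared factor, sketched here with made-up numbers: concentrating the juice on one person beats an even split between two people by exactly juice^2 / 2.

```python
def total_posthumous_rate(lifespans, avg_rate=1.0):
    # Sum of each person's posthumous rate under the squared-lifespan rule.
    return sum(avg_rate * L ** 2 for L in lifespans)

base = [50.0, 50.0]  # two people, equal lifespans and quality of life
juice = 20.0         # total extra years of lifespan available

concentrated = total_posthumous_rate([base[0] + juice, base[1]])        # 70^2 + 50^2 = 7400
split = total_posthumous_rate([base[0] + juice / 2, base[1] + juice / 2])  # 60^2 + 60^2 = 7200
# Concentrating wins by juice**2 / 2 = 200, because L**2 is convex:
# extra years are worth more to whoever already has the most of them.
```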