EDIT: Mestroyer was the first one to find a bug that breaks this idea. Only took a couple of hours; that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked:

> Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?
People raised valid points, such as ones about murder having generally bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim was a hermit whose death was never discovered by anyone. It just occurred to me that the way to formalize this intuition would also solve some more general problems with how the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions:
- If a person is painlessly murdered and a new (equally happy) person is instantly created in their place, this is worse than if there had been a single person who lived for the whole time.
- If a living person X is painlessly murdered at time T, then this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
- If someone is physically dead but not information-theoretically dead, and a close enough replica of them can be constructed, then bringing them back is better than creating an entirely new person.
Under your solution, every life created implies infinite negative utility. Due to thermodynamics or whatever (big rip? some other cosmological disaster that happens before heat death?), we can't keep anyone alive forever. No matter how slowly the disutility accumulates, the infinite time after the end of all sentience makes it dominate everything else.
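To spell the divergence out (the notation here is mine, not anything from the original discussion): suppose each person who has ever lived contributes disutility at some constant rate ε > 0 for as long as they remain dead. Then the total disutility accrued after the end of all sentience is

$$\int_{t_{\text{end}}}^{\infty} \varepsilon \, dt = \infty \quad \text{for every } \varepsilon > 0,$$

which swamps any finite amount of utility accumulated while anyone was alive.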
If I understand you correctly, your solution is that the utility function actually changes every time someone is created, so before that person is created, you don't care about their death. One weird result of this is that if there will soon be a factory that rapidly creates and then painlessly destroys people, we don't object (and while the factory is running, we feel terrible about everything that has happened in it so far, but we still don't care to stop it). Or to put it in less weird terms, we won't object to spreading some kind of poison which affects newly developing zygotes, painlessly reducing their future lifespans.
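As a toy illustration of why the factory slips through, here is a minimal sketch. The event representation, function name, and numbers are all my own placeholders, not anything proposed in the post; the only point is that an evaluation rule which penalizes only the deaths of people who already exist at evaluation time assigns the factory plan positive value.

```python
def evaluate(plan, existing_people):
    """Score a plan using only the people who exist right now.

    `plan` is a list of events; each event either creates a person or
    kills a named person. Deaths of people who don't yet exist when we
    evaluate contribute nothing.
    """
    utility = 0.0
    for event in plan:
        if event["type"] == "death" and event["person"] in existing_people:
            utility -= 100.0   # placeholder penalty for killing an existing person
        elif event["type"] == "create":
            utility += 1.0     # placeholder value of a new (happy) life
    return utility

# The factory plan: create a new person, then painlessly destroy them.
factory_plan = [
    {"type": "create", "person": "new_person_1"},
    {"type": "death", "person": "new_person_1"},
]

# Evaluated before anyone in the factory exists, the planned death is invisible:
print(evaluate(factory_plan, existing_people={"alice", "bob"}))  # 1.0, not -99.0
```

Because the factory's future victims are not in `existing_people` when the plan is scored, their planned deaths contribute nothing, which matches the counterintuitive verdict described above.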
There's also an incentive for an agent with this system to self-modify so that its utility function stops changing over time.
Yes, at first glance this is in conflict with our current understanding of the universe. However, pursuing it is probably also one of the strategies with the best hope of finding a way out of that universe.