The devil, as always, seems to lie in the details, though some may see those details as a feature:
Assume I am a forward-looking agent who aims to maximize long-term, not short-term, utility.
What is the utility of a person who is currently preserved in suspended animation in the hope of future revival? Am I penalized as much for them as for a person who was, say, cremated?
Are we justified in making all currently living humans unhappy (without sacrificing their lives, of course) so that means of reviving dead people are developed faster and we can stop being penalized for their ended lifespans?
Wouldn't it be only prudent to stop creating new humans until we can ensure that their lifespans will reach the end of the universe, so as to avoid accruing negative points?
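To make these questions concrete, here is a minimal sketch of the kind of penalized utility function they seem to presuppose. It is my own construction, not anything formally specified in this post: purely for illustration, I assume each year lived contributes one point of utility and each ended lifespan costs a flat, hypothetical DEATH_PENALTY.

```python
# Hypothetical illustration: a utility function that rewards years lived
# and levies a flat penalty for every lifespan that has ended. Both the
# +1-per-year value and DEATH_PENALTY are assumptions, not the post's.

DEATH_PENALTY = 100  # assumed points lost per ended lifespan

def total_utility(people):
    """people: list of (years_lived, is_dead) pairs."""
    utility = 0
    for years_lived, is_dead in people:
        utility += years_lived        # value of the life actually lived
        if is_dead:
            utility -= DEATH_PENALTY  # penalty for an ended lifespan
    return utility

# Creating a person who lives 80 years and then dies nets 80 - 100 = -20,
# while creating no one nets 0 -- hence the worry that such an agent would
# rather stop creating new humans altogether.
print(total_utility([(80, True)]))  # -20
print(total_utility([]))            # 0
```

Under these assumed numbers, any life shorter than 100 years scores worse than no life at all, which is exactly the "negative points" worry raised above.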
EDIT: Mestroyer was the first one to find a bug that breaks this idea. It only took a couple of hours; that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked:
People raised valid points, such as murder having generally bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim was a hermit whose death was never discovered by anyone. It just occurred to me that the way to formalize this intuition would also solve more general problems with how the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions: