We can fix this by incorporating a history into the utility function.
I think this is a sensible modelling choice, as we value life precisely because of its continuity over time.
This does complicate matters a lot, though, because it is not clear how the history should be taken into account. At least no model as obvious as the one for UFUs suggests itself (except for the trivial one that ignores the history).
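To make the distinction concrete, here is a minimal toy sketch (my own illustration, not anything proposed in the thread): a standard utility function scores a single world-state, while a history-dependent one scores the whole sequence of states. The "continuity penalty" below is an assumption chosen purely for illustration.

```python
def state_utility(state):
    """Standard UFU-style utility: depends only on the current state."""
    return state["wellbeing"]

def history_utility(history):
    """Toy history-dependent utility: rewards continuity by penalizing
    drops in wellbeing between consecutive states (an illustrative
    assumption, not a worked-out moral theory)."""
    total = sum(s["wellbeing"] for s in history)
    penalty = sum(
        max(0, earlier["wellbeing"] - later["wellbeing"])
        for earlier, later in zip(history, history[1:])
    )
    return total - penalty

history = [{"wellbeing": 5}, {"wellbeing": 5}, {"wellbeing": 0}]
print(state_utility(history[-1]))  # scores only the final state: 0
print(history_utility(history))    # scores the trajectory: 10 - 5 = 5
```

The point of the sketch is only that the two functions have different type signatures: one takes a state, the other a sequence, and any number of history-sensitive terms could replace the penalty used here.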
Your examples sound plausible, but I suspect that trying to model the human intuitions here leads to very complex functions.
Is it just me, or is this somewhat contrary to the normal approach taken by some utilitarians? I mean, here we are tweaking the model, while elsewhere some apparent utilitarians seem to approach it from the other direction:
My intuition does not match the current model, so I am making an incorrect choice and need to change my intuition, become more moral, and act according to the preferred values.
Tweaking the model seems several orders of magnitude harder but, I would guess, also several orders of magnitude more rewarding. I mean, I would love to see a self-consistent moral framework that maps to my personal values, but I assume that is not an easy goal to achieve, unless we include egoism, I guess.
EDIT: Mestroyer was the first one to find a bug that breaks this idea. Only took a couple of hours, that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked:
People raised valid points, such as ones about murder having generally bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim was a hermit whose death no one ever found out about. It just occurred to me that the way to formalize this intuition would also solve more general problems with the way that the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions: