I've written earlier that utilitarianism* completely breaks down once you try to make it specific enough.
Just consider brain simulators, where a new state is computed from the current state and the current state is then overwritten. For the overwrite to be permissible under utilitarianism, the utility of the new state has to be greater than the utility of the current state. At that point you'd want to find a way to compute the maximum-utility state directly, without computing any of the intermediate steps. The trade-offs between the needs of different simulators of different ages also end up working wrongly.
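To make that loop concrete, here's a minimal Python sketch of the argument. Everything in it is a toy stand-in I've made up for illustration (the state space, the `step` dynamics, and the `utility` function), not anything from the original discussion.

```python
# Toy sketch of the simulator argument above. All names and dynamics
# here are hypothetical illustrations, not anyone's actual proposal.

def step(state):
    """Compute the successor brain state (stand-in for the simulator's update rule)."""
    return state + 1  # placeholder dynamics

def utility(state):
    """Stand-in utility function over brain states."""
    return -(state - 100) ** 2  # peaks at state 100

def run_simulator(state, max_steps=1000):
    """Advance only while each overwrite strictly increases utility."""
    for _ in range(max_steps):
        new_state = step(state)
        if utility(new_state) <= utility(state):
            break  # overwriting would not raise utility, so we must stop
        state = new_state  # the old state is destroyed by the overwrite
    return state

# The constraint turns the whole run into a hill climb on utility,
# so a pure utilitarian would rather skip the run entirely:
best_state = max(range(200), key=utility)  # jump straight to the optimum
```

The last line is the point: once every overwrite is required to raise utility, the simulation as a whole is just a slow search for the maximum-utility state, and computing that state directly dominates actually running the mind.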
EDIT: Mestroyer was the first to find a bug that breaks this idea. It only took a couple of hours; that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked:
People raised valid points, such as ones about murder having generally bad effects on society, but most people probably share the intuition that murdering someone is bad even if the victim was a hermit whose death was never discovered by anyone. It just occurred to me that the way to formalize this intuition would also solve more general problems with how the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions: