I think we should stop talking about utility functions.
In the context of ethics for humans, anyway. In practice I find utility functions to be, at best, an occasionally useful metaphor for discussions about ethics and, at worst, an idea that some people take so seriously that it actively makes them worse at reasoning about ethics. To the extent that we care about helping people become better at reasoning about ethics, it seems like we ought to be able to do better than this.
The funny part is that the failure mode I worry most about is one the Sequences already warn against: fake utility functions. The soft failure is people who think they know what their utility function is and go on to say bizarre things about what it implies that they, or perhaps all people, ought to do. The hard failure is people who think they know what their utility function is and then go on to do bizarre things. I hope the hard failure is not very common.
It seems worth reflecting on the fact that the point of the foundational LW material discussing utility functions was to make people better at reasoning about AI behavior and not about human behavior.
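For concreteness, the formal object that material usually has in mind is roughly the standard expected-utility picture; this is a textbook sketch, not a claim about what anyone's values actually look like. An ideal agent has a real-valued utility function $U$ over outcomes and chooses the action that maximizes its expected value,

$$a^* = \arg\max_a \sum_{o} P(o \mid a)\, U(o),$$

where $P(o \mid a)$ is the agent's probability of outcome $o$ given action $a$. The question in dispute is whether this picture tells us anything useful about human ethics.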
Talking about utility functions can be useful if one believes any of the following about ideal rationality, if only as a concrete example of what one means.
I guess when you say you don't "endorse utility functions" you mean that you don't endorse 1 or 2. Do you endorse any of the others, and if so, what would you use instead of utility functions to illustrate what you mean?
It's hard for me to know what 4 and 5 really mean since they are so abstract. I definitely don't endorse 1 or 2, and I'm pretty sure I don't endorse 4 either (integrating over my uncertainty about what you meant). I'm uncertain about 3; it seems plausible but far from clear. I'm certainly not a consequentialist and don't want to be, though maybe I would want to be in some utopian future. Again, I'm not really sure what you mean by 5; it seems almost tautological, since everything is a mathematical object.