I think we should stop talking about utility functions.
In the context of ethics for humans, anyway. In practice I find utility functions to be, at best, an occasionally useful metaphor for discussions about ethics and, at worst, an idea that some people take too seriously, one that actively makes them worse at reasoning about ethics. To the extent that we care about helping people become better at reasoning about ethics, it seems like we ought to be able to do better than this.
The funny part is that the failure mode I worry the most about is already an entrenched part of the Sequences: it's fake utility functions. The soft failure is people who think they know what their utility function is and say bizarre things about what this implies that they, or perhaps all people, ought to do. The hard failure is people who think they know what their utility function is and then do bizarre things. I hope the hard failure is not very common.
It seems worth reflecting on the fact that the point of the foundational LW material discussing utility functions was to make people better at reasoning about AI behavior and not about human behavior.
Firstly, I thought we were just appealing to consequentialism, not utilitarianism?
So I think I agree with you that believing you have a utility function if you in fact don't might suck, and that baseline humans in fact don't. I was trying to distinguish that from:
a) believing one ought to have a utility function, in which case I might seek to self-modify appropriately if it became possible; so something a bit stronger than the "pretending" you suggested.
b) believing one should strive to act as if one did, while knowing that I'll fall short because I don't.
The second you addressed by saying:
Did you have the same position re. Trying to Try?
I have one group of intuitions here that claim impossibility in a moral code is a feature, not a bug, because it helps you avoid deluding yourself that you've finished the job and are now perfect; and why would I expect the right action to be healthy anyway? But this seems like a line of thinking that is specific to coping with being an inconsistent human, in the absence of an engineering fix for that.