I think we should stop talking about utility functions.
In the context of ethics for humans, anyway. In practice I find utility functions to be, at best, an occasionally useful metaphor for discussions about ethics but, at worst, an idea that some people start taking too seriously and which actively makes them worse at reasoning about ethics. To the extent that we care about causing people to become better at reasoning about ethics, it seems like we ought to be able to do better than this.
The funny part is that the failure mode I worry the most about is already an entrenched part of the Sequences: it's fake utility functions. The soft failure is people who think they know what their utility function is and say bizarre things about what this implies that they, or perhaps all people, ought to do. The hard failure is people who think they know what their utility function is and then do bizarre things. I hope the hard failure is not very common.
It seems worth reflecting on the fact that the point of the foundational LW material discussing utility functions was to make people better at reasoning about AI behavior and not about human behavior.
For our universe, other models have been extremely successful. Therefore, the generality of wave functions is clearly not required. In the case of (human) preferences, it is unclear whether any other model suffices.
What you are saying seems to me a bit like: "Turing machines are difficult to use. Nobody would simulate a given X with a Turing machine in practice. Therefore Turing machines are generally useless." But on the level of practical application, I totally agree with you, so maybe there is no real disagreement about the use of utility functions here - at least I would never say something like "my utility function is ...", and I do not attempt to write a C compiler on a Turing machine.
I do not think that the statement "utility functions can model human preferences" has a formal meaning; however, if you say it is not true, I would be very interested in how you prefer to model human preferences.
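(To be explicit about what a formalization could even look like: the closest candidate I know of is the standard decision-theoretic representation sense, not anything this discussion commits to. A preference relation $\succeq$ over outcomes is "modeled" by a utility function $u$ exactly when

$$x \succeq y \iff u(x) \ge u(y),$$

and the von Neumann-Morgenstern axioms (completeness, transitivity, continuity, independence) guarantee that such a $u$ exists for preferences over lotteries. Whether actual human preferences satisfy those axioms is, of course, the substantive question.)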