Perplexed comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong
It was my impression that it was LW orthodoxy that at "reflective equilibrium", the values and preferences of rational humans can be represented by a utility function. That is:
... if we or our AI surrogate ever reach that point, then humans have a utility function that captures what we want morally and hedonistically. Or so I understand it.
Yes, our current god-shatter-derived inconsistent values cannot be described by a utility function, even as an abstraction. But it seems to me that most of the time what we are actually talking about is what our values ought to be rather than what they are. So I don't think a utility function is a ridiculous abstraction - particularly for folk who strive to be rational.
Actually, yes they can. Any computable agent's values can be represented by a utility function. That's one of the useful things about modelling with utility functions: they can represent any agent. For details, see here:
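A minimal sketch of the trivial construction behind this claim (my illustration, not from the linked post): given any deterministic agent, i.e. a function from observation histories to actions, one can always define a utility function that the agent maximizes, simply by scoring the action it would take as 1 and every other action as 0.

```python
def utility_from_policy(policy):
    """Build a (degenerate) utility function that the given agent maximizes.

    `policy` maps an observation history to the action the agent takes.
    The returned utility scores that action 1 and all others 0, so the
    agent is, by construction, an expected-utility maximizer for it.
    """
    def utility(history, action):
        return 1 if action == policy(history) else 0
    return utility

# Hypothetical agent with inconsistent-looking behaviour: it just picks
# whichever option it observed most recently.
def recency_agent(history):
    return history[-1]

u = utility_from_policy(recency_agent)

actions = ["tea", "coffee"]
history = ("coffee", "tea")
best = max(actions, key=lambda a: u(history, a))
assert best == recency_agent(history)  # agent maximizes u by construction
```

Note that the construction is degenerate: the utility function just restates the policy, so representability in this sense is cheap.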