Perplexed comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

Post author: multifoliaterose 14 June 2011 03:19AM




Comment author: Perplexed 16 June 2011 01:44:49AM 1 point

It was my impression that it was LW orthodoxy that at "reflective equilibrium", the values and preferences of rational humans can be represented by a utility function. That is:

if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted

... if we or our AI surrogate ever reach that point, then humans have a utility function that captures what we want morally and hedonistically. Or so I understand it.

Yes, our current god-shatter-derived inconsistent values cannot be described by a utility function, even as an abstraction. But it seems to me that most of the time what we are actually talking about is what our values ought to be rather than what they are. So I don't think that a utility function is a ridiculous abstraction - particularly for folk who strive to be rational.

Comment author: timtyler 16 June 2011 09:57:34PM 0 points

Yes, our current god-shatter-derived inconsistent values cannot be described by a utility function, even as an abstraction.

Actually, yes they can. Any computable agent's values can be represented by a utility function. That's one of the good things about modelling using utility functions - they can represent any agent. For details, see here:

Any agent can be expressed as an O-maximizer (as we show in Section 3.1)
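The construction behind results of this kind is simple: given any deterministic, computable agent, define a utility function that assigns 1 to interaction histories in which every action is the one the agent would have taken, and 0 otherwise; maximizing that utility function then reproduces the agent exactly. A minimal sketch in Python (all names here are illustrative, not from the linked paper):

```python
def arbitrary_agent(observations):
    """Any deterministic, computable policy; parity of observations is just an example."""
    return sum(observations) % 2

def utility(history):
    """1 if every action in the history matches arbitrary_agent's choice, else 0.

    history is a list of (observation, action) pairs.
    """
    obs_so_far = []
    for obs, act in history:
        obs_so_far.append(obs)
        if act != arbitrary_agent(obs_so_far):
            return 0
    return 1

def utility_maximizer(history, new_obs):
    """Pick the action (from {0, 1}) whose extended history scores highest."""
    return max([0, 1], key=lambda a: utility(history + [(new_obs, a)]))

# The maximizer's behaviour coincides with the original agent's on any
# observation stream, because only the matching action yields utility 1.
history, obs_seen = [], []
for obs in [1, 0, 1, 1, 0]:
    chosen = utility_maximizer(history, obs)
    obs_seen.append(obs)
    assert chosen == arbitrary_agent(obs_seen)
    history.append((obs, chosen))
```

Note that this is a degenerate utility function - it encodes the agent's behaviour rather than anything recognizable as preferences - which is arguably why the representation theorem, while true, does little to settle the dispute above.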