timtyler comments on The Domain of Your Utility Function - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The arguments in the posts themselves seem unimpressive to me in this context. If there are strong arguments that human actions cannot, in principle, be modelled well by using a utility function, perhaps they should be made explicit.
Agreed. Now, if it were possible to write a complete utility function for some person, it would be pretty clear that "utility" did not equal happiness, or anything simple like that.
I tend to think that the best candidate in most organisms is "expected fitness". It's probably reasonable to expect fairly strong correlations with reward systems in brains - if the organisms have brains.
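To make "modelling actions with a utility function" concrete, here is a minimal sketch of an agent that picks the action maximising expected utility, with a toy "expected fitness" scoring of outcomes. The action names, probabilities, and fitness values are all made up for illustration, not drawn from any actual model.

```python
def expected_utility(action, outcomes, utility):
    """Sum of utility(outcome) weighted by P(outcome | action)."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

def choose(actions, outcomes, utility):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical "expected fitness" proxy: outcomes map to fitness scores.
utility = {"food": 1.0, "nothing": 0.0, "injury": -2.0}.get

# Hypothetical conditional outcome distributions for two actions.
outcomes = {
    "forage": {"food": 0.6, "injury": 0.1, "nothing": 0.3},
    "rest":   {"nothing": 1.0},
}

print(choose(["forage", "rest"], outcomes, utility))  # forage (EU 0.4 vs 0.0)
```

Any agent whose choices can be summarised this way - a consistent ranking over probabilistic outcomes - counts as utility-based in the relevant sense, regardless of whether "utility" corresponds to happiness.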
Agents which can't be modelled by a utility-based framework are:
AFAIK, there's no good evidence that either kind of agent can actually exist. Counter-arguments are welcome, of course.