Manfred comments on The Human's Hidden Utility Function (Maybe) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (87)
It's a model of any computable agent. The point of a utility-based framework capable of modelling any agent is that it allows comparisons between agents of any type. Generality is sometimes a virtue. You can't easily compare the values of different creatures if you can't even model those values in the same framework.
Well, you can define your terms however you like - if you explain what you are doing. "Utility" and "maximizer" are ordinary English words, though.
It seems to be impossible to act as though you don't have a utility function (as was originally claimed), though. "Utility function" is a perfectly general concept which can be used to model any agent. There may be more concise methods of modelling some agents - that seems to be roughly the concept that you are looking for.
So: it would be possible to say that an agent acts in a manner such that utility maximisation is not the most parsimonious explanation of its behaviour.
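The construction behind this claim can be sketched concretely. The following is a minimal illustration (all names hypothetical, not from the original discussion): given any deterministic agent, you can always define a utility function that assigns 1 to whatever action the agent actually takes and 0 to everything else, so that utility maximisation reproduces the agent's behaviour exactly. The point about parsimony is visible in the code: the induced utility function simply restates the policy, so it "explains" the behaviour without compressing it.

```python
def agent_policy(state):
    """An arbitrary agent: here, one that picks the smallest available action."""
    return min(state["available_actions"])

def induced_utility(state, action):
    """Utility function that rationalises the agent: 1 for its actual choice, 0 otherwise."""
    return 1 if action == agent_policy(state) else 0

def maximize_utility(state, utility):
    """A generic utility maximiser over the available actions."""
    return max(state["available_actions"], key=lambda a: utility(state, a))

state = {"available_actions": [3, 1, 2]}
# The maximiser's choice coincides with the agent's, by construction.
assert maximize_utility(state, induced_utility) == agent_policy(state)
```

This shows why the framework is fully general, and also why that generality is cheap: for an agent whose behaviour has no compact utility description, the induced utility function is no shorter than a direct description of the policy itself.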
Sorry, replace "model" with "emulation you can use to predict the emulated thing."
I'm talking about looking inside someone's head and finding the right algorithms running. Rather than "what utility function fits their actions," I think the point here is "what's in their skull?"
The point made by the O.P. was:
It discussed actions - not brain states. My comments were made in that context.