# timtyler comments on The Human's Hidden Utility Function (Maybe) - Less Wrong

43 23 January 2012 07:39PM



Comment author: 25 January 2012 08:40:54PM, 2 points

> Although this "just so" utility function is valid, it doesn't peek inside the skull - it's not useful as a model of humans.

It's a model of any computable agent. The point of a utility-based framework capable of modelling any agent is that it allows comparisons between agents of any type. Generality is sometimes a virtue. You can't easily compare the values of different creatures if you can't even model those values in the same framework.
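The "just so" construction under discussion can be sketched directly: for any deterministic agent, assign utility 1 to whatever the agent actually does and 0 to everything else, and the agent trivially maximizes that utility. This is an illustrative sketch, not code from the thread; all names are made up.

```python
def make_trivial_utility(policy):
    """Given an agent's policy (observation -> action), return a
    "just so" utility function under which that policy is optimal."""
    def utility(observation, action):
        # Utility 1 for the action the agent actually takes, 0 otherwise.
        return 1.0 if action == policy(observation) else 0.0
    return utility

# A toy agent that always moves toward position 0.
def toy_policy(position):
    return "left" if position > 0 else "right"

u = make_trivial_utility(toy_policy)

# The agent's actual choice maximizes this utility at every observation...
assert max(["left", "right"], key=lambda a: u(3, a)) == "left"
assert u(3, "left") == 1.0 and u(3, "right") == 0.0
```

The construction is fully general, which is exactly why it says nothing about what is inside the agent's skull: it is fit to behaviour after the fact, not read off from the mechanism.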

> The reason I wouldn't call all agents "utility maximizers" is that I want utility maximizers to have a certain causal structure: if you change the probability balance of two options and leave everything else equal, you want the agent to respond accordingly.

Well, you can define your terms however you like - if you explain what you are doing. "Utility" and "maximizer" are ordinary English words, though.

It seems to be impossible to act as though you don't have a utility function (as was originally claimed), though. "Utility function" is a perfectly general concept that can be used to model any agent. There may be slightly more concise methods of modelling some agents; that seems to be roughly the concept you are looking for.

So: it would be possible to say that an agent acts in a manner such that utility maximisation is not the most parsimonious explanation of its behaviour.

Comment author: 26 January 2012 01:23:58AM, 2 points

> > Although this "just so" utility function is valid, it doesn't peek inside the skull - it's not useful as a model of humans.
>
> It's a model of any computable agent.

Sorry, replace "model" with "emulation you can use to predict the emulated thing."

> There may be slightly more concise methods of modelling some agents - that seems to be roughly the concept that you are looking for.

I'm talking about looking inside someone's head and finding the right algorithms running. Rather than "what utility function fits their actions," I think the point here is "what's in their skull?"

Comment author: 05 August 2012 12:30:12PM, -1 points

> I'm talking about looking inside someone's head and finding the right algorithms running. Rather than "what utility function fits their actions," I think the point here is "what's in their skull?"

The point made by the O.P. was:

> Suppose it turned out that humans violate the axioms of VNM rationality (and therefore don't act like they have utility functions)

That passage discussed actions, not brain states. My comments were made in that context.