timtyler comments on The Human's Hidden Utility Function (Maybe) - Less Wrong

44 Post author: lukeprog 23 January 2012 07:39PM


Comment author: Manfred 25 January 2012 03:39:54PM 0 points

The point is about how humans make decisions, not about what decisions humans make.

Comment author: timtyler 25 January 2012 06:30:35PM 0 points

The point is about how humans make decisions, not about what decisions humans make.

Er, what are you talking about? Did you not understand what was wrong with Luke's sentence? Or what are you trying to say?

Comment author: Manfred 25 January 2012 07:39:29PM 4 points

The way I know to assign a utility function to an arbitrary agent is to say "I assign what the agent does utility 1, and everything else utility less than one." Although this "just so" utility function is valid, it doesn't peek inside the skull - it's not useful as a model of humans.
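A minimal sketch of the "just so" construction described above (action names are hypothetical, chosen only for illustration): whatever the agent is observed to do gets utility 1, everything else gets less, so any behaviour whatsoever counts as "maximizing utility".

```python
def just_so_utility(observed_action):
    """Build a utility function that trivially 'rationalizes' an
    agent's observed choice: the chosen action gets utility 1,
    every other action gets less."""
    def utility(action):
        return 1.0 if action == observed_action else 0.0
    return utility

# Whatever the agent did, it now looks like a utility maximizer:
u = just_so_utility("procrastinate")
options = ["procrastinate", "work", "exercise"]
best = max(options, key=u)  # "procrastinate"
```

The construction is valid for any observed behaviour, which is exactly why it has no predictive content: it is fitted after the fact and says nothing about what is in the skull.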

What I meant by "how humans make decisions" is a causal model of human decision-making. The reason I wouldn't call all agents "utility maximizers" is because I want utility maximizers to have a certain causal structure - if you change the probability balance of two options and leave everything else equal, you want it to respond thus. As gwern recently reminded me by linking to that article on Causality, this sort of structure can be tested in experiments.
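The causal structure being asked for can be sketched as follows (the outcomes and payoffs are hypothetical): an expected-utility maximizer is one whose choice responds when you shift only the probability balance between two options, holding the payoffs fixed.

```python
def expected_utility_choice(lotteries, utility):
    """Pick the lottery with the highest expected utility.
    Each lottery is a list of (probability, outcome) pairs."""
    def eu(lottery):
        return sum(p * utility(o) for p, o in lottery)
    return max(lotteries, key=eu)

utility = {"win": 10, "lose": 0}.get

# Two gambles over the same outcomes; shifting only the
# probability balance changes which one is chosen:
a = [(0.9, "win"), (0.1, "lose")]
b = [(0.2, "win"), (0.8, "lose")]
choice1 = expected_utility_choice([a, b], utility)   # picks a

a_worse = [(0.1, "win"), (0.9, "lose")]
choice2 = expected_utility_choice([a_worse, b], utility)  # picks b
```

An agent that failed to respond this way in an experiment would fail the test for this causal structure, even though a "just so" utility function could still be fitted to its behaviour after the fact.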

Comment author: timtyler 25 January 2012 08:40:54PM 2 points

Although this "just so" utility function is valid, it doesn't peek inside the skull - it's not useful as a model of humans.

It's a model of any computable agent. The point of a utility-based framework capable of modelling any agent is that it allows comparisons between agents of any type. Generality is sometimes a virtue. You can't easily compare the values of different creatures if you can't even model those values in the same framework.

The reason I wouldn't call all agents "utility maximizers" is because I want utility maximizers to have a certain causal structure - if you change the probability balance of two options and leave everything else equal, you want it to respond thus.

Well, you can define your terms however you like - if you explain what you are doing. "Utility" and "maximizer" are ordinary English words, though.

It seems to be impossible, though, to act as though you don't have a utility function (as was originally claimed). "Utility function" is a perfectly general concept which can be used to model any agent. There may be slightly more concise methods of modelling some agents - that seems to be roughly the concept that you are looking for.

So: it would be possible to say that an agent acts in a manner such that utility maximisation is not the most parsimonious explanation of its behaviour.

Comment author: Manfred 26 January 2012 01:23:58AM 2 points

Although this "just so" utility function is valid, it doesn't peek inside the skull - it's not useful as a model of humans.

It's a model of any computable agent.

Sorry, replace "model" with "emulation you can use to predict the emulated thing."

There may be slightly more concise methods of modelling some agents - that seems to be roughly the concept that you are looking for.

I'm talking about looking inside someone's head and finding the right algorithms running. Rather than "what utility function fits their actions," I think the point here is "what's in their skull?"

Comment author: timtyler 05 August 2012 12:30:12PM -1 points

I'm talking about looking inside someone's head and finding the right algorithms running. Rather than "what utility function fits their actions," I think the point here is "what's in their skull?"

The point made by the O.P. was:

Suppose it turned out that humans violate the axioms of VNM rationality (and therefore don't act like they have utility functions)

It discussed actions - not brain states. My comments were made in that context.