timtyler comments on Utility is unintuitive - Less Wrong
I used to think I understood this stuff, but now jsteinhardt has me confused. Could you, or someone else familiar with economic orthodoxy, please tell me whether the following is a correct summary of the official position?
A lottery ticket offers one chance in a thousand to win a prize of $1,000,000, so the ticket has an expected value of 0.001 × $1,000,000 = $1,000. If you turn down the chance to purchase such a ticket for $900, you are said to be money risk averse.
A rational person can be money risk averse.
The "explanation" for this risk aversion in a rational person is that the person judges that money has decreasing marginal utility with wealth. That is, the person (rationally) judges that $1,000,000 is not 1000 times as good (useful) as $1000. An extra dollar means less to a rich man, than to a poor man.
This shifting relationship between money and utility can be expressed by a "utility function". For example, it may be the case for this particular rational individual that $1 corresponds to one util, while $1000 corresponds to only 800 utils and $1,000,000 to only 640,000 utils.
And the rationality of not buying the lottery ticket can be seen by considering the transaction in utility units. The ticket costs $900, which is somewhat less than 800 utils (since $1000 is worth 800 utils), but the expected utility of the ticket is only 0.001 × 640,000 = 640 utils. A rational, expected-utility-maximizing agent will not play this lottery.
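To make the arithmetic concrete, here is a quick Python sketch. The specific curve is my own assumption: a power law u(x) = x^a with a = ln(800)/ln(1000) happens to pass exactly through all three points above ($1 is 1 util, $1000 is 800 utils, $1,000,000 is 640,000 utils), though any sufficiently concave function tells the same story.

```python
import math

# One concave utility function consistent with the three data points
# above (my assumption; it is not the only such function): u(x) = x**a,
# with a chosen so that u(1000) = 800. It then follows automatically
# that u(1_000_000) = u(1000**2) = 800**2 = 640_000, matching the example.
a = math.log(800) / math.log(1000)   # ~0.968 < 1, so u is concave

def utils(dollars):
    return dollars ** a

p, prize, price = 0.001, 1_000_000, 900

ev_dollars = p * prize          # 1000.0: expected value exceeds the $900 price
eu_ticket = p * utils(prize)    # ~640 utils
price_utils = utils(price)      # ~723 utils

print(ev_dollars, eu_ticket, price_utils)
# Buying sacrifices ~83 expected utils, so this agent declines even
# though the ticket has positive expected dollar value.
```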
ETA: One thing I forgot to insert at this point: how do we construct a utility function for an agent? That is, how do we know that $1,000,000 is worth only 640,000 utils to him? We do so by offering a lottery ticket paying $1,000,000 and then adjusting the odds until he is just willing to pay $1 (equal to 1 util, by definition) for the ticket. In this case, he buys the ticket when the odds improve to 1 in 640,000.
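That elicitation loop is easy to simulate. A minimal sketch, assuming the same power-law agent as above and observing only its buy/decline decisions:

```python
import math

a = math.log(800) / math.log(1000)

def utils(dollars):              # the same assumed power-law agent as above
    return dollars ** a

def agent_buys(p, prize, price):
    # The only thing we observe: buy iff expected utils >= price in utils.
    return p * utils(prize) >= utils(price)

def elicit_prize_utils(buys, prize=1_000_000, price=1.0):
    """Bisect on the winning probability p until a $`price` ticket sits
    exactly at the buy/decline boundary. At indifference,
    p * U(prize) = U(price) = 1 util (since $1 = 1 util by definition),
    so U(prize) = 1 / p."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        p = (lo + hi) / 2
        if buys(p, prize, price):
            hi = p               # agent buys: the odds can get worse
        else:
            lo = p               # agent declines: the odds must improve
    return 1.0 / hi

print(elicit_prize_utils(agent_buys))   # ~640000: he buys at 1 in 640000
```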
Now imagine a lottery paying 1,000,000 utils, again with 0.001 probability of winning, for an expected value of 1,000 utils. The ticket costs 900 utils. An agent who turns down the chance to buy this ticket could be called utility risk averse.
An agent who is utility risk averse is irrational. By definition. Money risk aversion can be rational, but that is explained by diminishing utility of money. There is no such thing as diminishing utility of utility.
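In code the contrast is stark: once the prizes are denominated in utils, there is no curve left to apply, and the decision reduces to comparing an expectation with a cost.

```python
# Prizes here are already denominated in utils, so there is no further
# curve to apply: an expected-utility maximizer just compares expectations.
p, prize_utils, cost_utils = 0.001, 1_000_000, 900

expected_utils = p * prize_utils     # 1000.0
print(expected_utils > cost_utils)   # True: the agent buys; declining
                                     # would be irrational by definition
```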
That is my understanding of the orthodox position. Now, the question that jsteinhardt asks is whether it is not time to challenge that orthodoxy. In effect, he is asking us to change our definition of "rational". (It is obvious, of course, that humans are not always "rational" by this definition - it is even true that they have biases which make them systematically deviate from rationality, for reasons which seem reasonable to them. But this, by itself, is not a reason to change our definition of "rationality".)
Recall that the way we rationalized away money risk aversion was to claim that money units become less useful as our wealth increases. Is there some rationalization which shows that utility units become less pleasing as happiness increases? Strikes me as a question worth looking into.
That's the issue of the usefulness of the Axiom of Independence - I believe.
You can drop that - though you are still usually left with expected utility maximisation.
Then you become a money pump.
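For anyone who hasn't seen the money-pump argument: the textbook version runs on cyclic preferences (the pump against independence violations is analogous, but goes through compound lotteries). A toy sketch, with the preference cycle and per-trade fee as purely illustrative assumptions:

```python
# Textbook money pump against an agent with cyclic preferences
# A > B > C > A. (The pump against independence violations is analogous
# but runs through compound lotteries; this is the simplest variant.)
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (preferred, held)

def will_trade(held, offered):
    return (offered, held) in prefers   # pays a small fee to "upgrade"

held, wealth, fee = "B", 100.0, 1.0
for offered in ["A", "C", "B", "A", "C", "B"]:   # cycle the offers
    if will_trade(held, offered):
        held, wealth = offered, wealth - fee

print(held, wealth)   # B 94.0: back where he started, $6 poorer
```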
It is the most commonly dropped axiom. Dropping it has the advantage of allowing you to use the framework to model a wider range of intelligent agents - increasing the scope of the model.
What is the issue? Where, in my account, does AoI come into play? And why do you suggest that AoI only sometimes makes a difference?
My comments about independence were triggered by:
The independence axiom says "no" - I think - though it is "just" an axiom.
As for the last question: if you drop axioms, you are usually still left with expected utility maximisation - though it depends on exactly how much you drop at once. Maybe it will just be utility maximisation that is left, for example.