taw comments on Measures, Risk, Death, and War - Less Wrong

Post author: Vaniver 20 December 2011 11:37PM


Comment author: taw 21 December 2011 12:32:13AM 1 point

Here's a fundamental impossibility result for modeling risk aversion in the expected utility framework.

Utility functions are the wrong abstraction, and you'll be better off if you abandon them.

Comment author: Vaniver 21 December 2011 02:53:29AM 3 points

Here's a fundamental impossibility result for modeling risk aversion in the expected utility framework.

I'm not sure we're reading the same paper. Rabin argues that people are (and should be) roughly risk-neutral when stakes are small, since massively concave utility functions get ridiculous, which is exactly what I argue:

Local risk neutrality, though, is the norm: zoom in on any utility function close enough and it'll be roughly flat.
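The quoted claim can be checked numerically. Below is a minimal sketch; log utility, the $10,000 wealth level, and the stakes are my own illustrative choices, not from the comment. The risk premium the agent demands for a zero-expected-value gamble shrinks toward zero as the stakes shrink, i.e. the function is locally flat.

```python
import math

def certainty_equivalent(wealth, outcomes, probs, u=math.log, u_inv=math.exp):
    """Sure final wealth whose utility equals the gamble's expected utility."""
    eu = sum(p * u(wealth + x) for p, x in zip(probs, outcomes))
    return u_inv(eu)

# Risk premium (expected value minus certainty equivalent) for a 50/50
# gain/lose gamble at $10,000 wealth, under log utility: it is nearly
# zero at small stakes and grows with the stake.
for stake in (10, 100, 1000):
    premium = 10_000 - certainty_equivalent(10_000, (stake, -stake), (0.5, 0.5))
    print(f"+/-${stake}: premium ${premium:.4f}")
```

At a $10 stake the premium is a fraction of a cent, which is what "roughly flat when zoomed in" means in practice.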

The meat of the paper also rests on a very strong assumption: that the person rejects the gamble at any wealth level. He discusses a narrower case (what I would call my "lunch" case) where you know they reject the gamble at anything below a certain wealth, but nothing about their risk attitude above that point. In my example, that would be choosing not to gamble (for a small yield) if it puts you under $3. For his example, the threshold is rather high: $350k. He calculates that someone who turns down a gamble that replaces a certain wealth of $340,000 with a 50/50 chance of $339,900 or $340,105 is insanely cautious. I agree: I don't expect a sane person to behave that way. But that's not an indictment of expected utility theory; it's an indictment of the parameters chosen.
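The point about parameters can be made concrete. A minimal sketch, assuming ordinary log utility (my own illustrative choice, not Rabin's): at a wealth of $340,000, a log-utility agent accepts the 50/50 lose-$100/gain-$105 gamble, so rejecting it requires far more extreme curvature than any commonly used utility function supplies.

```python
import math

wealth = 340_000
# Expected utility of the 50/50 lose-$100 / gain-$105 gamble versus
# standing pat, under log utility (an illustrative parameter choice).
eu_gamble = 0.5 * math.log(wealth - 100) + 0.5 * math.log(wealth + 105)
eu_stay = math.log(wealth)
print(eu_gamble > eu_stay)  # True: this concavity accepts the gamble
```

At this wealth the gamble's $2.50 edge dwarfs the tiny risk premium log utility charges, so any agent who rejects it is being fit with implausible parameters, not refuting the framework.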

When I help someone pick out a function to model their preferences, I don't elicit it the way he does. We pick some gambles that are easy to wrap their head around, find indifference values, fit them to a function like log or exponential, and then sanity-check the output. If a fitted function gave us values like the ones he's getting, I would suspect the indifference values were miscalculated, and we would play around some more, possibly adding thresholds and making it a piecewise function. It's not so much a "fundamental impossibility result" as it is "if things look like this, you're not doing anything useful."
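The elicit-fit-sanity-check loop described above might look like the following sketch. Every concrete detail here is hypothetical: exponential (CARA) utility as the fitted family, a 50/50 $0-or-$100 gamble, and an elicited indifference value of $45.

```python
import math

def cara_ce(a, outcomes, probs):
    """Certainty equivalent under exponential (CARA) utility u(x) = -exp(-a*x)."""
    if a == 0:
        return sum(p * x for p, x in zip(probs, outcomes))
    eu = sum(p * math.exp(-a * x) for p, x in zip(probs, outcomes))
    return -math.log(eu) / a

def fit_cara(outcomes, probs, elicited_ce, lo=1e-9, hi=1.0):
    """Bisect for the risk-aversion coefficient matching one indifference value."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if cara_ce(mid, outcomes, probs) > elicited_ce:
            lo = mid  # CE still too high: not risk-averse enough, raise a
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical elicitation: indifferent between a 50/50 $0-or-$100 gamble
# and $45 for sure (mild risk aversion, since the expected value is $50).
a = fit_cara((0, 100), (0.5, 0.5), 45.0)
# Sanity check: what does the fitted function predict for a scaled-up gamble?
print(round(cara_ce(a, (0, 1000), (0.5, 0.5)), 2))
```

If the scaled-up prediction looks implausibly cautious, that is the cue to go back, re-elicit the indifference values, and possibly switch to a piecewise form, which is the "play around some more" step in the comment.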

(There's a separate, descriptive question, "is an EU calculation with a consistent utility function why people refuse modest gambles?", which I think is secondary. They might refuse a gamble because they're bad at math, or because they have a massive case of status quo bias, and so on. I don't think we should care much about predicting that sort of behavior compared to prescribing carefully planned behavior.)

Utility functions are the wrong abstraction, and you'll be better off if you abandon them.

I'm not sure what you mean here, so I'll state my reaction to some possible meanings. I affirm that utility functions are a calculation method useful for capturing risk attitudes but shouldn't be given philosophical importance. I deny that utility functions cannot be a useful calculation method.