dlthomas comments on Morality is not about willpower - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't have the time to examine the paper in depth just now (I certainly will later; it looks interesting), but it appears our proximate disagreement is over what you meant by "risk aversion" - I took it to mean the broader "demanding a premium to accept risk," whereas you seem to have meant the narrower "the magnitudes of risk aversion we actually observe in people for various scenarios." Assuming the paper supports you (and I see no reason to think otherwise), my original objection does not apply to what you were saying.
I am still not sure I agree with you, however. It has been shown that, hedonically, people react much more strongly to loss than to gain. If taking a loss feels worse than making a gain feels good, then I might be maximizing my expected utility by avoiding situations where I have a memory of taking a loss over and above what might be anticipated looking only at a "dollars-to-utility" approximation of my actual utility function.
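The asymmetry described above can be sketched numerically. This is a minimal illustration, not from the thread: it assumes a prospect-theory-style value function in which losses weigh roughly twice as much as equal-sized gains (the loss-aversion coefficient `lam` is a stylized assumption).

```python
def value(x, lam=2.0):
    """Hedonic value of a gain/loss x relative to the status quo.
    Losses are scaled by lam, reflecting loss aversion (assumed lam = 2)."""
    return x if x >= 0 else lam * x

# A 50:50 bet of winning $110 vs. losing $100 has positive expected dollars...
expected_dollars = 0.5 * 110 + 0.5 * (-100)            # +$5
# ...but negative expected hedonic value once losses loom twice as large:
expected_value = 0.5 * value(110) + 0.5 * value(-100)  # 55 - 100 = -45
```

So an agent maximizing remembered hedonic value, rather than dollars, can rationally decline bets that look favorable under a naive dollars-to-utility mapping.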
The only reason the expected-utility framework seems to "work" for single two-outcome bets is that it has more parameters to tweak than data points we want to fit, and we immediately throw away the utility curve except for three points: no bet, bet lost, bet won.
If you try to reuse that utility curve for any other bet, or for a bet with more than two outcomes, you'll start seeing the same person accepting infinite, near-zero, or even negative risk premia.
Could you provide a simple (or at least, near minimally complex) example?
The examples in the paper are very simple (though explaining them with math, and proving why expected utility fails so miserably, takes up much of the paper).
You are being frustrating.
Your citations here are talking about trying to model human behavior by fitting concave net-worth-to-utility functions to realistic numbers. The bit you quoted was from a passage wherein I was ceding this precise point.
I was explaining that I had previously thought you to be making a broader theoretical point, about any sort of risk premia - not just those that actually model real human behavior. Your quoting of that passage led me to believe that was the case, but your response here leads me to wonder whether there is still confusion.
Do you mean this to apply to any theoretical dollars-to-utility function, even those that do not model people well?
If so, can you please give an example of infinite or negative risk premia for an agent (an AI, say) whose dollars-to-utility function is U(x) = x / log(x + 10)?
This utility function has near-zero risk aversion in the relevant range.
Assuming our AI has a wealth level of $10,000, it will happily take a 50:50 bet of gaining $100.10 vs. losing $100.00.
It also blows up if there's any risk of wealth falling below -$10 (where log(x + 10) is no longer defined).
Yes, it is weak risk aversion - but is it not still risk aversion, as I had initially meant (and initially thought you to mean)?
Yes, of course. I'd considered this irrelevant for reasons I can't quite recall, but it is trivially fixed; is there a problem with U(x) = x/log(x+10)?