lmm comments on Open thread, Feb. 9 - Feb. 15, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I found this exercise surprising and useful. Suppose we accept the standard model that utility is logarithmic in money, and suppose we're paid $100,000 a year, somewhat arbitrarily using that as the baseline wealth for our utility calculations. We go out for a meal with 10 people where each spends $20 on food. At the end of the meal, we can either all put in $20, or we can randomize and have one person pay the full $200. All other things being equal, how much should we be prepared to pay to avoid the randomization?
Take a guess at the rough order of magnitude. Then look at this short Python program until you're happy that it's calculating the amount that you were trying to estimate, and then run it to see how accurate your estimate was.
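The linked program isn't reproduced here, but a minimal sketch of the calculation it presumably performs: with log utility and wealth $100,000, find the sure extra amount you'd pay, on top of your fair $20 share, to avoid a 1-in-10 chance of paying $200. (Variable names are my own, not from the original program.)

```python
import math

w = 100_000.0   # annual income, used as the wealth baseline
share = 20.0    # cost per person when the bill is split evenly
full = 200.0    # cost if you lose the randomization
n = 10          # number of diners

# Expected log-utility under randomization: 9 in 10 pay nothing,
# 1 in 10 pays the full $200.
eu_random = (n - 1) / n * math.log(w) + 1 / n * math.log(w - full)

# Certainty-equivalent cost: the sure payment with the same expected utility.
ce_cost = w - math.exp(eu_random)

# Risk premium: what you'd pay beyond your $20 share to avoid randomizing.
premium = ce_cost - share
print(f"certainty-equivalent cost: ${ce_cost:.4f}")
print(f"risk premium: ${premium:.4f}")
```

Running this gives a risk premium of roughly two cents, which is likely smaller than most people's initial guess.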
Incidentally I discovered this while working out the (trivial) formula for an approximation to this following conversations with Paul Christiano and Benja Fallenstein.
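The comment doesn't state the approximation itself; my guess (not necessarily the author's formula) is the standard Arrow-Pratt result that for log utility the risk premium is roughly Var(cost) / (2 × wealth), which matches the exact answer here to within a rounding error:

```python
w = 100_000.0
costs = [0.0] * 9 + [200.0]   # nine diners pay nothing, one pays $200

mean = sum(costs) / len(costs)                          # expected cost: $20
var = sum((c - mean) ** 2 for c in costs) / len(costs)  # variance: 3600

# Arrow-Pratt approximation to the risk premium for log utility.
print(var / (2 * w))   # about $0.018, i.e. roughly two cents
```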
EDITED TO ADD: If you liked this, check out Expectorant by Bethany Soule of Beeminder fame.
This seems to disregard time preferences. Losing $200 now hurts a lot more than the pleasure of having an extra $200 spread over the course of the following year.
If I set w to "amount currently in my checking account that I consider available for random impulse buys" - say $400 - then I get an answer that's almost exactly in line with my intuition.