Lumifer comments on Median utility rather than mean? - Less Wrong Discussion

6 Post author: Stuart_Armstrong 08 September 2015 04:35PM


Comment author: Lumifer 08 September 2015 05:17:23PM 6 points

Then E(u|π_m) is within one standard deviation (using d_mu) of the median value of d_mu.

As Wikipedia says, "If the distribution has finite variance". That's not necessarily a good assumption.

Consider a policy with three possible outcomes: one pony; two ponies; the universe is converted to paperclips. What's the median outcome? One pony. Don't you want a pony?
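A toy version of this example, with made-up utility numbers (one point per pony, a huge negative number for the paperclipped universe, all outcomes equally likely):

```python
import numpy as np

# Hypothetical utilities, chosen only for illustration:
# one pony = +1, two ponies = +2, paperclipped universe = -1e9.
outcomes = np.array([1.0, 2.0, -1e9])

print(np.median(outcomes))  # 1.0 -> "one pony": the median ignores the catastrophe
print(np.mean(outcomes))    # ~ -3.33e8: the mean is dominated by it
```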

The median is a robust estimator, meaning that it's harder for outliers to screw you up. The price for that, though, is indifference to the outliers, which I am not sure is advisable in the utility context.

Comment author: V_V 09 September 2015 09:08:14AM -1 points

As Wikipedia says, "If the distribution has finite variance". That's not necessarily a good assumption.

In fact, "Pascal's mugging" scenarios tend to pop up when you allow for utility distributions with infinite variance.

Comment author: Lumifer 09 September 2015 04:30:09PM 1 point

For Pascal's Muggings I don't think you care that much about variance -- what you want is a gargantuan skew.

Comment author: Stuart_Armstrong 08 September 2015 05:21:42PM -1 points

Indeed. But the argument about convergence when you get more and more options still applies.

Comment author: Lumifer 08 September 2015 05:36:10PM 2 points

Still -- only if your true underlying distribution has finite variance. Check some plots of, say, a Cauchy distribution -- it doesn't take much heavy-tailedness to end up with no defined variance (or mean, for that matter).

Not everything converges to a Gaussian.
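A quick numerical sketch of this point (my own, not from the thread; seed and sample sizes are arbitrary): the sample mean of Cauchy draws never settles down, because the Cauchy distribution has no mean, while the sample median converges to the true median of 0.

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for reproducibility
samples = rng.standard_cauchy(100_000)  # standard Cauchy: no mean, no variance

for n in (100, 10_000, 100_000):
    print(f"n={n}: mean={np.mean(samples[:n]):.2f}, "
          f"median={np.median(samples[:n]):.3f}")
```

The running means lurch whenever a single huge outlier lands in the sample; the running medians hug 0.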

Comment author: Stuart_Armstrong 09 September 2015 09:19:41AM 0 points

You did notice that I mentioned the Cauchy distribution by name and link in the text, right?

And the Cauchy distribution is the worst possible example for defending the use of the mean -- because it doesn't have one. Not even an infinite mean, a la the St Petersburg paradox -- just no mean at all. But it does have a median, placed exactly at the natural middle of the distribution.

Your argument works somewhat better with one of the stable distributions with an alpha between 1 and 2. But even there, you need a non-zero beta or else median=mean! The standard deviation is an upper bound on the difference, not necessarily a sharp one.

It would be interesting to analyse the difference between mean and median for stable distributions with non-zero beta; I'll get round to that some day. My best guess is that you could use some fractional moment to bound the difference, instead of (the square root of) the variance.

EDIT: this is indeed the case; you can use Jensen's inequality to show that the q-th root of the q-th absolute central moment, for 1<q<2, can be substituted as a bound on the difference between mean and median. For q<alpha, this moment should be finite.
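A numerical sanity check of that bound (my own sketch; the Pareto distribution and the parameters alpha=1.5, q=1.2 are assumed for illustration, not taken from the thread). A Pareto with tail index 1.5 has infinite variance but a finite mean, and for q < alpha the q-th absolute central moment is finite:

```python
import numpy as np

# Checking |mean - median| <= (E|X - mean|^q)^(1/q) for a Pareto(alpha), x_min = 1.
alpha, q = 1.5, 1.2           # 1 < q < alpha, so the q-th moment is finite
mean = alpha / (alpha - 1)    # exact mean = 3
median = 2 ** (1 / alpha)     # exact median ~ 1.587

# Integrate E|X - mean|^q numerically; pdf = alpha * x^-(alpha+1) on [1, inf).
x = np.logspace(0, 8, 400_000)            # truncate the tail at 1e8 (tiny error)
y = np.abs(x - mean) ** q * alpha * x ** -(alpha + 1)
frac_moment = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2   # trapezoid rule
bound = frac_moment ** (1 / q)

print(abs(mean - median) <= bound)  # True: the fractional-moment bound holds
```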

Comment author: Lumifer 09 September 2015 04:35:29PM 1 point

I only brought up Cauchy to show that infinite-variance distributions don't have to be weird and funky. Show a plot of a Cauchy pdf to someone who has had, like, one undergrad stats course and she'll say something like "Yes, that's a bell curve" X-/

Comment author: Stuart_Armstrong 09 September 2015 06:46:22PM 0 points

Actually, there's no need for higher central moments. The mean absolute deviation around the mean (which I would have called the first absolute central moment) bounds the difference between mean and median, and is sharper than the standard deviation.
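An illustrative check of the chain |mean - median| <= MAD <= standard deviation, using the Exp(1) distribution (my choice, not from the thread), where all three quantities have exact closed forms:

```python
import numpy as np

# Exact values for an exponential distribution with rate 1:
mean = 1.0
median = np.log(2)   # ~ 0.693
mad = 2 / np.e       # mean absolute deviation E|X - mean| = 2/e ~ 0.736
std = 1.0

# The MAD bounds |mean - median| and is sharper than the standard deviation.
print(abs(mean - median) <= mad <= std)  # True
```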