
Lumifer comments on Median utility rather than mean? - Less Wrong Discussion

6 Post author: Stuart_Armstrong 08 September 2015 04:35PM


Comments (86)

You are viewing a single comment's thread.

Comment author: Lumifer 09 September 2015 03:44:38AM *  1 point [-]

I came up with an algorithm that compromises between them.

I am not sure of the point. If you can "sample ... from your probability distribution" then you fully know your probability distribution including all of its statistics -- mean, median, etc. And then you proceed to generate some sample estimates which just add noise but, as far as I can see, do nothing else useful.

If you want something more robust than the plain old mean, check out M-estimators which are quite flexible.
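To make the M-estimator suggestion concrete, here is a minimal sketch of a Huber location estimator via iteratively reweighted means. The data and the tuning constant `k = 1.345` are illustrative choices, not any particular library's implementation:

```python
import statistics

def huber_location(xs, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator of location via iteratively reweighted means.

    Points within k * scale of the current estimate keep full weight;
    points further out get down-weighted. Small k behaves like the
    median, large k like the mean -- a tunable tradeoff.
    """
    mu = statistics.median(xs)
    # Robust scale: median absolute deviation (the 1.4826 factor makes
    # it consistent with the standard deviation under normality).
    mad = statistics.median(abs(x - mu) for x in xs) * 1.4826 or 1.0
    for _ in range(max_iter):
        w = [1.0 if abs(x - mu) <= k * mad else k * mad / abs(x - mu)
             for x in xs]
        new_mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

data = [1, 2, 3, 4, 5, 100]  # one gross outlier
# The mean is ~19.2 and the median 3.5; the Huber estimate lands
# near the median, largely ignoring the outlier.
print(huber_location(data))
```

Tightening `k` toward 0 or loosening it toward infinity is exactly the kind of knob the "tradeoff" discussion below is about.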

Comment author: evand 09 September 2015 02:37:58PM 0 points [-]

If you can "sample ... from your probability distribution" then you fully know your probability distribution

That's not true. (Though it might well be in all practical cases.) In particular, there are good algorithms for sampling from unknown or uncomputable probability distributions. Of course, any method that lets you sample from it lets you sample the parameters as well, but that's exactly the process the parent comment is suggesting.
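One hedged illustration of that point: rejection sampling draws from a density you can only evaluate up to an unknown constant, without ever computing its normalizer, mean, or median. The target density here is made up for the example:

```python
import math
import random

def rejection_sample(unnorm_pdf, lo, hi, bound, rng=random):
    """Draw one sample from an unnormalized density on [lo, hi].

    We never need the normalizing constant or any moments -- only the
    ability to evaluate unnorm_pdf pointwise. `bound` must be at least
    the maximum of unnorm_pdf on [lo, hi].
    """
    while True:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, bound) < unnorm_pdf(x):
            return x

# An asymmetric, unnormalized density whose statistics we never derive.
f = lambda x: math.exp(-x) * math.sin(x) ** 2
samples = [rejection_sample(f, 0.0, 10.0, 1.0) for _ in range(1000)]
```

Once you can sample, estimating the mean or median is just arithmetic on the samples, which is the process the parent comment describes.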

Comment author: Lumifer 09 September 2015 05:02:09PM 0 points [-]

A fair point, though I don't think it makes any difference in the context. And I'm not sure the utility function is amenable to MCMC sampling...

Comment author: evand 10 September 2015 03:30:35AM 0 points [-]

I basically agree. However...

It might be more amenable to MCMC sampling than you think. MCMC basically is a series of operations of the form "make a small change and compare the result to the status quo", which now that I phrase it that way sounds a lot like human ethical reasoning. (Maybe the real problem with philosophy is that we don't consider enough hypothetical cases? I kid... mostly...)

In practice, the symmetry constraint isn't as nasty as it looks. For example, you can do MH to sample a random node from a graph, knowing only local topology (you need some connectivity constraints, and a long enough walk, to get good diffusion properties). Basically, I posit that the hard part is coming up with a sane definition for "nearby possible world" (and that the symmetry constraint and other parts are pretty easy after that).
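The graph example can be sketched in a few lines. This is a generic Metropolis-Hastings walk targeting the uniform distribution over nodes, not any specific published code; the adjacency dict is illustrative:

```python
import random

def mh_uniform_node(neighbors, start, steps=10000, rng=random):
    """MH walk whose stationary distribution is uniform over nodes,
    using only local topology (each node's own neighbor list).

    Proposal: a uniform random neighbor. Accept with probability
    min(1, deg(current) / deg(proposal)), which corrects the plain
    random walk's bias toward high-degree nodes.
    """
    current = start
    for _ in range(steps):
        proposal = rng.choice(neighbors[current])
        accept = min(1.0, len(neighbors[current]) / len(neighbors[proposal]))
        if rng.random() < accept:
            current = proposal
    return current

# A star graph: a plain random walk sits on the hub half the time,
# but the MH-corrected walk visits all five nodes uniformly.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
node = mh_uniform_node(graph, start=0, steps=50)
```

The acceptance ratio needs only the degrees of the two endpoints, which is the "knowing only local topology" point.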

Comment author: Lumifer 10 September 2015 02:41:12PM *  0 points [-]

Maybe the real problem with philosophy is that we don't consider enough hypothetical cases? I kid... mostly...

In that case we can have wonderful debates about which sub-space to sample our hypotheticals from, and once a bright-eyed and bushy-tailed acolyte breathes out "ALL of it!" we can pontificate about the boundaries of all :-)

P.S. In about a century philosophy will discover the curse of dimensionality and there will be much rending of clothes and gnashing of teeth...

Comment author: Houshalter 09 September 2015 04:21:39AM 0 points [-]

I should have explained it better. You take n samples, and calculate the mean of those samples. You do that a bunch of times, and create a new distribution of those means of samples. Then you take the median of that.

This gives a tradeoff between mean and median. As n goes to infinity, you just get the mean. As n goes to 1, you just get the median. Values in between are a compromise. n = 100 will roughly ignore things that have less than 1% chance of happening (as opposed to less than 50% chance of happening, like the standard median.)
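For concreteness, a sketch of the procedure as described (this construction is essentially a "median of means" estimator; the `lottery` distribution is a made-up illustration):

```python
import random
import statistics

def median_of_means(sample, n, trials=1001):
    """Take n samples and average them; repeat `trials` times; return
    the median of those means.

    n = 1 recovers the plain median; as n grows the group means
    concentrate around the true mean, so the result approaches the mean.
    `sample` is any zero-argument function drawing one value from the
    distribution (a hypothetical stand-in for sampling your utility).
    """
    means = [statistics.mean(sample() for _ in range(n))
             for _ in range(trials)]
    return statistics.median(means)

random.seed(0)

def lottery():
    # Skewed payoff: usually nothing, 1% chance of a big prize.
    return 1000.0 if random.random() < 0.01 else 0.0

print(median_of_means(lottery, n=1))     # median-like: 0.0
print(median_of_means(lottery, n=1000))  # mean-like: roughly 10
```

With n = 1 the rare prize is ignored, as with the ordinary median; with large n each group mean already reflects the 1% event, so the estimate tracks the mean of 10.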

Comment author: Lumifer 09 September 2015 04:53:32AM *  4 points [-]

This gives a tradeoff between mean and median.

There are a variety of ways to get a tradeoff between the mean and the median (or, more generally, between an efficient but not robust estimator and a robust but not efficient estimator). The real question is how you decide what a good tradeoff is.

Basically if your mean and your median are different, your distribution is asymmetric. If you want a single-point summary of the entire distribution, you need to decide how to deal with that asymmetry. Until you specify some criteria under which you'll be optimizing your single-point summary you can't really talk about what's better and what's worse.

Comment author: Houshalter 09 September 2015 09:02:59PM *  0 points [-]

This is just one of many possible algorithms which trade off between median and mean. Unfortunately there is no objective way to determine which one is best (or the setting of the hyperparameter.)

The criterion we are optimizing is just "how closely does it match the behavior we actually want."

EDIT: Stuart Armstrong's idea is much better: http://lesswrong.com/r/discussion/lw/mqk/mean_of_quantiles/

Comment author: Lumifer 09 September 2015 09:07:45PM 1 point [-]

And what is "the behavior we actually want"?