tl;dr A median maximiser will expect to win. A mean maximiser will win in expectation. As we face repeated problems of similar magnitude, each type takes on the advantages of the other. However, the median maximiser will turn down Pascal's muggings, and can say sensible things about distributions without means.

Prompted by some questions from Kaj Sotala, I've been thinking about whether we should use the median rather than the mean when comparing the utility of actions and policies. To justify this, see the next two sections: why the median is like the mean, and why the median is not like the mean.

 

Why the median is like the mean

The main theoretical justifications for the use of expected utility - hence of means - are the von Neumann-Morgenstern axioms. Using the median obeys the completeness and transitivity axioms, but not the continuity and independence ones.

It does obey weaker forms of continuity; but in a sense, this doesn't matter. You can avoid all these issues by making a single 'ultra-choice'. Simply list all the possible policies you could follow, compute their median return, and choose the one with the best median return. Since you're making a single choice, independence doesn't apply.

So you've picked the policy πm with the highest median value - note that to do this, you need only know an ordinal ranking of worlds, not their cardinal values. In what way is this like maximising expected utility? Essentially, the more options and choices you have - or could hypothetically have - the closer this policy must be to expected utility maximisation.

Assume u is a utility function compatible with your ordinal ranking of the worlds. Then πu = 'maximise the expectation of u' is also a policy choice. If we choose πm, we get a distribution dmu of possible values of u. Then E(u|πm) is within the mean absolute deviation (computed on dmu) of the median value of dmu. This absolute deviation always exists for any distribution with an expectation, and is itself bounded by the standard deviation, if that exists.
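
Spelled out (a standard inequality chain, sketched here for completeness): writing $\mu$ for the mean and $m$ for the median of dmu,

$$ |\mu - m| \;=\; |\mathbb{E}[u - m]| \;\le\; \mathbb{E}\,|u - m| \;\le\; \mathbb{E}\,|u - \mu| \;\le\; \sqrt{\operatorname{Var}(u)}, $$

where the second-to-last step uses the fact that the median minimises $c \mapsto \mathbb{E}|u - c|$, and the last step is Jensen's inequality.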

Thus maximising the median is like maximising the mean, with an error depending on the standard deviation. You can see it as a risk averse utility maximising policy (I know, I know - risk aversion is supposed to go in defining the utility, not in maximising it. Read on!). And as we face more and more choices, the standard deviation will tend to fall relative to the mean, and the median will cluster closer and closer to the mean.

For instance, suppose we consider the choice of whether to buckle our seatbelt or not. Assume we don't want to die in a car accident that a seatbelt could prevent; assume further that the cost of buckling a seatbelt is trivial but real. To simplify, suppose we have an independent 1/Ω chance of death every time we're in a car, and that a seatbelt could prevent this, for some large Ω. Furthermore, we will be in a car a total of ρΩ times, for ρ < 0.5. Now, it seems, the median recommends a ridiculous policy: never wear seatbelts. Then you pay no cost ever, and your chance of dying is less than 50%, so this has the top median.

And that is indeed a ridiculous result. But it's only possible because we look at seatbelts in isolation. Every day, we face choices that have small chances of killing us. We could look when crossing the street; smoke or not smoke cigarettes; choose not to walk close to the edge of tall buildings; choose not to provoke co-workers into fights; not run around blindfolded. I'm deliberately including 'stupid things no-one sensible would ever do', because they are choices, even if they are obvious ones. Let's gratuitously assume that all these choices also have a 1/Ω chance of killing you. When you collect together all the possible choices (obvious or not) that you make in your life, this will be ρ'Ω choices, for ρ' likely quite a lot bigger than 1.

Assume that avoiding these choices has a trivial cost, incommensurable with dying (ie no matter how many times you have to buckle your seatbelt, it is still better than a fatal accident). Now median-maximisation will recommend taking safety precautions for roughly (ρ'-0.5)Ω of these choices. This means that the decisions of a median maximiser will be close to those of a utility maximiser - they take almost the same precautions - though the outcomes are still pretty far apart: the median maximiser accepts a 49.99999...% chance of death.
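
A rough numerical sketch of this counting argument (the particular values of Ω and ρ' are mine, chosen just for illustration; treating the risks as independent puts the threshold at about ln(2)·Ω skipped precautions rather than the 0.5Ω of the additive approximation used above):

```python
import math

# Illustrative numbers (not from the post): each unprotected choice carries a
# 1-in-a-million chance of death, and a lifetime contains rho' * omega such choices.
omega = 1_000_000
rho_prime = 3.0
total_choices = int(rho_prime * omega)

def death_prob(skipped: int) -> float:
    """Chance of dying at least once if `skipped` precautions are omitted,
    treating the risks as independent."""
    return 1.0 - (1.0 - 1.0 / omega) ** skipped

# A median maximiser keeps skipping (trivially costly) precautions as long as the
# median outcome is still 'alive', i.e. as long as the death probability stays below 50%.
max_skippable = math.floor(math.log(0.5) / math.log(1.0 - 1.0 / omega))

print("precautions taken:", total_choices - max_skippable)              # roughly (rho' - 0.69) * omega
print("accepted death probability: %.7f" % death_prob(max_skippable))   # just under 0.5
```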

But now add serious injury to the mix (still assuming the costs are incommensurable). This has a rather larger probability, and the median maximiser will now only accept a 49.99999...% chance of serious injury. Or add light injury - now they only accept a 49.99999...% chance of light injury. If light injuries are additive - two injuries are worse than one - then the median maximiser becomes even more reluctant to take risks. We can now relax the assumption of incommensurability as well; the set of policies and assessments becomes even more complicated, and the median maximiser moves closer to the mean maximiser.

The same phenomenon tends to happen when we add lotteries of decisions, chained decisions (decisions that depend on other decisions), and so on. Existential risks are interesting examples: from the selfish point of view, existential risks are just other things that can kill us - and not the most unlikely ones, either. So the median maximiser will be willing to pay a trivial cost to avoid an xrisk. Will a large group of median maximisers be willing to collectively pay a large cost to avoid an xrisk? That gets into superrationality, which I haven't considered yet in this context.

But let's turn back to the mystical utility function that we are trying to maximise. It's obvious that humans don't actually maximise a utility function; but according to the axioms, we should do so. Since we should, people here often tend to assume that we actually have one, skipping over the process of constructing it. But how would that process go? Let's assume we've managed to make our preferences transitive, already a major achievement. How should we go about making them independent as well? We can do so as we go along. But if we do it ahead of time, chances are that we will be comparing hypothetical situations ("Do I like chocolate twice as much as sex? What would I think of a 50% chance of chocolate vs guaranteed sex? Well, it depends on the situation...") and thus constructing a utility function. This is where we have to make decisions about very obscure and unintuitive hypothetical tradeoffs, and find a way to fold all our risk aversion/risk love into the utility.

When median maximising, we do exactly the same thing, except we constrain ourselves to choices that are actually likely to happen to us. We don't need a full ranking of all possible lotteries and choices; we just need enough to decide in the situations we are likely to face. You could consider this a form of moral learning (or preference learning). From our choices in different situations (real or possible), we decide what our preferences are in these situations, and this determines our preferences overall.

 

Why the median is not like the mean

Ok, so the previous section argues that median maximising, if you have enough choices, functions like a clunky version of expected utility maximising. So what's the point?

The point is those situations that are not faced sufficiently often, or that have extreme characteristics. A median maximiser will reject Pascal's mugging, for instance, without any need for extra machinery (though they will accept Pascal's muggings if they face enough independent muggings, which is what we want - for stupidly large values of "enough"). They cope fine with distributions that have no means - such as the Cauchy distribution or a utility version of the St Petersburg paradox. They don't fall into paradox when facing choices with infinite (but ordered) rewards.
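
To illustrate the 'no mean' point, here is a small sketch (my own toy example, using numpy's standard Cauchy sampler): the running sample mean of a Cauchy distribution never settles down, while the running sample median does.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_cauchy(1_000_000)

# The sample mean of a Cauchy distribution does not converge: it is itself
# Cauchy-distributed, no matter how many samples are averaged.
for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n={n:>8}  mean={np.mean(samples[:n]):>10.3f}  median={np.median(samples[:n]):>8.3f}")
# Typical output: the mean jumps around (sometimes wildly), while the median
# settles near 0, the natural centre of the distribution.
```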

In a sense, median maximisation is like expected utility maximisation for common choices, but is different for exceptionally unlikely or high impact choices. Or, from the opposite perspective, expected utility maximising gives a high probability of good outcomes for common choices, but not for exceptionally unlikely or high impact choices.

Another feature of the general idea (which might be seen as either a plus or a minus) is that it can get around some issues with total utilitarianism and similar ethical systems (such as the repugnant conclusion). What do I mean by this? Well, because the idea is that only choices that we actually expect to make matter, we can say, for instance, that we'd prefer a small ultra-happy population to a huge barely-happy one. And if this is the only choice we make, we need not fear any paradoxes: we might get hypothetical paradoxes, just not actual ones. I won't insist too much on this point; I just thought it was an interesting observation.

 

For lack of a Cardinal...

Now, the main issue is that we might feel that there are certain rare choices that are just really bad or really good. And we might come to this conclusion by rational reasoning, rather than by experience, so this will not show up in the median. In these cases, it feels like we might want to force some kind of artificial cardinal order on the worlds, to make the median maximiser realise that certain rare events must be considered beyond their simple ordinal ranking.

In this case, maybe we could artificially add some hypothetical choices to our system, making us address these questions more than we actually would, and thus drawing them closer to the mean maximising situation. But there may be other, better ways of doing this.

 

Anyway, that's my first pass at constructing a median maximising system. Comments and critiques welcome!

 

EDIT: We can use the absolute deviation (technically, the mean absolute deviation around the mean) to bound the distance between median and mean. This itself is bounded by the standard deviation, if it exists.

Comments

Then E(u|πm) is within one standard deviation (using dmu) of the median value of dmu.

As Wikipedia says, "If the distribution has finite variance". That's not necessarily a good assumption.

Consider a policy with three possible outcomes: one pony; two ponies; the universe is converted to paperclips. What's the median outcome? One pony. Don't you want a pony?

The median is a robust estimator, meaning that it's harder for outliers to screw you up. The price for that, though, is indifference to the outliers, which I am not sure is advisable in the utility context.

0Stuart_Armstrong9y
Indeed. But the argument about convergence when you get more and more options still applies.
1Lumifer9y
Still -- only if your true underlying distribution has finite variance. Check some plots of, say, a Cauchy distribution -- it doesn't take much in the way of heavy tails to have no defined variance (or mean, for that matter). Not everything converges to a Gaussian.
0Stuart_Armstrong9y
You did notice that I mentioned the Cauchy distribution by name and link in the text, right? And the Cauchy distribution is the worst possible example for defending the use of the mean - because it doesn't have one. Not even, a la St Petersburg paradox, an infinite mean; just no mean at all. But it does have a median, exactly placed in the natural middle.

Your argument works somewhat better with one of the stable distributions with an alpha between 1 and 2. But even there, you need a non-zero beta, or else median=mean! The standard deviation is an upper bound on the difference, not necessarily a sharp one. It would be interesting to analyse the difference between mean and median for stable distributions with non-zero beta; I'll get round to that some day. My best guess is that you could use some fractional moment to bound the difference, instead of (the square root of) the variance.

EDIT: this is indeed the case; you can use Jensen's inequality to show that the q-th root of the q-th absolute central moment, for 1<q<2, can be substituted as a bound between mean and median. For q<alpha, this should be finite.
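
Spelled out, the chain of inequalities behind that EDIT (my reading of the claim; the first step is the bound from the post, the second is Jensen's inequality for q > 1):

$$ |\mathbb{E}[X] - \mathrm{med}(X)| \;\le\; \mathbb{E}\,|X - \mathbb{E}[X]| \;\le\; \bigl(\mathbb{E}\,|X - \mathbb{E}[X]|^{q}\bigr)^{1/q}, \qquad 1 < q < 2, $$

and for a stable distribution with index alpha > 1, the right-hand side is finite whenever q < alpha.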
0Lumifer9y
I only brought up Cauchy to show that infinite-variance distributions don't have to be weird and funky. Show a plot of a Cauchy pdf to someone who had, like, one undergrad stats course and she'll say something like "Yes, that's a bell curve" X-/
0Stuart_Armstrong9y
Actually, there's no need for higher central moments. The mean absolute deviation around the mean (which I would have called the first absolute central moment) bounds the difference between mean and median, and is sharper than the standard deviation.
-1V_V9y
In fact, "Pascal's mugging" scenarios tend to pop up when you allow for utility distributions with infinite variance.
2Lumifer9y
For Pascal's Muggings I don't think you care that much about variance -- what you want is a gargantuan skew.

It's obvious that humans don't actually maximise a utility function; but according to the axioms, we should do so.

Given a choice between "change people" and "change axioms", I'd be inclined to change axioms.

2DanielLC9y
If you're a psychologist and you care about describing people, change the axioms. If you're a rationalist and you care about getting things done, change yourself.

This seems to be a case of trying to find easy solutions to hard abstract problems at the cost of failing to be correct on easy and ordinary ones. It's also fairly trivial to come up with abstract scenarios where this fails catastrophically, so it's not like this wins on the abstract scenarios front either. It just fails on a new and different set of problems - ones that aren't talked about because no-one's ever found a way to fail on them before.

Also, all of the problems you list it solving are problems which I would consider to be satisfactorily solved a... (read more)

0Houshalter9y
Median utility does fail trivially. But it opens the door to other systems which might not. He just posted a refinement of this idea, Mean of Quantiles. IMO this system is much more robust than expected utility. EU is required to trade away utility from the majority of possible outcomes to really rare outliers, like the mugger. Median utility will get you better outcomes at least 50% of the time. And tradeoffs like the one above will get you outcomes that are good in the majority of possible outcomes, ignoring rare outliers. I'm not satisfied it's the best possible system, so the subject is still worth thinking about and debating.

I don't think any of your paradoxes are solved. You can't get around Pascal's mugging by modifying your probability distribution. The probability distribution has nothing to do with your utility function or decision theory. Besides being totally inelegant and hacky, there might be practical consequences. Like you can't believe in the singularity now. The singularity could lead to vastly high-utility futures, or really negative ones. Therefore its probability must be extremely small.

The St Petersburg casino is silly of course, but there's no reason a real thing couldn't produce a similar distribution: some sequence of events, each occurring with probability 1/2 conditional on the previous one, and each giving increasing utility.
0Irgy9y
I do acknowledge that my comment was overly negative, certainly the ideas behind it might lead to something useful. I think you misunderstand my resolution of the mugging (which is fair enough since it wasn't spelled out). I'm not modifying a probability, I'm assigning different probabilities to different statements. If the mugger says he'll generate 3 units of utility difference that's a more plausible statement than if the mugger says he'll generate 3^^^3, etc. In fact, why would you not assign a different probability to those statements? So long as the implausibility grows at least as fast as the value (and why wouldn't it?) there's no paradox. Re St Petersburg, sure you can have real scenarios that are "similar", it's just that they're finite in practice. That's a fairly important difference. If they're finite then the game has a finite value, you can calculate it, and there's no paradox. In which case median utility can only give the same answer or an exploitably wrong answer.
0Houshalter9y
The whole point of the Pascal's Mugging scenario is that the probability doesn't decrease faster than the reward. If, for example, you decrease the probability by half for each additional bit it takes to describe, 3^^^3 still only takes a few bits to write down. Do you believe it's literally impossible that there is a matrix? Or that it can't be 3^^^3 large? Because when you assign these things such low probability, you are basically saying they are impossible. No amount of evidence could convince you otherwise.

I think EY had the best counterargument. He had a fictional scenario where a physicist proposed a new theory that was simple and fit the data perfectly. But the theory also implied a new law of physics that could be exploited for computing power, and would allow unfathomably large amounts of it. And that computing power could be used to create simulated humans. Therefore anyone alive today has a small probability of affecting large numbers of simulated people. Since that is impossible, the theory must be wrong - it doesn't matter that it's simple or that it fits the data perfectly.

Even in the finite case, I believe it can grow quite large as the number of iterations increases. Each step has half the probability of the previous step and twice the reward, so each step adds one expected dollar. Imagine the game goes for n finite steps. An expected utility maximizer would still spend $n to play the game. A median maximizer would say "You are never going to win in the lifetime of the universe and then some, so no thanks." The median maximizer seems correct to me.
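
A minimal sketch of the truncated game (my own parametrisation: the pot starts at $2 and doubles on each head, paying out at the first tail or after n flips):

```python
# Truncated St Petersburg game: flip a fair coin up to n times; if the first
# tail appears on flip k you win $2**k, and if all n flips are heads you win $2**n.
n = 40
outcomes = [(2 ** k, 0.5 ** k) for k in range(1, n + 1)]
outcomes.append((2 ** n, 0.5 ** n))              # the all-heads case

expected = sum(payout * prob for payout, prob in outcomes)

# Median payout: the smallest payout at which the cumulative probability reaches 50%.
cumulative, median = 0.0, None
for payout, prob in sorted(outcomes):
    cumulative += prob
    if cumulative >= 0.5:
        median = payout
        break

print("expected payout:", expected)   # n + 1 = $41: an EU maximiser would pay up to this to play
print("median payout:  ", median)     # $2: a median maximiser would pay at most about this
```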
0Irgy9y
Re St Petersburg, I will reiterate that there is no paradox in any finite setting. The game has a value. Whether you'd want to take a bet at close to the value of the game in a large but finite setting is a different question entirely. And one that's also been solved, certainly to my satisfaction. Logarithmic utility and/or the Kelly Criterion will both tell you not to bet if the payout is in money, and for the right reasons rather than arbitrary, value-ignoring reasons (in that they'll tell you exactly what you should pay for the bet). If the payout is directly in utility, well I think you'd want to see what mindbogglingly large utility looked like before you dismiss it. It's pretty hard if not impossible to generate that much utility with logarithmic utility of wealth and geometric discounting. But even given that, a one in a trillion chance at a trillion worthwhile extra days of life may well be worth a dollar (assuming I believed it of course). I'd probably just lose the dollar, but I wouldn't want to completely dismiss it without even looking at the numbers.

Re the mugging, well I can at least accept that there are people who might find this convincing. But it's funny that people can be willing to accept that they should pay but still don't want to, and then come up with a rationalisation like median maximising, which might not even pay a dollar for the mugger not to shoot their mother if they couldn't see the gun. If you really do think it's sufficiently plausible, you should actually pay the guy. If you don't want to pay I'd suggest it's because you know intuitively that there's something wrong with the rationale and refuse to pay a tax on your inability to sort it out. Which is the role the median utility is trying to play here, but to me it's a case of trying to let two wrongs make a right.

Personally though I don't have this problem. If you want to define "impossible" as "so unlikely that I will correctly never account for it in any decision I ever mak
0Houshalter9y
Well there are two separate points of the St Petersburg paradox. One is the existence of relatively simple distributions that have no mean: the payout doesn't converge on any finite value. Another example of such a distribution, which actually occurs in physics, is the Cauchy distribution. Another, which the original Pascal's Mugger post was intended to address, was Solomonoff induction, the idealized prediction algorithm used in AIXI. EY demonstrated that if you use it to predict an unbounded value like utility, it doesn't converge or have a mean.

The second point is just that paying more than a few bucks to play the game is silly, even in a relatively small finite version of it. The probability of losing is very high, even though it has a positive expected utility. And this holds even if you adjust the payout tables to account for utility != dollars. You can bite the bullet and say that if the utility is really so high, you really should take that bet. And that's fine. But I'm not really comfortable betting away everything on such tiny probabilities. You are basically guaranteed to lose and end up worse off than not betting.

You can do a tradeoff between median maximizing and expected utility with mean of quantiles. This basically gives you the best average outcome ignoring incredibly unlikely outcomes. Even median maximizing by itself, which seems terrible, will give you the best possible outcome >50% of the time. The median is fairly robust. Whereas expected utility could give you a shitty outcome 99% of the time or 99.999% of the time, etc. As long as the outliers are large enough.

If you are assigning 1/3^^^3 probability to something, then no amount of evidence will ever convince you. I'm not saying that unbounded computing power is likely. I'm saying you shouldn't assign infinitely small probability to it. The universe we live in runs on seemingly infinite computing power. We can't even simulate the very smallest particles because of how large the number of com

The main theoretic justifications for the use of expected utility - hence of means - are the von Neumann Morgenstern axioms. Using the median obeys the completeness and transitivity axioms, but not the continuity and independence ones. It does obey weaker forms of continuity; but in a sense, this doesn't matter. You can avoid all these issues by making a single 'ultra-choice'. Simply list all the possible policies you could follow, compute their median return, and choose the one with the best median return. Since you're making a single choice, independenc

... (read more)
1Stuart_Armstrong9y
The independence axiom derives most of its intuitive strength from the fact that if you violate it, you can be money pumped when presented with a sequence of decisions. When making a single decision over policy, independence has far less intuitive strength, as violating it has no actual cost.
0V_V9y
If your preferences aren't transitive, then even your one-shot decision making system is completely broken, since it can't even yield an action that is "preferred" in a meaningful sense. Vulnerability to money pumping would be the last of your concerns in this case.

Money pumping is an issue in sequential decision making with time-discounting and/or time horizons: any method to aggregate future utilities other than exponential discounting (*) over an infinite time horizon yields dynamic inconsistency which could, in principle, be exploited for money pumping.

The intuitive justification for the independence axiom is the following:

* What would you like for dessert, sir? Ice cream or cake?
* Ice cream.
* Oh sorry, I forgot! We also have fruit.
* Then cake.

This decision making example looks intuitively irrational. If you prefer ice cream to cake when they are the only two alternatives, then why would you prefer cake to ice cream when a third, inferior, alternative is included? The independence axiom formalizes this intuition about rational behavior.

(* with no discounting being a special case of exponential discounting)
4AlexMennen9y
You're thinking of a different meaning of "independence". A violation of the independence axiom of VNM would look more like this:

* What would you like for dessert, sir? Ice cream or cake?
* Ice cream.
* Oh sorry, I forgot! There is a 50% chance that we are out of both ice cream and cake (I know we have either both or neither). But I'll go check, and if we're not out of dessert, I'll get you your ice cream.
* Oh, in that case I'll have cake instead.
-1V_V9y
Yes, I believe that this is a stronger version. Median utility satisfies the weaker version of the axiom but not the stronger one.
1Stuart_Armstrong9y
But notice you had two decision points there. Intransitivity breaks your decision system with a single decision point; dependence does not. Hence a single policy decision has to be transitive, but need not be independent.
-1V_V9y
The first decision is immediately canceled and has no effect on your utility, hence it isn't really a relevant decision point. More generally, the independence axiom makes sure that the outcome of your decision process is not affected by bad options that are available to you.
1Stuart_Armstrong9y
Except that median-maximising respects independence for options that are available to you (or can be trivially tweaked to do so). It only violates independence for hypothetical bad options that will never be available to you.
0Jiro9y
It can be rational to do this. There's a paradox publicized by Martin Gardner demonstrating how. Unfortunately the best link I could easily find was a Reddit comment, but try https://www.reddit.com/r/fffffffuuuuuuuuuuuu/comments/gxwqe/why_i_hate_people/c1r5203 .

I posted this exact idea a few months ago. There was a lot of discussion about it which you might find interesting. We also discussed it recently on the irc channel.

Median utility by itself doesn't work. I came up with an algorithm that compromises between them. In everyday circumstances it behaves like expected utility. In extreme cases, it behaves like median utility. And it has tunable parameters:

sample n counterfactuals from your probability distribution. Then take the average of these n outcomes, [EDIT: and do this an infinite amount of times, and t

... (read more)
1Lumifer9y
I am not sure of the point. If you can "sample ... from your probability distribution" then you fully know your probability distribution including all of its statistics -- mean, median, etc. And then you proceed to generate some sample estimates which just add noise but, as far as I can see, do nothing else useful. If you want something more robust than the plain old mean, check out M-estimators which are quite flexible.
0evand9y
That's not true. (Though it might well be in all practical cases.) In particular, there are good algorithms for sampling from unknown or uncomputable probability distributions. Of course, any method that lets you sample from it lets you sample the parameters as well, but that's exactly the process the parent comment is suggesting.
0Lumifer9y
A fair point, though I don't think it makes any difference in the context. And I'm not sure the utility function is amenable to MCMC sampling...
0evand9y
I basically agree. However... It might be more amenable to MCMC sampling than you think. MCMC basically is a series of operations of the form "make a small change and compare the result to the status quo", which now that I phrase it that way sounds a lot like human ethical reasoning. (Maybe the real problem with philosophy is that we don't consider enough hypothetical cases? I kid... mostly...) In practice, the symmetry constraint isn't as nasty as it looks. For example, you can do MH to sample a random node from a graph, knowing only local topology (you need some connectivity constraints to get a good walk length to get good diffusion properties). Basically, I posit that the hard part is coming up with a sane definition for "nearby possible world" (and that the symmetry constraint and other parts are pretty easy after that).
0Lumifer9y
In that case we can have wonderful debates about which sub-space to sample our hypotheticals from, and once a bright-eyed and bushy-tailed acolyte breathes out "ALL of it!" we can pontificate about the boundaries of all :-) P.S. In about a century philosophy will discover the curse of dimensionality and there will be much rending of clothes and gnashing of teeth...
0Houshalter9y
I should have explained it better. You take n samples, and calculate the mean of those samples. You do that a bunch of times, and create a new distribution of those means of samples. Then you take the median of that. This gives a tradeoff between mean and median. As n goes to infinity, you just get the mean. As n goes to 1, you just get the median. Values in between are a compromise. n = 100 will roughly ignore things that have less than 1% chance of happening (as opposed to less than 50% chance of happening, like the standard median.)
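
A rough sketch of the sampling scheme described above (the toy outcome distribution and the particular parameter values are mine, just to show the effect):

```python
import numpy as np

def median_of_means(outcome_sampler, n: int, repetitions: int = 5_001) -> float:
    """Draw n outcomes, average them, repeat many times, and take the median of
    those averages. n=1 recovers the plain median; large n approaches the plain mean."""
    rng = np.random.default_rng(0)
    batch_means = [np.mean(outcome_sampler(rng, n)) for _ in range(repetitions)]
    return float(np.median(batch_means))

# Toy outcome distribution: utility 1 almost always, utility 1e9 with probability 1e-4
# (the kind of rare outlier that dominates an expected-utility calculation).
def sampler(rng, n):
    return np.where(rng.random(n) < 1e-4, 1e9, 1.0)

for n in (1, 100, 1_000, 10_000):
    print(f"n = {n:>6}: median-of-means ~ {median_of_means(sampler, n):,.1f}")
# n=1 and n=100 ignore the outlier entirely (result near 1); once n is comparable to
# 1/probability of the outlier, the estimate moves towards the mean (roughly 100,001).
```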
6Lumifer9y
There is a variety of ways to get a tradeoff between the mean and the median (or, more generally, between an efficient but not robust estimator and a robust but not efficient estimator). The real question is how do you decide what a good tradeoff is. Basically if your mean and your median are different, your distribution is asymmetric. If you want a single-point summary of the entire distribution, you need to decide how to deal with that asymmetry. Until you specify some criteria under which you'll be optimizing your single-point summary you can't really talk about what's better and what's worse.
0Houshalter9y
This is just one of many possible algorithms which trade off between median and mean. Unfortunately there is no objective way to determine which one is best (or the setting of the hyperparameter.) The criteria we are optimizing is just "how closely does it match the behavior we actually want." EDIT: Stuart Armstrong's idea is much better: http://lesswrong.com/r/discussion/lw/mqk/mean_of_quantiles/
0Lumifer9y
And what is "the behavior we actually want"?

I don't understand your argument that the median utility maximizer would buckle its seat belt in the real world. It seemed kind of like you might be trying to argue that median utility maximizers and expected utility maximizers would always approximate each other under realistic conditions, but since you then argue that the alleged difference in their behavior on the Pascal's mugging problem is a reason to prefer median utility maximizers (implying that Pascal's mugging-type problems should be accepted as realistic, or at least that getting them correct is... (read more)

0Stuart_Armstrong9y
It derives from the fact that median maximalisation doesn't consider decisions independently, even if their gains and losses are independent.

For illustration, compare the following deal: you pay £q, and get £1 with probability p. There are n independent deals (assume your utility is linear in £). If n=1, the median maximiser accepts the deal iff q < £1 and p > 0.5, regardless of how p compares with q - it would accept p=0.51, q=£0.99 and reject p=0.49, q=£0.01. Not a very good performance!

Now let's look at larger n. For m < n, accepting m deals gets you an expected reward of m(p-q). The median is a bit more complicated (see https://en.wikipedia.org/wiki/Binomial_distribution#Mode_and_median ), but it's within £1 of the mean reward. So if p > q, it will accept at least n - 1/(p-q) deals - and all n of them when the gap is large enough; if p < q, it will accept at most 1/(q-p) deals. In all cases, its expected loss, compared with the mean maximiser, is less than £1.

There's a similar effect going on when considering the seat-belt situation. Aggregation concentrates the distribution in a way that moves median and mean towards each other.
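
A quick sketch of that deal example (the particular values of p and q are mine, chosen for illustration; it uses scipy's binomial median):

```python
from scipy.stats import binom

def median_reward(m: int, p: float, q: float) -> float:
    """Median reward, in £, from accepting m independent deals that each cost £q
    and pay £1 with probability p."""
    return (binom.median(m, p) if m > 0 else 0.0) - m * q

p, q = 0.4, 0.35   # each deal is worth £0.05 in expectation, but is lost more often than won

for n in (1, 10, 1000):
    best_m = max(range(n + 1), key=lambda m: median_reward(m, p, q))
    mean_m = n if p > q else 0     # the mean maximiser takes every positive-expectation deal
    print(f"n={n:>4}: median maximiser accepts {best_m} deals, mean maximiser accepts {mean_m}, "
          f"expected gap £{(mean_m - best_m) * (p - q):.2f}")
# With a single deal the median maximiser refuses a favourable bet (since p < 0.5);
# with many deals the two accept almost the same number, and the expected loss
# from median maximising stays below £1.
```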
0AlexMennen9y
You appear to now be making an argument that you already conceded was incorrect in OP: You then go on to say that if the agent also faces many decisions of a different nature, it won't do that. That's where I get lost.
0Stuart_Armstrong9y
The median maximiser accepts a 49.99999...% chance of death, only because "death", "trivial cost" and "no cost" are the only options here. If I add "severe injury" and "light injury" to the outcomes, the maximiser will now accept less than a 49.9999...% chance of light injury. If we make light injury additive, and make the trivial cost also additive and not incomparable to light injuries, we get something closer to my illustrative example above.
1AlexMennen9y
Suppose it comes up with 2 possible policies, one of which involves a 49% chance of death and no chance of injury, and another which involves a 49% chance of light injury, and no chance of heavy injury or death. The median maximizer sees no reason to prefer the second policy if they have the same effects the other 51% of the time.
0Stuart_Armstrong9y
Er, yes, constructing single choice examples when the median behaves oddly/wrongly is trivial. My whole point is about what happens to median when you aggregate decisions.
-1AlexMennen9y
You were claiming that in a situation where a median-maximizing agent has a large number of trivially inconvenient actions that prevent small risks of death, heavy injury, or light injury, then it would accept a 49% chance of light injury, but you seemed to imply that it would not accept a 49% chance of death. I was trying to point out that this appears to be incorrect.
1Stuart_Armstrong9y
I'm not entirely sure what your objection is; we seem to be talking at cross purposes. Let's try it simpler. If we assume that the cost of buckling seat belts is incommensurable (in practice) with light injury (and heavy injury, and death), then the median maximising agent will accept a 49.99..% chance of (light injury or heavy injury or death), over their lifetime. Since light injury is much more likely than death, this in effect forces the probability of death down to a very low amount. It's just an illustration of the general point that median maximising seems to perform much better in real-world problems than its failure in simple theoretical ones would suggest.
-3AlexMennen9y
No, it doesn't. That does not address the fact that the agent will not preferentially accept light injury over death. Adopting a policy of immediately committing suicide once you've been injured enough to force you into the bottom half of outcomes does not decrease median utility. The agent has no incentive to prevent further damage once it is in the bottom half of outcomes. As a less extreme example, the value of house insurance to a median maximizer is 0, just because losing your house is a bad outcome even if you get insurance money for it. This isn't a weird hypothetical that relies on it being an isolated decision; it's a real-life decision that a median maximizer would get wrong.
0Stuart_Armstrong9y
A more general way of stating how multiple decisions improve median maximalisation: the median maximiser is indifferent to outcomes not at the median (eg suicide vs light injury). But as the decision tree grows, and the number of possible situations does as well, the probability increases that outcomes not at the median in a one-shot will affect the median in the more complex situation.
0AlexMennen9y
This argument relies on your utility being a sum of effects from each of the decisions you made, but in reality, your decisions interact in much more complicated ways, so that isn't a realistic model. Also, if your defense of median maximization consists entirely of an argument that it approximates mean maximization, then what's the point of all this? Why not just use expected utility maximization? I'm expecting you to bring up Pascal's mugging here, but since VNM-rationality does not force you to pay the mugger, you'll have to do better than that.
0Stuart_Armstrong9y
It doesn't require that in the least. I don't know if, eg, quadratic or higher-order effects would improve or worsen the situation. The consensus at the moment seems to be that if you have unbounded utility, it does force you to pay some muggers. Now, I'm perfectly fine with bounding your utility to avoid muggers, but that's the kind of non-independent decision some people don't like ;-) The real problem is things like the Cauchy distribution, or any function without an expectation value at all. Saying "VNM works fine as long as we don't face these difficult choices, then it breaks down" is very unsatisfactory. I'm also interested in seeing what happens when "expect to win" and "win in expectation" become quite distinct - a rare event, in practice.
0AlexMennen9y
The more concrete argument you made previously does rely on it. If what you're saying now doesn't, then I guess I don't understand it. I don't follow. Maximizing the expected value of a bounded utility function does respect independence.
0Stuart_Armstrong9y
That was an example. There's another one in http://lesswrong.com/lw/1d5/expected_utility_without_the_independence_axiom/ which relies on "not risk loving". That post doesn't mention the median, but it does mention the standard deviation, and we know the median must be within one SD of the mean (and often much closer). Choosing to bound an unbounded utility function to avoid muggers does not.
0AlexMennen9y
That example also relies on your utility being the sum of components that are determined from your various actions. To be clear, I was not suggesting that you have an unbounded utility function that it would make sense for you to maximize if it weren't for Pascal's mugger, so you should bound it when there might be a Pascal's mugger around. I was suggesting that the utility function it makes sense for you to maximize is bounded. Unbounded utility functions are so loony they never should have been seriously considered in the first place; Pascal's mugger is merely a dramatic illustration of that fact.

Edit: I probably shouldn't rely on the theoretical reasons to prefer bounded utility functions, since they are not completely airtight and actual human preferences are more important anyway. So let's look at actual human preferences. Suppose you've got a rational agent with preference relation "<", and you want to test whether its utility function is bounded or unbounded. Here's a simple test: First find outcomes A and B such that A<B (if you can't even do that, its utility function is constant, hence bounded). Then pick an absurdly tiny probability p>0. Now see if you can find such a terrible C and such a wonderful D that pC+(1-p)B < pD+(1-p)A. If, for every p>0 you can find such C and D, then its utility function is unbounded. But if for some p>0, you cannot find any C and D that will suffice, even when you probe the extremes of goodness and badness, then its utility function is bounded.

This test should sound familiar. What I'm getting at here is that one does not bound their unbounded utility function so that they don't have to pay Pascal's mugger; your preferences were simply bounded all along, and your response to Pascal's mugger is proof.
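
Stated a little more formally (my paraphrase of the test above): for fixed outcomes A ≺ B and probability p, the existence of suitable C and D amounts to

$$ p\,u(C) + (1-p)\,u(B) \;<\; p\,u(D) + (1-p)\,u(A) \quad\Longleftrightarrow\quad u(D) - u(C) \;>\; \frac{1-p}{p}\,\bigl(u(B) - u(A)\bigr). $$

If such C and D exist for every p > 0, the gap u(D) - u(C) can be made arbitrarily large, so u is unbounded; if some p > 0 admits no such pair, u is bounded.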
0Stuart_Armstrong9y
Look, we're arguing past each other here. My logical response here would be to add more options to the system, which would remove the problem you identified (and I don't understand your house insurance example - this is just the seat-belt decision again as a one-shot, and I would address it by looking at all the financial decisions you make in your life - and if that's not enough, all the decisions, including all the "don't do something clearly stupid and pointless" ones). What I think is clear is:

a) Median maximalisation makes bad decisions in isolated problems.

b) If we combine all the likely decisions that a median maximiser will have to make, the quality of the decisions increases.

If you want to argue against it, either say that a) is bad enough that we should reject the approach anyway, even if it decides well in practice, or find examples where a median maximiser will make bad decisions even in the real world (if you would pay Pascal's mugger, then you could use that as an example).
0AlexMennen9y
We were modeling the seat-belt decision as something that makes the difference between being dead and being completely fine in the event of an accident (which I suppose is not very realistic, but whatever). I was trying to point to a situation where an event can happen which is bad enough to put in the bottom half of outcomes either way, so that nothing that happens conditional on the event can affect the median outcome, but a decision you can make ahead of time would make the difference between bad and worse. I do think that a) is bad enough, because a decision procedure that does poorly in isolated problems is wrong, and thus cannot be expected to do well in realistic situations, as I mentioned previously. I guess b) is probably technically true, but it is not enough for the quality of the decisions to increase when the number increases; it should actually increase towards a limit that isn't still awful, and come close to achieving that limit (I'm pretty sure it fails on at least one of those, though which step it fails on might depend on how you make things precise). I've given examples where median maximizers make bad decisions in the real world, but you've dismissed them with vague appeals to "everything will be fine when you consider it in the context of all the other decisions it has to make".
0Stuart_Armstrong9y
And I've added in the specific other decisions needed to achieve this effect. I agree it's not clear what exactly median maximalisation converges on in the real world, but the examples you've produced are not sufficient to show it's bad. My take on this is that counterfactual decisions count as well, ie if humans look not only at the decisions they face, but the ones they can imagine facing, then median maximalisation is improved. My justification for this line of thought is - how do you know that one chocolate cake is +10 utility while one coffee is +2 (and two coffees is +3, three is +2, and four is -1)? Not just the ordinal ranking, but the cardinality. I'd argue that you get this by either experiencing circumstances where you choose a 20% chance of a cake over coffee, or imagining yourself in that circumstance. And if imagination and past experiences are valid for the purpose of constructing your utility function, they should be valid for the purpose of median-maximalisation.
0AlexMennen9y
That you claim achieve that effect. But as I said, unless the choices you can make that would protect you from light injury involve less inconvenience per % reduction in risk than the choices you can make that would protect you from death, it doesn't work.

However, I did think of something which seems to sort of achieve what you want: if you have high uncertainty about what the value of your utility function will be, then adding something to it with some probability will have a significant effect on the median value, even if the probability is significantly less than 50%. For instance, a 49% chance of death is very bad because if there's a 49% chance you die, then the median outcome is one in which you're alive but in a worse situation than all but 1/51 of the scenarios in which you die. It may be that this is what you had in mind, and adding future decisions that involve uncertainty was merely a mechanism by which large uncertainty about the outcome was introduced, in which case future-you actually getting to make any choices about them was a red herring.

I still don't find this argument convincing either, though, both because it still undervalues protection from risks of losses that are large relative to the rest of your uncertainty about the value of the outcome (for instance, note that when valuing reductions in risk of death, there is still a weird discontinuity around 50%), and because it assumes that you can't make decisions that selectively have significant consequences only in very good or very bad outcomes (this is what I was getting at with the house insurance example).

I don't understand what you're saying here. Is it that you can maximize the median value of the mean of the values of your utility function in a bunch of hypothetical scenarios? If so, that sounds kind of like Houshalter's median of means proposal, which approaches mean maximization as the number of samples considered approaches infinity.
0Stuart_Armstrong9y
The observation I have is that when facing many decisions, median maximalisation tends to move close to mean maximalisation (since the central limit theorem gives convergence in distribution, the median will converge to the mean in the case of averaging repeated independent processes; but there are many other examples of this). Therefore I'm considering what happens if you add "all the decisions you can imagine making" to the set of actual decisions you expect to make. This feels like it should move the two even closer together.
0AlexMennen9y
Ah, are you saying you should use your prior to choose a policy that maximizes your median utility, and then implement that policy, rather than updating your prior with your observations and then choosing a policy that maximizes the median? So like UDT but with medians? It seems difficult to analyze how it would actually behave, but it seems likely to be true that it acts much more similarly to mean utility maximization than it would if you updated before choosing the policy. Both of these properties (difficulty to analyze, and similarity to mean maximization) make it difficult to identify problems that it would perform poorly on. But this also makes it difficult to defend its alleged advantages (for instance, if it ends up being too similar to mean maximization, and if you use an unbounded utility function as you seem to insist, perhaps it pays Pascal's mugger).
0Stuart_Armstrong9y
Ouch! Sorry for not being clear. If you missed that, then you can't have understood much of what I was saying!
0Houshalter9y
How do you know that it's right to buckle your seatbelt, if you are only going to ride in a car once and never again, and there are no other risks to your life, and so no need to make a general policy against taking small risks? I'm not confident that it's actually the wrong choice. And if it is, it shouldn't matter much: 99.99% of the time, the median maximiser will come out with higher utility than the EU maximizer.

This is generalizable. If there was a "utility competition" between different decision policies in the same situations, the median utility maximiser would usually come out on top. As the possible outcomes become more extreme and unlikely, expected utility will do worse and worse, with Pascal's mugging at the extreme. That's because EU trades away utility from the majority of possible outcomes to really really unlikely outcomes. Outliers can really skew the mean of a distribution, and EU is just the mean. Of course the median can be exploited too. Perhaps there is some compromise between them that gets the behavior we want.

There are an infinite number of possible policies for deciding which distribution of utilities to prefer. EU was chosen because it is the only one that meets a certain set of conditions and is perfectly consistent. But if you allow for algorithms that select overall policies instead of decisions, like OP does, then you can make many different algorithms consistent. So there is no inherent reason to prefer mean over median. It just comes down to personal preference, and subjective values. What probability distribution of utilities do you prefer?
0AlexMennen9y
I do think that the isolation of the decision is a red herring, but for the sake of the point I was trying to make, it is probably easier to replace the example with a structurally similar one in which the right answer is obvious: suppose you have the opportunity to press a button that will kill you with 49% probability, and give you $5 otherwise. This is the only decision you will ever make. Should you press the button?

As I was saying in my previous comment, I think that's the wrong approach. It isn't enough to kludge together a decision procedure that does what you want on the problems you thought of, because then it will do something you don't want on something you haven't thought of. You need a decision procedure that will reliably do the right thing, and in order to get that, you need it to do the right thing for the right reasons. EU maximization, applied properly, will tell you to do the correct things, and will do so for the correct reasons.

Actually, there is: https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem
0Houshalter9y
Yes, I said that median utility is not optimal. I'm proposing that there might be policies better than both EU and median. Please reread the OP and my comment. If you allow selection over policies instead of individual decisions, you can be perfectly consistent. EU and median are both special cases of ways to pick policies, based on the probability distribution of utility they produce. There is no law of the universe that some procedures are correct and others aren't. You just have to pick one that you like, and your choice is going to be arbitrary. If you go with EU you are Pascal-muggable. If you go with median you are muggable in certain cases as well (though you should usually, with >50% probability, end up with better outcomes in the long run, whereas EU could possibly fail 100% of the time - so it's exploitable, but it's less exploitable at least).
0AlexMennen9y
I don't see how selecting policies instead of actions removes the motivation for independence. Ultimately, it isn't the policy that you care about; it's the outcome. So you should pick a policy because you like the probability distributions over outcomes that you get from implementing it more than you like the probability distributions over outcomes that you would get from implementing other policies. Since there are many decision problems to use your policy on, this quite heavily constrains what policy you choose. In order to get a policy that reliably picks the actions that you decide are correct in the situations where you can tell what the correct action is, it will have to make those decisions for the same reason you decided that it was the best action (or at least something equivalent to or approximating the same reason). So no, the choice of policy is not at all arbitrary. That is not true. EU maximizers with bounded utility functions reject Pascal's wager.
1Stuart_Armstrong9y
There are two reasons to like independence. First of all, you might like it for philosophical/aesthetic reasons: "these things really should be independent, these really should be irrelevant". Or you could like it because it prevents you from being money pumped. When considering policies, money pumping is (almost) no longer an issue, because a policy that allows itself to be money-pumped is (almost) certainly inferior to one that doesn't. So choosing policies removes one of the motivations for independence, to my mind the important one.
0AlexMennen9y
While it's true that this does not tell you to pay each time to switch the outcomes around in a circle over and over again, it still falls prey to one step of a similar problem. Suppose there are 3 possible outcomes: A, B, and C, and there are 2 possible scenarios: X and Y. In scenario X, you get to choose between A and B. In scenario Y, you can attempt to choose between A and B, and you get what you picked with 50% probability, and you get outcome C otherwise. In each scenario, this is the only decision you will ever make.

Suppose in scenario X, you prefer A over B, but in scenario Y, you prefer (B+C)/2 over (A+C)/2. But suppose you had to pay to pick A in scenario X, and you had to pay to pick (B+C)/2 in scenario Y, and you still make those choices. If Y is twice as likely as X a priori, then you are paying to get a probability distribution over outcomes that you could have gotten for free by picking B given X, and (A+C)/2 given Y.

Since each scenario only involves you ever getting to make one decision, picking a policy is equivalent to picking a decision.
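
A small check of the arithmetic in that example (the 1/3-2/3 prior over scenarios is from the comment; the code just tabulates the two policies):

```python
from fractions import Fraction

p_X, p_Y = Fraction(1, 3), Fraction(2, 3)   # Y is twice as likely as X a priori

def outcome_distribution(choice_in_X: str, choice_in_Y: str) -> dict:
    """Prior distribution over outcomes A, B, C for a policy (one choice per scenario).
    In Y, the chosen option only happens with probability 1/2; otherwise you get C."""
    dist = {"A": Fraction(0), "B": Fraction(0), "C": Fraction(0)}
    dist[choice_in_X] += p_X
    dist[choice_in_Y] += p_Y * Fraction(1, 2)
    dist["C"] += p_Y * Fraction(1, 2)
    return dist

paid = outcome_distribution("A", "B")   # the agent's paid-for choices: A in X, B in Y
free = outcome_distribution("B", "A")   # the free alternative: B in X, A in Y

print(paid)               # {'A': 1/3, 'B': 1/3, 'C': 1/3}
print(free)               # the identical distribution, obtained without paying
print(paid == free)       # True
```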
0Houshalter9y
Your example is difficult to follow, but I think you are missing the point. If there is only one decision, then its actions can't be inconsistent. By choosing a policy only once - one that maximizes its desired probability distribution of utility outcomes - it's not money pumpable, and it's not inconsistent. Now by itself it still sucks, because we probably don't want to maximize for the best median future. But it opens up the door to more general policies for making decisions. You no longer have to use expected utility if you want to be consistent. You can choose a tradeoff between expected utility and median utility (see my top level comment), or a different algorithm entirely.
0AlexMennen9y
If there is only one decision point in each possible world, then it is impossible to demonstrate inconsistency within a world, but you can still be inconsistent between different possible worlds.

Edit: as V_V pointed out, the VNM framework was designed to handle isolated decisions. So if you think that considering an isolated decision rather than multiple decisions removes the motivation for the independence axiom, then you have misunderstood the motivation for the independence axiom.
1Stuart_Armstrong9y
I understand the two motivations for the independence axiom, and the practical one ("you can't be money pumped") is much more important than the theoretical one ("your system obeys this here philosophically neat understanding of irrelevant information"). But this is kind of a moot point, because humans don't have utility functions. And therefore we will have to construct them. And the process of constructing them is almost certainly going to depend on facts about the world, making the construction process almost certainly inconsistent between different possible worlds.
0AlexMennen9y
It shouldn't. If your preferences among outcomes depend on what options are actually available to you, then I don't see how you can justify claiming to have preferences among outcomes, as opposed to tendencies to make certain choices.
1Stuart_Armstrong9y
Then define me a process that takes people's current mess of preferences, makes these into utility functions, and, respecting bounded rationality, is independent of the options available in the real world. Even then, we have the problem that this mess of preferences is highly dependent on real world experiences in the first place.

If I always go left at a road, I have a tendency to make certain choices. If I have a full model of the entire universe with labelled outcomes ranked on a utility function, and use it with unbounded rationality to make decisions, I have preferences among outcomes. The extremes are clear. I feel that a bounded human being with a crude mental model, who is trying to achieve some goal imperfectly (because of ingrained bad habits, for instance), is better described as having preferences among outcomes. You could argue that they have mere tendencies, but this seems to stretch the term. But in any case, this is a simple linguistic dispute. Real human beings cannot achieve independence.
0AlexMennen9y
Define me a process with all those properties except the last one. If you can't do that either, it's not the last constraint that is to blame for the difficulty. Yes, different agents have different preferences. The same agent shouldn't have its preferences change when the available outcomes do. If you are neutral between .4A+.6C and .4B+.6C, then you don't have a very good claim to preferring A over B.
0Stuart_Armstrong9y
Well, there's my old idea here: http://lesswrong.com/lw/8qb/cevinspired_models/ . I don't think it's particularly good, but it does construct a utility function, and might be doable with good enough models or a WBE. More broadly, there's the general "figure out human preferences from their decisions and from hypothetical questions and fit a utility function to it", which we can already do today (see "inverse reinforcement learning"); we just can't do it well enough, yet, to get something generally safe at the other end. None of these ideas have independent variants (not technically true; I can think of some independent versions of them, but they're so ludicrously unsafe in our world that we'd rule them out immediately; thus, this would be a non-independent process).

If I actually do prefer A over B (and my behaviour reflects that in (1-ɛ)A + ɛC versus (1-ɛ)B + ɛC cases), then I have an extremely good claim to preferring A over B, and an extremely poor claim to independence.
0AlexMennen9y
I assumed accuracy was implied by "making a mess of preferences into a utility function". I'm somewhat skeptical of that strategy for learning utility functions, because the space of possible outcomes is extremely high-dimensional, and it may be difficult to test extreme outcomes because the humans you're trying to construct a utility function for might not be able to understand them. But perhaps this objection doesn't get to the heart of the matter, and I should put it aside for now.

I am admittedly not well-versed in inverse reinforcement learning, but this is a perplexing claim. Except for a few people like you suggesting alternatives, I've only ever heard "utility function" used to refer to a function you maximize the expected value of (if you're trying to handle uncertainty), or a function you just maximize the value of (if you're not trying to handle uncertainty). In the first case, we have independence. In the second case, the question of whether or not we obey independence doesn't really make sense. So if inverse reinforcement learning violates independence, then what exactly does it try to fit to human preferences?

Then if the only difference between two gambles is that one might give you A when the other might give you B, you'll take the one that might give you something you like instead of something you don't like.
0Stuart_Armstrong9y
To be clear, I am saying the process of constructing the utility function violates independence, not that subsequently maximising it does. Similarly, choosing a median-maximising policy P violates independence, but there is (almost certainly) a utility u such that maximising u is the same as following P. Once the first choice is made, we have independence in both cases; before it is made, we have it in neither. The philosophical underpinning of independence in single decisions therefore seems very weak.
0AlexMennen9y
Feel free to tell me to shut up and learn how inverse reinforcement learning works before bothering you with such questions, if that is appropriate, but I'm not sure what you mean. Can you be more precise about what property you're saying inverse reinforcement learning doesn't have?
0Stuart_Armstrong9y
Inverse reinforcement learning relies on observation of humans performing specific actions, and on drawing the "right" conclusion as to what their preferences are. Indirectly, it relies on humans tinkering with its code to remove "errors", i.e. things that don't fit with the mental image that human programmers have of what preferences should be. Given that human desires are not independent (citation not needed), this process, if it produces a utility function, involves constructing something independent from non-independent input. However, to establish this utility function, the algorithm has access only to the particular problems given to it, and the particular mental images of its programmers. It is almost certain that the end result would be somewhat different if it were trained on different problems, or if its programmers had different intuitions. Therefore the process itself cannot be independent.
0AlexMennen9y
Ah, I see what you mean, and you're right; the utility function constructed will depend on how the data points are sampled. This isn't quite the same as saying that the result will depend on what outcomes are actually available, though, unless knowledge about what outcomes will be available is used to determine how to sample the data. This still seems like somewhat of a defect of inverse reinforcement learning, unless there ends up being a good case that some particular way of sampling the data is optimal for revealing underlying preferences and ignoring biases, or something like that.

That's probably true, but on the other hand, you seem to want to pin the deviations of human behavior from VNM rationality on violations of the independence axiom, and it isn't clear to me that this is the case (I don't think the point you were making relies on this, so if you weren't trying to make that claim then you can ignore this; it just seemed like you might be). There are situations where there are large framing effects (that is, whether A or B is preferred depends on how the options are presented, even if no other outcome C is being mixed in with them), and likely also violations of transitivity (where someone would say A>B, B>C, and C>A whenever you ask them about two of them without bringing up the third). It seems likely to me that most paradoxes of human decision-making have more to do with these than with violations of independence.
0Houshalter9y
It can't be inconsistent within a world no matter how many decision points there are. If we agree it's not inconsistent, then what are you arguing against? I don't care about the VNM framework. As you said, it is designed to be optimal for decisions made in isolation. Because we don't need to make decisions in isolation, we don't need to be constrained by it.
0AlexMennen9y
No. Inconsistency between different possible worlds is still inconsistency. The difference doesn't matter that much in practice. If there are multiple decision points, you can combine them into one by selecting a policy, or by considering them sequentially and using your beliefs about what your choices will be in the future to compute the expected utilities of the possible decisions available to you now. The reason that the VNM framework was designed for one-shot decisions is that it makes things simpler without actually constraining what it can be applied to.
0Houshalter9y
It's perfectly consistent in the sense that it's not money-pumpable, and always makes the same decisions given the same information. It will make different decisions in different situations, given different information. But that is not inconsistent by any reasonable definition of "inconsistent".

It makes a huge difference. If you want to get the best median future, then you can't make decisions in isolation. You need to consider every possible decision you will have to make, and their probabilities. And choose a decision policy that selects the best median outcome.
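A minimal sketch of what "choose a decision policy that selects the best median outcome" looks like on a seatbelt-style toy model; all the numbers here are invented, and the policy is fixed once over the whole bundle of choices rather than per decision:

```python
# Pick the policy with the best median lifetime outcome, on a toy model of
# many small risky choices. All parameter values are made up for illustration.
import random, statistics

N = 2000            # number of small risky choices faced over a lifetime
p_death = 1e-3      # chance each unprotected choice kills you
cost = 1.0          # nuisance cost of each precaution
death_penalty = 1e6

def lifetime_outcome(take_precaution):
    """Total (negative) cost of one simulated lifetime under a fixed policy."""
    total = 0.0
    for _ in range(N):
        if take_precaution:
            total -= cost
        elif random.random() < p_death:
            return total - death_penalty   # died; stop accumulating
    return total

def median_value(policy, trials=4001):
    return statistics.median(lifetime_outcome(policy) for _ in range(trials))

print("median, always take precaution:", median_value(True))
print("median, never take precaution :", median_value(False))
```

With these made-up numbers, the death risk compounds over the two thousand choices, so the median prefers the cautious policy, even though each choice evaluated in isolation would have a better median without the precaution.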
0AlexMennen9y
As in my previous example (sorry about it being difficult to follow, though I'm not sure yet what I could say to clarify things), it is inconsistent in the sense that it can lead you to pay for probability distributions over outcomes that you could have achieved for free.

Right. As I just said, "you can... consider them sequentially and use your beliefs about what your choices will be in the future to compute the expected utilities of the possible decisions available to you now." (edited to fix grammar). This reduces iterated decisions to isolated decisions: you have certain beliefs about what you'll do in the future, and now you just have to make a decision on the issue facing you now.
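For concreteness, here is a minimal (and deliberately deterministic) sketch of that reduction: predict your own later choice first, then evaluate the current decision as an isolated one against that prediction. The decision names and payoffs are invented:

```python
# Two sequential binary decisions, reduced to isolated ones by first
# predicting the later choice. Payoffs are invented for illustration.
payoff = {("a", "x"): 3.0, ("a", "y"): 0.0, ("b", "x"): 1.0, ("b", "y"): 2.0}

# Beliefs about what you'll do later, for each way the first choice could go.
best_second = {first: max(("x", "y"), key=lambda s: payoff[(first, s)])
               for first in ("a", "b")}

# The first decision is now an isolated one, evaluated against those
# predicted continuations (with uncertainty, this would be an expectation).
best_first = max(("a", "b"), key=lambda f: payoff[(f, best_second[f])])
print(best_first, "then", best_second[best_first],
      "->", payoff[(best_first, best_second[best_first])])
```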

Median expected behavior is simple, which makes it easy to calculate.

As an electrical engineer, when I design circuits I start off by assuming that all my parts behave exactly as rated. If a resistor says it's 220 Ω ±10%, then I use 220 for my initial calculations. Assuming median behavior works wonderfully in telling me what my circuit probably will do.

In fact that's good enough info for me to base my design decision on for a lot of purposes (given a quick verification of functionality, of course).

But what about that 10%? What if it might matter? On...
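As a rough illustration of where the nominal calculation sits relative to the tolerance spread, here is a quick Monte Carlo sketch of a voltage divider; the supply voltage, resistor values, and uniform tolerance model are all assumptions made up for this example:

```python
# Nominal ("median") prediction vs. the spread from +/-10% parts,
# on an assumed 5 V voltage divider built from two 220-ohm resistors.
import random

VIN, R1_NOM, R2_NOM = 5.0, 220.0, 220.0

def vout(r1, r2):
    return VIN * r2 / (r1 + r2)

print("nominal prediction:", vout(R1_NOM, R2_NOM))   # 2.5 V

samples = sorted(
    vout(R1_NOM * random.uniform(0.9, 1.1), R2_NOM * random.uniform(0.9, 1.1))
    for _ in range(10000)
)
print("5th-95th percentile:", round(samples[500], 3), "to", round(samples[9500], 3))
# The nominal calculation says what the circuit will probably do;
# the tails say whether the 10% ever matters.
```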

0Houshalter9y
Worst case isn't a great metric either. E.g. you are required to pay the mugger, because not paying allows the worst possible case. Average case doesn't solve it either, because the utility the mugger is promising is large enough to outweigh the improbability of his being right. Rare outliers can throw off the average case by a lot. We need to invent some kind of policy to decide what actions to prefer, given a set of the utilities and probabilities of each possible outcome. Expected utility isn't good enough. Median utility isn't either. But there might be some compromise between them that gets what we want. Or a totally different algorithm altogether.
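A small sketch of that tension, with invented numbers (a 10^-8 chance of a 10^12 payout against a sure fee of 5): the mean says pay, the median says refuse, and a single rare outlier does all the work in the mean:

```python
# Compare mean and median utility on a Pascal's-mugging-shaped gamble.
# All payouts, probabilities and fees are invented for illustration.
def mean_utility(gamble):
    """gamble: list of (probability, utility) pairs summing to 1."""
    return sum(p * u for p, u in gamble)

def median_utility(gamble):
    cum = 0.0
    for p, u in sorted(gamble, key=lambda t: t[1]):   # walk the CDF upwards
        cum += p
        if cum >= 0.5:
            return u

pay_mugger = [(1e-8, 1e12 - 5), (1 - 1e-8, -5.0)]   # tiny chance of a huge payout, sure fee of 5
refuse     = [(1.0, 0.0)]

for name, g in [("pay the mugger", pay_mugger), ("refuse", refuse)]:
    print(name, "-> mean:", round(mean_utility(g), 1), " median:", median_utility(g))
```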
0Stuart_Armstrong9y
That's why I find it interesting that mean and median converge in many cases of repeated choices.

In finance we use medians a lot more than means.

1Lumifer9y
The rather important question is: For which purpose?

"Assume that avoiding these choices has a trivial cost, incommensurable with dying (ie no matter how many times you have to buckle your seatbelt, it still better than a fatal accident)."

Suppose you had a choice: die in a plane crash, or listen to those plane safety announcements one million times. I choose dying in a plane crash.

0Stuart_Armstrong9y
The incommensurability assumption is for illustration only, and is dropped later on.