
Median utility rather than mean?

6 Post author: Stuart_Armstrong 08 September 2015 04:35PM

tl;dr A median maximiser will expect to win. A mean maximiser will win in expectation. As we face repeated problems of similar magnitude, both types take on the advantage of the other. However, the median maximiser will turn down Pascal's muggings, and can say sensible things about distributions without means.

Prompted by some questions from Kaj Sotala, I've been thinking about whether we should use the median rather than the mean when comparing the utility of actions and policies. To justify this, see the next two sections: why the median is like the mean, and why the median is not like the mean.

 

Why the median is like the mean

The main theoretical justifications for the use of expected utility - hence of means - are the von Neumann-Morgenstern axioms. Using the median obeys the completeness and transitivity axioms, but not the continuity and independence ones.

It does obey weaker forms of continuity; but in a sense, this doesn't matter. You can avoid all these issues by making a single 'ultra-choice'. Simply list all the possible policies you could follow, compute their median return, and choose the one with the best median return. Since you're making a single choice, independence doesn't apply.

So you've picked the policy πm with the highest median value - note that to do this, you need only know an ordinal ranking of worlds, not their cardinal values. In what way is this like maximising expected utility? Essentially, the more options and choices you have - or could hypothetically have - the closer this policy must be to expected utility maximisation.

Assume u is a utility function compatible with your ordinal ranking of the worlds. Then πu = 'maximise the expectation of u' is also a policy choice. If we choose πm, we get a distribution dmu of possible values of u. Then E(u|πm) is within the mean absolute deviation (computed under dmu) of the median of dmu. This absolute deviation exists for any distribution with an expectation, and is itself bounded by the standard deviation, when that exists.
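As a quick numerical illustration of this bound (my own check, not part of the original argument), here is the chain |mean − median| ≤ mean absolute deviation ≤ standard deviation on a skewed exponential sample:

```python
import numpy as np

# Illustrative check of the bound |mean - median| <= MAD <= std
# on a deliberately skewed distribution (exponential with scale 2).
rng = np.random.default_rng(0)
samples = rng.exponential(scale=2.0, size=100_000)

mean = samples.mean()
median = np.median(samples)
mad = np.abs(samples - mean).mean()  # mean absolute deviation about the mean
std = samples.std()

gap = abs(mean - median)
assert gap <= mad <= std
```

For this distribution the gap is about 0.61, the mean absolute deviation about 1.47, and the standard deviation 2 - so the absolute deviation is indeed the sharper of the two bounds.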

Thus maximising the median is like maximising the mean, with an error depending on the standard deviation. You can see it as a risk averse utility maximising policy (I know, I know - risk aversion is supposed to go in defining the utility, not in maximising it. Read on!). And as we face more and more choices, the standard deviation will tend to fall relative to the mean, and the median will cluster closer and closer to the mean.

For instance, suppose we consider the choice of whether to buckle our seatbelt or not. Assume we don't want to die in a car accident that a seatbelt could prevent; assume further that the cost of buckling a seatbelt is trivial but real. To simplify, suppose we have an independent 1/Ω chance of death every time we're in a car, and that a seatbelt could prevent this, for some large Ω. Furthermore, we will be in a car a total of ρΩ times, for ρ < 0.5. Now, it seems, the median recommends a ridiculous policy: never wear seatbelts. Then you pay no cost ever, and your chance of dying is less than 50%, so this has the top median.

And that is indeed a ridiculous result. But it's only possible because we look at seatbelts in isolation. Every day, we face choices that have small chances of killing us. We could look when crossing the street; smoke or not smoke cigarettes; choose not to walk close to the edge of tall buildings; choose not to provoke co-workers to fights; not run around blindfolded. I'm deliberately including 'stupid things no-one sensible would ever do', because they are choices, even if they are obvious ones. Let's gratuitously assume that all these choices also have a 1/Ω chance of killing you. When you collect together all the possible choices (obvious or not) that you make in your life, this will be ρ'Ω choices, for ρ' likely quite a lot bigger than 1.

Assume that avoiding these choices has a trivial cost, incommensurable with dying (ie no matter how many times you have to buckle your seatbelt, it is still better than a fatal accident). Now median-maximisation will recommend taking safety precautions for roughly (ρ'-0.5)Ω of these choices. This means that the decisions of a median maximiser will be close to those of a utility maximiser - they take almost the same precautions - though the outcomes are still pretty far apart: the median maximiser accepts a 49.99999...% chance of death.
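Here's a small sketch of that threshold (my numbers; Ω = 10,000 and ρ' = 3 are arbitrary). The exact break-even is Ω·ln 2 ≈ 0.69Ω unprotected choices - the same order as the rough 0.5Ω figure above:

```python
import math

# Sketch of the seatbelt argument: each unprotected risky choice
# independently kills with probability 1/omega, and the median maximiser
# is content as long as the overall death probability stays below 50%,
# since then the median outcome is "alive, with fewer precaution costs".
omega = 10_000
total_choices = 3 * omega  # i.e. rho' = 3 in the post's notation

def death_prob(unprotected):
    """Chance of dying at least once across `unprotected` risky choices."""
    return 1 - (1 - 1 / omega) ** unprotected

# Largest number of unprotected choices with overall death probability
# still below 50%: the greatest m with (1 - 1/omega)**m > 0.5.
max_unprotected = int(math.log(0.5) / math.log(1 - 1 / omega))
precautions = total_choices - max_unprotected

assert death_prob(max_unprotected) < 0.5
assert death_prob(max_unprotected + 1) >= 0.5
```

With these numbers the median maximiser takes precautions on about 23,000 of the 30,000 choices, while accepting a death probability just shy of 50% on the rest.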

But now add serious injury to the mix (still assume the costs are incommensurable). This has a rather larger probability, and the median maximiser will now only accept a 49.99999...% chance of serious injury. Or add light injury - now they only accept a 49.99999...% chance of light injury. If light injuries are additive - two injuries are worse than one - then the median maximiser becomes even more reluctant to take risks. We can now relax the assumption of incommensurability as well; the set of policies and assessments becomes even more complicated, and the median maximiser moves closer to the mean maximiser.

The same phenomenon tends to happen when we add lotteries of decisions, chained decisions (decisions that depend on other decisions), and so on. Existential risks are interesting examples: from the selfish point of view, existential risks are just other things that can kill us - and not the most unlikely ones, either. So the median maximiser will be willing to pay a trivial cost to avoid an xrisk. Will a large group of median maximisers be willing to collectively pay a large cost to avoid an xrisk? That gets into superrationality, which I haven't considered yet in this context.

But let's turn back to the mystical utility function that we are trying to maximise. It's obvious that humans don't actually maximise a utility function; but according to the axioms, we should do so. Since we should, people on this list often tend to assume that we actually have one, skipping over the process of constructing it. But how would that process go? Let's assume we've managed to make our preferences transitive, already a major achievement. How should we go about making them independent as well? We can do so as we go along. But if we do it ahead of time, chances are that we will be comparing hypothetical situations ("Do I like chocolate twice as much as sex? What would I think of a 50% chance of chocolate vs guaranteed sex? Well, it depends on the situation...") and thus construct a utility function. This is where we have to make decisions about very obscure and unintuitive hypothetical tradeoffs, and find a way to fold all our risk aversion/risk love into the utility.

When median maximising, we do exactly the same thing, except we constrain ourselves to choices that are actually likely to happen to us. We don't need a full ranking of all possible lotteries and choices; we just need enough to decide in the situations we are likely to face. You could consider this a form of moral learning (or preference learning). From our choices in different situations (real or possible), we decide what our preferences are in these situations, and this determines our preferences overall.

 

Why the median is not like the mean

Ok, so the previous section argues that median maximising, if you have enough choices, functions like a clunky version of expected utility maximising. So what's the point?

The point is those situations that are not faced sufficiently often, or that have extreme characteristics. A median maximiser will reject Pascal's mugging, for instance, without any need for extra machinery (though they will accept Pascal's muggings if they face enough independent muggings, which is what we want - for stupidly large values of "enough"). They cope fine with distributions that have no means - such as the Cauchy distribution or a utility version of the St Petersburg paradox. They don't fall into paradox when facing choices with infinite (but ordered) rewards.
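To make the Cauchy point concrete (this is my own illustration): sample means of a Cauchy distribution never settle down - the mean of n Cauchy draws is itself Cauchy-distributed - while sample medians concentrate tightly around the true centre, so a median maximiser can still rank outcomes.

```python
import numpy as np

# 200 independent experiments, each drawing 10,000 standard Cauchy samples.
rng = np.random.default_rng(1)
samples = rng.standard_cauchy(size=(200, 10_000))

means = samples.mean(axis=1)        # erratic: heavy tails dominate
medians = np.median(samples, axis=1)  # stable: clustered near the true median 0

assert np.abs(medians).max() < 0.1  # every sample median is near 0
assert np.abs(means).max() > 5      # some sample means are far off
```

No matter how many samples you take, the means stay as wild as a single draw; the medians shrink towards 0 like 1/√n.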

In a sense, median maximisation is like expected utility maximisation for common choices, but is different for exceptionally unlikely or high impact choices. Or, from the opposite perspective, expected utility maximising gives high probability of good outcomes for common choices, but not for exceptionally unlikely or high impact choices.

Another feature of the general idea (which might be seen as either a plus or a minus) is that it can get around some issues with total utilitarianism and similar ethical systems (such as the repugnant conclusion). What do I mean by this? Well, because the idea is that only choices that we actually expect to make matter, we can say, for instance, that we'd prefer a small ultra happy population to a huge barely-happy one. And if this is the only choice we make, we need not fear any paradoxes: we might get hypothetical paradoxes, just not actual ones. I won't insist too much on this point; I just thought it was an interesting observation.

 

For lack of a Cardinal...

Now, the main issue is that we might feel that there are certain rare choices that are just really bad or really good. And we might come to this conclusion by rational reasoning, rather than by experience, so this will not show up in the median. In these cases, it feels like we might want to force some kind of artificial cardinal order on the worlds, to make the median maximiser realise that certain rare events must be considered beyond their simple ordinal ranking.

In this case, maybe we could artificially add some hypothetical choices to our system, making us address these questions more than we actually would, and thus drawing them closer to the mean maximising situation. But there may be other, better ways of doing this.

 

Anyway, that's my first pass at constructing a median maximising system. Comments and critiques welcome!

 

EDIT: We can use the absolute deviation (technically, the mean absolute deviation around the mean) to bound the distance between median and mean. This itself is bounded by the standard deviation, if it exists.

Comments (86)

Comment author: Lumifer 08 September 2015 05:17:23PM *  6 points [-]

Then E(u|πm) is within one standard deviation (using dmu) of the median value of dmu.

As the Wikipedia says, "If the distribution has finite variance". That's not necessarily a good assumption.

Consider a policy with three possible outcomes: one pony; two ponies; the universe is converted to paperclips. What's the median outcome? One pony. Don't you want a pony?

The median is a robust estimator meaning that it's harder for outliers to screw you up. The price for that, though, is indifference to the outliers which I am not sure is advisable in the utility context.

Comment author: V_V 09 September 2015 09:08:14AM -1 points [-]

As the Wikipedia says, "If the distribution has finite variance". That's not necessarily a good assumption.

In fact, "Pascal's mugging" scenarios tend to pop up when you allow for utility distributions with infinite variance.

Comment author: Lumifer 09 September 2015 04:30:09PM 1 point [-]

For Pascal's Muggings I don't think you care that much about variance -- what you want is a gargantuan skew.

Comment author: Stuart_Armstrong 08 September 2015 05:21:42PM -1 points [-]

Indeed. But the argument about convergence when you get more and more options still applies.

Comment author: Lumifer 08 September 2015 05:36:10PM *  2 points [-]

Still -- only if your true underlying distribution has finite variance. Check some plots of, say, a Cauchy distribution -- it doesn't take much heavy-tailedness to have no defined variance (or mean, for that matter).

Not everything converges to a Gaussian.

Comment author: Stuart_Armstrong 09 September 2015 09:19:41AM *  0 points [-]

You did notice that I mentioned the Cauchy distribution by name and link in the text, right?

And the Cauchy distribution is the worst possible example for defending the use of the mean - because it doesn't have one. Not even, a la St Petersburg paradox, an infinite mean, just no mean at all. But it does have a median, exactly placed in the natural middle.

Your argument works somewhat better with one of the stable distributions with an alpha between 1 and 2. But even there, you need a non-zero beta or else median=mean! The standard deviation is an upper bound on the difference, not necessarily a sharp one.

It would be interesting to analyse the difference between mean and median for stable distributions with non-zero beta; I'll get round to that some day. My best guess is that you could use some fractional moment to bound the difference, instead of (the square root of) the variance.

EDIT: this is indeed the case; you can use Jensen's inequality to show that the q-th root of the q-th absolute central moment, for 1<q<2, can be substituted as a bound between mean and median. For q<alpha, this should be finite.

Comment author: Lumifer 09 September 2015 04:35:29PM 1 point [-]

I only brought up Cauchy to show that infinite-variance distributions don't have to be weird and funky. Show a plot of a Cauchy pdf to someone who had, like, one undergrad stats course and she'll say something like "Yes, that's a bell curve" X-/

Comment author: Stuart_Armstrong 09 September 2015 06:46:22PM 0 points [-]

Actually, there's no need for higher central moments. The mean absolute deviation around the mean (which I would have called the first absolute central moment) bounds the difference between mean and median, and is sharper than the standard deviation.

Comment author: OrphanWilde 08 September 2015 05:05:44PM 4 points [-]

It's obvious that humans don't actually maximise a utility function; but according to the axioms, we should do so.

Given a choice between "change people" and "change axioms", I'd be inclined to change axioms.

Comment author: DanielLC 18 September 2015 08:16:34PM -1 points [-]

If you're a psychologist and you care about describing people, change the axioms. If you're a rationalist and you care about getting things done, change yourself.

Comment author: V_V 09 September 2015 08:59:25AM *  1 point [-]

The main theoretic justifications for the use of expected utility - hence of means - are the von Neumann Morgenstern axioms. Using the median obeys the completeness and transitivity axioms, but not the continuity and independence ones. It does obey weaker forms of continuity; but in a sense, this doesn't matter. You can avoid all these issues by making a single 'ultra-choice'. Simply list all the possible policies you could follow, compute their median return, and choose the one with the best median return. Since you're making a single choice, independence doesn't apply.

I think you misunderstand the von Neumann-Morgenstern axioms. Von Neumann-Morgenstern theory refers to one-shot decision making, not iterated decision making, hence there is nothing you can fix by taking decisions over policies.

Median utility maximization satisfies the axioms of completeness and transitivity. It does not satisfy continuity and independence of irrelevant alternatives.

Comment author: Stuart_Armstrong 09 September 2015 10:03:27AM 1 point [-]

The independence axiom derives most of its intuitive strength from the fact that if you violate it, you can be money pumped when presented with a sequence of decisions. When making a single decision over policy, independence has far less intuitive strength, as violating it has no actual cost.

Comment author: V_V 09 September 2015 12:47:23PM 0 points [-]

The independence axiom derives most of it intuitive strength from the fact that if you violate it, you can be money pumped when presented with a sequence of decisions.

If your preferences aren't transitive, then even your one-shot decision making system is completely broken, since it can't even yield an action that is "preferred" in a meaningful sense. Vulnerability to money pumping would be the last of your concerns in this case.

Money pumping is an issue in sequential decision making with time-discounting and/or time horizons: any method to aggregate future utilities other than exponential discounting ( * ) over an infinite time horizon yields dynamic inconsistency which could, in principle, be exploited for money pumping.

The intuitive justification for the independence axiom is the following:

  • What would you like for dessert, sir? Ice cream or cake?
  • Ice cream.
  • Oh sorry, I forgot! We also have fruit.
  • Then cake.

This decision making example looks intuitively irrational. If you prefer ice cream to cake when they are the only two alternatives, then why would you prefer cake to ice cream when a third, inferior, alternative is included? The independence axiom formalizes this intuition about rational behavior.

( * with no discounting being a special case of exponential discounting)

Comment author: AlexMennen 11 September 2015 01:16:00AM *  3 points [-]

If you prefer ice cream to cake when they are the only two alternatives, then why would you prefer cake to ice cream when a third, inferior, alternative is included?

You're thinking of a different meaning of "independence". A violation of the independence axiom of VNM would look more like this:

  • What would you like for dessert, sir? Ice cream or cake?
  • Ice cream.
  • Oh sorry, I forgot! There is a 50% chance that we are out of both ice cream and cake (I know we have either both or neither). But I'll go check, and if we're not out of dessert, I'll get you your ice cream.
  • Oh, in that case I'll have cake instead.
Comment author: V_V 11 September 2015 10:39:05AM -1 points [-]

Yes, I believe that this is a stronger version. Median utility satisfies the weaker version of the axiom but not the stronger one.

Comment author: Stuart_Armstrong 09 September 2015 12:49:30PM *  1 point [-]

But notice you had two decision points there.

Intransitivity breaks your decision system with a single decision point; dependence does not. Hence a single policy decision has to be transitive, but need not be independent.

Comment author: V_V 09 September 2015 12:54:29PM -1 points [-]

The first decision is immediately canceled and has no effect on your utility, hence it isn't really a relevant decision point.

More generally, the independence axiom makes sure that the outcome of your decision process is not affected by bad options that are available to you.

Comment author: Stuart_Armstrong 09 September 2015 01:03:31PM 1 point [-]

Except that median-maximising respects independence for options that are available to you (or can be trivially tweaked to do so). It only violates independence for hypothetical bad options that will never be available to you.

Comment author: Jiro 09 September 2015 03:04:54PM 0 points [-]

If you prefer ice cream to cake when they are the only two alternatives, then why would you prefer cake to ice cream when a third, inferior, alternative is included?

It can be rational to do this. There's a paradox publicized by Martin Gardner demonstrating how. Unfortunately the best link I could easily find was a Reddit comment, but try https://www.reddit.com/r/fffffffuuuuuuuuuuuu/comments/gxwqe/why_i_hate_people/c1r5203 .

Comment author: Irgy 10 September 2015 09:06:06AM 1 point [-]

This seems to be a case of trying to find easy solutions to hard abstract problems at the cost of failing to be correct on easy and ordinary ones. It's also fairly trivial to come up with abstract scenarios where this fails catastrophically, so it's not like this wins on the abstract scenarios front either. It just fails on a new and different set of problems - ones that aren't talked about because no-one's ever found a way to fail on them before.

Also, all of the problems you list it solving are problems which I would consider to be satisfactorily solved already. Pascal's mugging fails if the believability of the claim is impacted by the magnitude of the numbers in it, since the mugger can keep naming bigger numbers and simply suffer lower credibility as a result. The St Petersburg paradox is intellectually interesting but impossible to actually construct in practice given a finite universe (versions using infinite time are defeated by bounded utility within a time period and geometric future discounting). The Cauchy distribution is just one of many distributions with no mean; all that tells me is that it's the wrong function to model the world with if you know the world should have a mean. And the repugnant conclusion, well, I can't comment usefully about this because "repugnant" or not I've never viewed it to be incorrect in the first place - so to me this potentially justifying smaller but happier populations is an error if anything.

I just think it's worth making the point that the existing, complex solutions to these problems are a good thing. Complexity-influenced priors, careful handling of infinite numbers, bounded utility within a time period, geometric future discounting, integrable functions and correct utility summation and zero-points are all things we want to be doing anyway. Even when they're not resolving a paradox! The paradoxes are good, they teach us things which circumventing the paradoxes in this way would not.

PS People feel free to correct my incomplete resolutions of those paradoxes, but be mindful of whether any errors or differences of opinion I might have actually undermine my point here or not.

Comment author: Houshalter 10 September 2015 11:54:11PM 0 points [-]

Median utility does fail trivially. But it opens the door to other systems which might not. He just posted a refinement on this idea, Mean of Quantiles.

IMO this system is much more robust than expected utility. EU is required to trade away utility from the majority of possible outcomes to really rare outliers, like the mugger. Median utility will get you better outcomes at least 50% of the time. And tradeoffs like the one above will get you outcomes that are good in the majority of possible outcomes, ignoring rare outliers. I'm not satisfied it's the best possible system, so the subject is still worth thinking about and debating.

I don't think any of your paradoxes are solved. You can't get around Pascal's mugging by modifying your probability distribution. The probability distribution has nothing to do with your utility function or decision theory. Besides being totally inelegant and hacky, it might have practical consequences. Like you can't believe in the singularity now. The singularity could lead to vastly high utility futures, or really negative ones. Therefore its probability must be extremely small.

The St Petersburg casino is silly of course, but there's no reason a real thing couldn't produce a similar distribution: some sequence of events, each dependent on the previous one, each with 1/2 probability, and giving increasing utility.

Comment author: Irgy 11 September 2015 04:12:22AM 0 points [-]

I do acknowledge that my comment was overly negative, certainly the ideas behind it might lead to something useful.

I think you misunderstand my resolution of the mugging (which is fair enough since it wasn't spelled out). I'm not modifying a probability, I'm assigning different probabilities to different statements. If the mugger says he'll generate 3 units of utility difference that's a more plausible statement than if the mugger says he'll generate 3^^^3, etc. In fact, why would you not assign a different probability to those statements? So long as the implausibility grows at least as fast as the value (and why wouldn't it?) there's no paradox.

Re St Petersburg, sure you can have real scenarios that are "similar", it's just that they're finite in practice. That's a fairly important difference. If they're finite then the game has a finite value, you can calculate it, and there's no paradox. In which case median utility can only give the same answer or an exploitably wrong answer.

Comment author: Houshalter 19 September 2015 12:29:39PM 0 points [-]

The whole point of the Pascal's Mugging scenario is that the probability doesn't decrease faster than the reward. If, for example, you decrease the probability by half for each additional bit it takes to describe, 3^^^3 still only takes a few bits to write down.

Do you believe it's literally impossible that there is a matrix? Or that it can't be 3^^^3 large? Because when you assign these things so low probability, you are basically saying they are impossible. No amount of evidence could convince you otherwise.

I think EY had the best counter argument. He had a fictional scenario where a physicist proposed a new theory that was simple and fit the data perfectly. But the theory also implies a new law of physics that could be exploited for computing power, and would allow unfathomably large amounts of computing power. And that computing power could be used to create simulated humans.

Therefore anyone alive today has a small probability of affecting large amounts of simulated people. Since that is impossible, the theory must be wrong. It doesn't matter if it's simple or if it fits the data perfectly.

If they're finite then the game has a finite value, you can calculate it, and there's no paradox. In which case median utility can only give the same answer or an exploitably wrong answer.

Even in finite case, I believe it can grow quite large as the number of iterations increases. It's one expected dollar each step. Each step having half the probability of the previous step, and twice the reward.

Imagine the game goes for n finite steps. An expected utility maximizer would still spend $n to play the game. A median maximizer would say "You are never going to win in the lifetime of the universe and then some, so no thanks." The median maximizer seems correct to me.
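A quick simulation of the truncated game (my own sketch; the payout and truncation scheme is one standard way to finitise it): the expected payout grows with the number of allowed flips, but the median payout stays at a dollar or two forever.

```python
import random

# Finite St Petersburg: the pot starts at $1 and doubles on each tails;
# the game pays out at the first heads, or after n flips at the latest.
def play(n, rng):
    payout = 1
    for _ in range(n):
        if rng.random() < 0.5:  # heads: game ends, take the pot
            return payout
        payout *= 2             # tails: pot doubles, keep flipping
    return payout

rng = random.Random(0)
n = 30
payouts = sorted(play(n, rng) for _ in range(100_001))
median_payout = payouts[len(payouts) // 2]
expected = sum(payouts) / len(payouts)

assert median_payout <= 2  # the typical player wins a dollar or two
assert expected > 2        # the mean is pulled up by rare huge payouts
```

With n = 30 the theoretical expectation is $16, yet half of all plays pay $1 - which is exactly the gap between the mean maximiser's and the median maximiser's evaluation of the ticket.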

Comment author: Irgy 21 September 2015 12:19:44AM 0 points [-]

Re St Petersburg, I will reiterate that there is no paradox in any finite setting. The game has a value. Whether you'd want to take a bet at close to the value of the game in a large but finite setting is a different question entirely.

And one that's also been solved, certainly to my satisfaction. Logarithmic utility and/or the Kelly Criterion will both tell you not to bet if the payout is in money, and for the right reasons rather than arbitrary, value-ignoring reasons (in that they'll tell you exactly what you should pay for the bet). If the payout is directly in utility, well I think you'd want to see what mindbogglingly large utility looked like before you dismiss it. It's pretty hard if not impossible to generate that much utility with logarithmic utility of wealth and geometric discounting. But even given that, a one in a trillion chance at a trillion worthwhile extra days of life may well be worth a dollar (assuming I believed it of course). I'd probably just lose the dollar, but I wouldn't want to completely dismiss it without even looking at the numbers.

Re the mugging, well I can at least accept that there are people who might find this convincing. But it's funny that people can be willing to accept that they should pay but still don't want to, and then come up with a rationalisation like median maximising, which might not even pay a dollar for the mugger not to shoot their mother if they couldn't see the gun. If you really do think it's sufficiently plausible, you should actually pay the guy. If you don't want to pay I'd suggest it's because you know intuitively that there's something wrong with the rationale and refuse to pay a tax on your inability to sort it out. Which is the role the median utility is trying to play here, but to me it's a case of trying to let two wrongs make a right.

Personally though I don't have this problem. If you want to define "impossible" as "so unlikely that I will correctly never account for it in any decision I ever make" then yes, I do believe it's impossible and so should anyone. Certainly there's evidence that could convince me, even rather quickly, it's just that I don't expect to ever see such evidence. I certainly think there might be new laws of physics, but new laws of physics that lead to that much computing power that quickly is something else entirely. But that's just what I think, and what you want to call impossible is entirely a non-argument, irrelevant issue anyway.

The trap I think is that when one imagines something like the matrix, one has no basis on which to put an upper bound on the scale of it, so any size seems plausible. But there is actually a tool for that exact situation: the ignorance prior of a scale value, 1/n. Which happens to decay at exactly the same rate as the number grows. Not everyone is on board with ignorance priors but I will mention that the biggest problem with the 1/n ignorance prior is actually that it doesn't decay fast enough! Which serves to highlight the fact that if you're willing to have the plausibility decay even slower than 1/n, your probability distribution is ill-formed, since it can't integrate to 1.

Now to steel-man your argument, I'm aware of the way to cheat that. It's by redistributing the values by, for instance, complexity, such that a family of arbitrarily large numbers can have sufficiently high probability assigned while the overall integral remains unity. What I think though - and this is the part I can accept people might disagree with, is that it's a categorical error to use this distribution for the plausibility of a particular matrix-like unknown meta-universe. Complexity based probability distributions are a very good tool to describe, for instance, the plausibility of somebody making up such a story, since they have limited time to tell it and are more likely to pick a number they can describe easily. But being able to write a computer program to generate a number and having the actual physical resources to simulate that number of people are two entirely different sorts of things. I see no reason to believe that a meta-universe with 3^^^3 resources is any more likely than a meta-universe with similarly large but impossible to describe resources.

So I'll stick with my proportional to 1/n likelihood of meta-universe scales, and continue to get the answer to the mugging that everyone else seems to think is right anyway. I certainly like it a lot better than median utility. But I concede that I shouldn't have been quite so discouraging of someone trying to come up with an alternative, since not everyone might be convinced.

Comment author: Houshalter 22 September 2015 10:03:13PM 0 points [-]

Re St Petersburg, I will reiterate that there is no paradox in any finite setting. The game has a value. Whether you'd want to take a bet at close to the value of the game in a large but finite setting is a different question entirely.

Well there are two separate points of the St Petersburg paradox. One is the existence of relatively simple distributions that have no mean. It doesn't converge on any finite value. Another example of such a distribution, which actually occurs in physics, is the Cauchy distribution.

Another, which the original Pascal's Mugger post was intended to address, was Solomonoff induction. The idealized prediction algorithm used in AIXI. EY demonstrated that if you use it to predict an unbounded value like utility, it doesn't converge or have a mean.

The second point is just that the paying more than a few bucks to pay the game is silly. Even in a relatively small finite version of it. The probability of losing is very high. Even though it has a positive expected utility. And this holds even if you adjust the payout tables to account for utility != dollars.

You can bite the bullet and say that if the utility is really so high, you really should take that bet. And that's fine. But I'm not really comfortable betting away everything on such tiny probabilities. You are basically guaranteed to lose and end up worse than not betting.

not even pay a dollar for the mugger not to shoot their mother if they couldn't see the gun.

You can do a tradeoff between median maximizing and expected utility with mean of quantiles. This basically gives you the best average outcome ignoring incredibly unlikely outcomes. Even median maximizing by itself, which seems terrible, will give you the best possible outcome >50% of the time. The median is fairly robust.

Whereas expected utility could give you a shitty outcome 99% of the time or 99.999% of the time, etc. As long as the outliers are large enough.

Certainly there's evidence that could convince me, even rather quickly, it's just that I don't expect to ever see such evidence.

If you are assigning 1/3^^^3 probability to something, then no amount of evidence will ever convince you.

I'm not saying that unbounded computing power is likely. I'm saying you shouldn't assign infinitely small probability to it. The universe we live in runs on seemingly infinite computing power. We can't even simulate the very smallest particles because of how quickly the number of required computations grows.

Maybe someday someone will figure out how to use that computing power. Or even figure out that we could interact with the parent universe that runs us, etc. You shouldn't use a model that assigns these things 0 probability.

Comment author: Houshalter 09 September 2015 02:56:02AM *  1 point [-]

I posted this exact idea a few months ago. There was a lot of discussion about it which you might find interesting. We also discussed it recently on the irc channel.

Median utility by itself doesn't work. I came up with an algorithm that compromises between them. In everyday circumstances it behaves like expected utility. In extreme cases, it behaves like median utility. And it has tunable parameters:

sample n counterfactuals from your probability distribution, and take the average of these n outcomes. [EDIT: do this an infinite number of times, and take the median of all these means.] E.g. 50% of the time the average of the n outcomes is higher, and 50% of the time it's lower.

As n approaches infinity it becomes equivalent to expected utility, and as it approaches 1 it becomes median utility. A reasonable value is probably a few hundred, so that you select outcomes where you come out ahead the vast majority of the time, but still take low-probability risks or ignore low-probability rewards.

EDIT: Stuart Armstrong's idea is much better than this and gets about the same results: http://lesswrong.com/r/discussion/lw/mqk/mean_of_quantiles/

I believe this more closely matches how humans actually make decisions, and what we actually want, than expected utility. But I am no longer certain of this. Someone suggested that you can deal with most of the expected utility issues by modifying the utility function. And that is somewhat more elegant than this.

As for inconsistency, I proposed a way of dealing with that too. EU is consistent at every single point in time. It's memoryless. If you can precommit yourself to doing certain things in the future, you don't need this property. You can maintain consistency by committing yourself to only take actions that are consistent with your current decision theory.

This is basically the same thing as your policy selection idea.

Comment author: Lumifer 09 September 2015 03:44:38AM *  1 point [-]

I came up with an algorithm that compromises between them.

I am not sure of the point. If you can "sample ... from your probability distribution" then you fully know your probability distribution including all of its statistics -- mean, median, etc. And then you proceed to generate some sample estimates which just add noise but, as far as I can see, do nothing else useful.

If you want something more robust than the plain old mean, check out M-estimators which are quite flexible.

Comment author: evand 09 September 2015 02:37:58PM 0 points [-]

If you can "sample ... from your probability distribution" then you fully know your probability distribution

That's not true. (Though it might well be in all practical cases.) In particular, there are good algorithms for sampling from unknown or uncomputable probability distributions. Of course, any method that lets you sample from a distribution also lets you estimate its parameters, but that's exactly the process the parent comment is suggesting.

Comment author: Lumifer 09 September 2015 05:02:09PM 0 points [-]

A fair point, though I don't think it makes any difference in the context. And I'm not sure the utility function is amenable to MCMC sampling...

Comment author: evand 10 September 2015 03:30:35AM 0 points [-]

I basically agree. However...

It might be more amenable to MCMC sampling than you think. MCMC basically is a series of operations of the form "make a small change and compare the result to the status quo", which now that I phrase it that way sounds a lot like human ethical reasoning. (Maybe the real problem with philosophy is that we don't consider enough hypothetical cases? I kid... mostly...)

In practice, the symmetry constraint isn't as nasty as it looks. For example, you can do MH to sample a random node from a graph, knowing only local topology (you need some connectivity constraints to get a good walk length to get good diffusion properties). Basically, I posit that the hard part is coming up with a sane definition for "nearby possible world" (and that the symmetry constraint and other parts are pretty easy after that).
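For concreteness, the node-sampling trick might be sketched like this (Python; the toy graph and walk length are made up). The target is the uniform distribution over nodes, the proposal is a uniform random neighbour, and each step only ever looks at local adjacency:

```python
import random
from collections import Counter

# Toy undirected graph, known only through each node's neighbour list.
graph = {
    "a": ["b"],
    "b": ["a", "c", "d"],
    "c": ["b", "d"],
    "d": ["b", "c"],
}

def mh_uniform_walk(start, steps):
    """Metropolis-Hastings walk whose stationary distribution is uniform
    over nodes, using only local topology."""
    random.seed(0)  # arbitrary seed for reproducibility
    v = start
    visits = Counter()
    for _ in range(steps):
        w = random.choice(graph[v])
        # Accept with prob min(1, deg(v)/deg(w)) to undo the degree bias
        # of a plain random walk.
        if random.random() < min(1.0, len(graph[v]) / len(graph[w])):
            v = w
        visits[v] += 1
    return visits

counts = mh_uniform_walk("a", 100_000)
print({node: c / 100_000 for node, c in sorted(counts.items())})  # each ~0.25
```

Despite node "a" having degree 1 and "b" degree 3, the corrected walk visits each node about a quarter of the time.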

Comment author: Lumifer 10 September 2015 02:41:12PM *  0 points [-]

Maybe the real problem with philosophy is that we don't consider enough hypothetical cases? I kid... mostly...

In that case we can have wonderful debates about which sub-space to sample our hypotheticals from, and once a bright-eyed and bushy-tailed acolyte breathes out "ALL of it!" we can pontificate about the boundaries of all :-)

P.S. In about a century philosophy will discover the curse of dimensionality and there will be much rending of clothes and gnashing of teeth...

Comment author: Houshalter 09 September 2015 04:21:39AM 0 points [-]

I should have explained it better. You take n samples, and calculate the mean of those samples. You do that a bunch of times, and create a new distribution of those means of samples. Then you take the median of that.

This gives a tradeoff between mean and median. As n goes to infinity, you just get the mean. As n goes to 1, you just get the median. Values in between are a compromise. n = 100 will roughly ignore things that have less than 1% chance of happening (as opposed to less than 50% chance of happening, like the standard median.)
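In code, the procedure might look like this (Python; the lottery and all of its numbers are made up for illustration, and the infinite repetition is approximated by a finite number of trials):

```python
import random
import statistics

def median_of_means(sample_outcome, n, trials=1001):
    """Average n sampled outcomes, repeat many times, and take the median
    of those averages. n=1 recovers the plain median; as n grows the
    result approaches the mean."""
    means = [statistics.mean(sample_outcome() for _ in range(n))
             for _ in range(trials)]
    return statistics.median(means)

# Hypothetical lottery: win $1000 with probability 0.001, else nothing.
# Its mean is 1; its median is 0.
random.seed(0)
lottery = lambda: 1000 if random.random() < 0.001 else 0

print(median_of_means(lottery, n=1))     # plain median: the rare win vanishes
print(median_of_means(lottery, n=100))   # <1% events are still mostly ignored
print(median_of_means(lottery, n=2000))  # approaches the true mean of 1
```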

Comment author: Lumifer 09 September 2015 04:53:32AM *  4 points [-]

This gives a tradeoff between mean and median.

There is a variety of ways to get a tradeoff between the mean and the median (or, more generally, between an efficient but not robust estimator and a robust but not efficient estimator). The real question is how do you decide what a good tradeoff is.

Basically if your mean and your median are different, your distribution is asymmetric. If you want a single-point summary of the entire distribution, you need to decide how to deal with that asymmetry. Until you specify some criteria under which you'll be optimizing your single-point summary you can't really talk about what's better and what's worse.

Comment author: Houshalter 09 September 2015 09:02:59PM *  0 points [-]

This is just one of many possible algorithms which trade off between median and mean. Unfortunately there is no objective way to determine which one is best (or the setting of the hyperparameter.)

The criteria we are optimizing is just "how closely does it match the behavior we actually want."

EDIT: Stuart Armstrong's idea is much better: http://lesswrong.com/r/discussion/lw/mqk/mean_of_quantiles/

Comment author: Lumifer 09 September 2015 09:07:45PM 1 point [-]

And what is "the behavior we actually want"?

Comment author: AlexMennen 09 September 2015 01:59:28AM 1 point [-]

I don't understand your argument that the median utility maximizer would buckle its seat belt in the real world. It seemed like you might be arguing that median utility maximizers and expected utility maximizers would always approximate each other under realistic conditions. But you then argue that the alleged difference in their behavior on the Pascal's mugging problem is a reason to prefer median utility maximizers (implying that Pascal's mugging-type problems should be accepted as realistic, or at least that getting them correct is important in a way that getting "buckle my seatbelt, given that this is the only decision I will ever make" right isn't), so I guess that's not it.

But anyway, even if you are right that median utility maximizers buckle their seatbelts in the context of a realistic collections of choices, you concede that they do not buckle their seatbelts when the decision is isolated, and that this is the incorrect decision. I think you should take the fact that your proposal gets a really easy problem wrong much more seriously. If it can't get the seatbelt problem right, it is a bad algorithm, and bad algorithms should not be expected to perform well in real-world problems. I would give an example of a real-world problem that it performs poorly on, but I would have said something like the seatbelt problem, and since I don't understand your argument that it gets that right in the real world, I don't know what must be done in order to construct an example to which your argument does not apply.

Furthermore, I am unimpressed that median utility maximizers reject Pascal's mugging. If you take a random function from decision problems to decisions, there is about a 50% chance it will reject Pascal's mugging, but that doesn't make it a good decision theory. And median utility maximizers do not reject Pascal's mugging for correct reasons. To see this, note that if the seatbelt problem is considered in isolation, it looks exactly like the Pascal's mugging problem, in terms of all the information that median utility maximizers pay attention to, so median utility maximizers do analogous actions in each problem (don't bother putting your seatbelt on, and don't pay the mugger, respectively). However, there are important differences between the problems that make it correct to put your seatbelt on but not pay the mugger. Since a median utility maximizer does not consider these differences, its decision not to pay the mugger does not take into account the reasons that it is a good idea not to pay the mugger. It appears to me that you are not even really trying to come up with a way to make the right decisions for the right reasons, and instead you are merely trying to find a way to make the right decisions. I think that this approach is misguided, because the space of possible failure modes for a decision theory is vast, so if you successfully kludge together a decision procedure into performing well on a certain reasonably finite collection of decision problems, without ensuring that it arrives at its decisions in ways that make sense, the chances that it performs well on all decision problems, or even most of them, is vanishingly small.

Since you brought up the iterated Pascal's mugging, perhaps part of your motivation for this was to find something that would not pay in the isolated Pascal's mugging, but pay each time in the iterated Pascal's mugging? First of all, as literally stated, paying each time in the iterated Pascal's mugging isn't even an available option (I don't have $5 billion, so I can't pay off 1 billion muggers), so it is trivially false that the correct action is to pay every time. However, it is true that there are interpretations of what you could mean under which I would agree that paying is the correct action. But in those cases, an expected utility maximizer with a reasonable bounded utility function will pay, even while not paying in the standard Pascal's mugging problem. (The naive model of the situation in which iterating the problem does not change how an expected utility maximizer handles it does not correctly model the interpretation of "iterated Pascal's mugging" in which it makes sense to pay. I'd say what I mean, but actually keeping track of everything relevant to the problem makes it somewhat tedious to explain.)

Comment author: Stuart_Armstrong 09 September 2015 10:40:13AM *  0 points [-]

I don't understand your argument that the median utility maximizer would buckle its seat belt in the real world.

It derives from the fact that median maximalisation doesn't consider decisions independently, even if their gains and losses are independent.

For illustration, compare the following deal: you pay £q, and get £1 with probability p. There are n independent deals (assume your utility is linear in £).

If n=1, the median maximiser accepts the deal iff q<1 and p>0.5. Not a very good performance! Now let's look at larger n. For m < n, accepting m deals gets you an expected reward of m(p-q). The median is a bit more complicated (see https://en.wikipedia.org/wiki/Binomial_distribution#Mode_and_median ), but it's within £1 of the mean reward.

So if p<q, the mean maximiser will reject all deals, and if p>q, it will accept all n deals.

For p<q, the median maximiser will accept at most 1/(q-p) deals. And for p>q, it will accept at least n - 1/(p-q) deals. In all cases, its expected loss, compared with the mean maximiser, is less than £1.
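To make the illustration concrete, here is a short script (Python) that computes the median maximiser's choice exactly from the binomial CDF rather than by simulation; the particular p, q, n below are arbitrary:

```python
from math import comb

def binom_median(m, p):
    # Smallest k with P(Binomial(m, p) <= k) >= 1/2.
    cdf = 0.0
    for k in range(m + 1):
        cdf += comb(m, k) * p**k * (1 - p)**(m - k)
        if cdf >= 0.5:
            return k
    return m

def best_m_by_median(n, p, q):
    # Accepting m of the n deals pays out Binomial(m, p) - m*q pounds;
    # pick the m whose median payout is highest.
    return max(range(n + 1), key=lambda m: binom_median(m, p) - m * q)

# p > q: profitable deals. The median maximiser accepts nearly all of
# them (at least n - 1/(p-q) = 90 here).
print(best_m_by_median(100, 0.4, 0.3))

# p < q: unprofitable deals. It accepts at most 1/(q-p) = 10 of them
# (just 2 in this case).
print(best_m_by_median(100, 0.3, 0.4))
```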

There's a similar effect going on when considering the seat-belt situation. Aggregation concentrates the distribution in a way that moved median and mean towards each other.

Comment author: AlexMennen 09 September 2015 04:24:43PM 0 points [-]

You appear to now be making an argument that you already conceded was incorrect in OP:

This means that the decision of a median maximiser will be close to those of a utility maximiser - they take almost the same precautions - though the outcomes are still pretty far apart: the median maximiser accepts a 49.99999...% chance of death.

You then go on to say that if the agent also faces many decisions of a different nature, it won't do that. That's where I get lost.

Comment author: Stuart_Armstrong 09 September 2015 05:00:47PM 0 points [-]

The median maximiser accepts a 49.99999...% chance of death, only because "death", "trivial cost" and "no cost" are the only options here. If I add "severe injury" and "light injury" to the outcomes, the maximiser will now accept less than a 49.9999...% chance of light injury. If we make light injury additive, and make the trivial cost also additive and not incomparable to light injuries, we get something closer to my illustrative example above.

Comment author: AlexMennen 09 September 2015 08:34:32PM 1 point [-]

Suppose it comes up with 2 possible policies, one of which involves a 49% chance of death and no chance of injury, and another which involves a 49% chance of light injury, and no chance of heavy injury or death. The median maximizer sees no reason to prefer the second policy if they have the same effects the other 51% of the time.

Comment author: Stuart_Armstrong 10 September 2015 08:48:26AM 0 points [-]

Er, yes, constructing single-choice examples where the median behaves oddly/wrongly is trivial. My whole point is about what happens to the median when you aggregate decisions.

Comment author: AlexMennen 10 September 2015 04:15:00PM -1 points [-]

You were claiming that in a situation where a median-maximizing agent has a large number of trivially inconvenient actions that prevent small risks of death, heavy injury, or light injury, it would accept a 49% chance of light injury, but you seemed to imply that it would not accept a 49% chance of death. I was trying to point out that this appears to be incorrect.

Comment author: Stuart_Armstrong 11 September 2015 08:30:29AM 1 point [-]

I'm not entirely sure what your objection is; we seem to be talking at cross purposes.

Let's try it simpler. If we assume that the cost of buckling seat belts is incommensurable (in practice) with light injury (and heavy injury, and death), then the median maximising agent will accept a 49.99..% chance of (light injury or heavy injury or death), over their lifetime. Since light injury is much more likely than death, this in effect forces the probability of death down to a very low amount.

It's just an illustration of the general point that median maximising seems to perform much better in real-world problems than its failure in simple theoretical ones would suggest.

Comment author: AlexMennen 11 September 2015 04:27:46PM -2 points [-]

Since light injury is much more likely than death, this in effect forces the probability of death down to a very low amount.

No, it doesn't. That does not address the fact that the agent will not preferentially accept light injury over death. Adopting a policy of immediately committing suicide once you've been injured enough to force you into the bottom half of outcomes does not decrease median utility. The agent has no incentive to prevent further damage once it is in the bottom half of outcomes. As a less extreme example, the value of house insurance to a median maximizer is 0, because losing your house is a bad outcome even if you get insurance money for it. This isn't a weird hypothetical that relies on it being an isolated decision; it's a real-life decision that a median maximizer would get wrong.
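To put made-up numbers on the insurance example (Python; the house value, fire probability and premium are all hypothetical, and the distribution is discretized onto 10,000 equally likely worlds):

```python
import statistics

def median_outcome(outcomes):
    # outcomes: list of (probability, net dollars).
    # Expand onto a grid of 10,000 equally weighted worlds, take the median.
    grid = []
    for prob, value in outcomes:
        grid += [value] * round(prob * 10_000)
    return statistics.median(grid)

# Hypothetical numbers: $300k house, 1% chance of fire, $1k premium.
no_insurance = [(0.99, 0), (0.01, -300_000)]
insurance = [(1.00, -1_000)]

print(median_outcome(no_insurance))  # 0: the fire sits below the median
print(median_outcome(insurance))     # -1000: the median maximizer never buys
```

The expected values are -$3,000 uninsured versus -$1,000 insured, so a mean maximizer buys the insurance; the medians are 0 versus -$1,000, so a median maximizer never does.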

Comment author: Stuart_Armstrong 14 September 2015 11:39:59AM 0 points [-]

A more general way of stating how multiple decisions improve median maximalisation: the median maximaliser is indifferent to outcomes not at the median (e.g. suicide vs light injury). But as the decision tree grows, and the number of possible situations grows with it, the probability increases that outcomes which are not at the median in a one-shot decision will affect the median in the more complex situation.

Comment author: Stuart_Armstrong 14 September 2015 11:37:58AM *  0 points [-]

Look, we're arguing past each other here. My logical response would be to add more options to the system, which would remove the problem you identified. (And I don't understand your house insurance example - it's just the seat-belt decision again as a one-shot. I would address it by looking at all the financial decisions you make in your life - and if that's not enough, all the decisions, including all the "don't do something clearly stupid and pointless" ones.)

What I think is clear is:

a) Median maximalisation makes bad decisions in isolated problems.

b) If we combine all the likely decisions that a median maximiser will have to make, the quality of its decisions increases.

If you want to argue against it, either say that a) is bad enough we should reject the approach anyway, even if it decides well in practice, or find examples where a real world median maximaliser will make bad decisions even in the real world (if you would pay Pascal's mugger, then you could use that as an example).

Comment author: Houshalter 09 September 2015 04:09:57AM 0 points [-]

How do you know that it's right to buckle your seatbelt? If you are only going to ride in a car once, never again, and there are no other risks to your life, then there's no need to make a general policy against taking small risks.

I'm not confident that it's actually the wrong choice. And if it is, it shouldn't matter much. 99.99% of the time, the median maximizer will come out with higher utility than the EU maximizer.

This is generalizable. If there was a "utility competition" between different decision policies in the same situations, the median utility would usually come out on top. As the possible outcomes become more extreme and unlikely, expected utility will do worse and worse. With pascal's mugging at the extreme.

That's because EU trades away utility from the majority of possible outcomes, to really really unlikely outcomes. Outliers can really skew the mean of a distribution, and EU is just the mean.

Of course median can be exploited too. Perhaps there is some compromise between them that gets the behavior we want. There are an infinite number of possible policies for deciding which distribution of utilities to prefer.

EU was chosen because it is the only one that meets a certain set of conditions and is perfectly consistent. But if you allow for algorithms that select overall policies instead of decisions, like OP does, then you can make many different algorithms consistent.

So there is no inherent reason to prefer mean over median. It just comes down to personal preference, and subjective values. What probability distribution of utilities do you prefer?

Comment author: AlexMennen 09 September 2015 04:59:45AM 0 points [-]

How do you know that it's right to buckle your seatbelt? If you are only going to ride in a car once, never again.

I do think that the isolation of the decision is a red herring, but for the sake of the point I was trying to make, it is probably easier to replace the example with a structurally similar one in which the right answer is obvious: suppose you have the opportunity to press a button that will kill you with 49% probability, and give you $5 otherwise. This is the only decision you will ever make. Should you press the button?

Perhaps there is some compromise between them that gets the behavior we want.

As I was saying in my previous comment, I think that's the wrong approach. It isn't enough to kludge together a decision procedure that does what you want on the problems you thought of, because then it will do something you don't want on something you haven't thought of. You need a decision procedure that will reliably do the right thing, and in order to get that, you need it to do the right thing for the right reasons. EU maximization, applied properly, will tell you to do the correct things, and will do so for the correct reasons.

So there is no inherent reason to prefer mean over median.

Actually, there is: https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem

Comment author: Houshalter 09 September 2015 05:55:02AM 0 points [-]

suppose you have the opportunity to press a button that will kill you with 49% probability, and give you $5 otherwise.

Yes I said that median utility is not optimal. I'm proposing that there might be policies better than both EU or median.

Actually, there is: https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem

Please reread the OP and my comment. If you allow selection over policies instead of individual decisions, you can be perfectly consistent. EU and median are both special cases of ways to pick policies, based on the probability distribution of utility they produce.

You need a decision procedure that will reliably do the right thing, and in order to get that, you need it to do the right thing for the right reasons. EU maximization, applied properly, will tell you to do the correct things, and will do so for the correct reasons.

There is no law of the universe that some procedures are correct and others aren't. You just have to pick one that you like, and your choice is going to be arbitrary.

If you go with EU you are pascal muggable. If you go with median you are muggable in certain cases as well (though you should usually, with >50% probability, end up with better outcomes in the long run. Whereas EU could possibly fail 100% of the time. So it's exploitable, but it's less exploitable at least.)

Comment author: AlexMennen 09 September 2015 07:46:52AM 0 points [-]

If you allow selection over policies instead of individual decisions, you can be perfectly consistent.

I don't see how selecting policies instead of actions removes the motivation for independence.

You just have to pick one that you like, and your choice is going to be arbitrary.

Ultimately, it isn't the policy that you care about; it's the outcome. So you should pick a policy because you like the probability distributions over outcomes that you get from implementing it more than you like the probability distributions over outcomes that you would get from implementing other policies. Since there are many decision problems to use your policy on, this quite heavily constrains what policy you choose. In order to get a policy that reliably picks the actions that you decide are correct in the situations where you can tell what the correct action is, it will have to make those decisions for the same reason you decided that it was the best action (or at least something equivalent to or approximating the same reason). So no, the choice of policy is not at all arbitrary.

If you go with EU you are pascal muggable.

That is not true. EU maximizers with bounded utility functions reject Pascal's wager.
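As a toy numeric check of that claim (Python; the bound, the scale, and the mugger's numbers are all made up): once utility saturates, the mugger's promised payoff is capped, and a tiny probability times a capped gain loses to a certain $5.

```python
import math

BOUND = 1_000_000.0   # hypothetical cap on utility
SCALE = 100_000.0     # hypothetical saturation scale, in dollars

def bounded_utility(dollars):
    # Saturates at BOUND as dollars grow; roughly linear for small amounts.
    return BOUND * (1 - math.exp(-dollars / SCALE))

p_mugger = 1e-30      # credence that the mugger can actually deliver
promised = 10**100    # dollars' worth of value the mugger promises

# Expected utility gain from paying $5: the capped upside, weighted by
# p_mugger, minus the sure loss of $5's worth of utility.
gain = p_mugger * bounded_utility(promised) - bounded_utility(5)
print(gain < 0)   # True: the bounded-utility agent rejects the mugging
```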

Comment author: Stuart_Armstrong 09 September 2015 10:52:25AM 1 point [-]

I don't see how selecting policies instead of actions removes the motivation for independence.

There are two reasons to like independence. First of all, you might like it for philosophical/aesthetic reasons: "these things really should be independent, these really should be irrelevant". Or you could like it because it prevents you from being money pumped.

When considering policies, money pumping is (almost) no longer an issue, because a policy that allows itself to be money-pumped is (almost) certainly inferior to one that doesn't. So choosing policies removes one of the motivations for independence, to my mind the important one.

Comment author: AlexMennen 09 September 2015 08:29:59PM 0 points [-]

While it's true that this does not tell you to pay each time to switch the outcomes around in a circle over and over again, it still falls prey to one step of a similar problem. Suppose there are 3 possible outcomes: A, B, and C, and there are 2 possible scenarios: X and Y. In scenario X, you get to choose between A and B. In scenario Y, you can attempt to choose between A and B, and you get what you picked with 50% probability, and you get outcome C otherwise. In each scenario, this is the only decision you will ever make. Suppose in scenario X, you prefer A over B, but in scenario Y, you prefer (B+C)/2 over (A+C)/2. But suppose you had to pay to pick A in scenario X, and you had to pay to pick (B+C)/2 in scenario Y, and you still make those choices. If Y is twice as likely as X a priori, then you are paying to get a probability distribution over outcomes that you could have gotten for free by picking B given X, and (A+C)/2 given Y. Since each scenario only involves you ever getting to make one decision, picking a policy is equivalent to picking a decision.

Comment author: Houshalter 09 September 2015 09:22:01PM 0 points [-]

Your example is difficult to follow, but I think you are missing the point. If there is only one decision, then its actions can't be inconsistent. By choosing a policy only once - one that maximizes its desired probability distribution over utility outcomes - it's not money-pumpable, and it's not inconsistent.

Now by itself it still sucks because we probably don't want to maximize for the best median future. But it opens up the door to more general policies for making decisions. You no longer have to use expected utility if you want to be consistent. You can choose a tradeoff between expected utility and median utility (see my top level comment), or a different algorithm entirely.

Comment author: AlexMennen 09 September 2015 11:52:42PM *  0 points [-]

If there is only one decision point in each possible world, then it is impossible to demonstrate inconsistency within a world, but you can still be inconsistent between different possible worlds.

Edit: as V_V pointed out, the VNM framework was designed to handle isolated decisions. So if you think that considering an isolated decision rather than multiple decisions removes the motivation for the independence axiom, then you have misunderstood the motivation for the independence axiom.

Comment author: Stuart_Armstrong 10 September 2015 08:46:45AM 1 point [-]

So if you think that considering an isolated decision rather than multiple decisions removes the motivation for the independence axiom, then you have misunderstood the motivation for the independence axiom.

I understand the two motivations for the independence axiom, and the practical one ("you can't be money pumped") is much more important than the theoretical one ("your system obeys this here philosophically neat understanding of irrelevant information").

But this is kind of a moot point, because humans don't have utility functions. And therefore we will have to construct them. And the process of constructing them is almost certainly going to depend on facts about the world, making the construction process almost certainly inconsistent between different possible worlds.

Comment author: Houshalter 10 September 2015 12:08:00AM 0 points [-]

It can't be inconsistent within a world no matter how many decision points there are. If we agree it's not inconsistent, then what are you arguing against?

I don't care about the VNM framework. As you said, it is designed to be optimal for decisions made in isolation. Because we don't need to make decisions in isolation, we don't need to be constrained by it.

Comment author: PeterCoin 09 September 2015 01:35:44AM *  0 points [-]

Median expected behavior is simple, which makes it easy to calculate.

As an electrical engineer, when I design circuits I start off by assuming that all my parts behave exactly as rated. If a resistor is rated at 220Ω ±10%, then I use 220 for my initial calculations. Assuming median behavior works wonderfully in telling me what my circuit will probably do.

In fact that's good enough info for me to base my design decision on for a lot of purposes (given a quick verification of functionality, of course).

But what about that 10%? What if it might matter? One thing I do is called worst case analysis https://en.wikipedia.org/wiki/Tolerance_analysis#Worst-case

This is the exact opposite of what you're proposing! I look for the cases where everything is off by the greatest amount possible, combined in the way that forms the worst possible outcome. If my circuit has two 220Ω ±10% resistors, I'll consider the cases where both are 242Ω, both are 198Ω, and even the bizarre cases where one is 198Ω and the other 242Ω. I do that because if I know my circuit will function under those circumstances, then there's only a problem when the resistors are out of tolerance (and I can blame someone else).
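The corner enumeration is easy to script. A sketch (Python) for a hypothetical voltage divider built from two such resistors:

```python
from itertools import product

def divider_ratio(r1, r2):
    # Vout/Vin for a two-resistor voltage divider (output taken across r2).
    return r2 / (r1 + r2)

NOMINAL, TOL = 220.0, 0.10
corners = (NOMINAL * (1 - TOL), NOMINAL * (1 + TOL))   # 198 and 242 ohms

# Evaluate every combination of tolerance extremes.
ratios = [divider_ratio(r1, r2) for r1, r2 in product(corners, repeat=2)]
print(f"nominal:    {divider_ratio(NOMINAL, NOMINAL):.3f}")   # 0.500
print(f"worst case: {min(ratios):.3f} to {max(ratios):.3f}")  # 0.450 to 0.550
```

If the circuit still works anywhere in that 0.450-0.550 band, only out-of-tolerance parts can break it.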

In my view, average expected utility is the true metric. But there are circumstances where it's easier and cheaper to ignore the utility of anything other than the median case, and there are circumstances where it's easier and cheaper to ignore the utility of anything other than the worst cases.

Comment author: Houshalter 09 September 2015 03:23:45AM 0 points [-]

Worst case isn't a great metric either. E.g. you are required to pay the mugger, because his threat is the worst possible case. Average case doesn't solve it either, because the utility the mugger is promising is even greater than the improbability that he's right. Rare outliers can throw off the average case by a lot.

We need to invent some kind of policy to decide what actions to prefer, given a set of the utilities and probabilities of each possible outcome. Expected utility isn't good enough. Median utility isn't either. But there might be some compromise between them that gets what we want. Or a totally different algorithm altogether.

Comment author: Stuart_Armstrong 09 September 2015 09:22:24AM 0 points [-]

That's why I find it interesting that mean and median converge in many cases of repeated choices.

Comment author: Larks 09 September 2015 01:11:31AM 0 points [-]

In finance we use medians a lot more than means.

Comment author: Lumifer 09 September 2015 02:30:38AM 2 points [-]

The rather important question is: For which purpose?

Comment author: entirelyuseless 08 September 2015 05:27:14PM 0 points [-]

"Assume that avoiding these choices has a trivial cost, incommensurable with dying (ie no matter how many times you have to buckle your seatbelt, it's still better than a fatal accident)."

Suppose you had a choice: die in a plane crash, or listen to those plane safety announcements one million times. I choose dying in a plane crash.

Comment author: Stuart_Armstrong 09 September 2015 09:08:32AM 0 points [-]

The incommensurability assumption is for illustration only, and is dropped later on.