Recently, I found myself in a conversation with someone advocating the use of Knightian uncertainty. He (whom I'm anonymizing as Sir Percy) made suggestions that are useful to most bounded reasoners, and which can be integrated into a Bayesian framework. He also claimed to have preferences that depend upon his Knightian uncertainty, and claimed that he's not an expected utility maximizer. Further, he claimed that Bayesian reasoning cannot capture his preferences. Specifically, Sir Percy said he maximizes minimum expected utility given his Knightian uncertainty, using what I will refer to as the "MMEU rule" to make decisions.

In my previous post, I showed that Bayesian expected utility maximizers can exhibit behavior in accordance with his preferences. Two such reasoners, Paranoid Perry and Cautious Caul, were explored. These hypothetical agents demonstrate that it is possible for Bayesians to be "ambiguity averse", i.e., to avoid certain types of uncertainty.

But Perry and Caul are unnatural agents using strange priors. Is this because we are twisting the Bayesian framework to represent behavior it is ill-suited to emulate? Or does the strangeness of Perry and Caul merely reveal a strangeness in the MMEU rule?

In this post, I'll argue the latter: maximization of minimum expected utility is not a good decision rule, for the same reason that Perry and Caul seem irrational. My rejection of the MMEU rule will follow from my rejections of Perry and Caul.

A rejection of Perry

I understand the appeal of Paranoid Perry, the agent that assumes ambiguity is resolved adversarially. Humans often keep the worst case in mind, avoid big gambles, and make sure that they'll be OK even if everything goes wrong. Perry promises to capture some of this intuitively reasonable behavior.

Unfortunately, this promise is not kept. From the description of the MMEU rule, you might think that Perry is forgoing high utility in the average case to ensure moderate utility in the worst case. But this is not the case: Perry willingly takes huge gambles, so long as those gambles are resolved by "normal" uncertainty rather than "adversarial" uncertainty.

Allow me to reiterate: Maximizing minimum expected utility does not ensure that you do well in the worst case. It merely selects a single type of uncertainty against which to play defensively, and then gambles against the rest. To illustrate, consider the following Game of Drawers:

There are two boxes, Box 1 and Box 2. Each box has two drawers, drawer A and drawer B. Each drawer contains a bet, as follows:

1A: 99% lose $1000, 1% gain $99,300 (expectation: $3)
1B: Gain $2

2A: 99% lose $1000, 1% gain $99,500 (expectation: $5)
2B: Gain $10

You face one of the boxes (you do not know which) and you must choose one of the drawers. Which do you choose?

Imagine that you have "ambiguity" ("Knightian uncertainty") about which box you face, but that you believe the gambles inside the boxes are fair: this is a setup analogous to the Ellsberg urn game, except that it gives the opposite intuition.

In the Game of Drawers, I expect most people would choose drawer B (and win either $2 or $10). However, Paranoid Perry (and Cautious Caul, and Sir Percy) would choose drawer A.

In the Game of Drawers, Perry acts such that 99% of the time it loses $1000.

Wasn't Paranoid Perry supposed to reason so that it does well in the worst case? What went wrong?

Paranoid Perry reasons that nature gets to pick which box it faces, and that nature will force Perry into the worse box. Box 1 is strictly worse than Box 2, so Perry expects to face Box 1. And drawer A in Box 1 has higher expected utility than drawer B in Box 1, so Perry takes drawer A, a gamble that loses Perry $1000 99% of the time!
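For concreteness, here is a minimal sketch (in Python) of Perry's reasoning, using the usual min-over-ambiguity, max-over-actions formalization of the MMEU rule; the function names and the uniform prior mentioned afterwards are my own illustrative choices, not anything Sir Percy specified:

```python
# Expected dollar value of each drawer within each box (from the bets above).
boxes = {
    "Box 1": {"A": 0.99 * -1000 + 0.01 * 99300,   # $3
              "B": 2},
    "Box 2": {"A": 0.99 * -1000 + 0.01 * 99500,   # $5
              "B": 10},
}

def mmeu_choice(boxes):
    """Pick the drawer whose expectation in the least convenient box is highest."""
    drawers = ["A", "B"]
    worst_case = {d: min(box[d] for box in boxes.values()) for d in drawers}
    return max(worst_case, key=worst_case.get), worst_case

choice, worst_case = mmeu_choice(boxes)
print(choice, worst_case)   # A, {'A': ~3, 'B': 2}: Perry opens drawer A

# Drawer A "wins" because its expectation in Box 1 ($3) beats drawer B's ($2)
# there, even though drawer A loses $1000 ninety-nine percent of the time.
```

By contrast, a Bayesian expected utility maximizer who puts (say) a uniform prior over the two boxes values drawer A at $4 and drawer B at $6, and opens drawer B.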

Perry has no inclination to avoid big gambles. Perry isn't making sure that the worst scenario is acceptable; it's maximizing expected utility in the worst scenario that nature can pick. Within that least convenient world, Perry (and Caul, and Sir Percy) take standard Bayesian gambles.

Bayesian gambles may seem reckless, but Perry is not the solution: Perry simply divides uncertainty into two classes, and treats one of them too defensively and the other just as recklessly as a Bayesian would. Two flaws don't make a feature.

Perry assumes that some of its uncertainty gets resolved unfavorably by nature, regardless of whether or not it actually is. In some domains this captures caution, yes. In others, it's just a silly waste. As soon as nature actually starts acting adversarial, Bayesians become cautious too. The difference is that Bayesians are not forced to act as if one arbitrary segment of their uncertainty is adversarially resolved.

Perry believes — with absolute certainty, disregarding all evidence no matter how strong — that Nature never cuts any slack.

A rejection of Caul

I understand, too, the appeal of Cautious Caul. Caul reasons about multiple possible worldparts, and attempts to ensure that the Caul-sliver in the least convenient worldpart does well enough. Instead of insisting that convenient things can't happen (like Perry), Caul only cares about the inconvenient parts. Perhaps this better captures our intuition that people should be less reckless than Bayesians?

Expected utility maximizers happily trade utility in one branch of the possibility space for proportional utility in another branch, regardless of which branch had higher utility in the first place. Some people have moral intuitions that say this is wrong, and that we should be unwilling to trade utility away from unfortunate branches and into fortunate branches.

But this moral intuition is flawed, in a way that reveals confusion both about worst cases and about utility.

There's a huge difference between what the average person considers to be a worst case scenario (e.g., losing the bet) and what a Bayesian considers to be the worst case scenario (e.g., physics has been lying to you and this is about to turn into the worst possible world). Or, to put things glibly, the human version of worst case is "you lose the bet", whereas the Bayesian version of worst case is "the bet was a lie and now everybody will be tortured forever".

You can't maximize the actual worst case in any reasonably complex system. There are some small systems (say, software used to control trains) where people actually do worry about the absolute worst case, but upon inspection, these are consistent with expected utility maximization. A train crash is pretty costly.

And, in fact, expected utility maximization can capture caution in general. We don't need Cautious Caul in order to act with appropriate caution in a Bayesian framework. Caution is not implemented by new decision rules, it is implemented in the conversion from money (or whatever) to utility. Allow me to illustrate:

Suppose that the fates are about to offer me a bet and then roll a thousand-sided die. The die seems fair (to me) and my information gives me a uniform probability distribution over values between 1 and 1000: my probability mass is about to split into a thousand shards of equal measure. Before the die is rolled, I am given a choice between two options:

  1. No matter what the die rolls, I get $100.
  2. If the die rolls a 1, I pay $898. Otherwise, I get $102.

The second package results in more expected money (by $1), but I would choose the former. Why? Losing $898 is more bad than the extra dollars are good. I'm more than happy to burn one expected dollar in order to avoid the branch where I have to pay $898. In this case, I act cautious in the intuitive sense — and I do this as an expected utility maximizer. How? Well, consider the following bet instead:

  1. No matter what the die rolls, I get 100 utility.
  2. If the die rolls a 1, I lose 898 utility. Otherwise, I gain 102 utility.

Now I pick bet 2 in a heartbeat. What changed? Utils have already factored in everything that I care about.

I am risk-neutral in utils. If I am loss-averse, then the fact that the losing version of me will experience a sharp pang of frustration and perhaps a lasting depression and lowered productivity has already been factored into the calculation. It's not like I lose 898 utility and then feel bad about it: the bad feelings are included in the number. The fact that all the other versions of me (who get 102 utility) will feel a little sad and remorseful, and will feel a little frustrated because the world is unfair, has already been factored into their utility numbers: it's not like they see that they got 102 utility and then feel remorse. (Similarly, their relief and glee have already been rolled into the numbers too.)
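To see how this plays out numerically, here is a small sketch of the die bet under an illustrative loss-averse dollars-to-utils curve; the curve itself is an assumption of mine for illustration, not a claim about anyone's actual preferences:

```python
def utility_of_dollars(d):
    # Hypothetical curve: losses hurt two and a half times as much as gains help.
    return d if d >= 0 else 2.5 * d

def expected_value(outcomes, value=lambda x: x):
    # outcomes: a list of (probability, payoff) pairs
    return sum(p * value(x) for p, x in outcomes)

option_1 = [(1.0, 100)]                      # $100 no matter what
option_2 = [(0.001, -898), (0.999, 102)]     # pay $898 on a 1, else get $102

# In dollars, option 2 is ahead by exactly $1 in expectation...
print(expected_value(option_1), expected_value(option_2))    # 100.0, 101.0

# ...but run through the loss-averse curve, the sure $100 wins,
# so an expected *utility* maximizer takes option 1.
print(expected_value(option_1, utility_of_dollars),          # 100.0
      expected_value(option_2, utility_of_dollars))          # ~99.65

# When the payoffs are already denominated in utils, the curve is the
# identity, so any positive edge is worth taking: option 2, in a heartbeat.
```

The decision rule never changes here; only the conversion from dollars to utils does.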

The intuition that we shouldn't trade utility from unfortunate branches mostly stems from a misunderstanding of utility. Utility already takes into account your egalitarianism, your preference curve for dollars, and so on. Once these things are accounted for, you should trade utility from unfortunate branches into fortunate branches: if this feels bad to you, then you haven't got your utility calculations quite right.

Expected utility maximizers can be cautious. They can avoid ruinous bets. They can play it safe. But all of this behavior is encoded in the utility numbers: we don't need Cautious Caul to exhibit these preferences.

A rejection of the MMEU rule

Bets come with stigma. They are traditionally only offered to humans by other humans, and anyone offering a bet is usually either a con artist or a philosophy professor. "Reject bets by default" is a good heuristic for most people.

Advocates of Bayesian reasoning talk about accepting bets without flinching, and that can seem strange. I think this comes down to a fundamental misunderstanding between colloquial bets and Bayesian bets.

Colloquial bets are offered by skeevy con artists who probably know something you don't. Bayesian bets, on the other hand, arise whenever the agent must make a decision. "Rejecting the bet" is not an option: inaction is a choice. You have to weigh all available actions (including "stall" or "gather more information") and bet on which one will serve you best.

This mismatch, I think, is responsible for quite a bit of most people's discomfort with Bayesian decisions. That said, Bayesians are also willing to make really big gambles, gambles which look crazy to most people (who are risk- and loss-averse). Bayesians claim that risk- and loss-aversion are biases that should be overcome, and that we should [shut up and multiply](http://wiki.lesswrong.com/wiki/Shut_up_and_multiply), but this only exacerbates the discomfort.

As such, there's a lot of appeal to a decision rule that looks out for you in the "worst case" and lets you turn down bets instead of making crazy gambles like those Bayesians. The concepts of "Knightian uncertainty" and "the MMEU rule" appeal to this intuition.

But the MMEU rule doesn't work as advertised. And finally, I'm in a position to articulate my objection, in three parts.


The MMEU rule fails to grant me caution. Maximizing minimum expected utility does not help me do well in the worst case. It only lets me pick out the types of uncertainty that I expect to be adversarial, and then maximize expected utility given that that uncertainty will be resolved disfavorably.

Which is a little bit like assuming the worst. I can look at the special uncertainty and say "imagine this part is resolved adversarially, what happens?" But I can't do this with all my uncertainty, because there's always some chance that reality has been lying to me and everything is about to get weird. MMEU manages this by limiting its caution to an arbitrary subset of its uncertainty. This is a poor approximation of caution.

The MMEU rule is not allowing me to reason as if the world might turn against me. Rather, it's forcing me to act as if an arbitrary segment of my uncertainty will, with certainty, be resolved disfavorably. I'm all for hedging my bets, and I'm very much in favor of playing defensively when there is an Adversary on the field. I can be just as paranoid as Paranoid Perry, given appropriate reason. I'm happy to identify the parts of nature that often resolve disfavorably and hedge the relevant bets. But when nature proves unbiased, I play the odds. Minimum expected utility maximizers are forced to play defensively forever, no matter how hard nature tries to do them favors.

More importantly, though, new decision rules aren't how you capture caution. Remember the Game of Drawers? The MMEU rule just doesn't correspond to our intuitive sense of caution. The way to avoid ruinous bets is not to assume that nature is out to get you. It's to adjust the utilities appropriately.

Imagine the following variant of Sir Percy's coin toss:

  1. Pay $1,000,000 to be paid $2,000,001 if the coin came up heads
  2. Pay $1,000,000 to be paid $2,000,001 if the coin came up tails

I would refuse each bet individually, yet accept their conjunction. But not because I can't assign a consistent credence to the event "the coin came up heads"; that's ridiculous. Not because I fail to attempt to maximize utility; that's ridiculous too. I reject each bet individually because dollars aren't utility. If you convert the dollars into my utils, you'll see that the downside of either bet taken individually outweighs its upside, but that the two bets taken together have no downside at all: they pay me $1 no matter how the coin landed.
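As a rough illustration, the same expected-utility calculation rejects each bet alone and accepts the pair. The starting bankroll and the square-root utility curve below are assumptions of mine for the sketch, not numbers from Sir Percy's setup:

```python
import math

bankroll = 1_100_000            # assumed starting wealth
utility = math.sqrt             # assumed diminishing marginal utility of money

def eu(outcomes):
    # outcomes: a list of (probability, final wealth) pairs
    return sum(p * utility(w) for p, w in outcomes)

status_quo = eu([(1.0, bankroll)])

# Bet 1 alone: pay $1,000,000; receive $2,000,001 on heads (credence 1/2).
# Bet 2 alone is symmetric, so it gets the same value.
bet_alone = eu([(0.5, bankroll - 1_000_000),
                (0.5, bankroll + 1_000_001)])

# Both bets: pay $2,000,000; exactly one of the two pays out $2,000,001.
both_bets = eu([(1.0, bankroll + 1)])

print(bet_alone < status_quo)    # True: either bet alone is an expected-utility loss
print(both_bets > status_quo)    # True: the conjunction is a small but sure gain
```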

So yes, I want to be cautious sometimes. Yes, I can reject bets individually that I accept together. I am completely comfortable rejecting many seemingly-good bets. But the MMEU rule is not the thing which grants me these powers.

The MMEU rule fails to grant me humility. One of the original motivations of the MMEU rule is that, as humans, we don't always know what our credence should be (if we were using all our information correctly and were able to consider more hypotheses and so on). In the unbalanced tennis game, we know that our credence for "Anabel wins" should be either really high or really low, but we don't know which.

I can, of course, recognize this fact as a bounded Bayesian reasoner, without any need for a new decision rule. It is useful for me to recognize that my credences are fuzzy and context dependent and that they would be very different if I was a better Bayesian, but I don't need a new decision rule to model these things. In fact, the MMEU rule makes it harder for me to reason about what my credence should be.

Imagine you know the unbalanced tennis game has already occurred, and that your friend (who you trust completely) has handed you a complicated logical sentence that is true if and only if Anabel won. You haven't figured out whether the sentence is true yet (you could see it going either way), but now you seem justified in saying your credence should be either 0 or 1 (though you don't know which yet).

But if your credence for "Anabel won" is either 0% or 100% and you have Knightian uncertainty about which, then you're going to have a bad time. If the eccentric bookie from earlier tries to offer you a bet on a player of your choice, then there are no odds the bookie can offer that would make you take the bet.

Allow me to repeat: if you think the tennis game has already occurred, and have Knightian uncertainty as to whether your credence for "Anabel won" is 0% or 100%, then if you actually use the MMEU rule, you would refuse a bet with 1,000,000,000 to 1 odds in favor of the player of your choice.
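To spell that out with the numbers from the paragraph above, here is a minimal sketch of the MMEU calculation when the set of candidate credences is {0, 1}:

```python
def mmeu_value(credences, payoff_if_anabel_won, payoff_if_she_lost):
    # Minimum, over the candidate credences, of the bet's expected value.
    return min(p * payoff_if_anabel_won + (1 - p) * payoff_if_she_lost
               for p in credences)

credence_set = [0.0, 1.0]   # "my credence should be 0% or 100%, I don't know which"
stake = 1

# Bet $1 on Anabel at a billion-to-one odds in your favor:
print(mmeu_value(credence_set, 1_000_000_000, -stake))   # -1
# Bet $1 on her opponent at the same odds:
print(mmeu_value(credence_set, -stake, 1_000_000_000))   # -1

# Either way, the minimum expected value is -$1, which is worse than the $0
# from declining, so the MMEU rule refuses both directions of the bet.
```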

Yes, I have meta-level probability distributions over my future credences for object-level events. I am not a perfect Bayesian (nor even a very good one). I regularly misuse the information I have. It is useful for me to be able to reason about what my credence should be, if only to combat various biases such as overconfidence and base rate neglect.

But the MMEU rule doesn't help me with any of these things. In fact, the MMEU rule only makes it harder for me to reason about what my credence should be. It's a broken tool for a problem that I already know how to address.

The MMEU rule sees its uncertainty in the world. Above all, using the MMEU rule requires that you see some of your uncertainty as part of the world, as part of the territory rather than the map. How is the world-uncertainty separated from the mind-uncertainty? Why should I treat them as different types of thing? The MMEU rule divides uncertainty into two arbitrary classes, and the distinction fails to grant me any useful tools.

I already know how to treat my credences as imprecise: I widen my error bars and expect my beliefs to change (even though I can't predict how). But I still treat the resulting imprecise credences as normal uncertainty. In order to pretend that Knightian uncertainty is fundamentally different from normal uncertainty, we have to assume that it lives in the territory rather than the map. It has to either be controlled by an external process (as Perry believes) or have external significance (as Caul believes).

This seems crazy. Insofar as my credences are biased, I will strive to adjust accordingly. But no matter what I do, they will remain imprecise, and I have to deal with this as best I can. Claiming that the imprecision denotes the Adversarial hand of Nature, or that the imprecision denotes actual Worldparts over which I have preferences, doesn't help me address the real problem.


The MMEU rule fails to solve the problems it set out to solve. And I don't need it to solve those problems — I already know how to do that with the tools I have.

Most of the advice from the Knightian uncertainty camp is good. It is good to realize that your credences are imprecise. You should often expect to be surprised. In many domains, you should widen your error bars. But I already know how to do these things.

Sometimes, it is good to reject bets. Often, it is good to delay decisions and seek more information, and to make sure that you do well in the worst case. But I already know how to do these things. I already know how to translate dollars into utilities such that ruinous bets become unappealing.

If the label "Knightian uncertainty" is useful to you, then use it. I won't protest if you want to stick that label on your own imprecision, on your own inability to consider all of the hypotheses that your evidence supports, or on your own expectation that the future will surprise you no matter how long you deliberate. I personally don't think that "Knightian uncertainty" is a useful label for these things, because it is one label that tries to do too much. But if it's useful to you, then use it.

But don't try to tell me that you should treat it differently! To treat it differently is to act like your uncertainty is in the world, not in you.

If nature starts acting adversarial, then identify the parts of reality that nature gets to control and assume they'll act against you. I'll be behind you all the way. If there's an Adversary around, I'll be paranoid as hell. But throughout it all, I'll be maximizing expected utility.

Anything else is either silly, or a misunderstanding of the label "utility".

When MMEU is useful anyway

The MMEU rule is not fit to be a general decision rule in idealized agents, for all the reasons listed above. Expected utility maximization may seem reckless, and the MMEU rule attempts to offer a fix. However, the proposed answer is to divide uncertainty into two categories, and then be both excessively defensive and excessively reckless at the same time. Unfortunately, two flaws don't make a feature.

It may appear that a correct decision rule lies somewhere in the middle, somewhere between the "reckless" and "defensive" extremes. Don't be fooled: Bayesian expected utility maximizers naturally grow defensive as they learn that the world is adversarial, and caution can be written into the utility numbers. If ever it looks like your preferences are best met by doing anything other than maximizing expected utility, then you've misplaced your "utility" label.

But, unfortunately for us, we are humans living in the real world, and we happen to have misplaced all our utility labels.

Nobody is offering you bets with payoffs written in clearly delineated utilities. In fact, almost all of the bets that you are offered by nature are delineated in things like money, time, attention, friendship, or various goods and services. Most of us experience diminishing marginal returns on most goods, and most of us are risk averse. As such, naïve Bayesian-style gambling for money or time or attention or any other good is usually a pretty bad plan.

Almost all of the bets offered to us by other humans are worse, as they tend to come with ulterior motives attached. Unless you really know what you are doing, naïve Bayesian-style gambling at a Casino will get you into a whole lot of trouble.

Furthermore, we are humans. We use a bunch of faulty heuristics, and we are rife with biases. We're overconfident. We succumb to the planning fallacy. People often don't distinguish between their expected case and their best case. When people are facing a bet and you ask them to consider the worst case, they consider things like losing the bet, and they don't consider things like reality being turned into an eternal hellscape because the laws of physics were just kidding. So while it doesn't make sense for an idealized reasoner to try to maximize utility in the worst case, it may make sense for humans to act that way.

If you find that the MMEU rule is a good heuristic for you, then use it. But when you do, remember why you need it: because humans are overconfident, and because most goods have diminishing returns. If we could fully debias you and correctly compute the utility of each action available to you (including actions like "don't take the bet" or "stall", and including preferences for security and stability), then expected utility maximization would be the only sane decision rule to use.

Finally, there are times when we might want to treat uncertainty like it's in the world rather than in our heads. Suppose, for example, that you believe the Many Worlds interpretation of quantum mechanics. It is possible to have preferences over Everett branches that don't treat quantum uncertainty like internal uncertainty, and this isn't necessarily crazy. For example, you could have preferences stating that any non-zero Everett branch in which humanity survives is extremely valuable. In this case, you might be willing to work very hard to expand the branch where humanity survives from zero to something, but be unwilling to work proportionally hard to expand it from large to slightly larger. If you're VNM-rational, this indicates that you treat quantum uncertainty differently from mental uncertainty.
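As a toy illustration of that kind of preference (with made-up numbers), the value you place on the measure of surviving branches might look something like this, which is deliberately not linear in measure:

```python
def value_of_survival_measure(m):
    # Hypothetical preference: any nonzero measure of branches where humanity
    # survives is worth a great deal; extra measure has sharply diminishing returns.
    if m <= 0:
        return 0.0
    return 1000.0 + 10.0 * m

# Expanding the surviving branches from nothing to 1% of the measure is worth ~1000...
print(value_of_survival_measure(0.01) - value_of_survival_measure(0.0))
# ...while expanding them from 90% to 91% is worth only ~0.1.
print(value_of_survival_measure(0.91) - value_of_survival_measure(0.90))
```

An agent with such preferences is not maximizing anything linear in quantum measure, even if it is a perfectly ordinary expected utility maximizer with respect to its own subjective uncertainty.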

This doesn't mean you should use the MMEU rule over quantum uncertainty, by any means: Cautious Caul is crazy. But it is useful to remember that whenever something uncertainty-ish is in the world, you might end up doing things that don't look like expected utility maximization, and this can be rational.

A closing anecdote

My response to someone actually using the MMEU rule depends upon their answer to a simple question:

Why ain't you rich?

If they sigh and say "because nature resolves all my ambiguity, and nature hates me", then I will show them all the money that I won when playing exactly the same games as them, and I will say

But nature doesn't hate you! In all the games where we had reason to believe that nature was adversarial (like when that bookie scanned our brains and came back two days later offering bets that looked really nice at first glance), I played just as defensively as you did, and I did just as well as you did. I'm behind you all the way when it looks like nature has stacked things against us. But in other games, nature hasn't been adversarial! Remember all those times we played Sir Percy's coin toss? Look how much richer I became!

But this agent will only shake their head and say "I'm sorry, but you don't understand. I know that nature is adversarial, and I am absolutely certain that every shred of ambiguity allowed by my credence distribution will be used against me. I acknowledge that nature is not acting adversarial against you, and I envy you for your luck, but nature is adversarial against me, and I'm eking out as much utility as I can."

And to that, I will only shake my head and depart, mourning for their broken priors.

If, instead, the agent answers "I may not be rich in this worldpart, but there are other worldparts that showed up in my credence distribution where I am richer than you", then I will shrug.

I care for my Everett-brothers as much as you care for your credence-brothers, and that caring was factored into my utility calculations. And yet still, I am richer.

"Indeed", the agent will respond with a sage nod. "But while you care for your Everett-brothers according to their measure, I care only about the least convenient world consistent with my credence distribution: so yes, I am poorer here, but it is fine, because I am richer there."

Well, maybe. You've maximized the minimum odds, but that doesn't mean that your least convenient sliver did well. Back when we played the Game of Drawers, the sliver of you that faced Box 1 probably lost one thousand dollars, while the sliver of me that faced Box 1 definitely gained two bucks.

"Perhaps. But in expectation over my credence distribution, that sliver of me has more money."

But in expectation overall, considering that Box 2 also exists, I did better than you.

"I understand how you find it strange, but these are my preferences. I care only about the world with the worst odds that happens to fit in my credence distribution."

"Consider the bet with the thousand-sided quantum die", the agent will continue. "In the least convenient world of that game, you lost 898 utility, and there is a version of me asking how you could let yourself fail so."

That Everett-brother of mine knew the risks. His suffering and my sorrow were factored into the utility calculations. Even after adjusting for loss aversion and risk aversion and my preferences for egalitarianism, he traded his utils to us one-for-one or better. He would make the trade again in a heartbeat, as would I to others.

"In that least convenient world", the agent will reply, "my sliver is asking yours, 'and what of your Everett-brothers, who profited so from your despair, knowing that you would be left suffering in these depths. Do you think they shed tears for you?'"

Don't worry,

I'll answer, in the plethora of expected worlds where I am richer.

We do.

Comments

> Colloquial bets are offered by skeevy con artists who probably know something you don't. Bayesian bets, on the other hand, are offered by nature.

That distinction seems a bit unclear, since con artists are a part of nature, and nature certainly knows something you don't.

Here's a toy situation where a Bayesian is willing to state their beliefs, but isn't willing to accept bets on them. Imagine that I flip a coin, look at the result, but don't tell it to you. You believe that the coin came up heads with probability 1/2, but you don't want to make a standing offer to accept either side of the bet, because then I could just take your money.

In the general case, what should a Bayesian do when they're offered a bet? I think they should either accept it, or update to a state of belief that makes the bet unprofitable ("you offered the bet because you know the coin came up heads, so I won't take it"). That covers both bets offered by nature and bets offered by con artists. Also it's useful in arguments, you can offer a bet and force your opponent to either accept it or publicly update their beliefs.

Jiro:

> Also it's useful in arguments, you can offer a bet and force your opponent to either accept it or publicly update their beliefs.

The kind of update on your beliefs you might make may not necessarily be the kind of belief the bet is supposed to demonstrate, however. For instance, in your example, you believe that the coin is a fair coin. Someone flips it and says "If you think this is a fair coin, I'll bet you that the coin came up heads". Because he would probably offer you the bet if it came up heads, you should update on the belief that the coin came up heads this time. However, you shouldn't update much, if at all, on the belief that the coin is fair.

But the bet is being presented as a test of your belief that the coin is fair. So the fact that you updated doesn't actually indicate that you have changed your mind on the important aspect of the bet.

I find the 'Bayesians' offering bets to be a very annoying phenomenon for mostly this reason. Let's say I want to convince you that I know something. I can start offering bets on it, trading future money for persuasion today (persuasion is also a resource that can be used to make more money elsewhere and come out ahead even if the bets are losing; in some cases it can even be used to try to win the bet after all).

edit: also, with regards to "you offered the bet because you know the coin came up heads, so I won't take it", I can anticipate this and "offer" you a losing bet, knowing that the offer will make you update and the bet won't take place (or is unlikely to).

'Con' in con artists stands for confidence, and acting confidently (offering apparent bets, i.e. bluffing) is a big part of it.

Thanks -- I edited to make it a bit more clear. The hope was to distinguish between "feeling like you're being offered a bet by an adversarial agent" and "feeling like you have to choose between all available actions". It seems to me that most people associate "betting" with the former, while many aspiring Bayesians associate "betting" with the latter.

No, the difference is that con artists are another intelligence, and you are in competition. Anytime you are in competition against a better, more expert intelligence, that is an important difference.

The activities of others are important data, because they are often rationally motivated. If a con artist offers me a bet, that tells me that he values his side of the bet more. If an expert investor sells a stock, they must believe the stock is worth less than some alternate investment. So when playing against them, assume that the odds are bad enough to justify their actions.

[anonymous]:

Not sure where your comment disagrees with mine. I think you're describing the same thing as "update to a state of belief that makes the bet unprofitable".

[This comment is no longer endorsed by its author]
9eB1:

This is a very excellent post. Good style with very clear explanations. I especially like the description around risk-neutrality of utilities which is an often confused topic on LW. Thank you for taking the time to write and submit it.

[anonymous]:

Your post is very well written, and I agree that the MMEU rule is wrong. That said...

> Colloquial bets are offered by skeevy con artists who probably know something you don't. Bayesian bets, on the other hand, are offered by nature.

The distinction seems a bit unclear, because skeevy con artists are also part of nature, and nature certainly knows something you don't ;-)

Sometime ago I came up with a thought experiment that might make it clearer. Imagine that I flip a coin, look at the result, but don't tell it to you. Then it's rational for you to say that your probability of heads is 50%, but it's not rational for you to make a standing offer to accept either side of the bet, because then I would just take your money. So I suppose "Bayesian bets" are bets whose outcome is independent from which side of the bet is offered to you, while "colloquial bets" are everything else. I'm not sure if most real-life bets are "Bayesian" or "colloquial".

[This comment is no longer endorsed by its author]