I've always been annoyed at the notion that the bias-variance decomposition tells us something about modesty or Philosophical Majoritarianism.  For example, Scott Page rearranges the equation to get what he calls the Diversity Prediction Theorem:

Collective Error = Average Individual Error - Prediction Diversity

I think I've finally come up with a nice, mathematical way to drive a stake through the heart of that concept and bury it beneath a crossroads at midnight, though I fully expect that it shall someday rise again and shamble forth to eat the brains of the living.

Why should the bias-variance decomposition be relevant to modesty?  Because it seems to show that the error of averaging all the estimates together is lower than the typical error of an individual estimate.  Prediction Diversity (the variance) is positive whenever any disagreement exists at all, so Collective Error < Average Individual Error.  But then how can you justify keeping your own estimate, unless you know that you did better than average?  And how can you legitimately trust that belief, when studies show that everyone believes themselves to be above-average?  You should be more modest, and compromise a little.

So what's wrong with this picture?

To begin with, the bias-variance decomposition is a mathematical tautology.  It applies when we ask a group of experts to estimate the 2007 close of the NASDAQ index.  It would also apply if you weighed the experts on a pound scale and treated the results as estimates of the dollar cost of oil in 2020.
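To see the tautology at work, here is a minimal numerical check - a Python sketch with made-up estimates and a made-up true value, assuming squared error:

```python
import numpy as np

# Hypothetical individual estimates of some true value.
estimates = np.array([12.0, 15.0, 9.0, 14.0])
truth = 10.0

collective_error = (estimates.mean() - truth) ** 2                   # squared error of the average
avg_individual_error = ((estimates - truth) ** 2).mean()             # average of the squared errors
prediction_diversity = ((estimates - estimates.mean()) ** 2).mean()  # variance of the estimates

# The identity holds exactly, whatever numbers we plug in:
assert np.isclose(collective_error, avg_individual_error - prediction_diversity)
```

Swap in any numbers you like - firefly luminosities, pebble weights - and the assertion still passes; that is what it means for the decomposition to be an algebraic tautology.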

As Einstein put it, "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."  The real modesty argument, Aumann's Agreement Theorem, has preconditions; AAT depends on agents computing their beliefs in a particular way.  AAT's conclusions can be false in any particular case, if the agents don't reason as Bayesians.

The bias-variance decomposition applies to the luminosity of fireflies treated as estimates, just as much as to a group of expert opinions.  This tells you that you are not dealing with a causal description of how the world works - there are not necessarily any causal quantities, things-in-the-world, that correspond to "collective error" or "prediction diversity".  The bias-variance decomposition is not about modesty, communication, sharing of evidence, tolerating different opinions, humbling yourself, overconfidence, or group compromise.  It's an algebraic tautology that holds whenever its quantities are defined consistently, even if they refer to the silicon content of pebbles.

More importantly, the tautology depends on a particular definition of "error": error must go as the squared difference between the estimate and the true value.  By picking a different error function, just as plausible as the squared difference, you can conjure a diametrically opposed recommendation:

The professor cleared his throat.  "All right," he said to the gathered students, "you've each handed in your written estimates of the value of this expression here," and he gestured to a rather complex-looking string of symbols drawn on the blackboard.  "Now it so happens," the professor continued, "that this question contains a hidden gotcha.  All of you missed in the same direction - that is, you all underestimated or all overestimated the true value, but I won't tell you which.  Now, I'm going to take the square root of the amount by which you missed the correct answer, and subtract it from your grade on today's homework.  But before I do that, I'm going to give you a chance to revise your answers.  You can talk with each other and share your thoughts about the problem, if you like; or alternatively, you could stick your fingers in your ears and hum.  Which do you think is wiser?"

Here we are taking the square root of the absolute difference between the true value and the estimate, and calling this the error function, or loss function.  (It goes without saying that a student's utility is linear in their grade.)

And now, your expected utility is higher if you pick a random student's estimate than if you pick the average of the class!  The students would do worse, on average, by averaging their estimates together!  And this again is tautologously true, by Jensen's Inequality.
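A quick sketch of the professor's dilemma in Python - the estimates and the true value are invented for illustration, and everyone has missed high, as the professor stipulated:

```python
import numpy as np

truth = 100.0
# Hypothetical estimates: every student overshot the true value.
estimates = np.array([101.0, 104.0, 109.0, 116.0])

def penalty(estimate):
    """The professor's loss: square root of the amount by which you missed."""
    return np.sqrt(np.abs(estimate - truth))

expected_penalty_random_student = penalty(estimates).mean()  # (1+2+3+4)/4 = 2.5
penalty_of_the_class_average = penalty(estimates.mean())     # sqrt(7.5) ~ 2.74

# Concave loss + one-sided misses: a random student beats the average.
assert expected_penalty_random_student < penalty_of_the_class_average
```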

A brief explanation of Jensen's Inequality:

(I strongly recommend looking at this graph while reading the following.)

Jensen's Inequality says that if X is a probabilistic variable, F(X) is a function of X, and E[expr] stands for the probabilistic expectation of expr, then:

E[F(X)] <= F(E[X]) if F is concave (second derivative negative)
E[F(X)] >= F(E[X]) if F is convex (second derivative positive)

Why?  Well, think of two values, x1 and x2.  Suppose F is convex - the second derivative is positive, "the cup holds water".  Now imagine that we draw a line between x=x1, y=F(x1) and x=x2, y=F(x2).  Pick a point halfway along this line.  At the halfway point, x will equal (x1 + x2)/2, and y will equal (F(x1)+F(x2))/2.  Now draw a vertical line from this halfway point to the curve - the intersection will be at x=(x1 + x2)/2, y=F((x1 + x2)/2).  Since the cup holds water, the chord between two points on the curve is above the curve, and we draw the vertical line downward to intersect the curve.  Thus F((x1 + x2)/2) < (F(x1) + F(x2))/2.  In other words, the F of the average is less than the average of the Fs.
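To make the midpoint argument concrete, here is a two-point check; any convex F and any pair of points would do:

```python
# Jensen's Inequality at the midpoint, for the convex function F(x) = x**2:
x1, x2 = 1.0, 5.0
F = lambda x: x ** 2

f_of_average = F((x1 + x2) / 2)       # F(3) = 9: the point on the curve
average_of_f = (F(x1) + F(x2)) / 2    # (1 + 25)/2 = 13: the point on the chord

assert f_of_average <= average_of_f   # convex: F(E[X]) <= E[F(X)]
```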

So:

If you define the error as the squared difference, F(x) = x^2 is a convex function, with positive second derivative, and by Jensen's Inequality, the error of the average - F(E[X]) - is less than the average of the errors - E[F(X)].  So, amazingly enough, if you square the differences, the students can do better on average by averaging their estimates.  What a surprise.

But in the example above, I defined the error as the square root of the difference, which is a concave function with a negative second derivative.  Poof, by Jensen's Inequality, the average error became less than the error of the average.  (Actually, I also needed the professor to tell the students that they all erred in the same direction - otherwise, there would be a cusp at zero, and the curve would hold water.  The real-world equivalent of this condition is that you think the directional or collective bias is a larger component of the error than individual variance.)
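Rerunning the earlier sketch with two-sided misses (again, invented numbers) shows why the same-direction condition matters - across the cusp at zero, the chord lies above the curve, and averaging wins again:

```python
import numpy as np

truth = 100.0
# Now the misses straddle the truth: two students low, two high.
estimates = np.array([91.0, 96.0, 104.0, 109.0])

def penalty(estimate):
    return np.sqrt(np.abs(estimate - truth))

print(penalty(estimates).mean())   # 2.5: expected penalty of a random student
print(penalty(estimates.mean()))   # 0.0: here the average lands exactly on the truth
```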

If, in the above dilemma, you think the students would still be wise to share their thoughts with each other, and talk over the math puzzle - I certainly think so - then your belief in the usefulness of conversation has nothing to do with a tautology defined over an error function that happens, in the case of squared error, to be convex.  And it follows that you must think the process of sharing thoughts, of arguing differences, is not like averaging your opinions together; or that sticking to your opinion is not like being a random member of the group.  Otherwise, you would stuff your fingers in your ears and hum when the problem had a concave error function.

When a line of reasoning starts assigning negative expected utilities to knowledge - offers to pay to avoid true information - I usually consider that a reductio.

Comments (13)

If this were anything like my high school math class, everyone else in the class would decide to copy my answer. In some cases, I have darn good reasons to believe I am significantly better than the average of the group I find myself in. For example, take one of my freshman chemistry midterms. The test was multiple choice, with five possible answers for each question. My score was an 85 out of 100, among the highest in the class. The average was something like 42. On the final exam in that class, I had such confidence in my own answer that I declared that, for one of the questions, the correct answer was not among the responses offered - and I was right; one of the values in the problem was not what the professor intended it to be. I was also the only one in the class who had enough confidence to raise an objection to the question.

On the other hand, there are situations in which I would reasonably expect my estimate to be worse than average. If I wandered into the wrong classroom and had no idea what the professor was talking about, I'd definitely defer to the other students. If you asked me to predict the final score of a game between two well-known sports teams, I probably wouldn't have heard of either of them and would just choose something at random. (The average American can name the two teams playing in the Super Bowl when it occurs. I rarely can, and I don't know whether to be proud or ashamed of this.) I also suspect that I routinely overestimate my chances of winning any given game of Magic. ;)

I'm not a random member of any group; I'm me, and I have a reasonable (if probably biased, given the current state of knowledge in psychology) grasp of my own relative standing within many groups.

Also, when you're told that there is a hidden gotcha, sometimes you can find it if you start looking; this is also new information. Of course, you can often pick apart any given hypothetical situation used to illustrate a point, but I don't know if that matters.

This is a good point. I think squared errors are often used because they are always non-negative and also analytic - you can take derivatives and get smooth functions. But for many problems they are not especially appropriate.

Informally problems are often posed with an absolute-value error function. Like the square root, this has a cusp at zero and so will "hold water". If some people miss too high and others miss too low, then in this case it also makes sense to switch to the average. If everyone misses on the same side, then it doesn't help but doesn't hurt to switch to the average. So in general it is a good strategy.

I mentioned the other day one example of the good performance of the average in "guessing beans in a jar" type problems. In this case the average came out 3rd best compared to guesses from a class of 73 students. This implicitly uses an absolute-value error function and the problem was such that people missed on both sides. Jensen's Inequality shows why averages work well in such problems.
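A toy simulation of that effect - all numbers invented, and the guessers unbiased by construction, with noise on both sides of the truth - shows the average guess typically ranking near the top:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 850.0  # hypothetical bean count
# 73 noisy but unbiased guesses, missing on both sides of the truth.
guesses = rng.normal(loc=truth, scale=250.0, size=73)

average_error = abs(guesses.mean() - truth)
individual_errors = np.abs(guesses - truth)

# Rank of the average guess among the individual guesses (1 = best):
rank = int((individual_errors < average_error).sum()) + 1
print(f"average guess ranks {rank} of {len(guesses) + 1}")
```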

"Informally problems are often posed with an absolute-value error function. Like the square root, this has a cusp at zero"

abs(x) has a corner, not a cusp, at zero. For a cusp, the derivative approaches +infinity from one side and -infinity from the other; for a corner, it is undefined at the point, but approaches finite (and unequal) values from the two sides.

Eliezer, given opinions on some variable X, majoritarianism is not committed to the view that your optimal choice facing any cost function is E[X]. The claim should instead be that the best choice is some average appropriate to the problem. Since you haven't analyzed what is the optimal choice in the situation you offer, how can we tell that majoritarianism in fact gives the wrong answer here?

Hal, the surprising part of the beans-in-a-jar problem is that the guessers must collectively act as an unbiased estimator - their errors must nearly all cancel out, so that variance accounts for nearly all of the error, and systematic bias for none of it. Jensen's Inequality does not account for this surprising fact, it only takes advantage of it.

Robin, I don't claim to frame any general rule for compromising, except for the immodest first-order solution that I actually use: treat other people's verbal behavior as Bayesian evidence whose meaning is determined by your causal model of how their minds work - even if this means disagreeing with the majority. In the situation I framed, I'd listen to the other math students talking, offer my own suggestions, and see if we could find the hidden gotcha. If there is a principle higher than this, I have not seen it.

Robin, on second thought, there's a better answer to your question. Namely, the reason that BVD (the bias-variance decomposition) is offered as support for majoritarianism is the assumption that adopting the average of the group estimates is in fact what majoritarianism advises; otherwise BVD would contradict majoritarianism by suggesting a superior alternative - namely, adopting the average of group estimates instead of whatever it is majoritarianism does advise. And if majoritarianism does advise adopting the group average, then I can offer a superior alternative to it in the scenario given: use an estimate from a randomly selected student. And if majoritarianism is said, after the fact, to give whatever advice we painstakingly deduced to be best - so that someone suggests that majoritarianism doesn't command averaging the estimates in this case, only after we worked out from nonmajoritarian reasons that averaging was a bad idea - then I'd like to know what the use is of a philosophy whose recommendations no one can figure out in advance. And also, what happened to the idea that the average opinion was likely to be true, not just useful?

Eliezer, my best reading of majoritarianism is that it advises averaging the most recent individual probability distributions, and then having each person use expected utility with that combined distribution to make his choice.

In your example, you have students pick "estimates," average them, give them new info and a new cost function, and then complain that the average of the old estimates, ignoring the new info, does not optimize the new cost function.

One would have a severe framing problem if one adopted a rule that one's estimate of X should be the average across people of their estimates, E[X]. This is because a translation of variables to F(X) might be just as natural a way to describe one's estimates, but as Eliezer points out, E[F(X)] usually differs from F(E[X]). So I think it makes more sense to average probabilities, rather than point estimates.

Robin, that's a fair reply for saving majoritarianism. But it doesn't save the argument from bias-variance decomposition, except in the special case where the loss function is equal to the squared difference for environmental or moral reasons - that is, we are genuinely using point scalar estimates and squared error for some reason or other. The natural loss function for probabilities is the log score, to which the bias-variance decomposition does not apply, although Jensen's Inequality does. (As I acknowledged in my earlier post on The Modesty Argument.)
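A sketch of how Jensen's Inequality applies to the log score, with made-up probabilities that four forecasters assigned to a single event that actually occurred - since log is concave, the pooled forecast never scores worse on that event than a randomly chosen forecaster:

```python
import numpy as np

# Hypothetical probabilities assigned to the observed outcome:
p = np.array([0.9, 0.6, 0.3, 0.8])

score_of_pooled_forecast = np.log(p.mean())           # log(0.65) ~ -0.43
expected_score_random_forecaster = np.log(p).mean()   # ~ -0.51

# Concave scoring rule: averaging the probabilities helps, by Jensen.
assert score_of_pooled_forecast >= expected_score_random_forecaster
```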

This leaves us with the core question as "Can you legitimately believe yourself to be above-average?" or "Is keeping your own opinion like being a randomly selected agent?" which I think was always the key issue to begin with.

The graph for this post has vanished.

A similar graph is here.


I think one could accept the BVD and still believe it involved some type of modesty. The idea is just that when I weight the input of the other agents, I'm not just looking at the raw number that they output as an estimate, but also at models of how they arrived at that estimate, and so on. Under some independence assumptions, you would re-write the BVD in an expanded form that involved multiplying many probabilities for each agent. Thus, if a creationist advised you to down-weight the theory of natural selection, you wouldn't just consider that advice alone when re-forming your beliefs. You'd also consider the likelihood that that agent suffers from some biases, or has motivated skepticism, etc. And this whole string of probabilities would lead you to update your belief in the ten trillionth decimal place or something - something well below the machine epsilon of a human mind. But in cases where the other agents can't be modeled as deficient reasoners, you would give more credence to their differing estimates and update accordingly. The modesty argument, to me, represents credence for the trustworthiness of other agents. In cases where that trustworthiness is probably low, not much updating happens. When it is high, more updating happens.

Results from the Good Judgment Project suggest that putting people into teams lets them significantly outperform (have lower Brier scores than) predictions from both (unweighted) averaging of probabilities and the (admittedly also unweighted) averaging of probability estimates from the better portion of predictors. This seems to offer weak evidence that what goes on in a group is not simple averaging.