Bayesian Injustice
(Co-written with Bernhard Salow)

TLDR: Differential legibility is a pervasive, persistent, and individually rational source of unfair treatment. Either it's a purely structural injustice, or it's a type of "zetetic injustice"—one requiring changes to our practices of inquiry.

Finally, graduate admissions are done. Exciting. Exhausting. And suspicious.

Yet again, applicants from prestigious, well-known universities—the "Presties", as you call them—were admitted at a much higher rate than others. But you're convinced that—at least controlling for standardized-test scores and writing samples—prestige is a sham: it's largely money and legacies that determine who gets into prestigious schools, and such schools train their students no better.

Suppose you're right. Does that settle it? Is the best explanation for the Prestie admissions advantage that your department has a pure prejudice toward fancy institutions?

No. There's a pervasive, problematic, but individually rational type of bias that is likely at play. Economists call it "statistical discrimination" (or "screening discrimination"). But it's about uncertainty, not statistics. We'll call it Bayesian injustice.

A simplified case

Start with a simple, abstract example. Two buckets, A and B, contain 10 coins each. The coins are weighted: each has either a ⅔ or a ⅓ chance of landing heads when tossed. Their weights were determined at random, independently of the bucket—so you expect the two buckets to have the same proportions of each type of coin.

You have to pick one coin to bet will land heads on a future toss. To make your decision, you're allowed to flip each coin from Bucket A once, and each coin from Bucket B twice. Here are the outcomes:

Which coin are you going to bet on? One of the ones (in blue) that landed heads twice, of course! These are the coins that you should be most confident are weighted toward heads, since it's less likely that two heads in a row would come from a coin weighted toward tails than that a single head would.
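To make the reasoning concrete, here is a minimal Python sketch (our own illustration, not part of the original example) of the Bayesian update involved. It assumes exactly the setup above: each coin is heads-weighted (⅔ chance of heads) or tails-weighted (⅓ chance), each with prior probability ½, and flips are independent.

```python
def posterior_heads_weighted(n_flips: int, n_heads: int) -> float:
    """P(coin is heads-weighted | n_heads heads observed in n_flips flips)."""
    # Likelihood of the data under each hypothesis about the coin's weight.
    p_data_if_heavy = (2/3) ** n_heads * (1/3) ** (n_flips - n_heads)
    p_data_if_light = (1/3) ** n_heads * (2/3) ** (n_flips - n_heads)
    # Bayes' rule with a 50/50 prior over the two weights.
    return 0.5 * p_data_if_heavy / (0.5 * p_data_if_heavy + 0.5 * p_data_if_light)

def prob_next_heads(n_flips: int, n_heads: int) -> float:
    """Predictive probability that the coin's next toss lands heads."""
    post = posterior_heads_weighted(n_flips, n_heads)
    return post * (2/3) + (1 - post) * (1/3)

# A Bucket A coin that landed heads on its single flip:
print(posterior_heads_weighted(1, 1), prob_next_heads(1, 1))  # ≈ 0.667, ≈ 0.556
# A Bucket B coin that landed heads on both of its flips:
print(posterior_heads_weighted(2, 2), prob_next_heads(2, 2))  # 0.8, 0.6
```

On these numbers, a Bucket B coin that landed heads twice is the better bet: its next toss has a 0.6 chance of landing heads, versus about 0.56 for a Bucket A coin that landed heads once, even though the two buckets are expected to contain the same mix of coins.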