From Robyn Dawes’s Rational Choice in an Uncertain World:
In fact, this post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War. When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time. Warren responded, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed . . . I believe we are just being lulled into a false sense of security.”
Consider Warren’s argument from a Bayesian perspective. When we see evidence, hypotheses that assigned a higher likelihood to that evidence gain probability, at the expense of hypotheses that assigned a lower likelihood to the evidence. This is a phenomenon of relative likelihoods and relative probabilities. You can assign a high likelihood to the evidence and still lose probability mass to some other hypothesis, if that other hypothesis assigns a likelihood that is even higher.
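For instance, here is a minimal sketch in Python with made-up numbers, chosen only to illustrate the relative-likelihood point: a hypothesis can assign a high likelihood to the evidence and still lose probability mass to a rival that assigns an even higher one.

```python
# Two hypotheses, both assigning a high likelihood to the observed evidence E,
# but one slightly higher. Equal priors of 0.5 each (illustrative values only).
p_e_a, p_e_b = 0.80, 0.95   # assumed likelihoods P(E | A), P(E | B)
p_a = p_b = 0.5

p_e = p_e_a * p_a + p_e_b * p_b
print(p_e_a * p_a / p_e)  # P(A | E) ≈ 0.457: A loses mass despite liking E
print(p_e_b * p_b / p_e)  # P(B | E) ≈ 0.543: B gains, because it liked E more
```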
Warren seems to be arguing that, given that we see no sabotage, this confirms that a Fifth Column exists. You could argue that a Fifth Column might delay its sabotage. But an absence of sabotage is still more likely if there is no Fifth Column than if there is one.
Let E stand for the observation of sabotage, and ¬E for the observation of no sabotage. The symbol H1 stands for the hypothesis of a Japanese-American Fifth Column, and H2 for the hypothesis that no Fifth Column exists. The conditional probability P(E | H), or “E given H,” is how confidently we’d expect to see the evidence E if we assumed the hypothesis H were true.
Whatever the likelihood that a Fifth Column would do no sabotage, the probability P(¬E | H1), it won’t be as large as the likelihood that there’s no sabotage given that there’s no Fifth Column, the probability P(¬E | H2). So observing a lack of sabotage increases the probability that no Fifth Column exists.
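To make the direction of Warren's update concrete, here is a similar sketch. Again, the numbers are illustrative assumptions, not estimates of the historical situation:

```python
# Illustrative (made-up) prior and likelihoods for the two hypotheses.
p_h1 = 0.5            # prior P(H1): a Fifth Column exists
p_h2 = 1 - p_h1       # prior P(H2): no Fifth Column
p_noE_h1 = 0.6        # P(¬E | H1): a Fifth Column might hold off on sabotage
p_noE_h2 = 0.98       # P(¬E | H2): with no Fifth Column, sabotage is very unlikely

# Bayes' theorem: P(H1 | ¬E) = P(¬E | H1) P(H1) / P(¬E)
p_noE = p_noE_h1 * p_h1 + p_noE_h2 * p_h2
p_h1_given_noE = p_noE_h1 * p_h1 / p_noE

print(p_h1_given_noE)  # ≈ 0.38, below the 0.5 prior:
                       # observing no sabotage shifts probability toward H2
```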
A lack of sabotage doesn’t prove that no Fifth Column exists. Absence of proof is not proof of absence. In logic, (A ⇒ B), read “A implies B,” is not equivalent to (¬A ⇒ ¬B), read “not-A implies not-B.”
But in probability theory, absence of evidence is always evidence of absence. If E is a binary event and P(H | E) > P(H), i.e., seeing E increases the probability of H, then P(H | ¬E) < P(H), i.e., failure to observe E decreases the probability of H. The probability P(H) is a weighted mix of P(H | E) and P(H | ¬E), and necessarily lies between the two.[1]
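A quick numerical check of this see-saw property, with the same caveat that the numbers are arbitrary:

```python
# P(H) is the weighted mix of the two posteriors (law of total probability):
#   P(H) = P(H | E) P(E) + P(H | ¬E) P(¬E)
# so if one posterior sits above the prior, the other must sit below it.
p_h = 0.5        # prior P(H)
p_e_h = 0.4      # assumed P(E | H): likelihood of sabotage given a Fifth Column
p_e_noth = 0.02  # assumed P(E | ¬H): likelihood of sabotage with no Fifth Column

p_e = p_e_h * p_h + p_e_noth * (1 - p_h)
p_h_given_e = p_e_h * p_h / p_e
p_h_given_note = (1 - p_e_h) * p_h / (1 - p_e)

print(p_h_given_e)     # ≈ 0.95  (> prior: E is evidence for H)
print(p_h_given_note)  # ≈ 0.38  (< prior: ¬E is evidence against H)
print(p_h_given_e * p_e + p_h_given_note * (1 - p_e))  # = 0.5, recovers the prior
```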
Under the vast majority of real-life circumstances, a cause may not reliably produce signs of itself, but the absence of the cause is even less likely to produce the signs. The absence of an observation may be strong evidence of absence or very weak evidence of absence, depending on how likely the cause is to produce the observation. The absence of an observation that is only weakly permitted (even if the alternative hypothesis does not allow it at all) is very weak evidence of absence (though it is evidence nonetheless). This is the fallacy of “gaps in the fossil record”—fossils form only rarely; it is futile to trumpet the absence of a weakly permitted observation when many strong positive observations have already been recorded. But if there are no positive observations at all, it is time to worry; hence the Fermi Paradox.
Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.
[1] If any of this sounds at all confusing, see my discussion of Bayesian updating toward the end of The Machine in the Ghost, the third volume of Rationality: From AI to Zombies.
This article makes a very good point very well. If E would be evidence for a hypothesis H, then ~E has to be evidence for ~H.
Unfortunately, I think that it is unfair to read Warren as violating this principle. (I say "Unfortunately" because it would be nice to have such an evocative real example of this fallacy.)
I think that Warren's reasoning is more like the following: Based on theoretical considerations, there is a very high probability P(H) that there is a fifth column. The theoretical considerations have to do with the nature of the Japanese–American conflict and the opportunities available to the Japanese. Basically, the mere fact that the Japanese have both means and motive is enough to push P(H) up to a high value.
Sure, the lack of observed sabotage (~E) makes P(H|~E) < P(H). So the probability of a fifth column goes down a bit. But P(H) started out so high that H is still the only contingency that we should really worry about. The only important question left is, Given that there is a fifth column, is it competent or incompetent? Does the observation of ~E mean that we are in more danger or less danger? That is, letting C = "The fifth column is competent", do we have that P(C | ~E & H) > P(C | H)?
Warren is arguing that ~E should lead us to anticipate a more dangerous fifth column. He is saying that an incompetent fifth column would probably have performed minor sabotage, which would have left evidence. A competent fifth column, on the other hand, would probably still be marshaling its forces to strike a major blow, which would be inconsistent with E. Hence, P(C | ~E & H) > P(C | H). That is why ~E is a greater cause for concern than E would have been.
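To illustrate with some toy numbers (chosen only to show that these claims can hold simultaneously, not as estimates of anything):

```python
# Toy model: H = fifth column exists, C = it is competent, E = sabotage observed.
p_h = 0.9                 # assumed high prior P(H) from "means and motive"
p_c_given_h = 0.5         # assumed prior P(C | H)
p_noE_given_c_h = 0.95    # a competent fifth column waits: P(~E | C, H)
p_noE_given_notc_h = 0.4  # an incompetent one probably slips up: P(~E | ~C, H)
p_noE_given_noth = 0.99   # no fifth column, almost surely no sabotage: P(~E | ~H)

# P(~E | H) and P(~E)
p_noE_given_h = (p_noE_given_c_h * p_c_given_h
                 + p_noE_given_notc_h * (1 - p_c_given_h))
p_noE = p_noE_given_h * p_h + p_noE_given_noth * (1 - p_h)

# The post's point still holds: P(H | ~E) < P(H)
p_h_given_noE = p_noE_given_h * p_h / p_noE
print(p_h_given_noE)    # ≈ 0.86, slightly below the 0.9 prior

# But within H, ~E shifts weight toward the competent (more dangerous) fifth column:
p_c_given_noE_h = p_noE_given_c_h * p_c_given_h / p_noE_given_h
print(p_c_given_noE_h)  # ≈ 0.70, above the 0.5 prior P(C | H)
```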
Whether all of these prior probabilities are reasonable is another matter. But Warren's remarks are consistent with correct Bayesian reasoning from those priors.
While I think your reading is consistent with a very generous application of the principle of charity, I'm not certain it's appropriate to apply it so generously in this case. Do you have any evidence that Warren was reasoning in this way rather than the less-charitable version, and if so, why didn't he say so explicitly?
It really seems like the simpler explanation is fear plus poor thinking.