From Robyn Dawes’s Rational Choice in an Uncertain World:
In fact, this post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War. When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time. Warren responded, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed . . . I believe we are just being lulled into a false sense of security.”
Consider Warren’s argument from a Bayesian perspective. When we see evidence, hypotheses that assigned a higher likelihood to that evidence gain probability, at the expense of hypotheses that assigned a lower likelihood to the evidence. This is a phenomenon of relative likelihoods and relative probabilities. You can assign a high likelihood to the evidence and still lose probability mass to some other hypothesis, if that other hypothesis assigns a likelihood that is even higher.
Warren seems to be arguing that, given that we see no sabotage, this confirms that a Fifth Column exists. You could argue that a Fifth Column might delay its sabotage. But the likelihood is still higher that the absence of a Fifth Column would perform an absence of sabotage.
Let E stand for the observation of sabotage, and ¬E for the observation of no sabotage. The symbol H1 stands for the hypothesis of a Japanese-American Fifth Column, and H2 for the hypothesis that no Fifth Column exists. The conditional probability P(E | H), or “E given H,” is how confidently we’d expect to see the evidence E if we assumed the hypothesis H were true.
Whatever the likelihood that a Fifth Column would do no sabotage, the probability P(¬E | H1), it won’t be as large as the likelihood that there’s no sabotage given that there’s no Fifth Column, the probability P(¬E | H2). So observing a lack of sabotage increases the probability that no Fifth Column exists.
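A minimal numerical sketch of this update, with made-up priors and likelihoods (none of these numbers come from the text; they only illustrate the direction of the update):

```python
# Hypothetical numbers for the Fifth Column example.
p_h1 = 0.5        # prior: a Fifth Column exists
p_h2 = 0.5        # prior: no Fifth Column exists
p_noE_h1 = 0.7    # P(no sabotage | Fifth Column): plausible, but below...
p_noE_h2 = 0.99   # ...P(no sabotage | no Fifth Column): nearly certain

# Bayes' theorem: posterior = likelihood * prior / P(evidence)
p_noE = p_noE_h1 * p_h1 + p_noE_h2 * p_h2
posterior_h1 = p_noE_h1 * p_h1 / p_noE   # ~0.414: drops below the 0.5 prior
posterior_h2 = p_noE_h2 * p_h2 / p_noE   # ~0.586: rises above the 0.5 prior
```

However the specific numbers are chosen, as long as P(¬E | H2) exceeds P(¬E | H1), observing no sabotage must shift probability mass away from the Fifth Column hypothesis.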
A lack of sabotage doesn’t prove that no Fifth Column exists. Absence of proof is not proof of absence. In logic, (A ⇒ B), read “A implies B,” is not equivalent to (¬A ⇒ ¬B), read “not-A implies not-B.”
But in probability theory, absence of evidence is always evidence of absence. If E is a binary event and P(H | E) > P(H), i.e., seeing E increases the probability of H, then P(H | ¬E) < P(H), i.e., failure to observe E decreases the probability of H. The probability P(H) is a weighted mix of P(H | E) and P(H | ¬E), and necessarily lies between the two.1
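This mixture claim can be checked mechanically: for any joint distribution over H and E, P(H) lies between P(H | E) and P(H | ¬E), so if one conditional sits above the prior the other must sit below it. A small sketch in Python, checking randomly generated joint distributions:

```python
import random

random.seed(0)
for _ in range(1000):
    # Random joint distribution over (H, E): four cells summing to 1.
    cells = [random.random() for _ in range(4)]
    total = sum(cells)
    p_h_e, p_h_ne, p_nh_e, p_nh_ne = (c / total for c in cells)

    p_h = p_h_e + p_h_ne            # marginal P(H)
    p_e = p_h_e + p_nh_e            # marginal P(E)
    p_h_given_e = p_h_e / p_e
    p_h_given_ne = p_h_ne / (1 - p_e)

    # P(H) is the mixture P(E)*P(H|E) + P(not E)*P(H|not E),
    # so it lies between the two conditionals (up to float error).
    lo = min(p_h_given_e, p_h_given_ne)
    hi = max(p_h_given_e, p_h_given_ne)
    assert lo - 1e-12 <= p_h <= hi + 1e-12

    # If observing E raises P(H), failing to observe E must lower it.
    if p_h_given_e > p_h + 1e-12:
        assert p_h_given_ne < p_h
```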
Under the vast majority of real-life circumstances, a cause may not reliably produce signs of itself, but the absence of the cause is even less likely to produce the signs. The absence of an observation may be strong evidence of absence or very weak evidence of absence, depending on how likely the cause is to produce the observation. The absence of an observation that is only weakly permitted (even if the alternative hypothesis does not allow it at all) is very weak evidence of absence (though it is evidence nonetheless). This is the fallacy of “gaps in the fossil record”—fossils form only rarely; it is futile to trumpet the absence of a weakly permitted observation when many strong positive observations have already been recorded. But if there are no positive observations at all, it is time to worry; hence the Fermi Paradox.
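The contrast between strong and weak evidence of absence can be made quantitative. In the hypothetical helper below (the function name and all numbers are illustrative, not from the text), how far the posterior drops on observing ¬E depends on how strongly H predicted E:

```python
def posterior_after_absence(prior, p_e_given_h, p_e_given_not_h=0.0):
    """Posterior P(H | not E), assuming the alternative never produces E."""
    num = (1 - p_e_given_h) * prior
    den = num + (1 - p_e_given_not_h) * (1 - prior)
    return num / den

# Fossils form only rarely: P(E | H) is tiny, so the absence of a
# particular fossil barely moves the posterior (~0.497 from a 0.5 prior).
weak = posterior_after_absence(prior=0.5, p_e_given_h=0.01)

# A cause that almost always leaves traces: absence of the trace is
# strong evidence of absence (~0.0099 from the same 0.5 prior).
strong = posterior_after_absence(prior=0.5, p_e_given_h=0.99)
```

Both updates go in the same direction; only their magnitudes differ, which is exactly the distinction between very weak and very strong evidence of absence.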
Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.
1 If any of this sounds at all confusing, see my discussion of Bayesian updating toward the end of The Machine in the Ghost, the third volume of Rationality: From AI to Zombies.
I think I understand that a little better now. So thank you for taking the time to explain that to me.
Even so, it seems all I must do is add to my counterexample a prior track record of the little boy changing strategies while pretending to go along with authority. Reconsidering my little boy example with that in mind, does that change your reply?
Also, I fail to see how your response ameliorates my objection to the claim "it is impossible for A and ~A to both be evidence for B." By your own explanation, they are both evidence, albeit offering unequal relative probabilities (forgive me if I'm getting the password wrong there, but I think you can surmise what it is I'm getting at). Maybe if we say "it is impossible for A and ~A to both offer the same relative probability for B at the same time, concerning the same situation, and given the same subjective view of the facts," we have something that doesn't lead us to claim untrue things about someone else's argument, as in the case above, that their argument depends on A and ~A at the same time and in the same way. The precise claim in question is actually that A can be evidence for B in one situation, and that, based upon the expectation set by the observance of subsequent facts, ~A could at some later date also end up being evidence for B. I'm not sure if I've explained that clearly, but I'll keep trying until either I get what I'm missing or I manage to express clearly what may well be coming out as gibberish. Either way, I get a little slice of the self-improvement I'm looking for.
Thanks again, and I hope you can forgive my wet ears on this. The benefits of our exchanges here will probably be pretty one-sided; I have almost nothing to offer a more experienced rationalist, and lots to gain. I realize that, so please bear with me, and know that I am grateful for the feedback.
Here's a contradiction with A and ~A both being evidence for the same thing. You could tell your spouse, "Go up and check if little Timmy went to bed." Before ze comes back, you already have an estimate of how likely Timmy is to go to bed on time (your prior belief). But then your spouse, who was too tired to climb the stairs, comes back and tells you, "Little Timmy may or may not have gone to bed." Now, i...