Years ago, I was speaking to someone when he casually remarked that he didn’t believe in evolution. And I said, “This is not the nineteenth century. When Darwin first proposed evolution, it might have been reasonable to doubt it. But this is the twenty-first century. We can read the genes. Humans and chimpanzees have 98% shared DNA. We know humans and chimps are related. It’s over.”
He said, “Maybe the DNA is just similar by coincidence.”
I said, “The odds of that are something like two to the power of seven hundred and fifty million to one.”
He said, “But there’s still a chance, right?”
Now, there’s a number of reasons my past self cannot claim a strict moral victory in this conversation. One reason is that I have no memory of whence I pulled that 2^750,000,000 figure, though it’s probably the right meta-order of magnitude. The other reason is that my past self didn’t apply the concept of a calibrated confidence. Of all the times over the history of humanity that a human being has calculated odds of 2^750,000,000:1 against something, they have undoubtedly been wrong more often than once in 2^750,000,000 times. E.g., the shared genes estimate was revised to 95%, not 98%—and that may even apply only to the 30,000 known genes and not the entire genome, in which case it’s the wrong meta-order of magnitude.
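For what it’s worth, here is one back-of-envelope way to land in that ballpark (a sketch only, under toy assumptions: roughly three billion independent base pairs, a 1/4 chance of a coincidental match at each site, and a Chernoff bound on matching 95% of them; this is not necessarily where the original figure came from):

```python
import math

def log2_tail_bound(n_sites, match_frac, p_chance=0.25):
    """Chernoff bound (as a base-2 exponent) on the probability that two
    unrelated random DNA sequences agree at >= match_frac of n_sites,
    with each site matching by coincidence with probability p_chance."""
    q, p = match_frac, p_chance
    # KL divergence D(q || p) in bits per site
    d = q * math.log2(q / p) + (1 - q) * math.log2((1 - q) / (1 - p))
    return -n_sites * d  # log2 of the coincidence probability is at most this

# Whole genome, ~3 billion bases, 95% identity:
print(log2_tail_bound(3e9, 0.95))   # about -4.9e9, i.e. odds worse than 2^4,900,000,000 : 1
# Only ~30,000 genes of ~1,000 bases each:
print(log2_tail_bound(3e7, 0.95))   # about -4.9e7, a different meta-order of magnitude
```

The only point of the sketch is that the exponent shifts by about two orders of magnitude depending on whether you count the whole genome or only the known genes, which is exactly the “meta-order of magnitude” worry above.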
But I think the other guy’s reply is still pretty funny.
I don’t recall what I said in further response—probably something like “No”—but I remember this occasion because it brought me several insights into the laws of thought as seen by the unenlightened ones.
It first occurred to me that human intuitions were making a qualitative distinction between “No chance” and “A very tiny chance, but worth keeping track of.” You can see this in the Overcoming Bias lottery debate.
The problem is that probability theory sometimes lets us calculate a chance which is, indeed, too tiny to be worth the mental space to keep track of it—but by that time, you’ve already calculated it. People mix up the map with the territory, so that on a gut level, tracking a symbolically described probability feels like “a chance worth keeping track of,” even if the referent of the symbolic description is a number so tiny that if it were a dust speck, you couldn’t see it. We can use words to describe numbers that small, but not feelings—a feeling that small doesn’t exist, doesn’t fire enough neurons or release enough neurotransmitters to be felt. This is why people buy lottery tickets—no one can feel the smallness of a probability that small.
But what I found even more fascinating was the qualitative distinction between “certain” and “uncertain” arguments, where if an argument is not certain, you’re allowed to ignore it. Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you’re allowed to keep it.
Now it’s a free country and no one should put you in jail for illegal reasoning, but if you’re going to ignore an argument that says the likelihood is one over googol, why not also ignore an argument that says the likelihood is zero? I mean, as long as you’re ignoring the evidence anyway, why is it so much worse to ignore certain evidence than uncertain evidence?
I have often found, in life, that I have learned from other people’s nicely blatant bad examples, duly generalized to more subtle cases. In this case, the flip lesson is that, if you can’t ignore a likelihood of one over googol because you want to, you can’t ignore a likelihood of 0.9 because you want to. It’s all the same slippery cliff.
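Here is a minimal sketch of the same cliff in numbers (the figures are illustrative, nothing from the original conversation): Bayes’ rule multiplies your odds by the likelihood ratio of each piece of evidence, whether that ratio is one over googol or 0.9.

```python
# Illustrative numbers only: Bayes' rule treats "uncertain" evidence the same
# way it treats near-"certain" evidence, by multiplying the odds.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * P(evidence | H) / P(evidence | not-H)."""
    return prior_odds * likelihood_ratio

odds = 1.0                          # even odds on some hypothesis H
print(update_odds(odds, 1e-100))    # a likelihood ratio of one over googol: belief collapses at once

odds = 1.0
for _ in range(50):                 # fifty mildly unfavorable observations,
    odds = update_odds(odds, 0.9)   # each with likelihood ratio 0.9
print(odds)                         # ~0.005: the same cliff, descended more slowly
```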
Consider his example if you ever find yourself thinking, “But you can’t prove me wrong.” If you’re going to ignore a probabilistic counterargument, why not ignore a proof, too?
"The odds of that are something like two to the power of seven hundred and fifty million to one."
As Eliezer admitted, it is a very bad idea to ascribe probabilities like this to real-world propositions. I think that the strongest reason is that it is just too easy for the presuppositions to be false or for your thinking to have been mistaken. For example, if I gave a five-line logical proof of something, that would supposedly mean that there is no chance that its conclusion is false given the premisses, but actually the chance that I would make a logical error (even a transcription error somewhere) is at least one in a billion (~1 in 2^30). There is at least this much chance that either Eliezer's reasoning or the basic scientific assumptions were seriously flawed in some way. Given the chance of error in even the simplest logical arguments (let alone the larger chance that the presuppositions about genes etc. are false), we really shouldn't ascribe probabilities smaller than 1 in a billion to factual claims at all. Better to say that the probability of this happening by chance given the scientific presuppositions is vanishingly small. Or that the probability of it happening by chance pretty much equals the probability of the presuppositions being false.
Toby.
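Toby's floor can be put in numbers (the specific figures below are illustrative): however small the "by chance" probability works out to be, the probability you should actually assign is dominated by the chance that the premises or the reasoning behind them are broken.

```python
# Illustrative numbers: the stated odds can never be better than the odds
# that the argument behind them is sound.

p_by_chance     = 2.0 ** -750_000_000   # underflows to exactly 0.0 in a float, fittingly
p_reasoning_err = 1e-9                  # ~1 in a billion (~1 in 2^30) chance the argument is flawed
p_false_if_err  = 0.5                   # call it a coin flip if the argument is broken

p_claim_false = (p_reasoning_err * p_false_if_err
                 + (1 - p_reasoning_err) * p_by_chance)
print(p_claim_false)                    # ~5e-10: the error term swamps the 2^-750,000,000 term
```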