Years ago, I was speaking to someone when he casually remarked that he didn’t believe in evolution. And I said, “This is not the nineteenth century. When Darwin first proposed evolution, it might have been reasonable to doubt it. But this is the twenty-first century. We can read the genes. Humans and chimpanzees have 98% shared DNA. We know humans and chimps are related. It’s over.”
He said, “Maybe the DNA is just similar by coincidence.”
I said, “The odds of that are something like two to the power of seven hundred and fifty million to one.”
He said, “But there’s still a chance, right?”
Now, there’s a number of reasons my past self cannot claim a strict moral victory in this conversation. One reason is that I have no memory of whence I pulled that 2^750,000,000 figure, though it’s probably the right meta-order of magnitude. The other reason is that my past self didn’t apply the concept of a calibrated confidence. Of all the times over the history of humanity that a human being has calculated odds of 2^750,000,000:1 against something, they have undoubtedly been wrong more often than once in 2^750,000,000 times. E.g., the shared genes estimate was revised to 95%, not 98%—and that may even apply only to the 30,000 known genes and not the entire genome, in which case it’s the wrong meta-order of magnitude.
But I think the other guy’s reply is still pretty funny.
I don’t recall what I said in further response—probably something like “No”—but I remember this occasion because it brought me several insights into the laws of thought as seen by the unenlightened ones.
It first occurred to me that human intuitions were making a qualitative distinction between “No chance” and “A very tiny chance, but worth keeping track of.” You can see this in the Overcoming Bias lottery debate.
The problem is that probability theory sometimes lets us calculate a chance which is, indeed, too tiny to be worth the mental space to keep track of it—but by that time, you’ve already calculated it. People mix up the map with the territory, so that on a gut level, tracking a symbolically described probability feels like “a chance worth keeping track of,” even if the referent of the symbolic description is a number so tiny that if it were a dust speck, you couldn’t see it. We can use words to describe numbers that small, but not feelings—a feeling that small doesn’t exist, doesn’t fire enough neurons or release enough neurotransmitters to be felt. This is why people buy lottery tickets—no one can feel the smallness of a probability that small.
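To put some concrete numbers on that size mismatch, here is a rough sketch—my own illustration, not anything from the conversation above, and the specific 6-of-49 lottery is just an assumption—of a probability we can write down symbolically but never feel:

```python
import math

# A rough illustration (my own numbers, not from the post) of a probability we can
# calculate but cannot feel. The specific 6-of-49 lottery is just an assumption.
jackpot_chance = 1 / math.comb(49, 6)
print(f"P(jackpot) = {jackpot_chance:.2e}")   # ~7.15e-08, about 1 in 14 million

# The 2^750,000,000 : 1 figure from the conversation, expressed as a base-10 exponent:
log10_dna_coincidence = -750_000_000 * math.log10(2)
print(f"log10 P(DNA match by coincidence) ~ {log10_dna_coincidence:.3e}")  # ~ -2.26e8
```

Even the lottery probability is already far below anything a gut feeling can represent, and the DNA-coincidence figure is smaller by hundreds of millions of orders of magnitude on top of that.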
But what I found even more fascinating was the qualitative distinction between “certain” and “uncertain” arguments, where if an argument is not certain, you’re allowed to ignore it. Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you’re allowed to keep it.
Now it’s a free country and no one should put you in jail for illegal reasoning, but if you’re going to ignore an argument that says the likelihood is one over googol, why not also ignore an argument that says the likelihood is zero? I mean, as long as you’re ignoring the evidence anyway, why is it so much worse to ignore certain evidence than uncertain evidence?
I have often found, in life, that I have learned from other people’s nicely blatant bad examples, duly generalized to more subtle cases. In this case, the flip lesson is that, if you can’t ignore a likelihood of one over googol because you want to, you can’t ignore a likelihood of 0.9 because you want to. It’s all the same slippery cliff.
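For concreteness, here is a minimal sketch—my own illustration, with invented prior odds and function names—of why, under Bayes’ rule, a likelihood ratio of one over googol delivers essentially the same verdict as a likelihood of zero, and why even a modest likelihood ratio still has to move you:

```python
import math

# A minimal sketch (my own illustration; the prior odds and names here are invented)
# of Bayes' rule in log-odds form:
#   log10(posterior odds) = log10(prior odds) + log10(likelihood ratio)

def update(prior_log10_odds: float, log10_likelihood_ratio: float) -> float:
    """Posterior log10-odds for a belief after one piece of evidence."""
    return prior_log10_odds + log10_likelihood_ratio

prior = math.log10(1_000_000)  # start out very confident: 1,000,000 : 1 in favor

# Evidence a googol times more likely if the belief is false (likelihood ratio 10^-100):
# the posterior odds drop to 10^-94 : 1 -- for all practical purposes the same
# verdict as a likelihood of exactly zero.
print(update(prior, -100.0))             # -94.0

# Evidence merely nine times more likely if the belief is false (0.1 vs 0.9):
# weaker, but it still shifts the odds by the full factor of nine; you don't get
# to ignore it just because it isn't certain.
print(update(prior, math.log10(1 / 9)))  # ~5.05
```

The exact prior doesn’t matter much here; the point is that the update rule treats every likelihood ratio the same way, with no threshold below which evidence stops counting.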
Consider his example if you ever find yourself thinking, “But you can’t prove me wrong.” If you’re going to ignore a probabilistic counterargument, why not ignore a proof, too?
Why use probability even in conversations with people who don't understand probability?
Because probability is TRUE. And if people keep hearing about it, maybe they'll actually try to start learning about it.
You're right of course that this needs to be balanced with rhetorical efficiency: we may need to practice some Dark Arts to persuade people for the wrong reasons, just to get them to the point where the right reasons can work at all.
The rest of your comment dissolves into irrationality pretty quickly. We do in fact know to very high certainty that "spiritual intuition" is not good evidence, and if you really doubt that we can deluge you with gigabytes of evidence to that effect.
Pyrrhonism is sometimes equated with skepticism, in which case it's stupid and self-defeating; and sometimes it's equated with fallibilism, in which case it's true and in some cases even interesting (many people who cite the Bible's infallibility do not seem to realize that relying on their own assessment of it would amount to asserting their own infallibility), but it is usually implicit in the entire scientific method anyway. I don't know which is historically closer to what Pyrrho thought, nor do I particularly care.