Christian apologist William Lane Craig claims the skeptical slogan "extraordinary claims require extraordinary evidence" is contradicted by probability theory, because it actually wouldn't take all that much evidence to convince us that, for example, "the numbers chosen in last night's lottery were 4, 2, 9, 7, 8 and 3." The correct response to this argument is to say that the prior probability of a miracle occurring is orders of magnitude smaller than mere one in a million odds.
This only talks about the probability of the evidence given the truth of the hypothesis, but ignores the probability of the evidence given its falsity. For a variety of reasons, fake claims of miracles are far more common than fake TV announcements of the lottery numbers, which drastically reduces the likelihood ratio you get from the miracle claim relative to the lotto announcement.
The specific miracle also has a lower prior probability (the prior that miracles occur at all, combined with the probability of this specific miracle's details), but that's not the only issue.
I think it's important to grasp the general principle under which a person telling you that this week's winning lotto numbers are some particular sequence is stronger evidence than their telling you a miracle took place. It offers a greater odds ratio, because they're much less likely to convey a particular lottery number in the event of it not being the winning one than they are to convey a miracle story in the event that no miracle occurred. (Even people who believe in miracles should be able to accept that miracle claims have a very high false positive rate, if they believe that miracles only occur within their own religion.)
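To make the likelihood-ratio point concrete, here is a minimal sketch in Python; every probability in it is a made-up illustrative assumption, not an estimate of any real-world rate:

```python
# Illustrative Bayes-factor comparison; all numbers are made-up assumptions.
def posterior_odds(prior_odds, p_report_if_true, p_report_if_false):
    """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_report_if_true / p_report_if_false)

# Lottery: prior odds around one in a million, but announcing one *specific*
# wrong number is assumed to be extremely rare, so the likelihood ratio is huge.
lottery = posterior_odds(1e-6, 0.99, 1e-8)   # -> 99.0

# Miracle: the prior is assumed far smaller, and false reports are assumed
# common, so the likelihood ratio is comparatively weak.
miracle = posterior_odds(1e-12, 0.5, 1e-2)   # -> 5e-11

print(lottery, miracle)
```

Under these assumed numbers the lottery report leaves you fairly confident, while the miracle report barely moves the needle, even before quibbling about the exact priors.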
Generally, you should not be in the habit of doing things that have a 0.1% chance of killing you. Do so on a daily basis, and on average you will be dead in less than three years.
Indeed!
It's even worse than that might suggest: 0.999^(3*365.25) = 0.334, so after three years you are almost exactly twice as likely to be dead as alive.
To get down to 50%, you only need 693 days, or about 1.9 years. Conversely, you need a surprisingly long time (about 6,900 days, or 18.9 years) to reduce your survival chances to 0.001.
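For anyone who wants to check, a quick Python sketch reproduces these figures (assuming a constant, independent 0.1% risk each day):

```python
import math

p = 0.999  # assumed independent daily survival probability (0.1% risk of death)

print(p ** (3 * 365.25))               # ~0.334: survival probability after three years
print(math.log(0.5) / math.log(p))     # ~692.8: days until survival drops to 50%
print(math.log(0.001) / math.log(p))   # ~6904: days until survival drops to 0.001
```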
The field of high-availability computing seems conceptually related. This is often considered in terms of the number of nines - so 'five nines' is 99.999% availability, or <5.3 min downtime a year. It often surprises people that a system can be unavailable for the duration of an entire working day and still hit 99.9% availability over the year. The 'nines' sort-of works conceptually in some situations (e.g. a site that makes money from selling things can't make money for as long as it's unavailable). But it's not so helpful in situations where the cost of an interruption per se is huge, and the length of downtime - if it's over a certain threshold - matters much less than whe...
Christian apologist William Lane Craig claims the skeptical slogan "extraordinary claims require extraordinary evidence" is contradicted by probability theory, because it actually wouldn't take all that much evidence to convince us that, for example, "the numbers chosen in last night's lottery were 4, 2, 9, 7, 8 and 3." The correct response to this argument is to say that the prior probability of a miracle occurring is orders of magnitude smaller than mere one in a million odds.
I'm not sure that response works. Flip a fair coin two hundred times, tell me the results, then show me the video and I'll almost certainly believe you. But if the results were H^200, I won't; I'll assume you were wrong or lying about the coin being fair, or something.
H^200 isn't any less likely than any other sequence of two hundred coin flips, but it's still one of the most extraordinary. Extraordinariness just doesn't feel like it's a mere question of prior probability.
H^200 isn't any less likely under the assumption that the coin is fair, and the person reporting the coin is honest. But! H^200—being a particularly simple sequence—is massively more likely than most other sequences under the alternative assumption that the reporter is a liar, or that the coin is biased.
So being told that the outcome was H^200 is at least a lot of evidence that there's something funny going on, for that reason.
This has nothing to do with simplicity. Any other a priori selected sequence, such as the first 200 binary digits of pi, would be just as unlikely.
Yes, under the hypothesis that the coin is fair and has been flipped fairly, all sequences are equally unlikely. But under the hypothesis that someone is lying to us or has been messing with the coin, simple sequences are more likely. So (via Bayes) if we hear of a simple sequence, we will think it's more likely to have been artificially created than if we hear of a complicated one.
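A rough sketch of that update, with purely illustrative numbers assumed for the "rigged or lying" hypothesis:

```python
# How strongly does hearing "the result was H^200" favor "rigged or lying"
# over "fair coin, honest report"? All numbers are illustrative assumptions.
p_seq_given_fair = 0.5 ** 200      # any specific 200-flip sequence under a fair coin
p_seq_given_rigged = 1e-3          # assume simple sequences get ~1/1000 of the mass if rigged

prior_odds_rigged = 1e-6           # assume rigging/lying is a priori very unlikely
likelihood_ratio = p_seq_given_rigged / p_seq_given_fair   # ~1.6e57
posterior_odds_rigged = prior_odds_rigged * likelihood_ratio

print(likelihood_ratio, posterior_odds_rigged)   # the update overwhelms the tiny prior
```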
Your list actually doesn't go far enough. There is a fourth, and scarier, category: things which would, if true, render probability useless as a model. "The chance that probabilities don't apply to anything" is in the fourth category. I would also place in it anything that violates such basic things as the consistency of physics, or the existence of the external world.
For really small probabilities, we have to take into account some sources of error that just aren't meaningful in more normal odds.
For instance, if I shuffle and draw one card fro...
First, I really like you pointing out the frequent 99% cop out and your partitioning of low-probability events into meaningful categories.
Second, I am not sure that your example with 53 being prime is convincing. It would be more interesting to ask "what unlikely event would break your confidence in 53 being prime?" and estimate the probability of such an event.
This is one of the great reasons to do your math with odds rather than probabilities. (Well, this plus the fact that Bayes' Theorem is especially elegant when formulated in the form of odds ratios.)
There is no reason, save the historical one, that the default mode of thinking is in probabilities (as opposed to odds.) The math works just the same, but for probabilities that are even slightly extreme (even a fair amount less extreme than what is being talked about here), our intuitions about them break down. On the other hand, our intuitions when doing calc...
I'm not sure of the value of odds as opposed to probabilities for extreme values. Million-to-one odds is virtually the same thing as a 1/1,000,000 probability. Log odds, on the other hand, seem like they might have some potential for helping people think clearly about the issues.
I'd also note that probabilities are more useful for doing expected value calculations.
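For what it's worth, here is a small Python sketch of the conversions being discussed, with Bayes' theorem in odds form (the example numbers are arbitrary):

```python
import math

def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

def log10_odds(p):
    """Log odds: extreme probabilities become small, symmetric numbers."""
    return math.log10(prob_to_odds(p))

print(log10_odds(0.999999))   # ~ +6
print(log10_odds(1e-6))       # ~ -6

# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio.
posterior = prob_to_odds(0.01) * 20   # prior of 1:99, evidence with a 20:1 likelihood ratio
print(odds_to_prob(posterior))        # ~0.168
```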
I've never been completely happy with the "I could make 1M similar statements and be wrong once" test. It seems, I dunno, kind of a frequentist way of thinking about the probability that I'm wrong. I can't imagine making a million statements, and I have no way of knowing what it's like to feel confidence about a statement to an accuracy of one part per million.
Other ways to think of tiny probabilities:
(1) If probability theory tells me there's a 1 in a billion chance of X happening, then P(X) is somewhere between 1 in a billion and P(I calculated wr...
it actually wouldn't take all that much evidence to convince us that, for example, "the numbers chosen in last night's lottery were 4, 2, 9, 7, 8 and 3." The correct response to this argument is to say that the prior probability of a miracle occurring is orders of magnitude smaller than mere one in a million odds.
That doesn't seem right. If somebody tries to convince me that the result of a fair 5-number lottery is 1, 2, 3, 4, 5, I would have a much harder time believing it, but not because the probability is less than one in a million. I think...
In cases like this where we want to drive the probability that something is true as high as possible, you are always left with an incomputable bit.
The bit that can't be computed is - am I sane? The fundamental problem is that there are (we presume) two kinds of people, sane people, and mad people who only think that they are sane. Those mad ones of course come up with mad arguments which show that their sanity is just fine. They may even have supporters who tell them they are perfectly normal - or even hallucinatory ones. How can I show which category I am...
Things that have a probability of something like one in a million. Includes many common ways to die that don't involve doing anything most people would regard as especially risky. For example, these stats suggest the odds of a 100 mile car trip killing you are somewhere on the order of one in a million.
I am not entirely sure about this, since I have made a similar mistake in the past, but if I am applying my relatively recent learning of this correctly, I think technically it suggests that if 1 million random people drive 100 miles, one of them will pro...
This fits with what I've read, though I'd point out that while we get our share of anti-drunk driving and now anti-texting-while-driving messages, most people don't seem to think driving in the rain, driving when they're a bit tired, or being a bit over the speed limit are particularly dangerous activities.
(Also, even if you're an exceptionally careful driver, you can still be killed by someone else's carelessness.)
Yeah, that's interesting.
I agree with Eliezer's post, but I think that's a good nitpick. Even if I can't be that certain about 10,000 statements consecutively because I get tired, I think it's plausible that there are 10,000 simple arithmetic statements which, if I understand them, check them against my own knowledge, and remember seeing them in a list on Wikipedia (which is what I did for 53), I've only ever been wrong about once. I find it hard to judge the exact amount, but I definitely remember thinking "I thought that was prime but I didn't really check ...
Of course, it's hard to be much more certain. I don't know what the chance is that (e.g.) mathematicians change the definition of prime -- that's pretty unlikely, but similar things I thought I was certain of have changed before. But rarely.
If mathematicians changed the definition of "prime," I wouldn't consider previous beliefs about prime numbers to be wrong, it's just a change in convention. Mathematicians have disagreed about whether 1 was prime in the past, but that wasn't settled through proving a theorem about 1's primality, the way normal questions of mathematical truth are. Rather, it was realized that the convention that 1 is not prime was more useful, so that's what was adopted. But that didn't render the mathematicians who considered 1 prime wrong (at least, not wrong about whether 1 was prime, maybe wrong about the relative usefulness of the two conventions.)
I agree that you can be 99.99% (or more) certain that 53 is prime, but I don't think you can be that confident based only on the argument you gave.
...If a number is composite, it must have a prime factor no greater than its square root. Because 53 is less than 64, sqrt(53) is less than 8. So, to find out if 53 is prime or not, we only need to check if it can be divided by primes less than 8 (i.e. 2, 3, 5, and 7). 53's last digit is odd, so it's not divisible by 2. 53's last digit is neither 0 nor 5, so it's not divisible by 5. The nearest multiples of 3 are
I should perhaps include within the text a more direct link to Peter de Blanc's anecdote here:
http://www.spaceandgames.com/?p=27
I won't say "Thus I refute" but it is certainly a cautionary tale.
It seems to me to be mostly a cautionary tale about the dangers of taking a long series of bets when you're tired.
I think part of what's troubling you about the test is this: the claim "X has a probability of 10^-30 despite a prior of 50%" is roughly equivalent to saying "I have information whose net result is 100 bits of information that X is false." That is certainly a difficult feat, but not really that hard if you put some effort into it (especially when you chose X). The proposed test to verify such a claim, i.e. making 10^30 similar statements and being wrong only once, would not only be impossible in your lifetime, but would be equivalent to saying "...
I don't think your comment with the lottery is a good example. If there was a lottery last night, then it was going to be some combination of random numbers, with no combination more or less likely than any other. If you come up and tell me "the winning lottery combination last night was X", the odds of you being correct are pretty high; there's really nothing unlikely in that scenario at all. Taking a look at some random number in the real world and thinking about the probability of it is meaningless, since you could be sitting there ha...
I feel that the sentence
Suppose you say that you're 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once.
is a little questionable to begin with. What exactly is an "independent" statement in this context? The only way to produce a statement about whether 2 + 2 = 4 holds is to write a proof that it holds (or doesn't hold). But in a meaningful mathematical system you can't have two independent proofs for the same statement. Two proofs for the same thing are either both right or both wrong, or they aren't proofs in the first place.
I always thought this must be the case from plain observation of thinking; much thinking is "logical", and pure logic is not a suitable model when there is significant uncertainty. There must be many situations where you're 0.9999+ certain in order for logical thinking to be useful.
If humans are bad at mental arithmetic, but good at, say, not dying - doesn't that suggest that, as a practical matter, humans should try to rephrase mathematical questions into questions about danger?
E.g. Imagine stepping into a field crisscrossed by dangerous laser beams in a prime-numbers manner to get something valuable. I think someone who had a realistic fear of the laser beams, and a realistic understanding of the benefit of that valuable thing would slow down and/or stop stepping out into suspicious spots.
Quantifying is ONE technique, and it's bee...
This is false modesty. This is assuming the virtue of doubt where none ought to exist. Mathematics is one of the few (if not the only) worthwhile thing(s) we have in life that is entirely a priori. We can genuinely achieve 100% certainty. Anything less is to suggest the impossible, or to redefine the world in a way that has no meaning or usefulness.
I could say that I'm not really sure 2+2=4, but it would not make me more intelligent for the doubt, but more foolish. I could say that I'm not sure that 5 is really prime, but it would hinge on redefining '5' ...
TLDR; though you can't be 100% certain of anything, a lot of the people who go around talking about how you can't be 100% certain of anything would be surprised at how often you can be 99.99% certain. Indeed, we're often justified in assigning odds ratios well in excess of a million to one to certain claims. Realizing this is important for avoiding certain rookie Bayesian mistakes, as well as for thinking about existential risk.
53 is prime. I'm very confident of this. 99.99% confident, at the very least. How can I be so confident? Because of the following argument:
If a number is composite, it must have a prime factor no greater than its square root. Because 53 is less than 64, sqrt(53) is less than 8. So, to find out if 53 is prime or not, we only need to check if it can be divided by primes less than 8 (i.e. 2, 3, 5, and 7). 53's last digit is odd, so it's not divisible by 2. 53's last digit is neither 0 nor 5, so it's not divisible by 5. The nearest multiples of 3 are 51 (=17x3) and 54, so 53 is not divisible by 3. The nearest multiples of 7 are 49 (=7^2) and 56, so 53 is not divisible by 7. Therefore, 53 is prime.
(My confidence in this argument is helped by the fact that I was good at math in high school. Your confidence in your math abilities may vary.)
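The argument above is just trial division by potential factors up to the square root; here is a minimal sketch of it in Python (the function name is mine):

```python
def is_prime(n):
    """Trial division: a composite n must have a factor no greater than sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:   # only need to check divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(53))   # True: 53 is not divisible by 2, 3, 5, or 7, and 7*7 < 53 < 8*8
```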
I mention this because in his post Infinite Certainty, Eliezer writes:
I think this argument that you can't be 99.99% certain that 53 is prime is fallacious. Stuart Armstrong explains why in the comments:
In other words, it's true that:
But it doesn't follow that:
If it's not clear why this doesn't follow, consider the anecdote Eliezer references in the quote above, which runs as follows: A gets B to agree that if 7 is not prime, B will give A $100. B then makes the same agreement for 11, 13, 17, 19, and 23. Then A asks about 27. B refuses. What about 29? Sure. 31? Yes. 33? No. 37? Yes. 39? No. 41? Yes. 43? Yes. 47? Yes. 49? No. 51? Yes. And suddenly B is $100 poorer (51 = 3 × 17).
Now, B claimed to be 100% sure about 7 being prime, which I don't agree with. But that's not what lost him his $100. What lost him his $100 is that, as the game went on, he got careless. If he'd taken the time to ask himself, "am I really as sure about 51 as I am about 7?" he'd probably have realized the answer was "no." He probably didn't check the primality of 51 as carefully as I checked the primality of 53 at the beginning of this post. (From the provided chat transcript, sleep deprivation may have also had something to do with it.)
If you tried to make 10,000 statements with 99.99% certainty, sooner or later you would get careless. Heck, before I started writing this post, I tried typing up a list of statements I was sure of, and it wasn't long before I'd typed 1 + 0 = 10 (I'd meant to type 1 + 9 = 10. Oops.) But the fact that, as the exercise went on, you'd start including statements that weren't really as certain as the first statement doesn't mean you couldn't be justified in being 99.99% certain of that first statement.
I almost feel like I should apologize for nitpicking this, because I agree with the main point of the "Infinite Certainty" post, that you should never assign a proposition probability 1. Assigning a proposition a probability of 1 implies that no evidence could ever convince you otherwise, and I agree that that's bad. But I think it's important to say that you're often justified in putting a lot of 9s after the decimal point in your probability assignments, for a few reasons.
One reason is that arguments in the style of Eliezer's "10,000 independent statements" argument lead to inconsistencies. From another post of Eliezer's:
Okay, so that's just Eliezer. But in a way, it's just a sophisticated version of a mistake a lot of novice students of probability make. Many people, when you tell them they can never be 100% certain of anything, respond by switching to saying 99% or 99.9% whenever they previously would have said 100%.
In a sense they have the right idea—there are lots of situations where, while the appropriate probability is not 0, it's still negligible. But 1% or even 0.1% isn't negligible enough in many contexts. Generally, you should not be in the habit of doing things that have a 0.1% chance of killing you. Do so on a daily basis, and on average you will be dead in less than three years. Conversely, if you mistakenly assign a 0.1% chance that you will die each time you leave the house, you may never leave the house.
Furthermore, the ways this can trip people up aren't just hypothetical. Christian apologist William Lane Craig claims the skeptical slogan "extraordinary claims require extraordinary evidence" is contradicted by probability theory, because it actually wouldn't take all that much evidence to convince us that, for example, "the numbers chosen in last night's lottery were 4, 2, 9, 7, 8 and 3." The correct response to this argument is to say that the prior probability of a miracle occurring is orders of magnitude smaller than mere one in a million odds.
I suspect many novice students of probability will be uncomfortable with that response. They shouldn't be, though. After all, if you tried to convince the average Christian of Joseph Smith's story with the golden plates, they'd require much more evidence than they'd need to be convinced that last night's lottery numbers were 4, 2, 9, 7, 8 and 3. That suggests their prior for Mormonism is much less than one in a million.
This also matters a lot for thinking about futurism and existential risk. If someone is in the habit of using "99%" as shorthand for "basically 100%," they will have trouble grasping the thought "I am 99% certain this futuristic scenario will not happen, but the stakes are high enough that I need to take the 1% chance into account in my decision making." Actually, I suspect that problems in this vicinity explain many of the problems ordinary people (read: including average scientists) have thinking about existential risk.
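To spell that thought out as arithmetic, here is a toy expected-cost comparison; every number is an arbitrary placeholder, not an estimate of any real risk:

```python
# Toy expected-cost comparison; every number is an arbitrary placeholder.
p_scenario = 0.01             # "99% certain it won't happen"
cost_if_it_happens = 1e9
cost_of_precaution = 1e6

expected_cost_of_ignoring = p_scenario * cost_if_it_happens   # 1e7
print(expected_cost_of_ignoring > cost_of_precaution)         # True: the 1% dominates the decision
```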
I agree with what Eliezer has said about being wary of picking numbers out of thin air and trying to do math with them. (Or if you are going to pick numbers out of thin air, at least be ready to abandon your numbers at the drop of a hat.) Such advice goes double for dealing with very small probabilities, which humans seem to be especially bad at thinking about.
But it's worth trying to internalize a sense that there are several very different categories of improbable claims, along the lines of:
Furthermore, it's worth trying to learn to think coherently about which claims belong in which category. That includes not being afraid to assign claims to the third category when necessary.
Added: I also recommend the links in this comment by komponisto.