Many of you readers may instinctively know that this is wrong. If you flip a coin (a 50% chance) twice, you are not guaranteed to get heads; the probability of getting at least one heads is 75%. However, you may be surprised to learn that there is some truth to this statement: modifying it just slightly yields not just a true statement, but a useful and interesting one.
It's a spoiler, though. If you want to figure this out yourself as you read this article, you should skip this and come back later. Ok, ready? Here it is:
It's a $\frac{1}{n}$ chance and I did it $n$ times, so the probability should be... $63\%$.
Almost always.
The math:
Suppose you're flipping a coin and you want to find the probability of NOT flipping a single heads in a dozen flips. The math for this is fairly simple: the probability of not flipping a single heads is the same as the probability of flipping 12 tails in a row, which is

$$\left(\frac{1}{2}\right)^{12} = \frac{1}{4096} \approx 0.02\%$$
The same can be done with this problem: you have something with a 1/10 chance and you want to do it 10 times. The probability of not getting it to happen even once is the same as the probability of it not happening 10 times in a row. So

$$\left(1 - \frac{1}{10}\right)^{10} = \left(\frac{9}{10}\right)^{10} \approx 35\%$$
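If you want to double-check those two numbers, here's a quick Python sketch that just evaluates the same expressions:

```python
# A quick sanity check of the two calculations above.
p_no_heads = (1 / 2) ** 12     # probability of 12 tails in a row
p_all_misses = (9 / 10) ** 10  # probability of missing a 1-in-10 chance 10 times in a row

print(f"P(no heads in 12 flips)     = {p_no_heads:.6f}")   # ~0.000244, i.e. ~0.02%
print(f"P(no successes in 10 tries) = {p_all_misses:.4f}") # ~0.3487, i.e. ~35%
```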
If you've learned some fairly basic probability, I doubt this is that interesting to you. The interesting part comes when you look at the general formula: the probability of not getting what you want (I'll call this $\bar{p}$, because $p$ would be the probability of the outcome you want) is

$$\bar{p} = \left(1 - \frac{1}{n}\right)^n$$
where $n$ in our case is 10, but in general is whatever number appears in the (incorrect) phrase "It's a one-in-$n$ chance, and I did it $n$ times, so it should be $100\%$."
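In code, the whole formula is a one-liner (I'm calling it `prob_no_success` here, purely as a label for $\bar{p}$):

```python
def prob_no_success(n: int) -> float:
    """Probability of zero successes when a 1-in-n chance is tried n times."""
    return (1 - 1 / n) ** n

print(prob_no_success(10))  # ~0.349, matching the 1-in-10 example above
```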
Hold on a sec, that formula looks familiar...
" ..." I thought to myself... "That looks familiar..." This is by no means obvious, but to people who have dealt with the number recently, this looks quite similar to the limit that actually defines that number. This sort of pattern recognition led me to google what this limit is, and it turns out my intuition was close:
So it turns out: for any $n$ that's large enough, if you do something with a $\frac{1}{n}$ chance of success $n$ times, your probability of failure is always going to be roughly $\frac{1}{e} \approx 37\%$, which means your probability of success will always be roughly $1 - \frac{1}{e} \approx 63\%$.
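Here's a small Python sketch that shows the convergence numerically, comparing $\left(1 - \frac{1}{n}\right)^n$ against $\frac{1}{e}$ for a few values of $n$:

```python
import math

# Compare (1 - 1/n)^n against its limit 1/e for growing n.
for n in (10, 100, 1_000, 10_000):
    failure = (1 - 1 / n) ** n
    print(f"n = {n:>6}: failure ~ {failure:.5f}, success ~ {1 - failure:.5f}")

print(f"limit    : failure ~ {1 / math.e:.5f}, success ~ {1 - 1 / math.e:.5f}")
```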
If something is a $\frac{1}{n}$ chance, and I do it $n$ times, the probability should be... $63\%$.
Isn't that cool? I think that's cool.
What I'm NOT saying:
There are a couple ways to easily misinterpret this, so here are some caveats:
- The average number of successes, if you try a $\frac{1}{n}$ chance $n$ times, is exactly $1$. This is always true for any value of $n$. I believe this is why some folks think the probability is $100\%$ if they try enough times. Your probability of succeeding at least once may be $63\%$, but you also have a chance of succeeding twice, thrice, etc. This means one success on average, not a 100% chance of succeeding once.
- This post explores how the probability behaves as you try a less-likely chance more often. Obviously, for any given chance, trying it more often will increase your probability of succeeding at least once.
- This relies on a limit as $n \to \infty$. How useful is this in the real world, where $n$ is probably a rather small number like 5, 10, or 20? Well, you can try out the formula (or the quick simulation after the spoiler below) to find out. Personally, I think the $63\%$ figure is good enough for $n \geq 5$ (within 5%). You could use $\frac{2}{3}$ as a nice round number instead of $63\%$ if you want to round up, too.
Spoiler for 5, 10, and 20: it's 67%, 65%, and 64% respectively
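And if you'd rather not take the algebra on faith, a rough Monte Carlo sketch lands on the same ballpark figures (`estimate_success` and the trial count are just illustrative choices):

```python
import random

def estimate_success(n: int, trials: int = 200_000) -> float:
    """Monte Carlo estimate of P(at least one success) for a 1-in-n chance tried n times."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < 1 / n for _ in range(n)):
            hits += 1
    return hits / trials

for n in (5, 10, 20):
    exact = 1 - (1 - 1 / n) ** n
    print(f"n = {n:>2}: simulated ~ {estimate_success(n):.3f}, exact ~ {exact:.3f}")
```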
Ironically, the even more basic error of probabilistic thinking that people so—painfully—commonly make ("It either happens or doesn't, so it's 50/50") would get closer to the right answer.
I agree with your point about there being a 'mental disconnect'. It seems to be less an issue with understanding that two events need not be equally likely, and more an issue with applying mathematical reasoning to an abstract problem. If you can't find the answer to that problem, you are likely to fall back on the seemingly plausible but incorrect reasoning that 'it either happens or doesn't, so it's 50/50.' This fallacy could be considered a misapplication of the principle of insufficient reason.