One of the few things that I really appreciate having encountered during my study of philosophy is the Gettier problem. Paper after paper has been published on this subject, starting with Gettier's original "Is Justified True Belief Knowledge?" In brief, Gettier argues that knowledge cannot be defined as "justified true belief," because there are cases in which people hold a justified true belief whose truth is unconnected to their justification-- they are right, but for the wrong reasons.
For instance, Gettier cites the example of two men, Smith and Jones, who are applying for a job. Smith believes that Jones will get the job, because the president of the company told him that Jones would be hired. He also believes that Jones has ten coins in his pocket, because he counted the coins in Jones's pocket ten minutes ago (Gettier does not explain this behavior). Thus, he forms the belief "the person who will get the job has ten coins in his pocket."
Unbeknownst to Smith, though, he himself will get the job, and further he himself has ten coins in his pocket that he was not aware of-- perhaps he put someone else's jacket on by mistake. As a result, Smith's belief that "the person who will get the job has ten coins in his pocket" was correct, but only by luck.
While I don't find the primary purpose of Gettier's argument particularly interesting or meaningful (much less the debate it spawned), I do think Gettier's paper does a very good job of illustrating the situation that I refer to as "being right for the wrong reasons." This situation has important implications for prediction-making and hence for the art of rationality as a whole.
Simply put, a prediction that is right for the wrong reasons isn't actually right from an epistemic perspective.
Suppose, for instance, that I predict I will win a 15-touch fencing bout, implicitly believing this will occur when I strike my opponent 15 times before he strikes me 15 times. If I in fact lose fourteen touches in a row, only to win by forfeit when my opponent intentionally strikes me many times during the final touch and is disqualified for brutality, my prediction cannot be said to have been accurate.
Where this gets more complicated is with predictions that are right for the wrong reasons, but the right reasons still apply. Imagine the previous example of a fencing bout, except this time I score 14 touches in a row and then win by forfeit when my opponent flings his mask across the hall in frustration and is disqualified for an offense against sportsmanship. Technically, my prediction is again right for the wrong reasons-- my victory was not thanks to scoring 15 touches, but thanks to my opponent's poor sportsmanship and subsequent disqualification. However, I likely would have scored 15 touches given the opportunity.
In cases like this, it may seem appealing to credit my prediction as successful, since it would have been successful under normal conditions. However, I think we have to resist this impulse and instead simply work on making more precise predictions. If we start crediting predictions that are right for the wrong reasons, even when the "spirit" of the prediction seems right, we open the door to relying on intuition and falling into the traps that contaminate much of modern philosophy.
What we really need to do in such cases seems to be to break down our claims into more specific predictions, splitting them into multiple sub-predictions if necessary. My prediction about the outcome of the fencing bout could better be expressed as multiple predictions, for instance "I will score more points than my opponent" and "I will win the bout." Some may notice that this is similar to the implicit justification being made in the original prediction. This is fitting-- drawing out such implicit details is key to making accurate predictions. In fact, this example itself was improved by tabooing[1] "better" in the vague initial sentence "I will fence better than my opponent."
In order to make better predictions, we must cast out those predictions that are right for the wrong reasons. While it may be tempting to award such efforts partial credit, this flies in the face of the spirit of truth-seeking. The true skill of cartography requires forming maps that are both accurate and reproducible; lucking into accuracy may be nice, but it speaks ill of the reproducibility of your methods.
[1] I strongly suggest that you make tabooing a five-second skill, and, better still, that you learn to recognize when you need to apply it to your own thought processes. It pays great dividends in precision of thought.
Let's abstract this a bit:
There are two unfair coins: one with P(heads) = 1/3 and the other with P(heads) = 2/3. I take one of them at random, flip it twice, and it comes up heads both times. Now I believe that the coin I chose is the one with P(heads) = 2/3; in fact, the probability of this is 4/5. I also believe that the next flip will come up heads, mostly because I think I chose the 2/3 coin (p = 8/15). I also admit the possibility of getting heads while being wrong about which coin I chose, but this is much less likely (p = 1/15). So I bet on heads, flip again, and it comes up heads. I was right. But it turns out the coin was the other one, the one with P(heads) = 1/3 (which I discovered after a few hundred more flips). Would you say I was right for the wrong reasons? I was certainly surprised to find out I had the wrong coin. Does this apply to the Gettier problem?
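The numbers above can be checked directly with Bayes' rule. A quick sketch in Python, using exact fractions (the labels are mine):

```python
from fractions import Fraction

# The two hypotheses: which coin was chosen. Equal prior on each.
coins = {"p=1/3": Fraction(1, 3), "p=2/3": Fraction(2, 3)}
prior = {name: Fraction(1, 2) for name in coins}

# Observe two heads; update via Bayes' rule.
likelihood = {name: p ** 2 for name, p in coins.items()}
evidence = sum(prior[n] * likelihood[n] for n in coins)
posterior = {n: prior[n] * likelihood[n] / evidence for n in coins}

print(posterior["p=2/3"])                          # 4/5
# Joint probability of a third head together with each coin hypothesis:
print(posterior["p=2/3"] * coins["p=2/3"])         # 8/15
print(posterior["p=1/3"] * coins["p=1/3"])         # 1/15
# Total predictive probability of heads on the third flip:
print(sum(posterior[n] * coins[n] for n in coins))  # 3/5
```

So betting on heads is correct either way (3/5 overall), but most of that probability mass flows through the wrong hypothesis about which coin is in hand.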
Let's go back to the original problem to see that this abstraction is similar. Smith believes "the person who will get the job has ten coins in his pocket," and he believes it mostly because he thinks Jones will get the job and has ten coins in his pocket. But if he is reasonable, he will also admit the possibility that he himself will get the job while also having ten coins in his pocket, albeit with much lower probability.
My point here is: at what probability does the Gettier problem arise? Would it arise if, in the coin problem, P(heads) were different?
I think it arises at the point where you did not even consider the alternative. This is a very subjective thing, of course.
If the probability of the actual outcome was truly negligible (given a perfect evaluation by the prediction-maker), it should not influence the evaluation of predictions in a significant way. If the probability was significant, the prediction-maker likely considered it; if they did not, count the prediction as false.
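This rule can be sketched as a toy scoring function. Everything here is my own illustrative choice-- the function name, the path labels, and especially the cutoff, which stands in for the subjective "negligible" threshold:

```python
# Hypothetical cutoff for "negligible" probability; this is the
# subjective judgment call the text describes, not a principled value.
NEGLIGIBLE = 0.01

def credit(considered_paths, actual_path):
    """Credit a correct prediction only if the path by which it came
    true was assigned non-negligible probability by the predictor.

    considered_paths: dict mapping each causal path the predictor
    considered to the probability they assigned it.
    actual_path: the path by which the outcome actually occurred.
    """
    assigned = considered_paths.get(actual_path, 0.0)
    # A path never considered gets probability 0.0 and earns no credit:
    # that is "right for the wrong reasons."
    return assigned > NEGLIGIBLE

# Smith considered only the Jones path, so his belief earns no credit.
smith = {"Jones gets job, Jones has 10 coins": 0.95}
print(credit(smith, "Smith gets job, Smith has 10 coins"))  # False

# The coin bettor did consider the actual path, with p = 1/15.
bettor = {"2/3 coin, heads": 8 / 15, "1/3 coin, heads": 1 / 15}
print(credit(bettor, "1/3 coin, heads"))  # True
```

Under this toy rule, the coin bet is credited because the predictor explicitly carried the 1/15 branch, while Smith's Gettier belief is not, since the actual path never entered his reasoning.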