When a low-probability, high-impact event occurs, and the world “got it wrong”, it is tempting to look for the people who did successfully predict it in advance in order to discover their secret, or at least see what else they’ve predicted. Unfortunately, as Wei Dai discovered recently, this tends to backfire.
It may feel a bit counterintuitive, but this is actually fairly predictable: the math backs it up, given some reasonable assumptions. First, let's assume the topic required an unusual level of clarity of thought to avoid being sucked into the prevailing (wrong) consensus: say a mere 0.001% of people accomplished this. These people are worth finding, and listening to.
But we must also note that a good chunk of the population are just pessimists. Let's say, very conservatively, that 0.01% of people predicted the same disaster just because they always predict the most obvious possible disaster. Suddenly the odds are pretty good that anybody you find who successfully predicted the disaster is a crank. The mere fact that they correctly predicted the disaster is evidence only of extreme reasoning; it cannot tell you whether that reasoning was extremely good or extremely bad. And on balance, most of the time, it's extremely bad.
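To make the arithmetic explicit (using my made-up rates, and assuming for simplicity that everyone in both groups predicted the disaster with near-certainty), Bayes' rule gives the probability that a randomly-chosen successful predictor is one of the clear thinkers:

$$P(\text{clear thinker} \mid \text{predicted disaster}) \approx \frac{0.001\%}{0.001\% + 0.01\%} \approx 9\%$$

In other words, roughly ten-to-one odds that the person you've found is a crank.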
Unfortunately, the problem here is not just that the good predictors are buried in a mountain of random others; it’s that the good predictors are buried in a mountain of extremely poor predictors. The result is that the mean prediction of that group is going to be noticeably worse than the prevailing consensus on most questions, not better.
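As a toy illustration of why, keep the same one-to-ten ratio and suppose some new question has a true probability of 10%: the clear thinkers, being roughly calibrated, forecast about 10%, while the pessimists forecast 90% as they always do. The mean forecast of the combined group is then

$$\frac{1 \times 10\% + 10 \times 90\%}{1 + 10} \approx 83\%$$

which is far further from the truth than a consensus that simply said 10%.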
Obviously the 0.001% and 0.01% numbers above are made up. I spent some time looking for real statistics and couldn't find anything useful; this article claims roughly 1% of Americans are "preppers", which might be a good indication, except that it provides no source and could equally well just be Lizardman's Constant. Regardless, my point relies mainly on the second group being an order of magnitude or more larger than the first, which seems (to me) intuitively likely to be true. If anybody has real statistics to prove or disprove this, I would much appreciate them.
You can filter out some of the cranks by checking the forecaster's reasoning, data, credentials, and track record, by looking for a consensus of similarly-qualified people, and by taking the incentives of the forecasters into account. But this comes with its own problems:
- To a non-expert, it's hard to tell to what degree an expert's area of specialization overlaps with the question at hand. Is a hospital administrator a trustworthy source of guidance on the risk that a novel coronavirus turns into a pandemic?
- To a non-expert, easy questions look hard, and hard questions sometimes look easy. Can we distinguish between the two?
- To a non-expert, it's hard to tell whether an expert consensus is really what it seems, or whether it's coalition-building by a political faction under the cloak of "objectivity."
These are just a few examples.
In the end, you have to decide whether it's easier to check the forecaster's reasoning or their trustworthiness.