This is a meta-level follow-up to an object-level post about the paper Dissolving the Fermi Paradox.
The basic observation of the paper is that when the statistics are done correctly, representing realistic distributions of uncertainty, the paradox largely dissolves.
The correct statistics are not that technically difficult: instead of point estimates, just take distributions reflecting the uncertainty (already implied in the literature!).
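To make the point concrete, here is a minimal Monte Carlo sketch of the idea in Python. The parameter ranges below are purely illustrative and are not the ones used in the paper; the point is only to show how a large expected number of civilizations can coexist with a substantial probability that we are alone, once the uncertainty is carried through as distributions rather than point estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sample each Drake-equation factor from a log-uniform distribution
# over an illustrative range (NOT the ranges used in the SDO paper).
def log_uniform(low, high, size):
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

R_star = log_uniform(1, 100, n)      # star formation rate per year
f_p    = log_uniform(0.1, 1, n)      # fraction of stars with planets
n_e    = log_uniform(0.1, 1, n)      # habitable planets per such star
f_l    = log_uniform(1e-30, 1, n)    # fraction on which life arises
f_i    = log_uniform(1e-3, 1, n)     # fraction developing intelligence
f_c    = log_uniform(1e-2, 1, n)     # fraction that become detectable
L      = log_uniform(1e2, 1e10, n)   # years a civilization stays detectable

# Detectable civilizations in the galaxy, per sample
N = R_star * f_p * n_e * f_l * f_i * f_c * L

print("mean of N (the 'point estimate' intuition):", N.mean())
print("P(N < 1), i.e. chance we are effectively alone:", (N < 1).mean())
```

With these made-up ranges, the mean of N is enormous while the probability of being alone is still sizeable, which is exactly the shape of the paper's result.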
There is a sizeable literature on the paradox, stretching back several decades. Wikipedia alone lists 22 hypothetical explanations, and it seems realistic that at least several hundred researchers have spent serious effort thinking about the problem.
It seems really important to me to reflect on this.
What's going on? Why this inadequacy (in research in general)?
And more locally: why didn't this particular subset of the broader community, which prides itself on its use of Bayesian statistics, notice this earlier?
(I have some hypotheses, but it seems better to just post this as an open-ended question.)
That might explain why many individual researchers failed, but it can't be common enough to filter out everyone thinking about the problem except SDO. To see how many researchers we would expect to find this solution, we must multiply our estimate of the number thinking about it (R) by the fraction who know about the correct statistical technique of using distributions (f_s), by the odds that they would apply this technique (f_a), do it correctly (f_c), and consider the result worth publishing (f_p).
N = R * f_s * f_a * f_c * f_p
Using personal estimates, I obtained a result of N = 2.998, close to the observed number of authors of the paper.
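For concreteness, here is a tiny sketch of the same calculation with made-up illustrative numbers (these are not the personal estimates that produced the N = 2.998 figure above):

```python
# Hypothetical illustrative values only -- not the estimates behind N = 2.998.
R   = 300    # researchers who have thought seriously about the paradox
f_s = 0.3    # fraction who know the technique of using full distributions
f_a = 0.2    # odds they would apply the technique to this problem
f_c = 0.5    # odds they would carry the analysis out correctly
f_p = 0.3    # odds they would consider the result worth publishing

N = R * f_s * f_a * f_c * f_p
print(N)  # 2.7 with these made-up numbers
```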