This is a meta-level follow-up to an object-level post about Dissolving the Fermi Paradox.
The basic observation of the paper is that when the statistical analysis is done correctly, representing realistic distributions of uncertainty, the paradox largely dissolves.
The correct statistical treatment is not technically difficult: instead of point estimates, just use the full distributions reflecting the uncertainty (already implied in the literature!).
There is a sizeable literature about the paradox, stretching back several decades. Wikipedia alone lists 22 hypothetical explanations, and it seems realistic that at least several hundred researchers have spent serious effort thinking about the problem.
It seems to me really important to reflect on this.
What's going on? Why this inadequacy (in research in general)?
And more locally, why didn't this particular subset of the broader community, which prides itself on the use of Bayesian statistics, notice this earlier?
(I have some hypotheses, but it seems better to just post this as an open-ended question.)
No. It boils down to the following fact: if you take the given estimates of the distributions of parameter values at face value, then:

(1) The expected number of observable alien civilizations is medium-large.
(2) If you consider the full distribution of the number of alien civilizations, you get a large probability of zero and a small probability of "very, very many aliens", which integrates up to the medium-large expectation value.
Previous discussions computed (1), wrongly inferred a conflict with astronomical observations, and entirely failed to compute (2) from their own input data. This is unquestionably an embarrassing failure of the field.
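
For concreteness, here is a minimal Monte Carlo sketch of the gap between (1) and (2). The log-uniform ranges below are illustrative stand-ins for the spread of estimates in the literature, not the exact distributions used in the paper; the point is only that the same inputs yield a medium-large expectation together with a large probability of an (effectively) empty galaxy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

def log_uniform(low, high, size):
    """Sample uniformly in log-space between low and high."""
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

# Illustrative uncertainty ranges for Drake-equation factors
# (orders of magnitude loosely reflecting the literature's spread,
#  NOT the paper's actual fitted distributions):
R_star = log_uniform(1, 100, n_samples)    # star formation rate [stars/yr]
f_p    = log_uniform(0.1, 1, n_samples)    # fraction of stars with planets
n_e    = log_uniform(0.1, 1, n_samples)    # habitable planets per such star
f_l    = log_uniform(1e-30, 1, n_samples)  # P(abiogenesis) -- hugely uncertain
f_i    = log_uniform(1e-3, 1, n_samples)   # P(intelligence | life)
f_c    = log_uniform(1e-2, 1, n_samples)   # P(detectable tech | intelligence)
L      = log_uniform(1e2, 1e8, n_samples)  # longevity of detectability [yr]

# Number of detectable civilizations in the galaxy, per sample
N = R_star * f_p * n_e * f_l * f_i * f_c * L

print(f"(1) E[N], the point-estimate-style answer: {N.mean():.3g}")
print(f"(2) P(N < 1), i.e. likely no one to observe: {(N < 1).mean():.1%}")
print(f"    Median N: {np.median(N):.2e}")
```

With ranges like these, the expectation is dominated by the rare samples where every factor lands near its optimistic end, while the median and P(N < 1) show that most of the probability mass sits at "essentially nobody observable". That is exactly why computing only (1) manufactures a paradox that computing (2) dissolves.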