This is a meta-level follow-up to an object-level post about Dissolving the Fermi Paradox.
The basic observation of the paper is that when the statistics are done correctly, representing realistic distributions of uncertainty, the paradox largely dissolves.
The correct statistics are not that technically difficult: instead of multiplying point estimates, just take the distributions reflecting the uncertainty (which are already implied in the literature!).
There is a sizeable literature about the paradox, stretching across several decades. Wikipedia alone lists 22 hypothetical explanations, and it seems realistic that at least several hundred researchers have spent serious effort thinking about the problem.
It seems to me really important to reflect on this.
What's going on? Why this inadequacy in research in general?
And more locally: why didn't this particular subset of the broader community, which prides itself on its use of Bayesian statistics, notice this earlier?
(I have some hypotheses, but it seems better to just post this as an open-ended question.)
Note that the conclusion of the toy model rests not on "we did the 9-dimensional integral and got a very low number" but on "we did Monte Carlo sampling and ended up with 21%"--it seems possible that this might not have been doable 30 years ago, though perhaps it was 20 years ago. (Not Monte Carlo sampling in general--that's as old as Fermi--but being able to do this sort of numerical integration sufficiently cheaply.)
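To make concrete how cheap this kind of numerical integration now is, here is a minimal sketch of the Monte Carlo approach: draw each Drake-style factor from a wide distribution reflecting parameter uncertainty, and look at the fraction of draws in which the expected number of civilizations falls below 1. The ranges below are illustrative placeholders, not the paper's actual priors, and the factor names are my own labels.

```python
import math
import random

# Sketch of the Monte Carlo approach: sample each uncertain factor of a
# Drake-style product, rather than multiplying point estimates.
# The ranges are illustrative, NOT the priors used in the actual paper.

def log_uniform(lo, hi):
    """Draw from a log-uniform distribution on [lo, hi]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def sample_n():
    """One draw of N = product of uncertain Drake-style factors."""
    rate_star = log_uniform(1, 100)     # star formation rate (illustrative)
    f_planets = log_uniform(0.1, 1)     # fraction of stars with planets
    f_life    = log_uniform(1e-6, 1)    # probability life arises
    f_intel   = log_uniform(1e-3, 1)    # probability of intelligence
    lifetime  = log_uniform(1e2, 1e8)   # detectable lifetime, in years
    return rate_star * f_planets * f_life * f_intel * lifetime

random.seed(0)
draws = [sample_n() for _ in range(100_000)]

# With distributions this wide, a substantial fraction of draws has N < 1
# even though the mean of N (dominated by the right tail) is large.
p_alone = sum(n < 1 for n in draws) / len(draws)
print("P(N < 1):", p_alone)
print("mean N:  ", sum(draws) / len(draws))
```

A hundred thousand draws of a 5-factor product runs in well under a second on any modern machine, which is the sense in which the computation is "sufficiently cheap" today.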
Also, the central intuition guiding the alternative approach is that the expectation of a product is the product of the expectations, which is actually true (for independent factors). The thing that's going on here is elaborating on the generator of P(ETI=0) in a way that's different from "well, we just use a binomial with the middle-of-the-pack rate, right?". This sort of hierarchical modeling of parameter uncertainties is still fairly rare, even among professional statisticians today, so it's not a huge surprise to me that the same is true of people here. [To be clear, the alternative is picking the MLE model and using only in-model uncertainty, which seems to be standard practice from what I've seen. Most of the methods that bake in model uncertainty are the so-called "model-free" methods.]
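The gap between the two generators of P(ETI=0) can be shown in a few lines. The sketch below uses my own toy numbers, not the paper's: with N ~ Poisson(lam), P(N=0) = exp(-lam), and since exp(-lam) is convex, averaging it over an uncertain rate gives a very different answer than plugging in the mean rate (Jensen's inequality).

```python
import math
import random

# Toy illustration (my own numbers, not the paper's) of why hierarchical
# modeling of parameter uncertainty matters. With N ~ Poisson(lam),
# P(N = 0) = exp(-lam). Compare plugging in the mean rate ("middle-of-the-
# pack" shortcut) against averaging exp(-lam) over the prior on lam.

random.seed(0)

def log_uniform(lo, hi):
    """Draw from a log-uniform distribution on [lo, hi]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

# A wide prior on the rate, spanning 8 orders of magnitude (illustrative).
lams = [log_uniform(1e-4, 1e4) for _ in range(100_000)]

mean_lam = sum(lams) / len(lams)
p_zero_plugin = math.exp(-mean_lam)                        # point-estimate shortcut
p_zero_hier = sum(math.exp(-l) for l in lams) / len(lams)  # average over the prior

print("E[lam]:     ", mean_lam)        # large: dominated by the right tail
print("exp(-E[lam]):", p_zero_plugin)  # essentially zero
print("E[exp(-lam)]:", p_zero_hier)    # substantial
```

The plug-in shortcut says "empty sky is astronomically unlikely"; the hierarchical average says it is quite plausible, because a large chunk of the prior mass sits at rates where exp(-lam) is near 1. That is the dissolution of the paradox in miniature.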