Very interesting post from "The Last Rationalist" discussing how the rationalist community seems to have been slow to update on the comparative impracticality of formal Bayes and on the replication crisis in psychology.
I don't fully agree with this post - for instance, my impression is that there is in fact a replication crisis in medicine, which the author seems to be unaware of or to understate - but I think the key points provide useful food for thought.
(Note: this is my opinion as a private individual, not an official opinion as a CFAR instructor or as a member of any other organization.)
I discussed a few of the points here with some people at the MIRI lunch table, and Scott Garrabrant pointed out "hey, I loudly abandoned Bayesianism!". That is, we always knew that ideal Bayesianism required infinite computation (you don't just consider a few hypotheses, but all possible hypotheses) and wouldn't work for embedded agents, and as MIRI became more interested in embedded agency they started developing theories of how that actually works. There was some discussion of how much this aligned with various people's claim that the quantitative side of Bayes wasn't all that practical for humans (with, I think, the end result being that we saw the two claims as pointing at much the same thing).
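To make the "infinite computation" point concrete, here's a toy sketch of my own (not something from that lunch conversation): the Bayesian update itself is cheap arithmetic once you've fixed a hypothesis set, but the idealized prescription is to run it over every possible hypothesis, which no embedded agent can do.

```python
# Toy sketch of exact Bayesian updating over an explicitly enumerated
# hypothesis set (three coin biases standing in for "all possible hypotheses").
# The update is trivial here; the idealized version asks you to do this over
# the entire hypothesis space, which is where the infinite computation hides.

def bayes_update(prior, likelihood, observation):
    """prior: dict mapping hypothesis -> probability;
    likelihood(h, obs) -> P(obs | h)."""
    unnormalized = {h: p * likelihood(h, observation) for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: u / total for h, u in unnormalized.items()}

# Three hypotheses about a coin's bias toward heads.
prior = {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}
likelihood = lambda bias, flip: bias if flip == "heads" else 1 - bias

posterior = prior
for flip in ["heads", "heads", "tails", "heads"]:
    posterior = bayes_update(posterior, likelihood, flip)

print(posterior)  # most of the mass has shifted onto the 0.7 hypothesis
```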
For example, in August 2013 there was some discussion of Chapman's Pop Bayesianism, where I said:
Then Scott Alexander responded, identifying Bayesianism in contrast to other epistemologies, and I identified some qualitative things I learned from Bayes, as did Tyrrell McAllister.
How does this hold up, five years later?
I still think Bayesianism as a synthesis of Aristotelianism and Anton-Wilsonism is superior to both; I think the operation underlying Bayesianism for embedded agents is not Bayesian updating, but rather something that approaches Bayesian updating in the limit, and that one of the current areas of progress in rationality is grappling with what's actually going on there. (Basically, this is because of the standard Critical Rationalist critique of Bayesianism: that the Bayesian view says the equivalent of "well, you just throw infinite compute at the problem to consider the superset of all possible answers, and then you're good", which is not useful advice to current practicing scientists. But the CR answer doesn't appear to be good enough either.)
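As a toy illustration of "approaches Bayesian updating in the limit" (again my own sketch, not any particular proposal from the embedded-agency line of work): imagine a bounded agent that only ever scores the first k hypotheses from some enumeration, and whose beliefs get closer to the full posterior as k grows.

```python
# Hypothetical sketch: a bounded agent that only scores the first k hypotheses
# from an enumeration of coin biases (a coarse-to-fine dyadic grid). For small k
# its beliefs are crude; as k grows, its posterior approaches what full Bayesian
# updating over the whole grid would give. The names and the enumeration are
# illustrative only.

def dyadic_biases(depth):
    """Enumerate biases coarse-to-fine: 1/2, then 1/4, 3/4, then 1/8, ..."""
    return [k / 2**d for d in range(1, depth + 1) for k in range(1, 2**d, 2)]

def truncated_posterior(hypotheses, data, k):
    """Posterior over only the first k hypotheses, with a uniform prior over them."""
    considered = hypotheses[:k]
    scores = {}
    for h in considered:
        p = 1.0
        for flip in data:
            p *= h if flip == "heads" else 1 - h
        scores[h] = p
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

hypotheses = dyadic_biases(6)           # 63 hypotheses in all
data = ["heads"] * 7 + ["tails"] * 3    # ten flips, mostly heads

for k in (1, 3, 15, len(hypotheses)):
    post = truncated_posterior(hypotheses, data, k)
    best = max(post, key=post.get)
    print(f"k={k:2d}: best hypothesis {best:.4f} with posterior {post[best]:.2f}")
```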
I think basically the same things I did then: that the actual quantitative use of Bayes is not that important for most people, and that CFAR's techniques for talking about the qualitative use of Bayes mostly don't refer to Bayes directly. I don't think this state of affairs represents a 'school without practitioners', so I still disagree with The Last Rationalist's assessment of things, but perhaps I'm missing what TLR is trying to point at.