For the record, the popular interpretation of "Popperian falsificationism" is not what Karl Popper actually believed. (According to Wikipedia, he did not even like the word "falsificationism", preferring "critical rationalism" instead.) What most people know as "Popperian falsificationism" is a simplification optimized for memetic power, and it is quite easy to knock down. That sets up a motte and bailey: the motte is the set of books Karl Popper actually wrote, and the bailey is the argument of a clever internet wannabe meta-scientist about how this or that isn't scientific because it does not satisfy some narrow definition of falsifiability.
I have not read Popper's books, so I am only commenting here on the traditional internet usage of "Popperian falsificationism".
The good part is noticing that beliefs should pay rent in anticipated consequences. A theory that explains everything predicts nothing. In the "Popperian" version, beliefs pay rent by saying which states of the world are impossible. As long as they are right, you keep them. When they get it wrong once, you mercilessly kick them out.
An obvious problem: How does this work with probabilistic beliefs? Suppose we flip a fair coin, and one person believes there is a 50% chance of heads/tails, and the other person believes it is 99% heads and 1% tails. How exactly is each of these hypotheses falsifiable? How many times exactly do I have to flip the coin, and what results exactly do I need to get, in order to declare each of the hypotheses falsified? Or are they both unfalsifiable, and therefore both equally unscientific, neither of them better than the other?
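The Bayesian answer (my own sketch, not part of the original comment; the flip counts are made up) is that neither hypothesis ever gets "falsified"; each run of flips just shifts the odds between them continuously:

```python
# Compare the two coin hypotheses by likelihood ratio instead of
# trying to "falsify" either one outright.
# H1: P(heads) = 0.5, H2: P(heads) = 0.99.

def likelihood(p_heads, heads, tails):
    """Probability of observing this exact flip sequence under a given bias."""
    return (p_heads ** heads) * ((1 - p_heads) ** tails)

heads, tails = 6, 4  # say we observe ten flips of the actual coin

l1 = likelihood(0.5, heads, tails)
l2 = likelihood(0.99, heads, tails)

# With equal priors, the posterior odds equal the likelihood ratio.
odds_h1_over_h2 = l1 / l2
print(f"odds for 50/50 over 99/1: about {odds_h1_over_h2:.0f} to 1")
```

After just ten flips with four tails, the 99/1 hypothesis is at roughly 100,000-to-1 odds against, yet it was never declared "impossible" at any single step; there is no magic flip count at which falsification happens.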
That is, "Popperianism" feels a bit like Bayesianism for mathematically challenged people. Its probability theory only contains three values: yes, maybe, no. Assigning "yes" to any scientific hypothesis is a taboo (Bayesians agree), therefore we are left with "maybe" and "no", the latter for falsified hypotheses, the former for everything else. And we need to set the rules of the social game so that the "maybe" of science does not become completely worthless (i.e. equivalent to any other "maybe").
This is confusing again. Suppose you have two competing hypotheses, such as "there is a finite number of primes" and "there is an infinite number of primes". To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved. Wait, what?! How exactly would you falsify one of them without automatically proving the other?
I suppose the answer by Popper might be a combination of the following:
Yet another problem is that scientific hypotheses actually get disproved all the time. Like, I am pretty sure there were at least a dozen popular-science articles about experimental refutations of the theory of relativity upvoted to the front page of Hacker News. The proper reaction is to ignore the news, and wait a few days until someone provides an explanation of why the experiment was set up wrong, or the numbers were calculated incorrectly. That is business as usual for a scientist, but it would pose a philosophical problem for a "Popperian": how do you justify believing in the scientific result during the interval between when the experiment and its refutation were published? How long is the interval allowed to be: a day? a month? a century?
The underlying problem is that experimental outcomes are actually not clearly separated from hypotheses. Like, you get the raw data ("the machine X beeped today at 14:09"), but you need to combine it with some assumptions in order to get the conclusion ("therefore, the signal travelled faster than light, and the theory of relativity is wrong"). So the end result is that "data + some assumptions" disagrees with "other assumptions". There are assumptions on both sides; either of them could be wrong; there is no such thing as pure falsification.
Sorry, I got carried away...
This is confusing again. Suppose you have two competing hypotheses, such as “there is a finite number of primes” and “there is an infinite number of primes”. To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved.
It's been known for two thousand years that there are infinitely many primes.
Thanks for your generous reply. Maybe I understand the bailey and would need to acquaint myself with the motte to begin to understand what is meant by those who say it's being 'dethroned by the Bayesian revolution'.
Considered as an epistemology, I don't think you're missing anything.
To reconstruct Popperian falsification from Bayes, see that if you observe something that some hypothesis gave probability ~0 ("impossible"), that hypothesis is almost certainly false - it's been "falsified" by the evidence. With a large enough hypothesis space you can recover Bayes from Popper - that's Solomonoff Induction - but you'd never want to in practice.
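A minimal sketch of that reconstruction (my own illustration; the hypothesis names and numbers are invented): a hypothesis that called the observed outcome "impossible" has its posterior collapse in a single update, no matter how strongly we favored it beforehand.

```python
# Falsification as the limiting case of a Bayesian update.

def posterior(prior, likelihoods):
    """Posterior over hypotheses after one observation,
    given the probability each hypothesis assigned to it."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two hypotheses; H0 assigned probability ~0 to what we actually saw.
prior = [0.9, 0.1]          # we started out strongly favoring H0
likelihoods = [1e-9, 0.5]   # probability each assigned to the observed data

post = posterior(prior, likelihoods)
print(post)  # H0's posterior is now negligible: it has been "falsified"
```

Note that nothing special happens at exactly zero; a likelihood of 1e-9 already does the work, which is why the Bayesian version degrades gracefully where the yes/maybe/no version has to make a ruling.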
For more about science - as institution, culture, discipline, human activity, etc. - and ideal Bayesian rationality, see the Science and Rationality sequence. I was going to single out particular essays, but honestly the whole sequence is probably relevant!
New poster. I love this topic. My own view of the shortcoming of Bayesianism is as follows (speaking as a former die-hard Bayesian):
I'm not sure Bayes' Rule dictates anything beyond its plain mathematical content, which isn't terribly controversial: P(H|E) = P(E|H) · P(H) / P(E).
When people speak of Bayesian inference, they are talking about a mode of reasoning that uses Bayes' Rule a lot, but it's mainly motivated by a different "ontology" of probability.
As to whether Bayesian inference and Popperian falsificationism are in conflict - I'd imagine that depends very much on the subject of investigation (does it involve a need to make immediate decisions based on limited information?) and the temperaments of the human beings trying to reach a consensus.
Hm. I don't think people who talk about "Bayesianism" in the broad sense are using a different ontology of probability than most people. I think what makes "Bayesians" different is their willingness to use probability at all, rather than some other conception of knowledge.
Like, consider the weird world of the "justified true belief" definition of knowledge and the mountains of philosophers trying to patch up its leaks. Or the FDA's stance on whether covid vaccines work in children. It's not that these people would deny the proof of Bayes' theorem - it's just that they wouldn't think to apply it here, because they aren't thinking of the status of some claim as being a probability.
Bayes' Rule dictates how much credence you should put in a given proposition in light of prior conditions/evidence. It answers the question How probable is this proposition?
Popperian falsificationism dictates whether a given proposition, construed as a theory, is epistemically justifiable, if only tentatively. But it doesn't say anything about how much credence you should put in an unfalsified theory (right?). It answers the question Is this proposition demonstrably false (and if not, let's hold on to it, for now)?
I gather that the tension has something to do with inductive reasoning/generalizing, which Popperians reject as not only false, but imaginary. But I don't see where inductive reasoning even comes into Bayes' Rule. In Arbital's waterfall example, it just is the case that "the bottom pool has 3 parts of red water to 4 parts of blue water" - which means that there just is a roughly 43% probability that a randomly sampled water molecule from that pool is red. How could a Popperian disagree?
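For what it's worth, the arithmetic behind that "roughly 43%" is just the ratio from the example as stated (my own tiny sketch, not Arbital's code):

```python
# 3 parts red water to 4 parts blue water, so a randomly sampled
# molecule is red with probability 3/7.
from fractions import Fraction

red, blue = 3, 4
p_red = Fraction(red, red + blue)
print(p_red, round(float(p_red), 2))  # 3/7 0.43
```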
What am I missing?
Thanks!