Bayes' Rule dictates how much credence you should put in a given proposition in light of prior conditions/evidence. It answers the question: "How probable is this proposition?"
 

Popperian falsificationism dictates whether a given proposition, construed as a theory, is epistemically justifiable, if only tentatively. But it doesn't say anything about how much credence you should put in an unfalsified theory (right?). It answers the question: "Is this proposition demonstrably false (and if not, let's hold on to it, for now)?"
 

I gather that the tension has something to do with inductive reasoning/generalizing, which Popperians reject as not only false, but imaginary. But I don't see where inductive reasoning even comes into Bayes' Rule. In Arbital's waterfall example, it just is the case that "the bottom pool has 3 parts of red water to 4 parts of blue water" - which means that there just is a roughly 43% probability that a randomly sampled water molecule from that pool is red. How could a Popperian disagree?
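For concreteness, the arithmetic behind that figure, assuming a single molecule drawn uniformly at random from the pool:

$$P(\text{red}) = \frac{3}{3 + 4} = \frac{3}{7} \approx 0.43$$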

What am I missing?

Thanks!

4 Answers

Viliam


For the record, the popular interpretation of "Popperian falsificationism" is not what Karl Popper actually believed. (According to Wikipedia, he did not even like the word "falsificationism" and preferred "critical rationalism" instead.) What most people know as "Popperian falsificationism" is a simplification optimized for memetic power, and it is quite simple to disprove. Then we can play motte and bailey with it: the motte being the set of books Karl Popper actually wrote, and the bailey being the argument of a clever internet wannabe meta-scientist about how this or that isn't scientific because it does not follow some narrow definition of falsifiability.

I have not read Popper's books, therefore I am only commenting here on the typical internet usage of "Popperian falsificationism".

The good part is noticing that beliefs should pay rent in anticipated consequences. A theory that explains everything predicts nothing. In the "Popperian" version, beliefs pay rent by saying which states of the world are impossible. As long as they are right, you keep them. The moment they get one thing wrong, you mercilessly kick them out.

An obvious problem: How does this work with probabilistic beliefs? Suppose we flip a fair coin, and one person believes there is a 50% chance of heads and 50% of tails, while the other person believes it is 99% heads and 1% tails. How exactly is each of these hypotheses falsifiable? How many times exactly do I have to flip the coin, and exactly what results do I need to get, in order to declare each of the hypotheses falsified? Or are they both unfalsifiable, and therefore both equally unscientific, neither of them better than the other?
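Here is a minimal sketch of how a Bayesian handles that situation, assuming the coin really is fair and that these are the only two hypotheses in play (all numbers below are illustrative):

```python
import random

random.seed(0)
p_heads = {"fair": 0.5, "biased": 0.99}   # P(heads) under each hypothesis
credence = {"fair": 0.5, "biased": 0.5}   # equal prior credence in each

for _ in range(100):
    heads = random.random() < 0.5         # flip a genuinely fair coin
    for h in credence:
        likelihood = p_heads[h] if heads else 1 - p_heads[h]
        credence[h] *= likelihood         # Bayes' Rule numerator
    total = sum(credence.values())
    credence = {h: c / total for h, c in credence.items()}  # renormalize

print(credence)
# Neither hypothesis is ever "falsified" outright -- every finite sequence
# of flips has nonzero probability under both -- but the credence in the
# 99%-heads hypothesis collapses toward zero as tails keep showing up.
```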

That is, "Popperianism" feels a bit like Bayesianism for mathematically challenged people. Its probability theory only contains three values: yes, maybe, no. Assigning "yes" to any scientific hypothesis is a taboo (Bayesians agree), therefore we are left with "maybe" and "no", the latter for falsified hypotheses, the former for everything else. And we need to set the rules of the social game so that the "maybe" of science does not become completely worthless (i.e. equivalent to any other "maybe").

This is confusing again. Suppose you have two competing hypotheses, such as "there is a finite number of primes" and "there is an infinite number of primes". To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved. Wait, what?! How exactly would you falsify one of them without automatically proving the other?

I suppose the answer by Popper might be a combination of the following:

  • mathematics is a special case, because it is not about the real world -- that is, whenever we apply math to the real world, we have two problems: whether the math itself is correct, and whether we chose the right model for the real world, and the concept of "falsifiability" only applies to the latter;
  • there is always a chance that we left out something -- for example, it might turn out that the concept of "primes" or "infinity" is somehow ill-defined (self-contradictory or arbitrary or whatever), therefore one hypothesis being wrong does not necessarily imply the other being right.

Yet another problem is that scientific hypotheses actually get disproved all the time. Like, I am pretty sure there were at least a dozen popular-science articles about experimental refutations of the theory of relativity upvoted to the front page of Hacker News. The proper reaction is to ignore the news, and wait a few days until someone provides an explanation of why the experiment was set up wrong, or the numbers were calculated incorrectly. That is business as usual for a scientist, but it would pose a philosophical problem for a "Popperian": how do you justify believing the scientific result during the interval between the publication of the experiment and the publication of its refutation? How long is that interval allowed to be: a day? a month? a century?

The underlying problem is that experimental outcomes are actually not clearly separated from hypotheses. Like, you get the raw data ("the machine X beeped today at 14:09"), but you need to combine it with some assumptions in order to get the conclusion ("therefore, the signal travelled faster than light, and the theory of relativity is wrong"). So the end result is that "data + some assumptions" disagree with "other assumptions". There are assumptions on both sides; either of them could be wrong; there is no such thing as pure falsification.
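A toy Bayesian version of that point, with made-up numbers (all priors and likelihoods below are purely illustrative):

```python
# When an anomalous result appears, Bayes splits the blame between
# "the theory is wrong" and "something in the experimental setup was wrong".
p_theory_wrong = 1e-6      # prior: a very well-tested theory
p_setup_flawed = 1e-2      # prior: chance of an undetected experimental error

# Likelihood of seeing the anomalous result under each explanation:
p_anomaly_if_theory_wrong = 1.0     # a wrong theory could easily produce it
p_anomaly_if_setup_flawed = 1.0     # so could a flawed setup
p_anomaly_if_neither = 1e-9         # a pure fluke with everything correct

# Unnormalized posterior weights (Bayes' Rule numerators):
w_theory = p_theory_wrong * p_anomaly_if_theory_wrong
w_setup = (1 - p_theory_wrong) * p_setup_flawed * p_anomaly_if_setup_flawed
w_fluke = (1 - p_theory_wrong) * (1 - p_setup_flawed) * p_anomaly_if_neither

total = w_theory + w_setup + w_fluke
print(f"P(theory wrong | anomaly) ~ {w_theory / total:.5f}")
print(f"P(setup flawed | anomaly) ~ {w_setup / total:.5f}")
# The anomaly barely dents the theory, because the "experimental error"
# explanation absorbs almost all of the update.
```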

Sorry, I got carried away...

TAG

This is confusing again. Suppose you have two competing hypotheses, such as “there is a finite number of primes” and “there is an infinite number of primes”. To be considered scientific, either of them must be falsifiable in principle, but of course neither can be proved.

It's been known for two thousand years that there are infinitely many primes.

https://primes.utm.edu/notes/proofs/infinite/euclids.html

Thanks for your generous reply. Maybe I understand the bailey and would need to acquaint myself with the motte to begin to understand what is meant by those who say it's being 'dethroned by the Bayesian revolution'.

Viliam
Sorry for the jargon. But it's a useful concept, so here are the explanations:
  • Motte and Bailey Doctrines
  • All In All, Another Brick In The Motte
The latter also contains a few examples.

Zac Hatfield-Dodds


Considered as an epistemology, I don't think you're missing anything.

To reconstruct Popperian falsification from Bayes, see that if you observe something that some hypothesis gave probability ~0 ("impossible"), that hypothesis is almost certainly false - it's been "falsified" by the evidence. With a large enough hypothesis space you can recover Bayes from Popper - that's Solomonoff Induction - but you'd never want to in practice.
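A quick sketch of that limiting case: by Bayes' Rule,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

so if a hypothesis $H$ assigned the observed evidence $E$ a likelihood $P(E \mid H) \approx 0$, then $P(H \mid E) \approx 0$ no matter how high the prior $P(H)$ was. Strict falsification is the special case where that likelihood is exactly zero.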

For more about science - as institution, culture, discipline, human activity, etc. - and ideal Bayesian rationality, see the Science and Rationality sequence. I was going to single out particular essays, but honestly the whole sequence is probably relevant!

Thanks for the recommendation. To the sequence I go!

Ksaverus


New poster. I love this topic. My own view of the shortcomings of Bayesianism is as follows (speaking as a former die-hard Bayesian):

  1. The world (multiverse) is deterministic.
  2. Probability therefore does not describe an actual feature of the world. Probabilities only make sense as statistical statements.
  3. Making a statistical statement requires identifying a group of events or phenomena that are sufficiently similar that grouping makes sense. (Grouping disparate unique events makes the statistical statement meaningless, since we would have no reason to think subsequent events behave in the same way.)
  4. Events and phenomena like balls in urns, medical tests for diseases with large sample sizes, even some human events like sports games, have sufficient regularity that grouping makes sense and statistical statements are meaningful.
  5. Propositions about explanatory theories (are there infinite primes, is Newtonian physics “correct”) do not have sufficient regularity - a statistical statement based on any group of known propositions logically yields no predictive value about unknown propositions. (Other than where you have a good explanatory theory linking them.)
  6. If probability statements about the correctness of an unknown explanatory proposition are therefore meaningless, priors and Bayesian updates are similarly meaningless. Example: Newton. Before Einstein, one’s prior for the correctness of Newton would have been high. Just as one’s prior on Einstein being correct right now is presumably high. But both are meaningless, since it is not the case that there is a proportion of universes in which they are true and a proportion in which they are false.
  7. Counterpoint: why do prediction markets seem to work? No idea! Still wrestling with this. Would love to hear your thoughts.

tkpwaeub


I'm not sure Bayes' Rule dictates anything beyond its plain mathematical content, which isn't terribly controversial:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

When people speak of Bayesian inference, they are talking about a mode of reasoning that uses Bayes' Rule a lot, but it's mainly motivated by a different "ontology" of probability. 

As to whether Bayesian inference and Popperian falsificationism are in conflict - I'd imagine that depends very much on the subject of investigation (does it involve a need to make immediate decisions based on limited information?) and the temperaments of the human beings trying to reach a consensus. 

Hm. I don't think people who talk about "Bayesianism" in the broad sense are using a different ontology of probability than most people. I think what makes "Bayesians" different is their willingness to use probability at all, rather than some other conception of knowledge.

Like, consider the weird world of the "justified true belief" definition of knowledge and the mountains of philosophers trying to patch up its leaks. Or the FDA's stance on whether covid vaccines work in children. It's not that these people would deny the proof of Bayes' theorem - it's just that they wouldn't think to apply it here, because they aren't thinking of the status of some claim as being a probability.

TAG
What were the major problems with JTB before Gettier? There were problems with equating knowledge with certainty... but then pretty much everyone moved to fallibilism, without abandoning JTB. So JTB and probabilism, broadly defined, aren't incompatible. There's nothing about justification, or truth, or belief that can't come in degrees. And regarding all three of them as non-binary is a richer model than just regarding belief as non-binary.
Charlie Steiner
I'm not really sure about the history. A quick search turns up Russell making similar arguments at the turn of the century, but I doubt there was the sort of boom there was after Gettier - maybe because probability wasn't developed enough to serve as an alternative ontology.
TAG
It remains the case that JTB isn't that bad, and Bayes isn't that good a substitute.
Charlie Steiner
"Classic flavor" JTB is indeed that bad. JTB shifted to a probabilistic ontology is either Bayesian, wrong, or answering a different question altogether.
TAG
I'll go for answering different questions. Bayes, although well known to mainstream academia, isn't regarded as the one epistemology to rule them all, precisely because there are so many issues it doesn't address.