David Chapman criticizes "pop Bayesianism" as just common-sense rationality dressed up as intimidating math[1]:
> Bayesianism boils down to “don’t be so sure of your beliefs; be less sure when you see contradictory evidence.” Now that is just common sense. Why does anyone need to be told this? And how does [Bayes'] formula help?
>
> [...]
>
> The leaders of the movement presumably do understand probability. But I’m wondering whether they simply use Bayes’ formula to intimidate lesser minds into accepting “don’t be so sure of your beliefs.” (In which case, Bayesianism is not about Bayes’ Rule, after all.)
>
> I don’t think I’d approve of that. “Don’t be so sure” is a valuable lesson, but I’d rather teach it in a way people can understand, rather than by invoking a Holy Mystery.
What does Bayes's formula have to teach us about how to do epistemology, beyond obvious things like "never be absolutely certain; update your credences when you see new evidence"?
I list below some of the specific things that I learned from Bayesianism. Some of these are examples of mistakes I'd made that Bayesianism corrected. Others are things that I just hadn't thought about explicitly before encountering Bayesianism, but which now seem important to me.
I'm interested in hearing what other people here would put on their own lists of things Bayesianism taught them. (Different people would make different lists, depending on how they had already thought about epistemology when they first encountered "pop Bayesianism".)
I'm interested especially in those lessons that you think followed more-or-less directly from taking Bayesianism seriously as a normative epistemology (plus maybe the idea of making decisions based on expected utility). The LW memeplex contains many other valuable lessons (e.g., avoid the mind-projection fallacy, be mindful of inferential gaps, the MW interpretation of QM has a lot going for it, decision theory should take into account "logical causation", etc.). However, these seem further afield or more speculative than what I think of as "bare-bones Bayesianism".
So, without further ado, here are some things that Bayesianism taught me.
- Banish talk like "There is absolutely no evidence for that belief". P(E | H) > P(E) if and only if P(H | E) > P(H). The fact that there are myths about Zeus is evidence that Zeus exists. Zeus's existing would make it more likely for myths about him to arise, so the arising of myths about him must make it more likely that he exists. A related mistake I made was to be impressed by the cleverness of the aphorism "The plural of 'anecdote' is not 'data'." There may be a helpful distinction between scientific evidence and Bayesian evidence. But anecdotal evidence is evidence, and it ought to sway my beliefs.
- Banish talk like "I don't know anything about that". See the post "I don't know."
- Banish talk of "thresholds of belief". Probabilities go up or down, but there is no magic threshold beyond which they change qualitatively into "knowledge". I used to make the mistake of saying things like, "I'm not absolutely certain that atheism is true, but it is my working hypothesis. I'm confident enough to act as though it's true." I assign a certain probability to atheism, which is less than 1.0. I ought to act as though I am just that confident, and no more. I should never just assume that I am in the possible world that I think is most likely, even if I think that that possible world is overwhelmingly likely. (However, perhaps I could be so confident that my behavior would not be practically discernible from absolute confidence.) (The expected-utility sketch after this list gives a toy example.)
- Absence of evidence is evidence of absence. P(H | E) > P(H) if and only if P(H | ~E) < P(H). Absence of evidence may be very weak evidence of absence, but it is evidence nonetheless. (However, you may not be entitled to a particular kind of evidence.) (See the conservation-of-evidence sketch after this list.)
- Many bits of "common sense" rationality can be precisely stated and easily proved within the austere framework of Bayesian probability. As noted by Jaynes in Probability Theory: The Logic of Science, "[P]robability theory as extended logic reproduces many aspects of human mental activity, sometimes in surprising and even disturbing detail." While these things might be "common knowledge", the fact that they are readily deducible from a few simple premises is significant. Here are some examples:
- It is possible for the opinions of different people to diverge after they rationally update on the same evidence. Jaynes discusses this phenomenon in Section 5.3 of PT:TLoS. (A toy version appears in the diverging-updates sketch after this list.)
- Popper's falsification criterion, and other Popperian principles of "good explanation", such as that good explanations should be "hard to vary", follow from Bayes's formula. Eliezer discusses this in An Intuitive Explanation of Bayes' Theorem and A Technical Explanation of Technical Explanation.
- Occam's razor. This can be formalized using Solomonoff induction. (However, perhaps this shouldn't be on my list, because Solomonoff induction goes beyond just Bayes's formula. It also has several problems.)
- You cannot expect[2] that future evidence will sway you in a particular direction. "For every expectation of evidence, there is an equal and opposite expectation of counterevidence." (The conservation-of-evidence sketch after this list makes this identity concrete.)
- Abandon all the meta-epistemological intuitions about the concept of knowledge on which Gettier-style paradoxes rely. Keep track of how confident your beliefs are when you update on the evidence. Keep track of the extent to which other people's beliefs are good evidence for what they believe. Don't worry about whether, in addition, these beliefs qualify as "knowledge".
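Here are a few worked sketches of the items above, in Python. First, the Zeus sketch: a quick numerical check, with an invented prior and invented likelihoods, that P(E | H) > P(E) forces P(H | E) > P(H), even when the resulting update is tiny.

```python
# Illustrative numbers only: a check that if H makes E more likely, then E makes H more likely.
# H = "Zeus exists", E = "myths about Zeus arise".
p_h = 1e-6               # prior that Zeus exists (made up)
p_e_given_h = 0.99       # myths are very likely if Zeus exists (made up)
p_e_given_not_h = 0.5    # myths are still fairly likely otherwise (made up)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # law of total probability
p_h_given_e = p_e_given_h * p_h / p_e                    # Bayes' theorem

print(p_e_given_h > p_e)    # True: H makes E more likely ...
print(p_h_given_e > p_h)    # True: ... so E makes H more likely, if only barely
```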
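Next, the expected-utility sketch: a toy illustration of acting on your actual credence rather than assuming you are in the most likely world. The bet, the payoffs, and the 90% credence are all invented for the example; the point is only that a choice which looks right conditional on the most likely world can still have negative expected utility.

```python
# Hypothetical numbers: acting on your full credence vs. assuming the most likely world.
p_world_a = 0.9                       # the world I think is most likely
p_world_b = 1 - p_world_a

payoff = {                            # payoffs of a bet that is good only in world A
    "bet":  {"A": 1.0, "B": -20.0},
    "pass": {"A": 0.0, "B": 0.0},
}

def expected_utility(action):
    return p_world_a * payoff[action]["A"] + p_world_b * payoff[action]["B"]

# Simply assuming I am "in" world A says: bet.  Expected utility says: don't.
print(expected_utility("bet"))   # approx. 0.9*1 + 0.1*(-20) = -1.1
print(expected_utility("pass"))  # 0.0
```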
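Next, the conservation-of-evidence sketch. With invented numbers, it checks both that P(H | E) > P(H) implies P(H | ~E) < P(H) (absence of evidence is evidence of absence, however weak) and that the prior equals the probability-weighted average of the possible posteriors (so you cannot expect the evidence to push you in a predictable direction).

```python
# Illustrative numbers: absence of evidence is (weak) evidence of absence, and
# the prior equals the expectation of the posterior ("conservation of expected evidence").
p_h = 0.3                # prior (made up)
p_e_given_h = 0.6        # made up
p_e_given_not_h = 0.5    # made up: E only weakly favors H

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

print(p_h_given_e > p_h)        # True: seeing E raises P(H) ...
print(p_h_given_not_e < p_h)    # True: ... so not seeing E must lower it, if only slightly

# Prior = expected posterior, up to floating-point rounding.
print(abs(p_h - (p_h_given_e * p_e + p_h_given_not_e * (1 - p_e))) < 1e-12)  # True
```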
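Finally, the diverging-updates sketch: two people hear the same assertion but hold different beliefs about the source making it, so the same evidence rationally moves them in opposite directions. This is only in the spirit of Jaynes's discussion in Section 5.3, not his example, and the numbers are invented.

```python
# Two people rationally diverge on the same evidence because they model the source differently.
# Both start 50/50 on claim X, then hear the same pundit assert X.
prior = 0.5

# Person A thinks the pundit mostly tells the truth.
a_assert_if_true, a_assert_if_false = 0.9, 0.1
# Person B thinks the pundit asserts X mainly when it is false (e.g., as propaganda).
b_assert_if_true, b_assert_if_false = 0.3, 0.7

def posterior(assert_if_true, assert_if_false, prior):
    """P(X | pundit asserts X), by Bayes' theorem."""
    p_assert = assert_if_true * prior + assert_if_false * (1 - prior)
    return assert_if_true * prior / p_assert

print(posterior(a_assert_if_true, a_assert_if_false, prior))  # 0.9: A now believes X more
print(posterior(b_assert_if_true, b_assert_if_false, prior))  # 0.3: B now believes X less
```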
What items would you put on your list?
[1] See also Yvain's reaction to David Chapman's criticisms.
[2] ETA: My wording here is potentially misleading. See this comment thread.
The (related) way I would expand this is "if you know what you will believe in the future, then you ought to believe that now."
Quoting myself from Yvain's blog:
Another useful thing for qualitative Bayes from Jaynes: always include the background information I in the list of information you're conditioning on. It reminds you that your estimates are conditional on all of your knowledge, most of which is unstated and unexamined.
Actually, this seems like a General Semantics meets Bayes kind of principle. Surely Korzybski had a catchy phrase for a similar idea. Anyone got one?