This article is an attempt to summarize basic material, and thus probably won't have anything new for the hard core posting crowd. It'd be interesting to know whether you think there's anything essential I missed, though.
You've probably seen the word 'Bayesian' used a lot on this site, but may be a bit uncertain about what exactly we mean by it. You may have read the intuitive explanation, but that only seems to explain a certain math formula. There's a wiki entry about "Bayesian", but that doesn't help much. And the LW usage seems different from just the "Bayesian versus frequentist statistics" thing, too. As far as I can tell, there's no article explicitly defining what's meant by Bayesianism. The core ideas are sprinkled across a large number of posts, and 'Bayesian' has its own tag, but no single post explicitly makes the connections and says "this is Bayesianism". So let me try to offer my definition, which boils Bayesianism down to three core tenets.
We'll start with a brief example, illustrating Bayes' theorem. Suppose you are a doctor, and a patient comes to you, complaining about a headache. Further suppose that there are two reasons why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?
If you thought a cold was more likely, well, that was the answer I was after. Even if a brain tumor caused a headache every time, and a cold caused a headache only one percent of the time (say), having a cold is so much more common that it's going to cause a lot more headaches than brain tumors do. Bayes' theorem, basically, says that if cause A might be the reason for symptom X, then we have to take into account both the probability that A caused X (found, roughly, by multiplying the frequency of A by the chance that A causes X) and the probability that anything else caused X. (For a thorough mathematical treatment of Bayes' theorem, see Eliezer's Intuitive Explanation.)
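The arithmetic behind the doctor example can be made concrete. Here's a minimal sketch, with made-up numbers: assume one person in 10,000 has a brain tumor (which always causes a headache), while half of all people catch a cold in a given period, and a cold causes a headache one percent of the time.

```python
# Made-up prevalences for illustration only.
p_tumor = 1 / 10_000
p_headache_given_tumor = 1.0

p_cold = 0.5
p_headache_given_cold = 0.01

# Probability of each cause producing a headache:
# P(cause) * P(headache | cause)
joint_tumor = p_tumor * p_headache_given_tumor  # 0.0001
joint_cold = p_cold * p_headache_given_cold     # 0.005

# Posterior odds of cold vs. tumor, given that we observe a headache.
odds = joint_cold / joint_tumor
print(odds)  # 50.0 -- a cold is 50 times more likely to be the cause
```

Even though the tumor produces the headache with certainty and the cold only rarely, the cold's far greater base rate dominates; that's the whole point of weighing the prior.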
There should be nothing surprising about that, of course. Suppose you're outside, and you see a person running. They might be running for the sake of exercise, or they might be running because they're in a hurry somewhere, or they might even be running because it's cold and they want to stay warm. To figure out which is the case, you'll consider which of the explanations is true most often, and which fits the circumstances best.
Core tenet 1: Any given observation has many different possible causes.
Acknowledging this, however, leads to a somewhat less intuitive realization. For any given observation, how you should interpret it always depends on previous information. Simply seeing that the person was running wasn't enough to tell you that they were in a hurry, or that they were getting some exercise. Or suppose you had to choose between two competing scientific theories about the motion of planets: a theory about the laws of physics governing the motion of planets, devised by Sir Isaac Newton, or a theory simply stating that the Flying Spaghetti Monster pushes the planets forwards with His Noodly Appendage. If both of these theories made the same predictions, you'd have to depend on your prior knowledge - your prior, for short - to judge which one was more likely. And even if they didn't make the same predictions, you'd need some prior knowledge that told you which of the predictions were better, or that the predictions matter in the first place (as opposed to, say, theoretical elegance).
Or take the debate we had on 9/11 conspiracy theories. Some people thought that unexplained and otherwise suspicious things in the official account had to mean that it was a government conspiracy. Others considered their prior for "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt", judged that to be overwhelmingly unlikely, and thought it far more probable that something else caused the suspicious things.
Again, this might seem obvious. But there are many well-known instances in which people forget to apply this information. Take supernatural phenomena: yes, if there were spirits or gods influencing our world, some of the things people experience would certainly be the kinds of things that supernatural beings cause. But then there are also countless mundane explanations, from coincidences to mental disorders to an overactive imagination, that could cause those same experiences. Most of the time, postulating a supernatural explanation shouldn't even occur to you, because the mundane causes already have lots of evidence in their favor and supernatural causes have none.
Core tenet 2: How we interpret any event, and the new information we get from anything, depends on information we already had.
Sub-tenet 1: If you experience something that you think could only be caused by cause A, ask yourself: "if this cause didn't exist, would I still expect to experience this with equal probability?" If the answer is "yes", then it probably wasn't cause A.
This realization, in turn, leads us to
Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so.
The fact that anything can be caused by an infinite number of things explains why Bayesians are so strict about the theories they'll endorse. It isn't enough that a theory explains a phenomenon; if it can explain too many things, it isn't a good theory. Remember that if you'd expect to experience something even when your supposed cause was untrue, then that's no evidence for your cause. Likewise, if a theory can explain anything you see - if the theory allowed any possible event - then nothing you see can be evidence for the theory.
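This "explains everything, predicts nothing" point falls straight out of Bayes' theorem. A hypothetical sketch (the priors and likelihoods below are invented for illustration): an update moves your belief only when the evidence is more likely under the theory than under its negation.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# A sharp theory: the evidence is 10x more likely if the theory is true.
print(update(0.5, 0.9, 0.09))  # ~0.909 -- belief rises substantially

# A theory that "allows any possible event": the evidence is equally
# likely either way, so the posterior equals the prior. No update.
print(update(0.5, 0.9, 0.9))   # 0.5
```

In the second case the likelihood ratio is exactly 1, so observing the event tells you nothing; a theory compatible with every observation can never gain support from any of them.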
At its heart, Bayesianism isn't anything more complex than this: a mindset that takes three core tenets fully into account. Add a sprinkle of idealism: a perfect Bayesian is someone who processes all information perfectly, and always arrives at the best conclusions that can be drawn from the data. When we talk about Bayesianism, that's the ideal we aim for.
Fully internalized, that mindset does tend to color your thought in its own, peculiar way. Once you realize that all the beliefs you have today are based - in a mechanistic, lawful fashion - on the beliefs you had yesterday, which were based on the beliefs you had last year, which were based on the beliefs you had as a child, which were based on the assumptions about the world that were embedded in your brain while you were growing in your mother's womb... it does make you question your beliefs more, and wonder whether all of those previous beliefs really corresponded maximally to reality.
And that's basically what this site is for: to help us become good Bayesians.
Whoa! On reflection this looks like an extended misunderstanding. This isn't especially surprising as we've had trouble communicating before.
I apologize for offending you. In making the comment I truly didn't mean it as a personal insult - though I can see how it came off that way. There is a not insignificant tendency around here to (a) place truth-seeking as secondary to winning and (b) reduce things to status games. So in your comment I pattern-matched what you said to that tendency. And so in saying that persuasion and status seemed to be what you were concerned with, I thought I was basically just recognizing the position you had taken.
There isn't an explicit transition to this second part. I can see in retrospect that this was a comment about defending beliefs. You're saying: no, it is not an obligation, just sometimes a good idea, and here is when it is (pragmatically) a good idea. What I saw the first time was: "No, there isn't any obligation like this. Here are the concerns that should instead enter into the decision to defend beliefs: status and persuasion." Even if the expectation that someone defend their beliefs doesn't rise to the level of an obligation, it still seems like the pro-social reasons for doing it have to do with truth-seeking and sharing information. So when all I saw was persuasion and status, I inferred that you weren't concerned with these other things. Does that make it clear where I was getting it from, even if I got it wrong?
It wasn't a particularly deliberate phrasing. That said, I think it is a defensible, even obvious, rule of discourse. Of course, one way of describing what happens to someone when they don't obey such rules is just that they are no longer taken seriously. Your tone in the first comment didn't suggest to me that you were only making a minor point, and that is part of the reason I interpreted it as differing from my own view more radically than it apparently does. And, I mean, an obligation that people be prepared to give reasons for their views seems like a totally reasonable thing to have in an attempt at cooperative rationalist discourse. Indeed, if people refuse to defend beliefs, I have no idea how this kind of cooperation is supposed to proceed. From this perspective your objection looks like it has to be coming from a pretty different set of assumptions.
I'm going to edit the offending comment and remove the material. Would you consider making this last comment somewhat less scolding and accusatory, as it was an honest misunderstanding?
Hi Jack, thanks for that. I deleted my reply. I can see why you would object to that first interpretation. I too like to keep my 'winning' quite separate from my truth-seeking, and would join you in objecting to exhortations that people should explain reasons for their beliefs only for pragmatic purposes. It may be that my firm disapproval of mixing epistemic rationality with pragmatics was directed at you rather than at the mutual enemy, so pardon me if that is the case.
I certainly support giving explanations and justifications for beliefs. The main reason I wouldn't...