This article is an attempt to summarize basic material, and thus probably won't have anything new for the hard core posting crowd. It'd be interesting to know whether you think there's anything essential I missed, though.
You've probably seen the word 'Bayesian' used a lot on this site, but may be a bit uncertain of what exactly we mean by that. You may have read the intuitive explanation, but that only seems to explain a certain math formula. There's a wiki entry about "Bayesian", but that doesn't help much. And the LW usage seems different from just the "Bayesian and frequentist statistics" thing, too. As far as I can tell, there's no article explicitly defining what's meant by Bayesianism. The core ideas are sprinkled across a large number of posts, and 'Bayesian' has its own tag, but there's not a single post that explicitly makes the connections and says "this is Bayesianism". So let me try to offer my definition, which boils Bayesianism down to three core tenets.
We'll start with a brief example, illustrating Bayes' theorem. Suppose you are a doctor, and a patient comes to you, complaining about a headache. Further suppose that there are two reasons why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?
If you thought a cold was more likely, well, that was the answer I was after. Even if a brain tumor caused a headache every time, and a cold caused a headache only one per cent of the time (say), having a cold is so much more common that it's going to cause a lot more headaches than brain tumors do. Bayes' theorem, basically, says that if cause A might be the reason for symptom X, then we have to take into account both the probability that A caused X (found, roughly, by multiplying the frequency of A by the chance that A causes X) and the probability that anything else caused X. (For a thorough mathematical treatment of Bayes' theorem, see Eliezer's Intuitive Explanation.)
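The arithmetic here can be sketched in a few lines of Python. The numbers are invented for illustration, not real medical statistics:

```python
# Illustrative (made-up) base rates and likelihoods for the headache example.
p_tumor = 0.0001                # very few people have a brain tumor
p_cold = 0.2                    # colds are common
p_headache_given_tumor = 1.0    # a tumor always causes a headache
p_headache_given_cold = 0.01    # a cold causes a headache 1% of the time

# Probability that each cause occurs AND produces the headache.
joint_tumor = p_tumor * p_headache_given_tumor
joint_cold = p_cold * p_headache_given_cold

# Relative odds of the two explanations, given that a headache is observed.
print(joint_cold / joint_tumor)   # roughly 20: a cold is about 20x more likely
```

Even though the tumor predicts the headache with certainty, the cold's much higher base rate dominates — which is exactly the point of the example.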
There should be nothing surprising about that, of course. Suppose you're outside, and you see a person running. They might be running for the sake of exercise, or they might be running because they're in a hurry somewhere, or they might even be running because it's cold and they want to stay warm. To figure out which one is the case, you'll try to consider which of the explanations is true most often, and fits the circumstances best.
Core tenet 1: Any given observation has many different possible causes.
Acknowledging this, however, leads to a somewhat less intuitive realization. For any given observation, how you should interpret it always depends on previous information. Simply seeing that the person was running wasn't enough to tell you that they were in a hurry, or that they were getting some exercise. Or suppose you had to choose between two competing scientific theories about the motion of planets. A theory about the laws of physics governing the motion of planets, devised by Sir Isaac Newton, or a theory simply stating that the Flying Spaghetti Monster pushes the planets forwards with His Noodly Appendage. If both of these theories made the same predictions, you'd have to depend on your prior knowledge - your prior, for short - to judge which one was more likely. And even if they didn't make the same predictions, you'd need some prior knowledge that told you which of the predictions were better, or that the predictions matter in the first place (as opposed to, say, theoretical elegance).
Or take the debate we had on 9/11 conspiracy theories. Some people thought that unexplained and otherwise suspicious things in the official account had to mean that it was a government conspiracy. Others considered their prior for "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt", judged that to be overwhelmingly unlikely, and thought it far more probable that something else caused the suspicious things.
Again, this might seem obvious. But there are many well-known instances in which people forget to apply this information. Take supernatural phenomena: yes, if there were spirits or gods influencing our world, some of the things people experience would certainly be the kinds of things that supernatural beings cause. But then there are also countless mundane explanations, from coincidences to mental disorders to an overactive imagination, that could cause the same experiences. Most of the time, postulating a supernatural explanation shouldn't even occur to you, because the mundane causes already have lots of evidence in their favor and supernatural causes have none.
Core tenet 2: How we interpret any event, and the new information we get from anything, depends on information we already had.
Sub-tenet 1: If you experience something that you think could only be caused by cause A, ask yourself "if this cause didn't exist, would I regardless expect to experience this with equal probability?" If the answer is "yes", then it probably wasn't cause A.
This realization, in turn, leads us to
Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so.
The fact that anything can be caused by an infinite number of things explains why Bayesians are so strict about the theories they'll endorse. It isn't enough that a theory explains a phenomenon; if it can explain too many things, it isn't a good theory. Remember that if you'd expect to experience something even when your supposed cause was untrue, then that's no evidence for your cause. Likewise, if a theory can explain anything you see - if the theory allowed any possible event - then nothing you see can be evidence for the theory.
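This "explains everything" failure can be made concrete with a toy Bayesian update. The priors and likelihoods below are invented for illustration:

```python
# Posterior via Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
def posterior(prior, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A sharp theory forbids most outcomes, so observing E is strong evidence:
# starting from a 0.1 prior, the posterior jumps to 0.5.
print(posterior(0.1, 0.9, 0.1))

# A theory that "explains anything" assigns E the same probability whether
# or not the theory is true -- the observation moves the belief nowhere,
# and the posterior stays at the 0.1 prior.
print(posterior(0.1, 0.9, 0.9))
```

When the likelihoods under "theory true" and "theory false" are equal, the update cancels out exactly — which is sub-tenet 1 stated in arithmetic.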
At its heart, Bayesianism isn't anything more complex than this: a mindset that takes three core tenets fully into account. Add a sprinkle of idealism: a perfect Bayesian is someone who processes all information perfectly, and always arrives at the best conclusions that can be drawn from the data. When we talk about Bayesianism, that's the ideal we aim for.
Fully internalized, that mindset does tend to color your thought in its own, peculiar way. Once you realize that all the beliefs you have today are based - in a mechanistic, lawful fashion - on the beliefs you had yesterday, which were based on the beliefs you had last year, which were based on the beliefs you had as a child, which were based on the assumptions about the world that were embedded in your brain while you were growing in your mother's womb... it does make you question your beliefs more, and wonder whether all of those previous beliefs really corresponded to reality as closely as they could have.
And that's basically what this site is for: to help us become good Bayesians.
I was interested in your defence of the "truther" position until I saw this litany of questions. There are two main problems with your style of argument.
First, the quality of the evidence you are citing. Your standard of verification seems to be the Wikipedia standard - if you can find a "mainstream" source saying something, then you are happy to take it as fact (provided it fits your case). Anyone who has read newspaper coverage of something they know about in detail will know that, even in the absence of malice, the coverage is less than accurate, especially in a big and confusing event.
When Jack pointed out that a particular piece of evidence you cite is wrong (hijackers supposedly not appearing on the passenger list), you rather snidely reply "You win a cookie!", before conceding that it only took a bit of research to find out that the supposed "anomaly" never existed. But then, instead of considering what this means for the quality of all your other evidence, you sarcastically cite the factoid that "6 of the alleged hijackers have turned up alive" as another killer anomaly, completely ignoring the possibility of identity theft/forged passports!
If you made a good-faith attempt to verify ALL the facts you rely on (rather than jumping from one factoid to another), I'm confident you would find that most of the "anomalies" have been debunked.
Second, the way you phrase all these questions shows that, even when you're not arguing from imaginary facts, you are predisposed to believe in some kind of conspiracy theory.
For example, you seem to think it's unlikely that hijackers could take over a plane using "only box-cutters", because the pilots were "professionals" who were somehow "trained" to fight and might not have found a knife sufficiently threatening. So you think two unarmed pilots would resist ten men who had knives and had already stabbed flight attendants to show they meant business? Imagine yourself actually facing down ten fanatics with knives.
The rest of your arguments that don't rely on debunked facts are about framing perfectly reasonable trains of events in terms to make them seem unlikely - in Less Wrong terms, "privileging the hypothesis". "How likely is that no heads would roll as a consequence of this security failure?" - well, since the main failure in the official account was that agencies were "stove-piped" and not talking to each other and responsibilities were unclear, this is entirely consistent. Also, governments may be reluctant to implicitly admit that something had been preventable by firing someone straight away - see "Heckuva job, Brownie".
"How likely is it that no less than three steel-framed buildings would completely collapse from fire and mechanical damage, for the first time in history, all on the same day?" It would be amazing if they'd all collapsed from independent causes! But all you are really asking is "how likely is it that a steel-framed building will collapse when hit with a fully-fueled commercial airliner, or parts of another giant steel-framed building?" Since a comparable crash had never happened before, the "first time in history" rhetoric adds nothing to your argument.
"How likely is it that the plane flown into the Pentagon would execute a difficult hairpin turn in order to fly into the most heavily-protected side of the building?"
Well, since it was piloted by a suicidal hijacker who had been trained to fly a plane, I guess it's not unlikely that it would manoeuvre to hit the building. Perhaps a more experienced pilot, or A GOVERNMENT HOLOGRAM DRONE (which is presumably what you're getting at), would have planned an approach that didn't involve a difficult hairpin turn. And why wouldn't an evil conspiracy want the damage to the Pentagon to be spectacular and therefore aim for the least heavily protected side? Since, you know, they know it's going to happen anyway so they can avoid being in the Pentagon at all?
If the plane had manoeuvred to hit the least heavily-protected side of the building, truthers would argue that this also showed that the pilot had uncanny inside knowledge.
"How likely is it that [buildings] would ... explode straight downward?" Well, as a non-expert I would have said a priori that seems unlikely, but the structure of the towers made that failure mode the one that would happen. All you're asking is "how likely is it that the laws of physics would operate?" I'm sure there is some truther analysis disputing that, but then you're back into the realm of imaginary evidence.
"How likely is it that this would result in pools of molten steel?" How likely is it that someone observed pools of molten aluminium, or some other substance, and misinterpreted them as molten steel? After all, you've just said that the steel girders were left behind, so there is some evidence that the fire didn't get hot enough to melt (rather than weaken) steel.
I am "happy to take it as fact" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions.
The "Wikipedia standard" seems to work pretty well, though -- didn't someone do a study comparing Wikipedia'...