This article is an attempt to summarize basic material, and thus probably won't have anything new for the hard core posting crowd. It'd be interesting to know whether you think there's anything essential I missed, though.
You've probably seen the word 'Bayesian' used a lot on this site, but may be a bit uncertain of what exactly we mean by that. You may have read the intuitive explanation, but that only seems to explain a certain math formula. There's a wiki entry about "Bayesian", but that doesn't help much. And the LW usage seems different from just the "Bayesian and frequentist statistics" thing, too. As far as I can tell, there's no article explicitly defining what's meant by Bayesianism. The core ideas are sprinkled across a large number of posts, and 'Bayesian' has its own tag, but there's not a single post that explicitly makes the connections and says "this is Bayesianism". So let me try to offer my definition, which boils Bayesianism down to three core tenets.
We'll start with a brief example illustrating Bayes' theorem. Suppose you are a doctor, and a patient comes to you complaining about a headache. Further suppose that there are only two reasons why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?
If you thought a cold was more likely, well, that was the answer I was after. Even if a brain tumor caused a headache every time, and a cold caused a headache only one per cent of the time (say), having a cold is so much more common that it's going to cause a lot more headaches than brain tumors do. Bayes' theorem, basically, says that if cause A might be the reason for symptom X, then we have to take into account both the probability that A caused X (found, roughly, by multiplying the frequency of A by the chance that A causes X) and the probability that anything else caused X. (For a thorough mathematical treatment of Bayes' theorem, see Eliezer's Intuitive Explanation.)
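To make the arithmetic concrete, here's a minimal sketch in Python. The base rates are made-up, illustrative numbers (not real medical statistics), chosen only to show how a rare-but-reliable cause loses out to a common-but-unreliable one:

```python
# Illustrative sketch of the headache example. All numbers are assumptions
# picked for the sake of the example, not real medical statistics.

p_tumor = 0.0001                 # assumed prior: 1 in 10,000 people has a brain tumor
p_cold = 0.2                     # assumed prior: 1 in 5 people currently has a cold
p_headache_given_tumor = 1.0     # a tumor always causes a headache (per the example)
p_headache_given_cold = 0.01     # a cold causes a headache 1% of the time (per the example)

# Probability of each cause occurring *and* producing a headache.
joint_tumor = p_tumor * p_headache_given_tumor   # 0.0001
joint_cold = p_cold * p_headache_given_cold      # 0.002

# Bayes' theorem: divide each joint probability by the total probability
# of a headache from either cause (pretending these are the only two causes).
total = joint_tumor + joint_cold
print(f"P(tumor | headache) = {joint_tumor / total:.3f}")   # ~0.048
print(f"P(cold  | headache) = {joint_cold / total:.3f}")    # ~0.952
```

Despite the tumor being a perfectly reliable cause of headaches, the cold's much higher base rate makes it the far better bet.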
There should be nothing surprising about that, of course. Suppose you're outside, and you see a person running. They might be running for the sake of exercise, or they might be running because they're in a hurry somewhere, or they might even be running because it's cold and they want to stay warm. To figure out which one is the case, you'll try to consider which of the explanations is true most often, and fits the circumstances best.
Core tenet 1: Any given observation has many different possible causes.
Acknowledging this, however, leads to a somewhat less intuitive realization. For any given observation, how you should interpret it always depends on previous information. Simply seeing that the person was running wasn't enough to tell you that they were in a hurry, or that they were getting some exercise. Or suppose you had to choose between two competing scientific theories about the motion of planets: a theory about the laws of physics governing the motion of planets, devised by Sir Isaac Newton, or a theory simply stating that the Flying Spaghetti Monster pushes the planets forwards with His Noodly Appendage. If both of these theories made the same predictions, you'd have to depend on your prior knowledge - your prior, for short - to judge which one was more likely. And even if they didn't make the same predictions, you'd need some prior knowledge that told you which of the predictions were better, or that the predictions matter in the first place (as opposed to, say, theoretical elegance).
Or take the debate we had on 9/11 conspiracy theories. Some people thought that unexplained and otherwise suspicious things in the official account had to mean that it was a government conspiracy. Others considered their prior for "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt", judged that to be overwhelmingly unlikely, and thought it far more probable that something else caused the suspicious things.
Again, this might seem obvious. But there are many well-known instances in which people forget to apply this information. Take supernatural phenomena: yes, if there were spirits or gods influencing our world, some of the things people experience would certainly be the kinds of things that supernatural beings cause. But there are also countless mundane explanations, from coincidences to mental disorders to an overactive imagination, that could cause those same experiences. Most of the time, postulating a supernatural explanation shouldn't even occur to you, because the mundane causes already have lots of evidence in their favor and supernatural causes have none.
Core tenet 2: How we interpret any event, and the new information we get from anything, depends on information we already had.
Sub-tenet 1: If you experience something that you think could only be caused by cause A, ask yourself "if this cause didn't exist, would I regardless expect to experience this with equal probability?" If the answer is "yes", then it probably wasn't cause A.
This realization, in turn, leads us to
Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so.
The fact that any given observation can be caused by an infinite number of things explains why Bayesians are so strict about the theories they'll endorse. It isn't enough that a theory explains a phenomenon; if it can explain too many things, it isn't a good theory. Remember that if you'd expect to experience something even when your supposed cause was untrue, then that's no evidence for your cause. Likewise, if a theory can explain anything you see - if the theory allows any possible event - then nothing you see can be evidence for the theory.
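One way to see this is with the probability laws from the third tenet. In the sketch below (all numbers invented purely for illustration), a theory that assigns the same probability to an observation whether or not the theory is true leaves your belief exactly where it started:

```python
# Rough sketch of why a theory that "explains everything" gains nothing from
# an observation. All numbers are illustrative assumptions.

def posterior(prior, p_obs_if_true, p_obs_if_false):
    """Update a prior belief in a theory after seeing an observation."""
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

prior = 0.5

# A sharp theory: it strongly predicted the observation, which would have been
# unlikely otherwise. The belief moves a lot.
print(posterior(prior, 0.9, 0.1))   # 0.9

# An "explains anything" theory: the observation was equally likely whether the
# theory is true or false. The belief doesn't move at all.
print(posterior(prior, 0.5, 0.5))   # 0.5
```

The update depends entirely on the ratio between "how likely is this if the theory is true" and "how likely is this otherwise"; when that ratio is one, you learn nothing.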
At its heart, Bayesianism isn't anything more complex than this: a mindset that takes three core tenets fully into account. Add a sprinkle of idealism: a perfect Bayesian is someone who processes all information perfectly, and always arrives at the best conclusions that can be drawn from the data. When we talk about Bayesianism, that's the ideal we aim for.
Fully internalized, that mindset does tend to color your thought in its own, peculiar way. Once you realize that all the beliefs you have today are based - in a mechanistic, lawful fashion - on the beliefs you had yesterday, which were based on the beliefs you had last year, which were based on the beliefs you had as a child, which were based on the assumptions about the world that were embedded in your brain while you were growing in your mother's womb... it does make you question your beliefs more, and wonder whether all of those previous beliefs really corresponded maximally to reality.
And that's basically what this site is for: to help us become good Bayesians.
So your standard for accepting something as evidence is "a 'mainstream source' asserted it and I haven't seen someone contradict it". That seems like setting the bar quite low - especially since your claim about the hijackers not being on the passenger manifest was quickly debunked by simple googling (or at least contradicted, which, by your own standard, should prompt you to abandon the belief and look for more authoritative information). Maybe you should, at minimum, try googling all your beliefs and seeing if there is some contradictory information out there.
I suggest that a better way to convey that might have been "Sorry, I was wrong" rather than "You win a cookie!" When I am making a sincere apology, I find that the phrase "You win a cookie!" can often be misconstrued.
A box-cutter is a kind of sharp knife. A determined person with a sharp knife can kill you. An 11-year-old girl can inflict fatal injuries with a box-cutter - do you really think that five burly fanatics couldn't do the same to one adult? All the paragraph above establishes is that you - and maybe some licensed pilots - have an underdeveloped sense of the danger posed by knives.
I propose an experiment - you and a friend can prepare for a year, then I and nine heavyset friends will come at you with box-cutters (you will be unarmed). If we can't make you stop laughing off our attack, then I'll concede you are right. Deal?
Let's go into more details with this "plane manoeuvre" thing.
Well, what we should really ask is "given that a plane made a difficult manoeuvre to hit the better-protected side of the Pentagon, how much more likely does that make a conspiracy than other possible explanations?"
Here are some possible explanations of the observed event:
The hijacker aimed at the less defended side, overshot, made a desperate turn back and got lucky.
The hijacker wanted to fake out possible air defences, so had planned a sudden turn which he had rehearsed dozens of times in Microsoft Flight Simulator. Coincidentally, the side he crashed into was better protected.
The hijacker was originally tasked to hit a different landmark, got lost, spotted the Pentagon, made a risky turn and got lucky. Coincidentally, the side he crashed into was better protected.
A conspiracy took control of four airliners. The plan was to crash two of them into the WTC, killing thousands of civilians, one into a field, and one into the Pentagon. The conspirators decided that hitting part of the Pentagon that hadn't yet been renovated with sprinklers and steel bars was going a bit too far, so they made the relevant plane do a drastic manoeuvre to hit the best-protected side. There was an unspecified reason they didn't just approach from the best-protected side to start with.
A conspiracy aimed to hit the less defended side of the Pentagon, but a bug in the remote override software caused the plane to hit the most defended side.
etc.
Putting the rest of the truther evidence aside, do the conspiracy explanations stand out as more likely than the non-conspiracy explanations?
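To make that comparison explicit, here is a toy version of the weighing, in the same spirit as the sketches in the post above. Every prior and likelihood below is a placeholder for the reader's own judgment, not a real estimate:

```python
# Toy Bayesian comparison of the explanations listed above. Every number is a
# made-up placeholder; plug in your own estimates.

# (prior plausibility of the explanation,
#  probability that it produces the observed manoeuvre into the better-protected side)
explanations = {
    "hijacker overshot, desperate turn, got lucky":          (0.30, 0.10),
    "hijacker faked out defences, lucky about the side":     (0.20, 0.05),
    "hijacker got lost, risky turn, lucky about the side":   (0.10, 0.05),
    "conspiracy deliberately spared the renovated section":  (0.001, 0.50),
    "conspiracy software bug hit the wrong side":            (0.001, 0.10),
}

# Each explanation's posterior weight is proportional to prior * likelihood.
weights = {name: prior * lik for name, (prior, lik) in explanations.items()}
total = sum(weights.values())
for name, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {weight / total:.3f}")
```

With placeholder numbers anything like these, the conspiracy explanations' better fit to the manoeuvre is swamped by their tiny priors - which is exactly the question the priors are there to settle.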
Well, in this thread alone, you have seen Jack knock down one of your arguments (hijackers not on manifest) to your own satisfaction. And yet you already seem to have forgotten that. Since you've already conceded a point, it's not true that the only opposition is "straw-man attacks and curiosity-stoppers". Do you think my point about alternate Pentagon scenarios is a straw man or a curiosity stopper? Is it possible that anyone arguing against you is playing whack-a-mole, and once they debunk argument A you will introduce unrelated argument B, and once they debunk that you will bring up argument C, and then once they debunk that you will retreat back to A again?
There's a third problem here - the truthers as a whole aren't arguing for a single coherent account of what really happened. True, you have outlined a detailed position (which has already changed during this thread because someone was able to use Google and consequently win a cookie), but you are actually defending the far fuzzier proposition that truthers have "some very good arguments which deserve serious consideration". This puts the burden on the debunkers, because even if someone shows that one argument is wrong, that doesn't preclude the existence of some good arguments somewhere out there. It also frees up truthers to pile on as many "anomalies" as possible, even if these are contradictory.
For example, you assert that it's suspicious that the buildings were "completely pulverized", and also that it's suspicious that some physical evidence - the passports - survived the collapse of the buildings. (And this level of suspicion is based purely on your intuition about some very extreme physical events which are outside of everyday experience. Maybe it's completely normal for small objects to be ejected intact from airliners which hit skyscrapers - have you done simulations or experiments which show otherwise?)
Anyway, this is all off-topic. I think you should do a post where you outline the top three truther arguments which deserve serious consideration.