In the art of rationality there is a discipline of closeness-to-the-issue—trying to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.
The Wright Brothers say, “My plane will fly.” If you look at their authority (bicycle mechanics who happen to be excellent amateur physicists) then you will compare their authority to, say, Lord Kelvin, and you will find that Lord Kelvin is the greater authority.
If you demand to see the Wright Brothers’ calculations, and you can follow them, and you demand to see Lord Kelvin’s calculations (he probably doesn’t have any apart from his own incredulity), then authority becomes much less relevant.
If you actually watch the plane fly, the calculations themselves become moot for many purposes, and Kelvin’s authority not even worth considering.
The more directly your arguments bear on a question, without intermediate inferences—the closer the observed nodes are to the queried node, in the Great Web of Causality—the more powerful the evidence. It’s a theorem of these causal graphs that you can never get more information from distant nodes than from strictly closer nodes that screen off the distant ones.
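The screening-off theorem can be checked by hand on a toy chain Truth → Argument → Expert belief. The probabilities below are invented for illustration; the structure, not the numbers, is the point. Once the argument itself is observed, the expert's belief carries no further information about the truth:

```python
from itertools import product

# Toy chain: Truth -> Argument -> Expert belief.
# All probabilities are made up for illustration.
P_T1 = 0.5                      # prior P(Truth = 1)
P_A1 = {1: 0.9, 0: 0.2}         # P(Argument holds | Truth)
P_E1 = {1: 0.8, 0: 0.3}         # P(Expert endorses | Argument)

def joint(t, a, e):
    """Joint probability of one assignment to (Truth, Argument, Expert)."""
    p = P_T1 if t else 1 - P_T1
    p *= P_A1[t] if a else 1 - P_A1[t]
    p *= P_E1[a] if e else 1 - P_E1[a]
    return p

def posterior_truth(**obs):
    """P(Truth = 1 | observed values of a and/or e), by brute-force enumeration."""
    num = den = 0.0
    for t, a, e in product((0, 1), repeat=3):
        world = {"a": a, "e": e}
        if any(world[k] != v for k, v in obs.items()):
            continue
        p = joint(t, a, e)
        den += p
        num += p if t == 1 else 0.0
    return num / den

# The expert's belief is informative on its own...
print(posterior_truth(e=1))
# ...but once the argument is observed, it adds nothing at all:
print(posterior_truth(a=1))
print(posterior_truth(a=1, e=1))  # identical to the line above
print(posterior_truth(a=1, e=0))  # identical to the line above
```

Conditioning on the Argument node makes the Expert node irrelevant because every path from Expert back to Truth runs through Argument; that is exactly what "strictly closer nodes screen off distant ones" means.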
Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.”1
Just as it is superior to argue physics than credentials, it is also superior to argue physics than rationality. Who was more rational, the Wright Brothers or Lord Kelvin? If we can check their calculations, we don’t have to care! The virtue of a rationalist cannot directly cause a plane to fly.
If you forget this principle, learning about more biases will hurt you, because it will distract you from more direct arguments. It’s all too easy to argue that someone is exhibiting Bias #182 in your repertoire of fully generic accusations, but you can’t settle a factual issue without closer evidence. If there are biased reasons to say the Sun is shining, that doesn’t make it dark out.
Just as you can’t always experiment today, you can’t always check the calculations today.2 Sometimes you don’t know enough background material, sometimes there’s private information, sometimes there just isn’t time. There’s a sadly large number of times when it’s worthwhile to judge the speaker’s rationality. You should always do it with a hollow feeling in your heart, though, a sense that something’s missing.
Whenever you can, dance as near to the original question as possible—press yourself up against it—get close enough to hug the query!
1Jerry Cleaver, Immediate Fiction: A Complete Writing Course (Macmillan, 2004).
2See also “Is Molecular Nanotechnology ‘Scientific’?” http://lesswrong.com/lw/io/is_molecular_nanotechnology_scientific.
Eliezer, where do your strong claims about the causal structure of scientific discourse come from?
I consider them obvious first-order approximations, especially to the normative structure. Does an authoritative expert cause a hypothesis to become true, so that we could surgically intervene on the truth of a physical theory by giving its adherents more authority? Clearly not. Does an authoritative expert cause the "arguments" to become stronger? Framed normatively, the answer is clearly no. If we talk about perceived arguments, then a good expert makes us perceive the arguments as stronger, but that is backward inference, not causation: like saying that a slippery sidewalk causes us to think it is raining, without causing it to rain.
Since I am discussing what we should pay attention to, not what we do pay attention to, it makes sense to discuss the normative causal structure.
Do you have an alternative suggestion? Clearly there are many things besides valid arguments that influence expert opinion; we can coalesce these into a Noise node and a Bias node, representing the invalid influences that we think we can't predict and those that we think we can systematically predict, respectively:
Truth -> Argument -> Expert Belief <- Noise, Bias
This gives us obvious inferences like "If you know the experts will be biased, but you don't understand their arguments apart from authority, you will be less certain of the truth" and "Surgical interventions on bias and on expert belief cannot make a proposition true, or change which non-authoritative propositions are arguments in favor of it".
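The first of those inferences can be sketched numerically. In the toy model below (all numbers invented for illustration), a biased expert endorses the proposition regardless of the argument, while an unbiased expert tracks the argument. Raising the probability of bias pulls the posterior from the expert's endorsement back toward the prior, without flipping its direction:

```python
from itertools import product

# Graph: Truth -> Argument -> Expert <- Bias, with illustrative numbers.
P_A1 = {1: 0.9, 0: 0.2}          # P(Argument holds | Truth)

def posterior_truth_given_endorsement(bias_prob):
    """P(Truth = 1 | Expert endorses), where a biased expert endorses
    unconditionally and an unbiased one tracks the argument (0.9 / 0.1)."""
    num = den = 0.0
    for t, a, b in product((0, 1), repeat=3):
        p = 0.5                                   # prior P(Truth = 1)
        p *= P_A1[t] if a else 1 - P_A1[t]        # P(Argument | Truth)
        p *= bias_prob if b else 1 - bias_prob    # P(Bias)
        p *= 1.0 if b else (0.9 if a else 0.1)    # P(Expert endorses | a, b)
        den += p
        num += p if t == 1 else 0.0
    return num / den

unbiased = posterior_truth_given_endorsement(0.0)
suspect = posterior_truth_given_endorsement(0.5)
# Known bias weakens the inference from authority: the posterior
# slides back toward the 0.5 prior, leaving us less certain of the truth.
print(unbiased, suspect)
```

And the interventional claim falls out of the graph's direction: surgically setting Expert Belief (or Bias) cuts the incoming arrows without touching Truth, so P(Truth) stays at its prior under any such intervention.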
You probably have that directional causal structure represented in your mind, which makes the above inferences seem plausible; I just wrote it out.