Related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/, http://lesswrong.com/lw/1mh/that_magical_click/, http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/
Given a claim, and assuming that its truth or falsehood would be important to you, how do you decide if it's worth investigating? How do you identify "bunk" or "crackpot" ideas?
Here are some examples to give an idea of what I mean.
"Here's a perpetual motion machine": bunk. "I've found an elementary proof of Fermat's Last Theorem": bunk. "9-11 was an inside job": bunk.
"Humans did not cause global warming": possibly bunk, but I'm not sure. "The Singularity will come within 100 years": possibly bunk, but I'm not sure. "The economic system is close to collapse": possibly bunk, but I'm not sure.
"There is a genetic difference in IQ between races": I think it's probably false, but not quite bunk. "Geoengineering would be effective in mitigating global warming": I think it's probably false, but not quite bunk.
(These are my own examples. They're meant to be illustrative, not definitive. I imagine that some people here will think "But that's obviously not bunk!" Sure, but you probably can think of some claim that *you* consider bunk.)
A few notes of clarification: I'm only examining factual claims, not normative ones. I'm also not looking at well-established claims (say, special relativity), which are obviously not bunk, nor at claims that are easy to refute by pulling up data (for example, "There are 10 people in the US population"). I'm concerned with claims that look unlikely, but not impossible.

Also, "Is this bunk?" is not the same question as "Is this true?" A hypothesis can turn out to be false without being bunk. For example, the claim that geological formations were created by gradual processes was a respectable position for 19th-century geologists to take, and a claim worth investigating, even though subsequent evidence showed it to be false. The question "Is this bunk?" arises when someone makes an unlikely-sounding claim, I don't currently have the knowledge to refute it effectively, and I want to know whether the claim is a legitimate subject of inquiry or the work of a conspiracy theorist, hoaxer, cultist, or crackpot. In other words, is it a scientific or a pseudoscientific hypothesis? Or, in practical terms, is it worth it for me or anybody else to investigate?
This is an important question, especially for this community. People involved with artificial intelligence, the Singularity, or existential risk are on the edge of the scientific mainstream, so it's particularly crucial to distinguish an interesting hypothesis from a bunk one. Distinguishing an innovator from a crackpot is vital in fields where there are both innovators and crackpots.
I claim bunk exists. That is, there are claims so cracked that they aren't worth investigating. "I was abducted by aliens" has such a low prior that I'm not even going to go check up on the details -- I'm simply going to assume the alleged alien abductee is a fraud or nut. Free speech and scientific freedom do not require us to spend resources investigating every conceivable claim. Some claims are so likely to be nonsense that, given limited resources, we can justifiably dismiss them.
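To put the "low prior" point in rough Bayesian terms (this is my own sketch, and the numbers are entirely made up, purely for illustration):

\[
P(\text{abduction}\mid\text{report}) = \frac{P(\text{report}\mid\text{abduction})\,P(\text{abduction})}{P(\text{report}\mid\text{abduction})\,P(\text{abduction}) + P(\text{report}\mid\text{no abduction})\,P(\text{no abduction})}
\]

If the prior P(abduction) is something like 10^-9, and frauds, hoaxers, and honestly mistaken people produce such reports at a rate of even 10^-4, then the posterior works out to roughly 10^-5. The report itself barely moves the needle, and the claim stays in the bunk file without any further checking.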
But how do we determine what's likely to be nonsense? "I know it when I see it" is a pretty bad guide.
First idea: check whether the proposer uses the techniques of rationality and science. Does he support claims with evidence? Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies or logical fallacies in his claim? Does he appeal to dogma or authority? If the hypothesis or its presentation shows these warning signs, it can safely be dismissed; no need to look further.
But what if there aren't such clear warning signs? Our gracious host Eliezer Yudkowsky, for example, does not display those kinds of obvious tip-offs of pseudoscience -- he doesn't ask people to take things on faith, he's very alert to fallacies in reasoning, and so on. And yet he's making an extraordinary claim (the likelihood of the Singularity), a claim I do not have the background to evaluate, but a claim that seems implausible. What now? Is this bunk?
A key thing to consider is the role of the "mainstream." When a claim is out of the mainstream, are you justified in moving it closer to the bunk file? I have three camps in mind that are outside the academic mainstream but not obviously (to me) bunk: global warming skeptics, Austrian economists, and singularitarians. As far as I can tell, the best representatives of these schools don't commit the fallacies or make the bad arguments typical of pseudoscientists. How much should we be troubled, though, by the fact that most scientists in their disciplines shun them? Perhaps it's only reasonable to give some weight to that fact.
Or is it? If all the scientists themselves are simply making their judgments based on how mainstream the outsiders are, then "mainstream" status doesn't confer any information. The reason you listen to academic scientists is that you expect that at least some of them have investigated the claim themselves. We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them. But when they do that, don't they risk being considered crackpots themselves? This is some version of "Tolerate tolerance." If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.
So the question "What is bunk?" -- that is, the question "What is likely enough to be worth investigating?" -- seems to undermine itself: you can only tell whether a claim is unlikely by doing a little investigation. In practice it's probably an iterative process: you do a little investigation, and if the claim starts to look more and more likely to be false, you can quit; if the opposite happens, the claim is probably worth even more investigation.
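One rough way to make "worth investigating" precise -- this is my own toy decision-theoretic framing, not anything established -- is to investigate when

\[
p \cdot V > C,
\]

where p is your current probability that the claim is true, V is how much it would matter to you to find out that it is, and C is the cost of checking. A little preliminary investigation is cheap (small C) and mainly serves to revise p, which is why the process above is naturally iterative: each cheap round of checking tells you whether the next, more expensive round still clears the bar.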
The thing is, we all have different thresholds for what captures our attention and motivates us to investigate further. Some people will do a quick Google search when somebody makes an extraordinary claim; some won't bother; some will go even further and do extensive research. When we check the consensus to see whether a claim is considered bunk, we're acting on the hope that somebody has a lower threshold for investigation than we do. We hope that some poor dogged sap has spent hours diligently refuting 9-11 truthers so that we don't have to. From an economic perspective, though, this is an enormous free-rider problem -- who wants to be that poor dogged sap? The hope is that somebody, somewhere in the human population is always curious enough to do at least a little preliminary investigation. We should thank the poor dogged saps of the world. We should create more incentives to be one. Because if we don't have enough of them, we're going to be badly mistaken when we think, "Well, this wasn't important enough for anyone to investigate, so it must be bunk."
(N.B. I am aware that many climate scientists are being "poor dogged saps" by communicating with and attempting to refute global warming skeptics. I don't know whether there are economists who bother trying to refute Austrian economics, or electrical engineers and computer scientists who spend time being Singularity skeptics.)
SarahC:
An important point here is that the intellectual standards of the academic mainstream differ greatly between various fields. Thus, depending on the area we're talking about, the fact that a view is out of the mainstream may imply that it's bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.
From my own observations of research literature in various fields and the way academia operates, I have concluded that healthy areas where the mainstream employs very high intellectual standards of rigor, honesty, and judicious open-mindedness are normally characterized by two conditions:
(1) There is lots of low-hanging fruit available, in the sense of research goals that are both interesting and doable, so that there are clear paths to quality work, which makes it unnecessary to invent bullshit instead.
(2) There are no incentives to invent bullshit for political or ideological reasons.
As soon as either of these conditions fails in an academic area, the mainstream will become infested with worthless bullshit work to at least some degree. For example, condition (2) holds for theoretical physics, but in many of its subfields condition (1) no longer does. Thus we get things like the Bogdanoff affair and the string theory wars -- regardless of who (if anyone) is right in these controversies, it's obvious that some bullshit work has infiltrated the mainstream. Nevertheless, the scenario where condition (1) fails but (2) holds is relatively benign, and such areas typically remain basically sound despite the partial infestation.
The real trouble starts when condition (2) doesn't hold. Even if (1) still holds, the field will be in hopeless confusion, where it's hardly possible to separate bullshit from quality work. For example, in the fields that involve human sociobiology and behavioral genetics, particularly those that touch on the IQ controversies, there are tons of interesting studies waiting to be done. Yet because of ideological pressures and prejudices -- both individual and institutional -- bullshit work multiplies without end. (Again, regardless of whom you support in these controversies, at least one side must be bullshitting.) Thus, on the whole, condition (2) is even more critical than (1).
When neither (1) nor (2) holds in some academic field, it tends to become almost pure bullshit. Macroeconomics is the prime example.
Very articulate comment; it helped clarify my thinking on this topic. Thanks.