Related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/, http://lesswrong.com/lw/1mh/that_magical_click/, http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/
Given a claim, and assuming that its truth or falsehood would be important to you, how do you decide if it's worth investigating? How do you identify "bunk" or "crackpot" ideas?
Here are some examples to give an idea.
"Here's a perpetual motion machine": bunk. "I've found an elementary proof of Fermat's Last Theorem": bunk. "9-11 was an inside job": bunk.
"Humans did not cause global warming": possibly bunk, but I'm not sure. "The Singularity will come within 100 years": possibly bunk, but I'm not sure. "The economic system is close to collapse": possibly bunk, but I'm not sure.
"There is a genetic difference in IQ between races": I think it's probably false, but not quite bunk. "Geoengineering would be effective in mitigating global warming": I think it's probably false, but not quite bunk.
(These are my own examples. They're meant to be illustrative, not definitive. I imagine that some people here will think "But that's obviously not bunk!" Sure, but you probably can think of some claim that *you* consider bunk.)
A few notes of clarification: I'm only examining factual, not normative, claims. I'm also not looking at well-established claims (say, special relativity), which are obviously not bunk, nor at claims where it's easy to pull data that obviously refutes them (for example, "There are 10 people in the US population"). I'm concerned with claims that look unlikely, but not impossible. Also, "Is this bunk?" is not the same question as "Is this true?" A hypothesis can turn out to be false without being bunk (for example, the claim that geological formations were created by gradual processes -- a respectable position for 19th-century geologists to take, and a claim worth investigating, even if subsequent evidence did show it to be false). The question "Is this bunk?" arises when someone makes an unlikely-sounding claim that I don't actually have the knowledge to refute right now, and I want to know whether the claim is a legitimate subject of inquiry or the work of a conspiracy theorist, hoaxer, cultist, or crackpot. In other words, is it a scientific or a pseudoscientific hypothesis? Or, in practical terms, is it worth it for me or anybody else to investigate it?
This is an important question, especially for this community. People involved in artificial intelligence, the Singularity, or existential risk are on the edge of the scientific mainstream, and it's particularly crucial for them to distinguish an interesting hypothesis from a bunk one. Distinguishing an innovator from a crackpot is vital in fields where there are both innovators and crackpots.
I claim bunk exists. That is, there are claims so cracked that they aren't worth investigating. "I was abducted by aliens" has such a low prior that I'm not even going to go check up on the details -- I'm simply going to assume the alleged alien abductee is a fraud or nut. Free speech and scientific freedom do not require us to spend resources investigating every conceivable claim. Some claims are so likely to be nonsense that, given limited resources, we can justifiably dismiss them.
But how do we determine what's likely to be nonsense? "I know it when I see it" is a pretty bad guide.
First idea: check if the proposer uses the techniques of rationality and science. Does he support claims with evidence? Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim? Does he appeal to dogma or authority? If there are features in the hypothesis itself that mark it as pseudoscience, then it's safely dismissed; no need to look further.
But what if there aren't such clear warning signs? Our gracious host Eliezer Yudkowsky, for example, does not display those kinds of obvious tip-offs of pseudoscience -- he doesn't ask people to take things on faith, he's very alert to fallacies in reasoning, and so on. And yet he's making an extraordinary claim (the likelihood of the Singularity), a claim I do not have the background to evaluate, but a claim that seems implausible. What now? Is this bunk?
A key thing to consider is the role of the "mainstream." When a claim is out of the mainstream, are you justified in moving it closer to the bunk file? There are three camps I have in mind that are outside the academic mainstream but not obviously (to me) bunk: global warming skeptics, Austrian economists, and singularitarians. As far as I can tell, the best representatives of these schools don't commit the kinds of fallacies or make the kinds of bad arguments typical of pseudoscientists. How much should we be troubled, though, by the fact that most scientists in their disciplines shun them? Perhaps it's only reasonable to give some weight to that fact.
Or is it? If all the scientists themselves are simply making their judgments based on how mainstream the outsiders are, then "mainstream" status doesn't confer any information. The reason you listen to academic scientists is that you expect that at least some of them have investigated the claim themselves. We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them. But when they do that, don't they risk being considered crackpots themselves? This is some version of "Tolerate tolerance." If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.
So the question "What is bunk?" -- that is, "What is likely enough to be worth investigating?" -- apparently destroys itself. You can only tell whether a claim is unlikely by doing a little investigation. It's probably an iterative process: do a little investigation, and if the claim starts looking more and more like it's false, you can quit, but if it's the opposite, then the claim is probably worth even more investigation.
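To make that iterative picture concrete, here is a minimal sketch (my own illustration, not the post's; the prior, likelihoods, and stopping thresholds are invented): treat each small piece of investigation as a Bayesian update, quit once the claim looks clearly bunk, and escalate once it looks clearly worth serious study.

```python
# A toy version of the "investigate a little, then decide whether to keep going"
# loop described above. All numbers are illustrative assumptions.

def update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """One Bayesian update of the claim's probability on a single observation."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

def investigate(prior, evidence, give_up_below=0.01, escalate_above=0.95):
    """Sequentially update on evidence; stop early if the claim looks like bunk
    or looks worth much more serious investigation."""
    p = prior
    for p_obs_if_true, p_obs_if_false in evidence:
        p = update(p, p_obs_if_true, p_obs_if_false)
        if p < give_up_below:
            return f"quit: probably bunk (p = {p:.3f})"
        if p > escalate_above:
            return f"escalate: worth serious investigation (p = {p:.3f})"
    return f"still uncertain after the cheap checks (p = {p:.3f})"

# Each pair is (P(observation | claim true), P(observation | claim false)).
print(investigate(prior=0.05, evidence=[(0.9, 0.3), (0.8, 0.2), (0.9, 0.1)]))
```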
The thing is, we all have different thresholds for what captures our attention and motivates us to investigate further. Some people are willing to do a quick Google search when somebody makes an extraordinary claim; some won't bother; some will go even further and do extensive research. When we check the consensus to see if a claim is considered bunk, we're acting on the hope that somebody has a lower threshold for investigation than we do. We hope that some poor dogged sap has spent hours diligently refuting 9-11 truthers so that we don't have to. From an economic perspective, this is an enormous free-rider problem, though -- who wants to be that poor dogged sap? The hope is that somebody, somewhere, in the human population is always inquiring enough to do at least a little preliminary investigation. We should thank the poor dogged saps of the world. We should create more incentives to be a poor dogged sap. Because if we don't have enough of them, we're going to be very mistaken when we think "Well, this wasn't important enough for anyone to investigate, so it must be bunk."
(N.B. I am aware that many climate scientists are being "poor dogged saps" by communicating with and attempting to refute global warming skeptics. I don't know whether there are economists who bother trying to refute Austrian economics, or electrical engineers and computer scientists who spend time being Singularity skeptics.)
Liked the post. One of the two big questions* it's poking at is 'how does one judge a hypothesis without researching it?' To do that, one has to come up with heuristics for judging some hypothesis H that correlate well enough with correctness to work as a substitute for actual research. The post already suggests a few (does the proposer support claims with evidence, share data, avoid logical fallacies, avoid appeals to dogma or authority?); I'll add a few more:
If H is a physical or mathematical hypothesis, try to find a quantitative statement of it. If there isn't one, watch out: crackpots are sometimes too busy trying to overthrow a consensus to make sure the math actually works.
Suppose some event is already expected to occur as an implication of a well-established theory. If H is meant to be a novel explanation for that event, H not only has to explain the event, it also has to explain why the well-established theory doesn't actually entail the event.
Can H's fans/haters discuss H without injecting their politics? It doesn't really matter if they sometimes mention their politics around H, but if they can't resist the temptation to growl about 'fascists' or 'political correctness' or 'Marxists' or whatever every time they discuss H, watch out. (Unless H is a hypothesis about fascism, political correctness or Marxism or whatever, obviously.)
If arguments about H consistently turn into arguments about who should bear the burden of proof, there's probably too little evidence to prove H either way.
Hypotheses that implicitly assume current trends will continue or accelerate arbitrarily far into the future should be handled with care. (An exercise I like doing occasionally is taking some time series data that someone's fitted an exponential for and fitting an S-curve instead; there's a rough sketch of this after the list.)
If H is based on a small selection from many available data points, is there a rationale for that selection?
Looking at the credentials of people discussing H is a quick and dirty rule of thumb, but it's better than nothing.
Does whoever's talking about H get the right answer on questions with clearer answers? Someone who thinks vaccines, fluoride in the drinking water and FEMA are all part of the NWO conspiracy is probably a poor judge of whether 9/11 was an inside job.
How sloppily is the case for (or against) H made? (E.g. do a lot of the citations fail to match references? Are there citations or links to evidence in the first place? Is the author calling a trend on a log-linear graph 'exponential growth' when it's clearly not a straight line? Do they misspell words like 'exponential'?)
Are possible shortcomings in H and/or the evidence for H acknowledged? If someone thinks the case for/against H is open and shut, but I'm really not sure, something isn't right.
And Daniel Davies helpfully points out that lying (whether in the form of consistent lies about H itself, or H's supporters/skeptics simply being known liars) can be an informative warning sign.
* The second question being 'do we have enough people researching obscure hypotheses and if not, how do we fix that?' I don't know how to start answering that one yet.
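Here is a rough sketch of the exponential-vs-S-curve exercise mentioned in the list above (my own illustration in Python with NumPy/SciPy; the synthetic data, starting guesses, and bounds are made-up choices): fit both models to the same noisy series and compare how they extrapolate beyond the observed range.

```python
# Fit an exponential and a logistic ("S-curve") to the same synthetic data,
# then extrapolate both past the data to see how sharply the forecasts diverge.
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, a, k):
    return a * np.exp(k * t)

def logistic(t, L, k, t0):
    return L / (1 + np.exp(-k * (t - t0)))

# Synthetic "trend" data: an S-curve plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = logistic(t, 100, 1.0, 6.0) + rng.normal(0, 2, t.size)

# Fit both models to the observed range.
exp_params, _ = curve_fit(exponential, t, y, p0=[1.0, 0.5],
                          bounds=([0, 0], [1000, 2]))
log_params, _ = curve_fit(logistic, t, y, p0=[y.max(), 1.0, np.median(t)])

# The two fits look similar over the data but extrapolate very differently.
t_future = 15.0
print("exponential forecast at t = 15:", exponential(t_future, *exp_params))
print("logistic forecast at t = 15:   ", logistic(t_future, *log_params))
```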
This isn't the actual epistemic situation. The usual measure of the magnitude of CO2-induced warming is "climate sensitivity" - increase in temperature per doubling of CO2 - and its consensus value is 3 degrees. But the physically calculable...
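For reference (my addition, not part of the truncated comment above), the way "degrees per doubling" enters the arithmetic is the logarithmic relation: warming ≈ S × log2(C/C0). A tiny illustration with rough, commonly quoted concentration figures:

```python
# Illustrative arithmetic only: S is the consensus sensitivity cited above;
# the CO2 concentrations are approximate, commonly quoted values.
import math

S = 3.0      # degrees C of warming per doubling of CO2
C0 = 280.0   # approximate pre-industrial CO2 concentration, ppm
C = 420.0    # approximate recent CO2 concentration, ppm

print(f"implied equilibrium warming: {S * math.log2(C / C0):.2f} C")  # about 1.75 C
```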