SarahC:
A key thing to consider is the role of the "mainstream." When a claim is out of the mainstream, are you justified in moving it closer to the bunk file?
An important point here is that the intellectual standards of the academic mainstream differ greatly between various fields. Thus, depending on the area we're talking about, the fact that a view is out of the mainstream may imply that it's bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.
From my own observations of research literature in various fields and the way academia operates, I have concluded that healthy areas where the mainstream employs very high intellectual standards of rigor, honesty, and judicious open-mindedness are normally characterized by two conditions:
(1) There is lots of low-hanging fruit available, in the sense of research goals that are both interesting and doable, so that there are clear paths to quality work, which makes it unnecessary to invent bullshit instead.
(2) There are no incentives to invent bullshit for political or ideological reasons.
As soon as either of these conditions doesn't hold in an academic area, th...
SarahC:
There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians.
So, to apply my above criteria to these cases:
Climate science is politicized to an extreme degree and plagued by vast methodological difficulties. (Just think about the difficulty of measuring global annual average temperature with 0.1C accuracy even in the present, let alone reconstructing it far into the past.) Thus, I'd expect a very high level of bullshit infestation in its mainstream, so critics scorned by the mainstream should definitely not be dismissed out of hand.
Ditto for mainstream vs. Austrian macroeconomics; in fact, even more so. If you look at the blogs of prominent macroeconomists, you'll see lots of ideologically motivated mutual scorn and abuse even within the respectable mainstream. Austrians basically call bullshit on the entire mainstream, saying that the whole idea of trying to study economic aggregates by aping physics is a fundamentally unsound cargo-cult approach, so they're hated by everyone. While Austrians have their own dubious (and sometimes obvious
For me the primary evidence of a bunk claim is when the claimant fails to deal reasonably with the mainstream. Let's take the creation/evolution debate. If someone comes along claiming a creationist position but is completely unable even to describe what the evolutionary position is, or what might be good about it, then their idea is bunk. If someone is very good at explaining evolution as it really happens, but then goes on to claim that something different can happen as well - then it becomes interesting.
Anyone proposing an alternative idea needs to know precisely what it is an alternative to - otherwise they haven't done their homework, and it isn't worth my time.
Yes! This is a key point in the Alternative-Science Respectability Checklist, for example:
Someone comes along and says “I’ve discovered that there’s no need for dark matter.” A brief glance at the abstract reveals that the model violates our understanding of perturbation theory. Well, perhaps there is something subtle going on here, and our conventional understanding of perturbation theory doesn’t apply in this case. So here’s what any working theoretical cosmologist would do (even if they aren’t consciously aware that they’re doing it): they would glance at the introduction to the paper, looking for a paragraph that says “Look, we know this isn’t what you would expect from elementary perturbation theory, but here’s why that doesn’t apply in this case.” Upon not finding that paragraph, they would toss the paper away.
Note that when you consider a claim, you shouldn't set out to prove it false, or to prove it true. You should set out to find a correct conclusion about the claim, the truth about it. Not being skeptical is a particular failure mode that makes experts whom you suspect of having this flaw an inappropriate source of knowledge about the claim. "Skepticism" is a similarly flawed mode of investigation.
So, the question shouldn't be, "Who is qualified to refute the Friendly AI idea?", but "Who is qualified to reveal the truth about the Friendly AI idea?".
It should be an established standard to link to the previous posts on the same topic. This is necessary to actually build upon existing work, and not just create blogging buzz. In this case, the obvious reference is The Correct Contrarian Cluster, and also probably That Magical Click and Reason as memetic immune disorder.
By the way, I have spent quite a long time trying to "debunk" the set of ideas around Friendly AI and the Singularity, and my conclusion is that there's simply no reasonable mainstream disagreement with that somewhat radical hypothesis. Why is FAI/Singularity not mainstream? Because the mainstream of science doesn't have to publicly endorse every idea it cannot refute. There is no "court of crackpot appeal" where a correct contrarian can go to once and for all show that their problem/idea is legit. Academia can basically say "fuck off, we don't like you or your idea, you won't get a job at a university unless you work on something we like".
Now such ability to arbitrarily tell people to get lost is useful because there are so many crackpots around, and they are really annoying. But it is a very simple and crude filter, akin to cutting your internet connection to prevent spam email. Just losing Eliezer and Nick Bostrom's insight about friendly AI may cost academia more than all the crackpots put together could ever have cost.
Robin Hanson's way around this was to expend a significant fraction of his life getting tenure, and now they can't sack him, but that doesn't mean that mainstream consensus will update to his correct contrarian position on the singularity; they can just press the "ignore" button.
My social intuitions tell me it is generally a bad idea to say words like 'kill' (as opposed to, say, 'overwrite', 'fatally reorganize', or 'dismantle for spare part(icle)s') in describing scenarios like that, as they resemble some people's misguided intuitions about anthropomorphic skynet dystopias. On Less Wrong it matters less, but if one was trying to convince an e.g. non-singularitarian transhumanist that singularitarian ideas were important, then subtle language cues like that could have big effects on your apparent theoretical leaning and the outcome of the conversation. (This is more of a general heuristic than a critique of your comment, Roko.)
I think it's worth emphasizing that ideas aren't "worth investigating" or "not worth investigating" in themselves; different people will have different opportunities to investigate things at different costs, and will have different info and care about the answers to different degrees.
This is the bunk-detection strategy on TakeOnIt:
Examples that you alluded to in your post (I threw in cryonics because that's a contrarian issue often brought up on LW):
Global Warming
Cryonics
Climate Engineering
9-11 Conspiracy Theory
Singularity
In addition, TakeOnIt will actually predict what you should believe using collaborative filtering. The way it works, is th...
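I don't know the details of TakeOnIt's actual algorithm, but here's a minimal sketch of the general collaborative-filtering idea, assuming a simple "weight experts by how much they agree with you" approach; the experts, issues, and numbers are all made up for illustration:

```python
# Minimal collaborative-filtering sketch (hypothetical data, not TakeOnIt's real method).
# Opinions are encoded as +1 (agree), -1 (disagree); absent means no stated position.

experts = {
    "Expert A": {"global_warming": +1, "cryonics": -1, "singularity": -1},
    "Expert B": {"global_warming": +1, "cryonics": +1, "singularity": +1},
    "Expert C": {"global_warming": -1, "cryonics": -1, "singularity": -1},
}

my_opinions = {"global_warming": +1, "cryonics": +1}  # questions I have answered

def similarity(mine, theirs):
    """Average agreement over the questions both parties have answered."""
    shared = [q for q in mine if q in theirs]
    if not shared:
        return 0.0
    return sum(mine[q] * theirs[q] for q in shared) / len(shared)

def predict(question):
    """Similarity-weighted average of expert positions on a question I haven't answered."""
    num = den = 0.0
    for name, opinions in experts.items():
        if question in opinions:
            w = similarity(my_opinions, opinions)
            num += w * opinions[question]
            den += abs(w)
    return num / den if den else 0.0

print(predict("singularity"))  # > 0 suggests "agree", < 0 suggests "disagree"
```

The point is just that once you've stated positions on a few issues, the system can guess your position on the rest from the experts whose stated opinions track yours.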
Liked the post. One of the two big questions it's poking at is 'how does one judge a hypothesis without researching it?' To do that, one has to come up with heuristics for judging some hypothesis H* that correlate well enough with correctness to work as a substitute for actual research. The post already suggests a few:
I'll add a few more:
If H is a physical or mathematical hypoth
"How much should we be troubled, though, by the fact that most scientists of their disciplines shun them?"
This is not what's actually going on. To quote Eliezer:
"With regard to academia 'showing little interest' in my work - you have a rather idealized view of academia if you think that they descend on every new idea in existence to approve or disapprove it. It takes a tremendous amount of work to get academia to notice something at all - you have to publish article after article, write commentaries on other people's work from within your re...
There isn't any universal distinguishing rule, but in general you want to ask: would a world where this were false look just like our own world? A couple of useful specific guidelines:
Is this something people would be disposed to believe even if it were false?
Is this something that would be impossible to disprove even if it were false?
Flying saucers, psychic powers, and the Singularity are good examples here: suppose we lived in a world where they were not real; what would it look like? Answer: people would still believe them because we are disposed...
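One way to sharpen this (my gloss, not anything from the comment above): treat the question as a likelihood ratio. An observation E, such as "lots of people report sightings," only shifts the odds on a hypothesis H insofar as H makes E more likely than not-H does:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}$$

If people would report saucer sightings about as often in a world without flying saucers as in a world with them, the likelihood ratio is close to 1 and the reports barely move the odds.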
We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them. But when they do that, don't they risk being considered crackpots themselves? This is some version of "Tolerate tolerance." If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.
More generally, one can't optimize a process of getting some kind of answers by also usi...
First idea: check if the proposer uses the techniques of rationality and science. Does he support claims with evidence? Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim? Does he appeal to dogma or authority? If there are features in the hypothesis itself that mark it as pseudoscience, then it's safely dismissed; no need to look further.
More:
Does he use math or formal logic when a claim demands it? Does he accuse others of suppressing his views?
The Crackpot index is helpful, though it is physics centric.
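Purely for illustration, here's what a Crackpot-Index-style scorer looks like as code; the features and point values below are invented for the example and are not Baez's actual index:

```python
# Toy "crackpot score": illustrative features and weights, not the real Crackpot Index.
ILLUSTRATIVE_WEIGHTS = {
    "no_quantitative_argument_where_one_is_needed": 10,
    "claims_suppression_by_establishment": 20,
    "compares_self_to_galileo_or_einstein": 30,
    "no_testable_prediction_offered": 15,
    "cites_and_engages_mainstream_literature": -25,  # negative: evidence of homework done
}

def crackpot_score(observed_features):
    """Sum the weights of the features observed in a claimant's writing."""
    return sum(ILLUSTRATIVE_WEIGHTS.get(f, 0) for f in observed_features)

print(crackpot_score(["claims_suppression_by_establishment",
                      "no_testable_prediction_offered"]))  # 35
```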
So a claim is bunk if and only if:
Those with the right kind of difficult-to-access information or who trust the relevant "expert" class will assign it an extremely low probability.
Those without that information who either don't know or don't trust the relevant expert class may assign it a more reasonable probability or even believe it.
The claim is false.
(?) The claim is non-trivial: if true, it would have wide-reaching implications.
So claims to have a perpetual motion machine are bunk because to understand how unlikely they are you eit...
There isn't as much of a free-rider problem as you make it out to be. Different people can devote their time to investigating different subjects. Thus, we all benefit from the collective effort to investigate.
Investigating unlikely claims is also healthy in general because it helps us hone our reasoning capabilities so people investigating them may get some direct benefit.
I'm not sure I like the category of "bunk"; it seems overly broad and not clearly defined. Your definition "there are claims so cracked that they aren't worth investiga...
or if there are electrical engineers and computer scientists who spend time being Singularity skeptics.
Electrical engineering is not the appropriate discipline, and neither is most of computer science. AI/cognitive science and philosophy are the closest.
Appropriate experts to "debunk" the singularity would be analytic philosophers such as David Chalmers, or AI/cognitive science people like Josh Tenenbaum, Stuart Russell, Peter Norvig, etc.
Bryan Caplan spends time refuting Austrians - he thinks Austrian Economics is a mistake that wastes the time of a lot of quality free market economists.
In other words, is it a scientific or a pseudoscientific hypothesis?
Surprisingly, I don't think we've ever gotten deep into demarcation issues here. Anyone want to attempt demarcation criteria? Is that even a worthwhile task?
One word: attachment.
Claims like, "The singularity will occur within this century," do not have attached implications, i.e. there aren't any particular facts we would expect to be able to currently observe if they were true. For things we dismiss as bunk, we either have evidence that directly contradicts them (e.g. "The Earth is 6000 years old" is directly contradicted by evidence), or we lack evidence that we would expect to observe with extremely high probability were they true (e.g. alien abductions - it's rather bizarre that aliens wou...
(e.g. alien abductions - it's rather bizarre that aliens would do such specific things and somehow invariably avoid large demographics of society. ...
When I abduct humans, I abduct specifically those who are known to be liars, insane, or seeking attention.
Works wonders for the problem of witnesses.
Before anyone asks: rectal probing has extensive applications in paperclip manufacturing.
Distinguishing an innovator from a crackpot is vital in fields where there are both innovators and crackpots.
You just can't do that. At least not without some a posteriori empirical data about the innovation in question. The more of an innovation something is, the less you can know about it in advance; and the less of a novelty it is, the better you can judge it.
Related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/, http://lesswrong.com/lw/1mh/that_magical_click/, http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/
Given a claim, and assuming that its truth or falsehood would be important to you, how do you decide if it's worth investigating? How do you identify "bunk" or "crackpot" ideas?
Here are some examples to give an idea.
"Here's a perpetual motion machine": bunk. "I've found an elementary proof of Fermat's Last Theorem": bunk. "9-11 was an inside job": bunk.
"Humans did not cause global warming": possibly bunk, but I'm not sure. "The Singularity will come within 100 years": possibly bunk, but I'm not sure. "The economic system is close to collapse": possibly bunk, but I'm not sure.
"There is a genetic difference in IQ between races": I think it's probably false, but not quite bunk. "Geoengineering would be effective in mitigating global warming": I think it's probably false, but not quite bunk.
(These are my own examples. They're meant to be illustrative, not definitive. I imagine that some people here will think "But that's obviously not bunk!" Sure, but you probably can think of some claim that *you* consider bunk.)
A few notes of clarification: I'm only examining factual, not normative, claims. I also am not looking at well established claims (say, special relativity) which are obviously not bunk. Neither am I looking at claims where it's easy to pull data that obviously refutes them. (For example, "There are 10 people in the US population.") I'm concerned with claims that look unlikely, but not impossible. Also, "Is this bunk?" is not the same question as "Is this true?" A hypothesis can turn out to be false without being bunk (for example, the claim that geological formations were created by gradual processes. That was a respectable position for 19th century geologists to take, and a claim worth investigating, even if subsequent evidence did show it to be false.) The question "Is this bunk?" arises when someone makes an unlikely-sounding claim, but I don't actually have the knowledge right now to effectively refute it, and I want to know if the claim is a legitimate subject of inquiry or the work of a conspiracy theory/hoax/cult/crackpot. In other words, is it a scientific or a pseudoscientific hypothesis? Or, in practical terms, is it worth it for me or anybody else to investigate it?
This is an important question, and especially to this community. People involved in artificial intelligence or the Singularity or existential risk are on the edge of the scientific mainstream and it's particularly crucial to distinguish an interesting hypothesis from a bunk one. Distinguishing an innovator from a crackpot is vital in fields where there are both innovators and crackpots.
I claim bunk exists. That is, there are claims so cracked that they aren't worth investigating. "I was abducted by aliens" has such a low prior that I'm not even going to go check up on the details -- I'm simply going to assume the alleged alien abductee is a fraud or nut. Free speech and scientific freedom do not require us to spend resources investigating every conceivable claim. Some claims are so likely to be nonsense that, given limited resources, we can justifiably dismiss them.
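To make "given limited resources" a bit more concrete (my own framing, not part of the original argument): a crude triage rule is to investigate only when the expected payoff of learning that the claim is true exceeds the cost of checking. With prior probability $p$ that the claim is true, value $V$ from acting on it if it is, and investigation cost $C$:

$$p \cdot V > C \quad\Longrightarrow\quad \text{worth investigating}$$

For alien abduction reports, $p$ is small enough that even a generous $V$ doesn't cover the cost of chasing down the details.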
But how do we determine what's likely to be nonsense? "I know it when I see it" is a pretty bad guide.
First idea: check if the proposer uses the techniques of rationality and science. Does he support claims with evidence? Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim? Does he appeal to dogma or authority? If there are features in the hypothesis itself that mark it as pseudoscience, then it's safely dismissed; no need to look further.
But what if there aren't such clear warning signs? Our gracious host Eliezer Yudkowsky, for example, does not display those kinds of obvious tip-offs of pseudoscience -- he doesn't ask people to take things on faith, he's very alert to fallacies in reasoning, and so on. And yet he's making an extraordinary claim (the likelihood of the Singularity), a claim I do not have the background to evaluate, but a claim that seems implausible. What now? Is this bunk?
A key thing to consider is the role of the "mainstream." When a claim is out of the mainstream, are you justified in moving it closer to the bunk file? There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians. As far as I can tell, the best representatives of these schools don't commit the kinds of fallacies and bad arguments of the typical pseudoscientist. How much should we be troubled, though, by the fact that most scientists of their disciplines shun them? Perhaps it's only reasonable to give some weight to that fact.
Or is it? If all the scientists themselves are simply making their judgments based on how mainstream the outsiders are, then "mainstream" status doesn't confer any information. The reason you listen to academic scientists is that you expect that at least some of them have investigated the claim themselves. We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them. But when they do that, don't they risk being considered crackpots themselves? This is some version of "Tolerate tolerance." If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.
So the question "What is bunk?", that is, the question, "What is likely enough to be worth investigating?", apparently destroys itself. You can only tell if a claim is unlikely by doing a little investigation. It's probably a reflexive process: when you do a little investigation, if it's starting to look more and more like the claim is false, you can quit, but if it's the opposite, then the claim is probably worth even more investigation.
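That reflexive process is basically sequential updating with a stopping rule. Here's a minimal sketch (my own illustration, with made-up likelihood numbers): keep folding in findings until the posterior drops below a "bunk" threshold or climbs above a "worth a serious look" threshold.

```python
# Sequential-investigation sketch: the likelihoods below are purely illustrative.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update on a single piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

def triage(prior, evidence_stream, bunk_below=0.01, investigate_above=0.30):
    """Stop early once the claim looks like bunk or clearly worth more digging."""
    p = prior
    for lik_true, lik_false in evidence_stream:
        p = update(p, lik_true, lik_false)
        if p < bunk_below:
            return "dismiss as bunk", p
        if p > investigate_above:
            return "worth more investigation", p
    return "still unsure", p

# Each pair is P(this finding | claim true), P(this finding | claim false).
findings = [(0.2, 0.8), (0.1, 0.9), (0.1, 0.9)]
print(triage(prior=0.05, evidence_stream=findings))
```

With those numbers the posterior sinks below the "bunk" threshold after two findings and you quit early, which is exactly the pattern described above.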
The thing is, we all have different thresholds for what captures our attention and motivates us to investigate further. Some people are willing to do a quick Google search when somebody makes an extraordinary claim; some won't bother; some will go even further and do extensive research. When we check the consensus to see if a claim is considered bunk, we're acting on the hope that somebody has a lower threshold for investigation than we do. We hope that some poor dogged sap has spent hours diligently refuting 9-11 truthers so that we don't have to. From an economic perspective, this is an enormous free-rider problem, though -- who wants to be that poor dogged sap? The hope is that somebody, somewhere, in the human population is always inquiring enough to do at least a little preliminary investigation. We should thank the poor dogged saps of the world. We should create more incentives to be a poor dogged sap. Because if we don't have enough of them, we're going to be very mistaken when we think "Well, this wasn't important enough for anyone to investigate, so it must be bunk."
(N.B. I am aware that many climate scientists are being "poor dogged saps" by communicating with and attempting to refute global warming skeptics. I'm not aware if there are economists who bother trying to refute Austrian economics, or if there are electrical engineers and computer scientists who spend time being Singularity skeptics.)