If you are going to suggest that academic climate research is not up to scratch, you need to do more than post links to pages that link to non-academic articles. Saying "you can find lots on google scholar" is not the same as actually pointing to the alleged sub-standard research.
For a long time I too was somewhat skeptical about global warming. I recognized the risk that researchers would exaggerate the problem in order to obtain more funding.
What I chose to do to resolve the matter was to deep dive into a few often-raised skeptic arguments using my knowledge of physics as a starting point, and learning whatever I needed to learn along the way (it took a while). The result was that the academic researchers won 6-0 6-0 6-0 in three sets (to use a tennis score analogy). Most striking to me was the dishonesty and lack of substance on the "skeptic" side. There was just no "there" there.
The topics I looked into were: accuracy of the climate temperature record, alleged natural causes explaining the recent heating, the alleged saturation of the atmospheric CO2 infra-red wavelengths, and the claim that the CO2 that is emitted by man is absorbed very quickly...
waveman:
If you are going to suggest that academic climate research is not up to scratch, you need to do more than post links to pages that link to non-academic articles. Saying "you can find lots on google scholar" is not the same as actually pointing to the alleged sub-standard research.
I agree that I should have argued and referenced that part better. What I wanted to point out is that there is a whole cottage industry of research purporting to show that climate change is supposedly influencing one thing or another, a very large part of which appears to advance hypotheses so far-fetched and weakly substantiated that they seem like obvious products of the tendency to work this super-fashionable topic into one's research whenever possible, for reasons of both status and career advancement.
Even if one accepts that the standard view on climate change has been decisively proven and the issue shown to be a pressing problem, I still don't see how one could escape this conclusion.
Hmm. So if someday I find that some scientists draw conclusions that don't follow and these conclusions are used to make harmful policy decisions, I must not point out that certain scientific problems are unsolved or gather other scientists to write petitions, because that would make me match the RW pattern of "denialist". Also apparently I must not say that correlation isn't causation, because that's "minimizing the relevance of statistical data".
One marker to watch out for is a kind of selection effect.
In some fields, only 'true believers' have any motivation to spend their entire careers studying the subject in the first place, and so the 'mainstream' in that field is absolutely nutty.
Case examples include philosophy of religion, New Testament studies, Historical Jesus studies, and Quranic studies. These fields differ from, say, cryptozoology in that the biggest names in the field, and the biggest papers, are published by very smart people in leading journals and look very normal and impressive, but those entire fields are so incredibly screwed by the selection effect that it's only "radicals" who say things like, "Um, you realize that the 'gospel of Mark' is written in the genre of fiction, right?"
I agree about the historical Jesus studies. At one point, I got intensely interested in this topic and read a dozen or so books about it by various authors (mostly on the skeptical end). My conclusion is that this is possibly the ultimate example of an area where the questions are tantalizingly interesting, but making any reliable conclusions from the available evidence is basically impossible. At the end, as you say, we get a lot of well written and impressively researched books whose content is however just a rationalization for the authors' opinions held for altogether different reasons.
On the other hand, I'm not sure if you're expressing support for the radical mythicist position, but if you do, I disagree. As much as Christian apologists tend to stretch the evidence in their favor, it seems to me like radical mythicists are biased in the other direction. (It's telling that the doyen of contemporary mythicism, G.A. Wells, who certainly has no inclination towards Christian apologetics, has moderated his position significantly in recent years.)
When I wrote "What is Bunk?" I thought I had a pretty good idea of the distinction between science and pseudoscience, except for some edge cases. Astrology is pseudoscience, astronomy is science. At the time, I was trying to work out a rubric for the edge cases (things like macroeconomics.)
Now, though, knowing a bit more about the natural sciences, it seems that even perfectly honest "science" is much shakier and likelier to be false than I supposed. There's apparently a high probability that the conclusions of a molecular biology paper will be false -- even if the journal is prestigious and the researchers are all at a world-class university. There's simply a lot of pressure to make results look more conclusive than they are.
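The claim that a surprisingly large fraction of published findings may be false can be made concrete with a toy base-rate calculation. All of the numbers below are illustrative placeholders, not figures from any particular study:

```python
# Toy model: what fraction of "statistically significant" results reflect
# true effects? Assumes every tested hypothesis gets published when
# significant; prior, power, and alpha are illustrative assumptions.
def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value of a significant finding."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is actually true, even well-powered
# studies leave a large fraction of published "findings" false.
print(round(ppv(prior=0.1), 2))  # -> 0.64
```

The point of the sketch is only that the base rate matters: when few of the hypotheses a field tests are true to begin with, "significant at p < 0.05" is much weaker evidence than it looks.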
In the field of machine learning, which I sometimes read the literature in, there are foundational debates about the best methods. Ideas which very smart and highly credentialed people tout often turn out to be ineffective, years down the road. Apparently smart and accomplished researchers will often claim that some other apparently smart and accomplished researcher is doing it all wrong.
If you don't actually know a field, you mig...
I guess the moral is "Don't trust anyone but a mathematician"?
Safety in numbers? ;)
Perhaps it's useful to distinguish between the frontier of science vs. established science. One should expect the frontier to be rather shaky and full of disagreements, before the winning theories have had time to be thoroughly tested and become part of our scientific bedrock. There was a time after all when it was rational for a layperson to remain rather neutral with respect to Einstein's views on space and time. The heuristic of "is this science established / uncontroversial amongst experts?" is perhaps so boring we forget it, but it's one of the most useful ones we have.
I guess the moral is "Don't trust anyone but a mathematician"?
Theorems get published all the time that turn out to have incorrect proofs or to be not even theorems. There was about a decade long period in the late 19th century where there was a proof of the four color theorem that everyone thought was valid. And the middle of the 20th century there were serious issues with calculating homology groups and cohomology groups of spaces where people kept getting different answers. And then there are a handful of examples where theorems simply got more and more conditions tacked on to them as more counterexamples to the theorems became apparent. The Euler formula for polyhedra is possibly the most blatant such example.
So even the mathematicians aren't always trustworthy.
Huh? There are no counterexamples to the Euler characteristic of a polyhedron being 2, and the theorem has generalized beautifully. If anything, conditions have been loosened as new versions of the theorem have been used in more places.
Well, what do you mean by polyhedron? Consider for example a cubic nut. Does this fit your intuition of a polyhedron? Well, since it has genus 1 rather than 0, it doesn't have Euler characteristic 2. And the original proof that V+F-E=2 didn't handle this sort of case. (That's one reason why people often add convex as a condition, to deal with just this situation, even though convexity is in many respects stronger than what one needs.) Cauchy's 1811 proof suffers from this problem, as do some of the other early proofs (although his is repairable if one is careful). There are also other subtle issues that can go wrong, and in fact do go wrong, in a lot of the historical versions. Lakatos's book "Proofs and Refutations" discusses this, albeit in an essentially ahistorical fashion.
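The cubic-nut point can be checked directly. The torus tiling below is a standard textbook example, chosen here purely for illustration:

```python
# V - E + F for two polyhedral surfaces: the cube (genus 0, chi = 2) and a
# torus tiled by an m x n grid of quadrilaterals (genus 1, chi = 0), which
# is the "cubic nut" situation described above.
def euler_characteristic(V, E, F):
    return V - E + F

print(euler_characteristic(8, 12, 6))  # cube -> 2

# On the torus grid: V = m*n vertices, E = 2*m*n edges, F = m*n faces,
# so V - E + F = 0 for any m, n.
m, n = 3, 3
print(euler_characteristic(m * n, 2 * m * n, m * n))  # torus -> 0
```

In general the characteristic is 2 - 2g for a surface of genus g, which is why convexity (forcing g = 0) rescues the naive statement.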
To evaluate a contrarian claim, it helps to break down the contentious issue into its contentious sub-issues. For example, contrarians deny that global warming is caused primarily by humans, an issue which can be broken down into the following sub-issues:
Have solar cycles significantly affected earth's recent climate?
Does cosmic radiation significantly affect earth's climate?
Has earth's orbit significantly affected its recent climate?
Does atmospheric CO2 cause significant global warming?
Do negative feedback loops mostly cushion the effect of atmospheric CO2 increases?
Are recent climatic changes consistent with the AGW hypothesis?
Is it possible to accurately predict climate?
Have climate models made good predictions so far?
Are the causes of climate change well understood?
Has CO2 passively lagged temperature in past climates?
Are climate records (of temperature, CO2, etc.) reliable?
Is the Anthropogenic Global Warming hypothesis falsifiable?
Does unpredictable weather imply unpredictable climate?
It's much easier to assess the likelihood of a position once you've assessed the likelihood of each of its supporting positions. In this particular case, I found that the contrarians made a very ...
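A minimal sketch of that aggregation step, under the (strong) simplifying assumptions that the position requires all of its supporting sub-claims and that the sub-claims are independent. The probabilities below are placeholders, not anyone's actual estimates:

```python
# Combine credences in supporting sub-issues into a credence for the
# overall position. Conjunction and independence are simplifications.
from math import prod

sub_issue_credence = {
    "atmospheric CO2 causes significant warming": 0.95,
    "negative feedbacks do not mostly cancel it": 0.85,
    "recent changes are consistent with AGW": 0.90,
}

joint = prod(sub_issue_credence.values())
print(round(joint, 3))  # -> 0.727
```

Even this naive version makes the structure of a disagreement explicit: a contrarian has to say which factor in the product they reject, and why.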
Here's another one: what I call the layshadow heuristic: could an intelligent layperson produce passable, publishable work [1] in that field after a few days of self-study? It's named after the phenomenon in which someone with virtually no knowledge of the field sells the service of writing papers for others who don't want to do the work, and is never discovered, with their clients being granted degrees.
The heuristic works because passing it implies very low inferential distance and therefore very little knowledge accumulation.
[1] specifically, work that unsuspecting "experts" in the field cannot distinguish from that produced by "serious" researchers with real "experience" and "education" in that field.
SilasBarta:
It's named after the phenomenon in which someone with virtually no knowledge of the field sells the service of writing papers for others who don't want to do the work, and are never discovered, with their clients being granted degrees.
I agree this is indicative of serious pathology of one sort or another, but in fairness, I find it plausible that in many fields there might be a very severe divide between real scholarship done by people on the tenure track and the routine drudgery assigned to students, even graduate students who aren't aiming for the tenure track.
The pathologies of the educational side of the modern academic system are certainly a fascinating topic in its own right.
As the first heuristic, we should ask if there is a lot of low-hanging fruit available in the given area, in the sense of research goals that are both interesting and doable. If yes, this means that there are clear paths to quality work open for reasonably smart people with an adequate level of knowledge and resources, which makes it unnecessary to invent clever-looking nonsense instead. In this situation, smart and capable people can just state a sound and honest plan of work on their grant applications and proceed with it.
In contrast, if a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change. What will likely happen instead is that they’ll continue producing output that will have all the superficial trappings of science and sound scholarship, but will in fact be increasingly pointless and detached from reality.
This sounds like a useful heuristic, but I think there'...
We'd expect most changes to the Earth's climate to be bad (on net) for its current inhabitants because the Earth has been settled in ways that are appropriate to its current climate. Species are adapted to their current environment, so if weather patterns change and the temperature goes up or down, or precipitation increases or decreases, or whatever else, that's more likely to be bad for them than good.
Similarly, humans grow crops in places where those crops grow well, live where they have access to water but not too many floods (and where they are on land rather than underwater), and so on. If the climate changes, then the number of places on Earth that would be a good place for a city might not change, but fewer of our existing cities will be in one of those places.
There are some expected benefits of global warming (e.g., "Crop productivity is projected to increase slightly at mid- to high latitudes for local mean temperature increases of up to 1-3°C depending on the crop, and then decrease beyond that in some regions"). But, unsurprisingly, climate scientists are projecting more costs than benefits, and a net cost. News articles are likely to have a further bias ...
I've been surprised by how bad the majority of scholarship is around the "inspired-by" or "metaphorical" genre of algorithms - neural networks, genetic algorithms, Baum's Hayek machine and so on. My guess is that the colorful metaphors allow you to disguise any success as due to the technique rather than a grad student poking and prodding at it until a demo seems to work.
Within the metaphorical algorithms, I've been surprised at reinforcement learning in particular. It may have started with a metaphor of operant conditioning, but it has a useful mathematical foundation related to dynamic programming.
As an economist myself (though a microeconomist) I share some of your concerns about macroeconomics. The way support and opposition for the US's recent stimulus broke down along ideological lines was wholly depressing.
I think the problem for macro is that they have almost no data to work with. You can't run a controlled experiment on a whole country and countries tend to be very different from each other which means there are a lot of confounding factors to deal with. And without much evidence, how could they hope to generate accurate beliefs?
Add to that the raw complexity of what economists study. The human brain is the most complex object known to exist, and the global economy is about 7 billion of them interacting with each other.
None of this is meant to absolve macroeconomics; it may just be that meaningful study in this area isn't possible. Macro has made some gains: there's a list of things that don't work in development economics, and stabilisation policy is better than it was in the 1970s. But apart from that? Not much.
I'm surprised that you don't mention the humanities as a really bad case where there is little low-hanging fruit and high ideological content. Take English literature for example. Barrels of ink have been spilled in writing about Hamlet, and genuinely new insights are quite rare. The methods are also about as unsound as you can imagine. Freud is still heavily cited and applied, and postmodern/poststructuralist/deconstructionist writing seems to be accorded higher status the more impossible to read it is.
Ideological interest is also a big problem. This seems almost inevitable, since the subject of the humanities is human culture, which is naturally bound up with human ideals, beliefs, and opinions. Academic disciplines are social groups, so they have a natural tendency to develop group norms and ideologies. It's unsurprising that this trend is reinforced in those disciplines that have ideologies as their subject matter. The result is that interpretations which do not support the dominant paradigm (often a variation on how certain sympathetic social groups are repressed, marginalized, or "otherized"), are themselves suppressed.
One theory of why the humanities are so bad is ...
The danger I see is mathematicians endorsing mathematics research because it serves explicitly mathematical goals....I'd like us to decide to attack [the Riemann Hypothesis] because we expect it to be useful, not merely because it's difficult and therefore allows us to demonstrate skill.
Why such prejudice against "explicitly mathematical goals"? Why on Earth is this a danger? One way or another, people are going to amuse themselves -- via art, sports, sex, or drugs -- so it might as well be via mathematics, which even the most cynically "hard-headed" will concede is sometimes "useful".
But more fundamentally, the heuristic you're using here ("if I don't see how it's useful, it probably isn't") is wrong. You underestimate the correlation between what mathematicians find interesting and what is useful. Mathematicians are not interested in the Riemann Hypothesis because it may be useful, but the fact that they're interested is significant evidence that it will be.
What mathematics is, as a discipline, is the search for conceptual insights on the most abstract level possible. Its usefulness does not lie in specific ad-hoc "applications"...
On "ideologically charged" science producing good results:
Evolutionary biology, in general. Creationism went down really hard and really quickly.
Did it? Sure, it's clear cut now. But what I've read about the subject says that back in the days when it was a matter of mainstream intellectual debate, it was long and very messy, and included things like scientists on the 'right' side accepting extremely dodgy evidence for spontaneous generation of life in the test tube because they felt that to reject it would weaken the case for being able to do without divine intervention.
I am reminded of this recent article from the arXiv blog:
Biologists Ignoring Low-Hanging Fruit, Says Drug Discovery Study
Molecular biologists focus most of their attention on a small set of biomolecules, while ignoring the rest, according to a study of research patterns.
[2] Moldbug’s "What’s wrong with CS research" is a witty and essentially accurate overview of this situation. He mostly limits himself to the discussion of programming language research, but a similar scenario can be seen in some other related fields too.
With the slight problem that Moldbug appears to be writing as a Systems Weenie, and being someone with cursory training on multiple sides of this issue (PL/Formal Verification and systems), I don't think his assessment there is accurate.
When assessing an academic field, you should include a ki...
You confuse two very different issues.
1) How much weight you should give to the views of academics in that area, e.g., if some claim is accepted by the mainstream establishment (or conversely viewed as a valid point of disagreement) how much should that information affect your own probability judgement.
2) How much progress/how useful is the academic discipline in question. Does it require reform.
Your arguments in the first part are only relevant to #2. The programming language research community may be mired in hopeless mathematical jealousy as they c...
I do agree that there are fields where the overall standards of the academic mainstream are not that high, but I'm not sure about the heuristics - I tend to use a different set.
One confusing factor is that in almost any field, the academic level of an arbitrary academic paper is not that high - average academic papers are published by average scientists, and are generally averagely brilliant - in other words, not that good. The preferred route is typically to prove something that's actually already well known, but there are also plenty of flawed papers. Th...
"In particular, if you are from a small nation that has never really been a player in world history, your local historians are likely to be full of parochial bias motivated by the local political quarrels and grievances..."
Describes Ireland pretty well.
When dealing with the possibility of ideology influencing results one needs to be careful that one isn't engaging in projection based on one's own ideology influencing results. Otherwise this can turn into a fully general counter-argument. (To use one of the possibly more amusing examples, look at Conservapedia's labeling of the complex numbers and the axiom of choice as products of liberal ideology.)
Also, an incidental note about the issue of climate change: we should expect that most aspects of climate change will be bad. Humans have developed an extrem...
No, the comments have been made by the project's founder Andrew Schlafly. He's also claimed that the Fields Medal has a liberal bias (disclaimer: that's a link to my own blog.) Andrew also has a page labeled Counterexamples to Relativity written almost exclusively by him that claims among other things that "The theory of relativity is a mathematical system that allows no exceptions. It is heavily promoted by liberals who like its encouragement of relativism and its tendency to mislead people in how they view the world."
I will add to help prevent mind-killing that Conservapedia is not taken seriously by much of the American right-wing, and that this sort of extreme behavior is not limited to any specific end of the political spectrum.
David_Gerard:
It is important to note here that Andrew Schlafly, founder of Conservapedia and author of most of these articles, has a degree in electrical engineering and worked as an engineer for several years before becoming a lawyer. He would not only be capable of understanding the mathematics, he would have used concepts from the theory in his professional work.
In fairness to relativity crackpots, unless things have changed since my freshman days, the way special relativity is commonly taught in introductory physics courses is practically an invitation for the students to form crackpot ideas. Instead of immediately explaining the idea of the Minkowski spacetime, which reduces the whole theory almost trivially to some basic analytic geometry and calculus and makes all those so-called "paradoxes" disappear easily in a flash of insight, physics courses often take the godawful approach of grafting a mishmash of weird "effects" (like "length contraction" and "time dilatation") onto a Newtonian intuition and then discussing the resulting "paradoxes" one by one. This approach is clearly great for pop-science writers trying to dazzle and amaze their lay audiences, but I'm at a loss to understand why it's foisted onto students who are supposed to learn real physics.
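For concreteness, here is the standard Minkowski derivation of "time dilation" (textbook material, not anything specific to the comments above):

```latex
% The spacetime interval between two events is the same in every
% inertial frame (one spatial dimension, for simplicity):
s^2 = -c^2\,\Delta t^2 + \Delta x^2
% For two ticks of a clock moving at speed v, we have
% \Delta x = v\,\Delta t in the lab frame, and \Delta x' = 0 with
% \Delta t' = \Delta\tau (proper time) in the clock's rest frame.
% Equating the interval computed in both frames:
-c^2\,\Delta\tau^2 = \left(v^2 - c^2\right)\Delta t^2
\;\Longrightarrow\;
\Delta\tau = \Delta t\,\sqrt{1 - v^2/c^2}
```

"Time dilation" then stops being a mysterious grafted-on "effect" and becomes a one-line fact about the geometry of the interval, with no paradox in sight.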
...nutrition. Here ideological influences aren’t very strong (though not altogether absent either).
Does "ideological influences" include fiscal influences? Because most of the contrarian nutritionists I've read say that the mainstream is swayed by heavily funded groups who'd like to see people eat more corn, dairy products, etc.
Nutrition's also entangled with a horrific mess of body-image issues and cultural expectations. These aren't essential to any of the strains of cultural criticism that they intersect, so I don't think I'd call them ideological; but because they're so closely linked to people's identities, they exhibit a lot of the problems we associate with ideology.
Same goes for related fields like exercise. The mind-killer here doesn't metastasize like ideology tends to, but it's every bit as pathological if you accidentally end up poking one of its hosts in the wrong spot.
It’s much harder to think of examples where the ideological interest heuristic fails. AIDS would be one example. The academic community might downplay the fact that condoms break from time to time, but by and large academia is right about AIDS.
Vaccines would be another charged topic where I think academia is mostly right.
Am I really in the minority in not wanting political discussion on the site, at least without special precautions?
Am I really in the minority in not wanting political discussion on the site, at least without special precautions?
I do not consider this post to be political. It is a practical look at how and when to update on evidence of orthodox opinion. It could not be more relevant.
Do you think my post goes too far in this direction, or are you referring to some of the comments?
steven0461:
And while I didn't see anything inflammatory in your post, even the least inflammatory comments about an ideologically-charged issue can serve as an invitation for people to empty their cached opinions on the subject in the comments.
In your opinion, has this actually happened? Do you see something among the comments that, in your opinion, represents a negative contribution so that provoking it should be counted against the original post? (I understand you might not want to point fingers at concrete people, so feel free to answer just yes or no.)
The official motto in the logo is "refining the art of human rationality", which implies that our rationality is still imperfect.
It's still imperfect, but can't people try a little harder?
I don't see why it's absurd or bad PR to say that we're more rational than most other communities, but still not rational enough to talk about politics.
When will we be rational enough to talk about politics (or subjects with political implications)? I am skeptical that any of the justifications for not talking about politics will ever change. Right now, we have a bunch of intelligent, rationalist people who have read at a least a smattering of Eliezer's writings, yet who have differing experiences and perspectives on certain subjects, with a lot of inferential distance in between. We have veteran community members, and we have new members. In a few years, we will have exactly the same thing, and people will still be saying that politics is the "mind-killer."
I have to wonder, if LW isn't ready to talk about politics now, will we ever be ready (on our current hardware)? I am skeptical that we all can just keep exercising our rationality on non-political subjects, and then ...
Hang on. Instrumental rationality.
If you want to make political impact, don't have discussions about politics on blogs; go do something that makes the best use of your skills. Start an organization, work on a campaign, make political issues your profession or a major personal project.
If that doesn't sound appealing (to me, it doesn't, but people I admire often do throw themselves into political work) then talking politics is just shooting the shit. Even if you're very serious and rational about it, it's pretty much recreation.
I used to really like politics as recreation -- it made me feel good -- but it has its downsides. One, it can take up a lot of time that you could use to build skills, get work done, or have more intense fun (a night out on the town vs. a night in on the internet.) Two, it can make you dislike people that you'd otherwise like; it screws with personal relationships. Three, there's something that bothers me morally, a little, about using issues that are serious life-and-death problems for other people as my form of recreation. Four, in some cases, including mine, politics can hurt your personal development in a particular way: I would palliate my sense ...
I didn't think footnotes 1 or 7 were very good examples. The fact that low quality work gets published is not enough to establish the soundness of the "academic mainstream". Given enough journals we should expect that to happen, and we should also expect most hypotheses to be false. Low quality work being cited and relied upon is a more serious problem.
Poser was not firmly dismissing the attempted solution as unsound. He said that there wasn't enough information given to properly evaluate the idea (although he could speculate on what the methods might have been), which is why it should have been a full paper rather than a letter.
Ideology is quite an interesting factor.
Hypnosis is a nice example. For a long time there wasn't good academic research on the topic because of ideological conflict. At the moment we know that it can be used to lower pain, but the exact extent of what it can do is still quite unclear.
Hypnosis also has another trait: there's no financial incentive to research it in the way that drugs get researched.
Regarding endnote [4]: I'd be as interested in examples where we should read contrarian history as in any of your other examples; I'm interested in history. However, I think that you'd probably fall into mind-killing territory.
ETA: Thanks for the suggestions!
Google "Mencius Moldbug", "Unqualified Reservations". Read until you get bored.
Alternatively read Thomas Carlyle, (long dead historian) or actual primary documents. The TIME magazine archives are pretty cool for this, as is Google Books.
I’d be curious to see additional examples that either confirm or disprove these heuristics I proposed.
But you can't "confirm or disprove" your heuristics unless you have independent access to the truth about the health of the various academic disciplines. All you can do is to compare the opinions generated by your heuristics with other people's opinions.
For what it's worth, personally, I agree with most of your opinions, but have reservations about the heuristics. Two places where I disagree with your opinions are macroeconomics and climate ...
Can you check a favorite theory of mine?
If we categorize nations as habitual war winners / war losers, occupiers / occupied, strong or weak, we see the following. Pretty much every ideology or ideological keyword as created by the winners, the strong at the height of their power, left and right was invented just before the French Revolution, liberalism and conservatism descends from the Gladstone-Disraeli era and so on. Ultimately the ideologies are all about how to handle conflict INSIDE a society, like a rich vs. poor, state vs. capitalists, religious v...
The humanities. Literary theory, culture and media studies, as well as philosophy (continental philosophy in particular) are fields filled with nonsense. The main problem with these fields stems from the lack or difficulty of an objective judgment, in my opinion. In literary theory, for example, it's more important to be interesting than to be right.
I have to admit that they fail the heuristic of ideological interest as well. Even if we ignore for a moment Nobel and other prizes in literature (which have always been seriously biased), as well as culture st...
(This post is an expanded version of a LW comment I left a while ago. I have found myself referring to it so much in the meantime that I think it’s worth reworking into a proper post. Some related posts are "The Correct Contrarian Cluster" and "What is Bunk?")
When looking for information about some area outside of one’s expertise, it is usually a good idea to first ask what academic scholarship has to say on the subject. In many areas, there is no need to look elsewhere for answers: respectable academic authors are the richest and most reliable source of information, and people claiming things completely outside the academic mainstream are almost certain to be crackpots.
The trouble is, this is not always the case. Even those whose view of modern academia is much rosier than mine should agree that it would be astonishing if there didn’t exist at least some areas where the academic mainstream is detached from reality on important issues, while much more accurate views are scorned as kooky (or would be if they were heard at all). Therefore, depending on the area, the fact that a view is way out of the academic mainstream may imply that it's bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.
I will discuss some heuristics that, in my experience, provide a realistic first estimate of how sound the academic mainstream in a given field is likely to be, and how justified one would be to dismiss contrarians out of hand. These conclusions have come from my own observations of research literature in various fields and some personal experience with the way modern academia operates, and I would be interested in reading others’ opinions.
Low-hanging fruit heuristic
As the first heuristic, we should ask if there is a lot of low-hanging fruit available in the given area, in the sense of research goals that are both interesting and doable. If yes, this means that there are clear paths to quality work open for reasonably smart people with an adequate level of knowledge and resources, which makes it unnecessary to invent clever-looking nonsense instead. In this situation, smart and capable people can just state a sound and honest plan of work on their grant applications and proceed with it.
In contrast, if a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change. What will likely happen instead is that they’ll continue producing output that will have all the superficial trappings of science and sound scholarship, but will in fact be increasingly pointless and detached from reality.
Arguably, some areas of theoretical physics have reached this state, if we are to trust critics like Lee Smolin. I am not a physicist, and I cannot judge directly if Smolin and other similar critics are right, but some powerful evidence for this came several years ago in the form of the Bogdanoff affair, which demonstrated that highly credentialed physicists in some areas can find it difficult, perhaps even impossible, to distinguish sound work from a well-contrived nonsensical imitation. [1]
Somewhat surprisingly, another example is presented by some subfields of computer science. With all the new computer gadgets everywhere, one would think that no other field could be further from a stale dead end. In some of its subfields this is definitely true, but in others, much of what is studied is based on decades-old major breakthroughs, and the known viable directions from there have long since been explored until they ran up against some fundamentally intractable problem. (Or alternatively, further progress is a matter of hands-on engineering practice that doesn't lend itself to the way academia operates.) This has led to a situation where a lot of the published CS research is increasingly distant from reality, because to keep the illusion of progress, it must pretend to solve problems that are basically known to be impossible. [2]
Ideological/venal interest heuristic
Bad as they might be, the problems that occur when clear research directions are lacking pale in comparison with what happens when things under discussion are ideologically charged or a matter in which powerful interest groups have a stake. As Hobbes remarked, people agree about theorems of geometry not because their proofs are solid, but because "men care not in that subject what be truth, as a thing that crosses no man’s ambition, profit, or lust." [3]
One example is the cluster of research areas encompassing intelligence research, sociobiology, and behavioral genetics, which touches on a lot of highly ideologically charged questions. These pass the low-hanging fruit heuristic easily: the existing literature is full of proposals for interesting studies waiting to be done. Yet, because of their striking ideological implications, these areas are full of work clearly aimed at advancing the authors’ non-scientific agenda, and even after a lot of reading one is left in confusion over whom to believe, if anyone. It doesn’t even matter whose side one supports in these controversies: whichever side is right (if any one is), it’s simply impossible that there isn’t a whole lot of nonsense published in prestigious academic venues and under august academic titles.
Yet another academic area that suffers from the same problems is the history of the modern era. On many significant events from the last two centuries, there is a great deal of documentary evidence lying around still waiting to be assessed properly, so there is certainly no lack of low-hanging fruit for a smart and diligent historian. Yet due to the clear ideological implications of many historical topics, ideological nonsense cleverly masquerading as scholarship abounds. I don’t think anything resembling an accurate world history of the last two centuries could be written without making a great many contrarian claims. [4] In contrast, on topics that don't arouse ideological passions, modern histories are often amazingly well researched and free of speculation and distortion. (In particular, if you are from a small nation that has never really been a player in world history, your local historians are likely to be full of parochial bias motivated by the local political quarrels and grievances, but you may be able to find very accurate information on your local history in the works of foreign historians from the elite academia.)
On the whole, it seems to me that failing the ideological interest test suggests a much worse situation than failing the low-hanging fruit test. The areas affected by just the latter are still fundamentally sound, and tend to produce work whose contribution is way overblown, but which is still built on a sound basis and internally coherent. Even if outright nonsense is produced, it’s still clearly distinguishable with some effort and usually restricted to less prestigious authors. Areas affected by ideological biases, however, tend to drift much further into outright delusion, possibly lacking a sound core body of scholarship altogether.
[Paragraphs below added in response to comments:]
What about the problem of purely venal influences, i.e. the cases where researchers are under the patronage of parties that have stakes in the results of their research? On the whole, the modern Western academic system is very good at discovering and stamping out clear and obvious corruption and fraud. It's clearly not possible for researchers to openly sell their services to the highest bidder; even if there are no formal sanctions, their reputation would be ruined. However, venal influences are nevertheless far from nonexistent, and a fascinating question is under what exact conditions researchers are likely to fall under them and get away with it.
Sometimes venal influences are masked by scams such as setting up phony front organizations for funding, but even that tends to be discovered eventually and tarnish the reputations of the researchers involved. What seems to be the real problem is when the beneficiaries of biased research enjoy such status in the eyes of the public and such legal and customary position in society that they don't even need to hide anything when establishing a perverse symbiosis that results in biased research. Such relationships, while fundamentally representing venal interest, are in fact often boasted about as beneficial and productive cooperation. Pharmaceutical research is an often cited example, but I think the phenomenon is in fact far more widespread, and reaches the height of perverse perfection in those research communities whose structure effectively blends into various government agencies.
The really bad cases: failing both tests
So far, I've discussed examples where one of the mentioned heuristics returns a negative answer, but not the other. What happens when a field fails both of them, having no clear research directions and at the same time being highly relevant to ideologues and interest groups? Unsurprisingly, it tends to be really bad.
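To make the structure of the argument explicit, the interplay of the two heuristics can be sketched as a toy decision table. (This is purely my own illustration of the post's reasoning; the function name and the exact verdict strings are invented for the example, and the real judgments are of course matters of degree rather than booleans.)

```python
# Toy sketch of the post's two heuristics combined into a rough prior
# on how sound a field's academic mainstream is likely to be.
# Invented names; a hypothetical illustration, not a rigorous model.

def mainstream_soundness(low_hanging_fruit: bool, ideological_stakes: bool) -> str:
    """Rough verdict for a field given the two heuristics.

    low_hanging_fruit:   are there interesting, doable research goals left?
    ideological_stakes:  do ideologues or interest groups care about the results?
    """
    if low_hanging_fruit and not ideological_stakes:
        return "likely sound; contrarians can usually be dismissed"
    if not low_hanging_fruit and not ideological_stakes:
        return "overblown but coherent; nonsense distinguishable with effort"
    if low_hanging_fruit and ideological_stakes:
        return "agenda-driven work common; hard to know whom to believe"
    return "really bad; cargo-cult symptoms likely"

# Cases drawn from the post's own examples:
print(mainstream_soundness(False, False))  # e.g. some theoretical physics, parts of CS
print(mainstream_soundness(False, True))   # e.g. macroeconomics
```

The point of the table is simply that the two tests compound: failing the ideological test is worse than failing the low-hanging-fruit test, and failing both is worst of all.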
The clearest example of such a field is probably economics, particularly macroeconomics. (Microeconomics covers an extremely broad range of issues deeply intertwined with many other fields, and its soundness, in my opinion, varies greatly depending on the subject, so I’ll avoid a lengthy digression into it.) Macroeconomists lack any clearly sound and fruitful approach to the problems they wish to study, and any conclusion they might draw will have immediately obvious ideological implications, often expressible in stark "who-whom?" terms.
And indeed, even a casual inspection of the standards in this field shows clear symptoms of cargo-cult science: weaving complex and abstruse theories that can be made to predict everything and nothing, manipulating essentially meaningless numbers as if they were objectively measurable properties of the real world [5], experts with the most prestigious credentials dismissing each other as crackpots (in more or less diplomatic terms) when their favored ideologies clash, etc., etc. Fringe contrarians in this area (most notably extreme Austrians) typically have silly enough ideas of their own, but their criticism of the academic mainstream is nevertheless often spot-on, in my opinion.
Other examples
So, what are some other interesting case studies for these heuristics?
An example of great interest is climate science. Clearly, the ideological interest heuristic raises a big red flag here, and indeed, there is little doubt that a lot of the research coming out in recent years that supposedly links "climate change" with all kinds of bad things is just fashionable nonsense [6]. (Another sanity check it fails is that only a tiny proportion of these authors ever hypothesize that the predicted/observed climate change might actually improve something, as if there existed some law of physics prohibiting it.) Thus, I’d say that contrarians on this issue should definitely not be dismissed out of hand; the really hard question is how much sound insight (if any) remains after one eliminates all the nonsense that’s infiltrated the mainstream. When it comes to the low-hanging fruit heuristic, I find the situation less clear. How difficult is it to achieve progress in accurately reconstructing long-term climate trends and forecasting the influences of increasing greenhouse gases? Is it hard enough that we’d expect, even absent an ideological motivation, that people would try to substitute cleverly contrived bunk for unreachable sound insight? My conclusion is that I’ll have to read much more on the technical background of these subjects before I can form any reliable opinion on these questions.
Another example of practical interest is nutrition. Here ideological influences aren’t very strong (though not altogether absent either). However, the low-hanging fruit heuristic raises a huge red flag: it’s almost impossible to study these things in a sound way, controlling for all the incredibly complex and counterintuitive confounding variables. At the same time, it’s easy to produce endless amounts of plausible-looking junk studies. Thus, I’d expect that the mainstream research in this area is on average pure nonsense, with a few possible gems of solid insight hopelessly buried under it, and even when it comes to very extreme contrarians, I wouldn’t be tremendously surprised to see any one of them proven right in the end. My conclusion is similar when it comes to exercise and numerous other lifestyle issues.
Exceptions
Finally, what are the evident exceptions to these trends?
I can think of some exceptions to the low-hanging fruit heuristic. One is historical linguistics, whose standard well-substantiated methods have had great success in identifying the structure of the world’s language family trees, but give no answer at all to the fascinating question of how far back into the past the nodes of these trees reach (except of course when we have written evidence). Nobody has any good idea how to make progress there, and the questions are tantalizing. Now, there are all sorts of plausible-looking but fundamentally unsound methods that purport to answer these questions, and papers using them occasionally get published in prestigious non-linguistic journals, but the actual historical linguists firmly dismiss them as unsound, even though they have no answers of their own to offer instead. [7] It’s an example of a commendable stand against seductive nonsense.
It’s much harder to think of examples where the ideological interest heuristic fails. What field can one point out where mainstream scholarship is reliably sound and objective despite its topic being ideologically charged? Honestly, I can’t think of one.
What about the other direction -- fields that pass both heuristics but are nevertheless nonsense? I can think of e.g. artsy areas that don’t make much of a pretense to objectivity in the first place, but otherwise, it seems to me that absent ideological and venal perverse incentives, and given clear paths to progress that don’t require extraordinary genius, the modern academic system is great in producing solid and reliable insight. The trouble is that these conditions often don’t hold in practice.
I’d be curious to see additional examples that either confirm or disprove the heuristics I proposed.
Footnotes
[1] Commenter gwern has argued that the Bogdanoff affair is not a good example, claiming that the brothers were decisively exposed as frauds once they came under intense public scrutiny. However, even if this is true, the fact still remains that they initially managed to publish their work in reputable peer-reviewed venues and obtain doctorates at a reputable (though not top-ranking) university, which strongly suggests that there is much more work in the field that is equally bad but doesn't elicit equal public interest and thus never gets really scrutinized. Moreover, from my own reading about the affair, it was clear that in its initial phases several credentialed physicists were unable to make a clear judgment about their work. On the whole, I don’t think the affair can be dismissed as an insignificant accident.
[2] Moldbug’s "What’s wrong with CS research" is a witty and essentially accurate overview of this situation. He mostly limits himself to the discussion of programming language research, but a similar scenario can be seen in some other related fields too.
[3] Thomas Hobbes, Leviathan, Chapter XI.
[4] I have the impression that LW readers would mostly not be interested in a detailed discussion of the topics where I think one should read contrarian history, so I’m skipping it. In case I’m wrong, please feel free to open the issue in the comments.
[5] Oskar Morgenstern’s On the Accuracy of Economic Observations is a tour de force on the subject, demonstrating the essential meaninglessness of many sorts of numbers that economists use routinely. (Many thanks to the commenter realitygrill for directing me to this amazing book.) Morgenstern is of course far too prestigious a name to dismiss as a crackpot, so economists appear to have chosen to simply ignore the questions he raised, and his book has been languishing in obscurity and out of print for decades. It is available for download though (warning: ~31MB PDF).
[6] Some amusing lists of examples have been posted by the Heritage Foundation and the Number Watch (not intended to endorse the rest of the stuff on these websites). Admittedly, a lot of the stuff listed there is not real published research, but rather just people's media statements. Still, there's no shortage of similar things even in published research either, as a search of e.g. Google Scholar will show.
[7] Here is, for example, the linguist Bill Poser dismissing one such paper published in Nature a few years ago.