Ah, I think I see the problem. It seems that you are acting under the assumption that a conscious declaration of being "convinced" should cause you to act as if the claim in question has probability 1. Thus, one shouldn't say one is "convinced" unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
Not at all. In fact I pointed out that my account of being "convinced" is continuous with Pascal's Wager, and Pascal argued in favor of believing on the basis of close to zero probability. As the Stanford Encyclopedia introduces the wager:
“Pascal's Wager” is the name given to an argument due to Blaise Pascal for believing, or for at least taking steps to believe, in God.
Everyone is familiar with it of course. I only quote the Stanford to point out that it was in fact about "believing". And of course nobody gets into heaven without believing. So Pascal wasn't talking about merely making a bet without an accompanying belief. He was talking about belief; he must have been saying that you should believe in God even though there is no evidence of God.
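The decision-theoretic core of the wager can be sketched numerically. A minimal illustration, with the caveat that the specific numbers are arbitrary stand-ins (Pascal's actual payoff is infinite, which a finite computation can only approximate with a very large value):

```python
# Expected-utility comparison behind Pascal's Wager, with an arbitrarily
# large finite payoff standing in for Pascal's infinite reward.
def expected_utility(p_god, payoff_heaven, cost_of_belief):
    # Believing: win payoff_heaven with probability p_god,
    # and pay cost_of_belief regardless of the outcome.
    return p_god * payoff_heaven - cost_of_belief

p_god = 1e-6            # "close to zero probability"
payoff_heaven = 1e12    # finite stand-in for an infinite reward
cost_of_belief = 1.0    # finite worldly cost of believing

believe = expected_utility(p_god, payoff_heaven, cost_of_belief)
dont_believe = 0.0      # no cost paid, no chance at the payoff

print(believe > dont_believe)  # prints True
```

The point of the sketch is that for any fixed finite cost and any nonzero probability, a sufficiently large payoff makes believing dominate, which is why the wager can recommend belief "on the basis of close to zero probability."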
I would hesitantly suggest that for most questions if one can't conceive easily of what such evidence would look like then one probably hasn't thought much about the matter.
The issue is two-fold: whether mathematicians are less interested in elementary proofs than before, and if they are, why. So, how would you go about checking to see whether mathematicians are less interested in elementary proofs? What if they do fewer elementary proofs? That might simply be because there aren't elementary proofs left to do. So you would need to deal with that possibility. How would you do that? Would you survey mathematicians? But a survey would give little confidence to someone who suspects mathematicians of being less interested.
As part of the reason "why", one possible answer is, "because elementary proofs aren't that important, really." I mean, it might be the right thing. How would I know whether it was the right thing? I'm not sure. I'm not sure that it's not a matter of preference. Well, maybe elementary proofs have a better track record of not ultimately being overturned. How would we check that? Sounds hard.
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it?
Well, as I recall, his actual claim was that liberalism causes mathematicians to evade accountability, and part of that evasion is abandoning the search for elementary proofs. So one question to ask is whether liberalism causes a person to evade accountability. There is a lot about liberalism that can arguably be connected to evasion of personal accountability. The specific question is whether liberalism would cause mathematicians to evade mathematical accountability - that is, accountability in accordance with traditional standards of mathematics. If so, this would be part of a more general tendency of liberal academics, liberal thinkers, to seek to avoid personal accountability.
In order to answer this I really think we need to come up with an account of what, exactly, liberalism is. A lot of people have put a lot of work into coming up with an account of what liberalism is, and each person comes up with a different account. For example there is Thomas Sowell's account of liberals in his Conflict of Visions.
What, exactly, liberalism is, would greatly affect the answer to the question of whether liberalism accounts for the avoidance (if it exists) of personal accountability.
I will go ahead and give you just one, highly speculative, account of liberalism and its effect on academia. Here goes. Liberalism is the ideology of a certain class of people, and the ideology grows in part out of the class. We can think of it as a religion, which is somewhat adapted to the people it occurs in, just as Islam is (presumably) somewhat adapted to the Middle East, and so on. Among other things, liberalism extols bureaucracy, such as by preferring regulation of the marketplace, which is rule by bureaucrats over the economy. This is in part connected to the fact that liberalism is the ideology of bureaucrats. However, internally, bureaucracy grows in accordance with a logic that is connected to the evasion of personal responsibility by bureaucrats. If somebody does something foolish and gets smacked for it, the bureaucratic response is to establish strict rules to which all must adhere. Now the next time something foolish is done, the person can say, "I'm following the rules", which he is. It is the rules which are foolish. But the rules aren't any person. They can't be smacked. Voila - evasion of personal responsibility. This is just one tiny example.
So, to recap, liberalism is the ideology of bureaucracy, and extols bureaucracy, and bureaucracy is in no small part built around the ideal of the avoidance of personal responsibility. One is, of course, still accountable in some way - but the nature of the accountability is radically different. One is now accountable for following the intricate rules of the bureaucracy to the letter. One is not personally accountable for the real-world disasters that are produced by bureaucracy which has gone on too long.
The liberal mindset, then, is the bureaucratic mindset, and the bureaucratic mindset revolves around the evasion of personal accountability, or at least has a strong element of such evasion.
Now we get to the universities. The public universities are already part of the state. The professors work for the state. They are bureaucratized. What about private universities? They are also largely connected with the state, especially insofar as professors get grants from the state. Long story short, academic science has turned into a vast bureaucracy, scientists have turned into bureaucrats. Scientific method has been replaced by such things as "peer review", which is a highly bureaucratized review by anonymous (and therefore unaccountable) peers. Except that the peers are accountable - though not to the truth. They are accountable to each other and to the writers they are reviewing, much as individual departments within a vast bureaucracy are filled with people who are accountable - to each other. What we get is massive amounts of groupthink, echo chamber, nobody wanting to rock the boat, same as we get in bureaucracy.
So now we get to mathematicians.
Within a bureaucracy, your position is safe and your work is easy. There are rules, probably intricate rules, but as long as you follow the rules, and as long as you're a team player, you can survive. You don't actually have to produce anything valuable. The rules are originally intended to guide the production of valuable goods, but in the end, just as industries capture their regulatory authority, so do bureaucrats capture the rules they work under. So they push a lot of paper but accomplish nothing.
I mean, here's a prediction from this theory: we should see a lot of trivial papers published, papers that don't really advance the field in any significant way but merely add to the count of papers published.
And in fact this is what we see. So the theory is confirmed! Not so fast - I already knew about the academic paper situation, so maybe I concocted a theory that was consistent with this.
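The worry here is a standard Bayesian one: evidence that a theory was built to accommodate confirms it far more weakly than evidence it predicted in advance. A small illustration of this via Bayes' theorem, with purely illustrative numbers:

```python
# Why accommodation confirms weakly: if the theory was constructed around
# known evidence, the evidence was near-certain to "fit" whether or not
# the theory is true, so the likelihood ratio is close to 1.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.1

# Genuine prediction: the evidence would be surprising if H were false.
print(posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.1))   # 0.5

# Accommodation: the evidence fits almost as well even if H is false.
print(posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.85))  # ~0.105
```

In the second case the posterior barely moves from the prior of 0.1, which is exactly the "I already knew about the academic paper situation" concern: fitting known facts is cheap.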
It seems that Pascal's Wager is a particularly difficult example to work with, since it involves a hypothesized entity that actively rewards one for assigning a higher probability to that hypothesis.
I'm not sure what a good definition of "liberalism" is, but the definition you use seems to mean something closer to bureaucratic authoritarianism, which obviously isn't the same, given that most self-identified liberals want less government involvement in many family-related issues (e.g. gay marriage). It is likely that there is no concise definition.
(This post is an expanded version of a LW comment I left a while ago. I have found myself referring to it so much in the meantime that I think it’s worth reworking into a proper post. Some related posts are "The Correct Contrarian Cluster" and "What is Bunk?")
When looking for information about some area outside of one’s expertise, it is usually a good idea to first ask what academic scholarship has to say on the subject. In many areas, there is no need to look elsewhere for answers: respectable academic authors are the richest and most reliable source of information, and people claiming things completely outside the academic mainstream are almost certain to be crackpots.
The trouble is, this is not always the case. Even those whose view of the modern academia is much rosier than mine should agree that it would be astonishing if there didn’t exist at least some areas where the academic mainstream is detached from reality on important issues, while much more accurate views are scorned as kooky (or would be if they were heard at all). Therefore, depending on the area, the fact that a view is way out of the academic mainstream may imply that it's bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.
I will discuss some heuristics that, in my experience, provide a realistic first estimate of how sound the academic mainstream in a given field is likely to be, and how justified one would be to dismiss contrarians out of hand. These conclusions have come from my own observations of research literature in various fields and some personal experience with the way modern academia operates, and I would be interested in reading others’ opinions.
Low-hanging fruit heuristic
As the first heuristic, we should ask if there is a lot of low-hanging fruit available in the given area, in the sense of research goals that are both interesting and doable. If yes, this means that there are clear paths to quality work open for reasonably smart people with an adequate level of knowledge and resources, which makes it unnecessary to invent clever-looking nonsense instead. In this situation, smart and capable people can just state a sound and honest plan of work on their grant applications and proceed with it.
In contrast, if a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change. What will likely happen instead is that they’ll continue producing output that will have all the superficial trappings of science and sound scholarship, but will in fact be increasingly pointless and detached from reality.
Arguably, some areas of theoretical physics have reached this state, if we are to trust the critics like Lee Smolin. I am not a physicist, and I cannot judge directly if Smolin and the other similar critics are right, but some powerful evidence for this came several years ago in the form of the Bogdanoff affair, which demonstrated that highly credentialed physicists in some areas can find it difficult, perhaps even impossible, to distinguish sound work from a well-contrived nonsensical imitation. [1]
Somewhat surprisingly, another example is presented by some subfields of computer science. With all the new computer gadgets everywhere, one would think that no other field could be further from a stale dead end. In some of its subfields this is definitely true, but in others, much of what is studied is based on decades-old major breakthroughs, and the known viable directions from there have long since been explored until they hit against some fundamentally intractable problem. (Or alternatively, further progress is a matter of hands-on engineering practice that doesn't lend itself to the way academia operates.) This has led to a situation where a lot of the published CS research is increasingly distant from reality, because to keep up the illusion of progress, it must pretend to solve problems that are basically known to be impossible. [2]
Ideological/venal interest heuristic
Bad as they might be, the problems that occur when clear research directions are lacking pale in comparison with what happens when things under discussion are ideologically charged or a matter in which powerful interest groups have a stake. As Hobbes remarked, people agree about theorems of geometry not because their proofs are solid, but because "men care not in that subject what be truth, as a thing that crosses no man’s ambition, profit, or lust." [3]
One example is the cluster of research areas encompassing intelligence research, sociobiology, and behavioral genetics, which touches on a lot of highly ideologically charged questions. These pass the low-hanging fruit heuristic easily: the existing literature is full of proposals for interesting studies waiting to be done. Yet, because of their striking ideological implications, these areas are full of work clearly aimed at advancing the authors’ non-scientific agenda, and even after a lot of reading one is left in confusion over whom to believe, if anyone. It doesn’t even matter whose side one supports in these controversies: whichever side is right (if any one is), it’s simply impossible that there isn’t a whole lot of nonsense published in prestigious academic venues and under august academic titles.
Yet another academic area that suffers from the same problems is the history of the modern era. On many significant events from the last two centuries, there is a great deal of documentary evidence lying around still waiting to be assessed properly, so there is certainly no lack of low-hanging fruit for a smart and diligent historian. Yet due to the clear ideological implications of many historical topics, ideological nonsense cleverly masquerading as scholarship abounds. I don’t think anything resembling an accurate world history of the last two centuries could be written without making a great many contrarian claims. [4] In contrast, on topics that don't arouse ideological passions, modern histories are often amazingly well researched and free of speculation and distortion. (In particular, if you are from a small nation that has never really been a player in world history, your local historians are likely to be full of parochial bias motivated by the local political quarrels and grievances, but you may be able to find very accurate information on your local history in the works of foreign historians from the elite academia.)
On the whole, it seems to me that failing the ideological interest test suggests a much worse situation than failing the low-hanging fruit test. The areas affected by just the latter are still fundamentally sound, and tend to produce work whose contribution is way overblown, but which is still built on a sound basis and internally coherent. Even if outright nonsense is produced, it’s still clearly distinguishable with some effort and usually restricted to less prestigious authors. Areas affected by ideological biases, however, tend to drift much further into outright delusion, possibly lacking a sound core body of scholarship altogether.
[Paragraphs below added in response to comments:]
What about the problem of purely venal influences, i.e. the cases where researchers are under the patronage of parties that have stakes in the results of their research? On the whole, the modern Western academic system is very good at discovering and stamping out clear and obvious corruption and fraud. It's clearly not possible for researchers to openly sell their services to the highest bidder; even if there are no formal sanctions, their reputation would be ruined. However, venal influences are nevertheless far from nonexistent, and a fascinating question is under what exact conditions researchers are likely to fall under them and get away with it.
Sometimes venal influences are masked by scams such as setting up phony front organizations for funding, but even that tends to be discovered eventually and tarnish the reputations of the researchers involved. What seems to be the real problem is when the beneficiaries of biased research enjoy such status in the eyes of the public and such legal and customary position in society that they don't even need to hide anything when establishing a perverse symbiosis that results in biased research. Such relationships, while fundamentally representing venal interest, are in fact often boasted about as beneficial and productive cooperation. Pharmaceutical research is an often cited example, but I think the phenomenon is in fact far more widespread, and reaches the height of perverse perfection in those research communities whose structure effectively blends into various government agencies.
The really bad cases: failing both tests
So far, I've discussed examples where one of the mentioned heuristics returns a negative answer, but not the other. What happens when a field fails both of them, having no clear research directions and at the same time being highly relevant to ideologues and interest groups? Unsurprisingly, it tends to be really bad.
The clearest example of such a field is probably economics, particularly macroeconomics. (Microeconomics covers an extremely broad range of issues deeply intertwined with many other fields, and its soundness, in my opinion, varies greatly depending on the subject, so I’ll avoid a lengthy digression into it.) Macroeconomists lack any clearly sound and fruitful approach to the problems they wish to study, and any conclusion they might draw will have immediately obvious ideological implications, often expressible in stark "who-whom?" terms.
And indeed, even a casual inspection of the standards in this field shows clear symptoms of cargo-cult science: weaving complex and abstruse theories that can be made to predict everything and nothing, manipulating essentially meaningless numbers as if they were objectively measurable properties of the real world [5], experts with the most prestigious credentials dismissing each other as crackpots (in more or less diplomatic terms) when their favored ideologies clash, etc., etc. Fringe contrarians in this area (most notably extreme Austrians) typically have silly enough ideas of their own, but their criticism of the academic mainstream is nevertheless often spot-on, in my opinion.
Other examples
So, what are some other interesting case studies for these heuristics?
An example of great interest is climate science. Clearly, the ideological interest heuristic raises a big red flag here, and indeed, there is little doubt that a lot of the research coming out in recent years that supposedly links "climate change" with all kinds of bad things is just fashionable nonsense [6]. (Another sanity check it fails is that only a tiny proportion of these authors ever hypothesize that the predicted/observed climate change might actually improve something, as if there existed some law of physics prohibiting it.) Thus, I’d say that contrarians on this issue should definitely not be dismissed out of hand; the really hard question is how much sound insight (if any) remains after one eliminates all the nonsense that’s infiltrated the mainstream. When it comes to the low-hanging fruit heuristic, I find the situation less clear. How difficult is it to achieve progress in accurately reconstructing long-term climate trends and forecasting the influences of increasing greenhouse gases? Is it hard enough that we’d expect, even absent an ideological motivation, that people would try to substitute cleverly contrived bunk for unreachable sound insight? My conclusion is that I’ll have to read much more on the technical background of these subjects before I can form any reliable opinion on these questions.
Another example of practical interest is nutrition. Here ideological influences aren’t very strong (though not altogether absent either). However, the low-hanging fruit heuristic raises a huge red flag: it’s almost impossible to study these things in a sound way, controlling for all the incredibly complex and counterintuitive confounding variables. At the same time, it’s easy to produce endless amounts of plausible-looking junk studies. Thus, I’d expect that the mainstream research in this area is on average pure nonsense, with a few possible gems of solid insight hopelessly buried under it, and even when it comes to very extreme contrarians, I wouldn’t be tremendously surprised to see any one of them proven right in the end. My conclusion is similar when it comes to exercise and numerous other lifestyle issues.
Exceptions
Finally, what are the evident exceptions to these trends?
I can think of some exceptions to the low-hanging fruit heuristic. One is in historical linguistics, whose standard well-substantiated methods have had great success in identifying the structure of the world’s language family trees, but give no answer at all to the fascinating question of how far back into the past the nodes of these trees reach (except of course when we have written evidence). Nobody has any good idea how to make progress there, and the questions are tantalizing. Now, there are all sorts of plausible-looking but fundamentally unsound methods that purport to answer these questions, and papers using them occasionally get published in prestigious non-linguistic journals, but the actual historical linguists firmly dismiss them as unsound, even though they have no answers of their own to offer instead. [7] It’s an example of a commendable stand against seductive nonsense.
It’s much harder to think of examples where the ideological interest heuristic fails. What field can one point out where mainstream scholarship is reliably sound and objective despite its topic being ideologically charged? Honestly, I can’t think of one.
What about the other direction -- fields that pass both heuristics but are nevertheless nonsense? I can think of e.g. artsy areas that don’t make much of a pretense to objectivity in the first place, but otherwise, it seems to me that absent ideological and venal perverse incentives, and given clear paths to progress that don’t require extraordinary genius, the modern academic system is great at producing solid and reliable insight. The trouble is that these conditions often don’t hold in practice.
I’d be curious to see additional examples that either confirm or disprove the heuristics I proposed.
Footnotes
[1] Commenter gwern has argued that the Bogdanoff affair is not a good example, claiming that the brothers were decisively shown to be frauds once they came under intense public scrutiny. However, even if this is true, the fact remains that they initially managed to publish their work in reputable peer-reviewed venues and obtain doctorates at a reputable (though not top-ranking) university, which strongly suggests that there is much more work in the field that is equally bad but doesn't elicit equal public interest and thus never gets really scrutinized. Moreover, from my own reading about the affair, it was clear that in its initial phases several credentialed physicists were unable to make a clear judgment about their work. On the whole, I don’t think the affair can be dismissed as an insignificant accident.
[2] Moldbug’s "What’s wrong with CS research" is a witty and essentially accurate overview of this situation. He mostly limits himself to the discussion of programming language research, but a similar scenario can be seen in some other related fields too.
[3] Thomas Hobbes, Leviathan, Chapter XI.
[4] I have the impression that LW readers would mostly not be interested in a detailed discussion of the topics where I think one should read contrarian history, so I’m skipping it. In case I’m wrong, please feel free to open the issue in the comments.
[5] Oskar Morgenstern’s On the Accuracy of Economic Observations is a tour de force on the subject, demonstrating the essential meaninglessness of many sorts of numbers that economists use routinely. (Many thanks to the commenter realitygrill for directing me to this amazing book.) Morgenstern is of course far too prestigious a name to dismiss as a crackpot, so economists appear to have chosen to simply ignore the questions he raised, and his book has been languishing in obscurity and out of print for decades. It is available for download though (warning: ~31MB PDF).
[6] Some amusing lists of examples have been posted by the Heritage Foundation and the Number Watch (not intended to endorse the rest of the stuff on these websites). Admittedly, a lot of the stuff listed there is not real published research, but rather just people's media statements. Still, there's no shortage of similar things even in published research either, as a search of e.g. Google Scholar will show.
[7] Here is, for example, the linguist Bill Poser dismissing one such paper published in Nature a few years ago.