Bryan Caplan writes:

Almost all economic models assume that human beings are Bayesians...  It is striking, then, to realize that academic economists are not Bayesians.  And they're proud of it!

This is clearest for theorists.  Their epistemology is simple: Either something has been (a) proven with certainty, or (b) no one knows - and no intellectually respectable person will say more... 

Empirical economists' deviation from Bayesianism is more subtle.  Their epistemology is rooted in classical statistics.  The respectable researcher comes to the data an agnostic, and leaves believing "whatever the data say."  When there's no data that meets their standards, they mimic the theorists' snobby agnosticism.  If you mention "common sense," they'll scoff.  If you remind them that even classical statistics assumes that you can trust the data - and the scholars who study it - they harumph.

Robin Hanson offers an explanation:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds.  If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis.  And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

...beliefs must stay fixed until an impressive enough theorem or data analysis comes along that beliefs should change out of respect for that new display.  It also won’t do to keep beliefs pretty much the same when each new study hardly adds much evidence – that wouldn’t offer enough respect to the new display.

I wonder, what does this look like in the cross section?  In other words, relative to other academic disciplines, which have the strongest tendency to celebrate difficult work but ignore sound-yet-unimpressive work?  My hunch is that economics, along with most other social sciences, would be the worst offenders, while fields closer to engineering would be at the other end of the spectrum.  Engineers should be more concerned with truth, since whatever they build has to, you know, work.  What say you?  More importantly, anyone have any evidence?


I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds.

What does this theory predict, relative to the theory that people are interested in quality teaching and research and use reputation as a not-terribly-reliable proxy for it, since quality is too hard for most people to measure?

What does this theory predict, relative to the theory that people are interested in quality teaching and research and use reputation as a not-terribly-reliable proxy for it, since quality is too hard for most people to measure?

People in the US do not use prestige as a proxy for teaching, or at least that is quite inconsistent with their other claims. Everyone agrees that large research schools are bad at teaching and that at least some small schools are better. But very few people turn down Harvard to go to Williams, so they seem to admit to having other priorities than teaching.

There is more to learning than teaching, namely one's classmates. It may be a coordination issue: the good students want to be together, but it doesn't matter much where, so there are multiple equilibria; in France the undergrads want to go to different schools than the grad students do. (Robin's theory seems to predict that this shouldn't happen, but not terribly strongly.) ETA: also, in the US, liberal arts schools and state research universities largely flip prestige between the (undergrad) students and the faculty.

(Yes, it makes sense that journalists and grad students should look to prestige as a proxy for research.)

Everyone agrees that large research schools are bad at teaching and that at least some small schools are better.

"Everyone agrees" huh? Do you have any evidence for that? As far as I can tell correlation between prestige, research quality, and teaching quality is highly positive in Polish universities' computer science (that's the only kind I know closely, for everything else I would just guess their quality from their prestige).

I would say there is a general positive correlation between teaching quality, research quality, and prestige, with some exceptions for smaller schools that specifically focus on quality undergraduate education (like Princeton). But don't be fooled by a college saying that its classes are better because it's smaller; you actually need to attend classes at both and compare. People can be very proud that their school has 'great' lectures in what I would consider high-school-level biology, simply because their professor is 'fun.'

But don't be fooled by a college saying that its classes are better because it's smaller ... People can be very proud that their school has 'great' lectures in what I would consider high-school-level biology, simply because their professor is 'fun.'

It seems to me that you are mainly objecting to people caring about teaching quality rather than disagreeing with their assessment of it. Maybe people are fools to care about class size and student evaluations, but they appear to care, unless I'm confusing consoling lies with actual advice.

Yes, curriculum matters, but that is predicted much more by student quality than by professor quality: the two kinds of prestige diverge.

See the rest of the quote. Also from Robin's post:

Relative to the Bayesians that academic economic theorists typically assume populate the world, real academics over-react or under-react to evidence, as needed to show respect for impressive academic displays. This helps assure the customers of academia that by affiliating with the most respected academics, they are affiliating with very impressive minds.

These look more like classical statistics vs Bayesian statistics than anything status-related.

I haven't seen any science run in a Bayesian way, academic, commercial, or whatnot, and I have no idea what it would really look like, in spite of its theoretical appeal.
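For concreteness, here is a minimal sketch of the contrast being pointed at, using a toy setup of my own (the likelihood ratios and the "impressiveness" cutoff are made up, not taken from the post): a Bayesian belief drifts a little with every study, however weak, while an updater who only moves out of respect for impressive displays stays put until a single result clears the bar.

```python
# Toy illustration (not from the post): belief in a hypothesis H across a
# sequence of studies, under (a) sequential Bayesian updating on each study's
# likelihood ratio, however weak, versus (b) a rule that only updates when a
# single study is "impressive" enough.

def bayes_update(prob, likelihood_ratio):
    """Posterior P(H) after evidence with the given P(E|H)/P(E|not H)."""
    prior_odds = prob / (1 - prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical likelihood ratios: mostly weak evidence for H, one strong result.
studies = [1.3, 1.5, 1.2, 8.0, 1.4, 1.3]

bayesian, impressed_only = 0.5, 0.5
for lr in studies:
    bayesian = bayes_update(bayesian, lr)
    if lr >= 5.0:  # only "impressive" studies move this updater
        impressed_only = bayes_update(impressed_only, lr)
    print(f"LR={lr:4.1f}  Bayesian={bayesian:.2f}  impressive-only={impressed_only:.2f}")
```

Under these toy numbers the Bayesian ends up noticeably more confident than the impressive-only updater, even though both saw the same studies; whether real fields behave more like (a) or (b) is roughly the empirical question the post is asking.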

...beliefs must stay fixed until an impressive enough theorem or data analysis comes along that beliefs should change out of respect for that new display. It also won’t do to keep beliefs pretty much the same when each new study hardly adds much evidence – that wouldn’t offer enough respect to the new display.

If we believe that the sciences are systematically irrational, then isn't this the rational thing to do? To wait for convincing, irrefutable evidence, and after a certain point treat confirmatory evidence as adding nothing?

If scientists are herd-followers and affiliators (both socially and due to institutional pressures), then after X studies showing a link between HIV and AIDS, say, study X+1 adds nothing, because the people conducting it know exactly what they're supposed to find and have no incentive to show the opposite unless they have irrefutably strong HIV!=AIDS evidence.

For perfect Bayesians, even murky or weak evidence shifts one's beliefs; but in the real world, murky or weak evidence against the common wisdom just makes you look ideologically driven, or like a young Turk who wants publicity (any publicity at all). Knowing this, scientists will avoid publishing weak evidence for unpopular conclusions, which means that only the ideologically driven and the attention-seekers will publish it, which in turn reinforces other scientists' reluctance to touch weak unpopular evidence, in a feedback loop. So only very strong evidence will break through the noise of irrationality.

This is the standard "herding" hypothesis: that public behavior ignores private signals once the public signals have become lopsided enough.

Alas, there is nothing new under the sun. I'm guessing the herding hypothesis also says that only very strong private signals can override the public ones. So, if this is an old hypothesis well known to you, why would you then lament the herding? If herding is the case, then not updating (much) after a certain point gives you better results than continuing to update, doesn't it? And if it does, then wouldn't that 'win' and be the rational thing to do given the circumstances?

(Alternate question: if not-updating is rational, why resort to social signalling explanations for the not-updating? Social signalling may explain how the herding starts and perpetuates itself, but there's no need to drag it in as an explanation for not-updating as well.)
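For readers who haven't met it, here is a minimal sketch of the standard binary-signal information-cascade model that "herding" usually refers to (à la Bikhchandani, Hirshleifer, and Welch); the signal accuracy and number of agents below are arbitrary choices of mine. Each agent gets one noisy private signal about which of two states is true, sees only the choices of earlier agents, and picks whichever state its posterior favors. Once the public history outweighs any single private signal, choices stop revealing signals, and later agents herd.

```python
import random

random.seed(0)

def simulate_cascade(n_agents=20, p_correct=0.7, true_state="H"):
    """Agents choose "H" or "L" in sequence. Each gets a private signal that
    matches true_state with probability p_correct, observes predecessors'
    choices (not their signals), and picks whichever state its posterior
    favors, breaking ties by following its own signal. Informative choices
    reveal the chooser's signal; once the revealed signals lean two or more
    to one side, a single opposing private signal can no longer flip the
    decision and a cascade begins."""
    diff = 0  # (# revealed H-signals) - (# revealed L-signals)
    for i in range(1, n_agents + 1):
        wrong = "L" if true_state == "H" else "H"
        signal = true_state if random.random() < p_correct else wrong
        if abs(diff) >= 2:
            # Public evidence swamps any single private signal: herd.
            choice, informative = ("H" if diff > 0 else "L"), False
        else:
            total = diff + (1 if signal == "H" else -1)
            choice = "H" if total > 0 else "L" if total < 0 else signal
            informative = True
            diff += 1 if choice == "H" else -1  # choice reveals the signal here
        note = "informative" if informative else "cascade: own signal ignored"
        print(f"agent {i:2d}: signal={signal} choice={choice} ({note})")

simulate_cascade()
```

With these numbers a cascade typically starts within the first handful of agents. Inside the cascade, actions carry no information, so refusing to update on them is indeed the locally rational response; but rerunning with other seeds can also turn up "wrong" cascades, where everyone herds on the state the signals don't favor, which is the usual reason to lament herding anyway.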

This looks like the "Science vs Bayes" distinction to me.

Science works hugely better than random crackpottery, but is also very far from optimal.

If you can't trust yourself to update on evidence, then go with science. If you can (you're here, aren't you?) then updating will leave you better off.

You can always limit yourself to updating only in the most obvious cases that science misses, and do marginally better.

You can always limit yourself to updating only in the most obvious cases that science misses, and do marginally better.

No doubt that this is what many scientists do - 'this is what I really think, but I'll admit it's not generally accepted'. But I'd put the emphasis on updating only in the obvious cases and otherwise trusting in science, because how many areas of science can one really know well enough to do better than the subject-area consensus?

Academic engineers can be useful, but so can social scientists, if they so choose. The point is that academics have other pressures besides being useful, and this can apply to engineers as well as social scientists. Non-academic engineers and economists must both be useful somehow to someone, but that is a different matter.

Do you think the effect of the 'other pressures' academics feel is the same for all disciplines? Or are there other factors that increase or decrease that effect?

It is unlikely to be exactly the same, but it seems hard to measure the differences. Fields in which academics are hired more often for their directly useful knowledge than for their prestige tend to be less prestigious fields, I think, so I'd guess that might be one clue, but a weak one.