gwern comments on The Academic Epistemology Cross Section: Who Cares More About Status? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (15)
If we believe that the sciences are systematically irrational, then isn't this the rational thing to do? To wait for convincing, irrefutable evidence, and after a certain point treat confirmatory evidence as adding nothing?
If scientists are herd-followers, affiliating both socially and under institutional pressure, then after X studies showing a link between HIV and AIDS, say, study X+1 adds nothing: the researchers conducting it know exactly what result they're supposed to get, and have no incentive to report the opposite unless they hold irrefutably strong HIV!=AIDS evidence.
For perfect Bayesians, even murky or weak evidence shifts one's beliefs; but in the real world, murky or weak evidence against the common wisdom just makes you look ideologically driven, or like a young Turk who wants publicity (any publicity at all). Knowing this, scientists will avoid weak evidence which is unpopular, which means that only the ideologically driven or the attention-seekers will publish it, which reinforces why other scientists avoid weak unpopular evidence, in a feedback loop. So only very strong evidence will break through the noise of irrationality.
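To make the "perfect Bayesian" half of this concrete, here is a minimal sketch, taking a likelihood ratio of 1.2 as an illustrative stand-in for "weak evidence" (the number is my assumption, not anything from the post): even individually feeble results, if independent, compound into a substantial shift.

```python
def update(prior, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = 0.5
for _ in range(10):      # ten independent, individually weak results
    p = update(p, 1.2)   # an LR of 1.2 only barely favours the hypothesis
print(round(p, 3))       # → 0.861
```

A single such result moves the posterior only from 0.50 to about 0.55; ten of them push it past 0.86. The real-world failure mode in the comment is that those ten weak results never get published in the first place.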
This is the standard "herding" hypothesis: public behavior ignores private signals once public signals have become lopsided enough.
Alas, there is nothing new under the sun. I'm guessing the herding hypothesis also says that only very strong private signals can override the public ones. So, if this is an old hypothesis well-known to you, why lament the herding? If herding is the case, then not updating (much) after a certain point gives you better results than continuing to update, doesn't it? And if it does, wouldn't that 'win' and be the rational thing to do given the circumstances?
(Alternate question: if not-updating is rational, why resort to social signalling explanations for the not-updating? Social signalling may explain how the herding starts and perpetuates itself, but there's no need to drag it in as an explanation for not-updating as well.)
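The herding dynamic being discussed can be sketched as a simulation of the standard sequential-choice model (Bikhchandani, Hirshleifer & Welch's information cascade): each agent sees all earlier announcements plus one private binary signal that is correct with probability q > 0.5, and announces whichever state the combined evidence favours. The function name and the illustrative signal sequences below are my own, not from the comment.

```python
import math

def announcements(signals, q):
    """Announcements of sequential Bayesian agents, each combining the
    public record with one private signal of reliability q (ties broken
    by following the private signal). Once the public record runs two
    announcements ahead on one side, no single private signal can tip
    the posterior, so every later agent herds and their announcements
    carry no information."""
    step = math.log(q / (1 - q))  # log-likelihood weight of one signal
    public = 0.0                  # public log-odds in favour of state 1
    out = []
    for s in signals:
        if public >= 2 * step:        # "up" cascade: signal irrelevant
            out.append(1)
        elif public <= -2 * step:     # "down" cascade
            out.append(0)
        else:                         # private signal is still decisive
            out.append(s)
            public += step if s == 1 else -step
    return out

# Two early "link found" results start a cascade; the four contrary
# private signals that follow never show up in public behaviour.
print(announcements([1, 1, 0, 0, 0, 0], q=0.7))  # → [1, 1, 1, 1, 1, 1]
```

This captures both claims above: past the two-announcement threshold, further confirmatory announcements add nothing (they are made regardless of private evidence), and only evidence strong enough to outweigh the whole public record could break the cascade.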
This looks like the "Science vs Bayes" distinction to me.
Science works hugely better than random crackpottery, but is also very far from optimal.
If you can't trust yourself to update on evidence, then go with science. If you can (you're here, aren't you?) then updating will leave you better off.
You can always limit yourself to updating in only the most obvious cases that science misses, and do marginally better.
No doubt this is what many scientists do - 'this is what I really think, but I'll admit it's not generally accepted'. But I'd put the emphasis on updating only in the obvious cases and otherwise trusting in science, because how many areas of science can one really know well enough to do better than the subject-area consensus?