passive_fist comments on Open Thread, May 25 - May 31, 2015 - Less Wrong Discussion
What else does it reject?
I think it's important to look at this on a per-discipline basis. Some disciplines have much higher standards of clarity, precision, and repeatability than others. That article you linked looks at statistical studies with a special focus on medical research, but then seems to make the critical error of generalizing this to all scientific research. Do the findings apply to physics? Math? Computer science?
Different fields use different methods. The basic point Ioannidis makes applies to any field which uses null-hypothesis significance-testing statistics for interpreting sampled data.
Ecology, medicine, biology, psychology, and economics are heavy NHST users, so the critique definitely applies.
Computer science is a trickier case.
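Ioannidis's core argument can be sketched numerically. Under NHST, the probability that a "significant" finding is actually true (the positive predictive value, PPV) depends not just on the significance threshold but on the prior fraction of tested hypotheses that are true, which varies by field. The function name, parameter values, and priors below are illustrative assumptions, not figures from Ioannidis's paper:

```python
# Toy model of why "most published findings" can be false under NHST:
# PPV = P(hypothesis true | significant result), computed from the prior
# rate of true hypotheses, statistical power, and the alpha threshold.

def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value of a statistically significant result.

    prior: fraction of tested hypotheses that are actually true
    power: probability a true effect reaches significance (1 - beta)
    alpha: false-positive rate (significance threshold)
    """
    true_positives = power * prior        # true effects that test significant
    false_positives = alpha * (1 - prior) # null effects that test significant
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior={prior:>5}: PPV={ppv(prior):.2f}")
```

With these assumed numbers, PPV drops from about 0.94 at a 50% prior to about 0.14 at a 1% prior: once fewer than roughly 6% of a field's tested hypotheses are true, most of its significant findings are false, even with decent power and a standard alpha.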
It would be interesting to weight fields by publication count to see if Ioannidis's title, interpreted literally, is still right. When one criticizes ecology, medicine, biology, psychology, and economics, one is criticizing what must be at least hundreds of thousands of papers every year - those are big fields. I don't know that math, physics, theoretical CS, etc. publish enough papers to offset that.
I agree 100%.
I see papers get rejected all the time for methodological disagreements and failure to cite papers the referee thinks important. More broadly, ideas that are perfectly plausible but contrary to current thinking in a field have a much higher threshold to publication than ideas consonant with current thinking.
But more generally, peer review is normally explicitly aimed at rejecting work judged to be non-novel or non-substantial. That boring replication attempts can't get published should therefore be seen as a feature, not a bug. The ability of academics to publish novel, counter-intuitive, and false results should therefore also be seen as a feature, not a bug.
Oh, I'm sure some disciplines are worse than others. But as you seem to be tacitly conceding, "the vast majority of scientific output never undergoes real review," and that's true in all disciplines.