HungryHobo comments on Link: Evidence-Based Medicine Has Been Hijacked - Less Wrong
Having been through some of that process... it's less than stellar.
That recent "creator" paper somehow made it through peer review, and in the past it's been clear to me that reviewers sometimes have no clue about what they've been asked to review and just wave it through with a few requests for spelling and grammar corrections.
To an extent it's a very similar problem to ones faced in programming and engineering: asking for more feedback after the work is already done is just the waterfall model applied to research.
To an extent, even if researchers weren't asked to publicly post their pre-registrations, getting them to actually work out in advance what they plan to measure is a little like getting programmers to adopt Test Driven Development (write the tests, then write the code), which tends to produce higher-quality output.
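For anyone unfamiliar with the analogy, here's a minimal TDD sketch in Python (the `mean` function and its test are purely illustrative, not from any real project):

```python
# Step 1: write the test FIRST. At this point mean() doesn't exist,
# so running test_mean() fails -- that failure is the point: the test
# pins down what "correct" means before any implementation exists.
def test_mean():
    assert mean([1, 2, 3]) == 2
    assert mean([10]) == 10

# Step 2: write just enough code to make the test pass.
def mean(values):
    return sum(values) / len(values)

# Step 3: run the test to confirm the implementation matches the spec.
test_mean()
```

The parallel to pre-registration: the acceptance criteria are fixed before the work is done, so you can't quietly redefine success after seeing the result.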
Despite those 8 years, a lot of people still don't really know what they're doing in research and just sort of ape their supervisor (who may have been in the same situation).
Since the system is still half-modeled on the old medieval master-journeyman-apprentice system, you also get massive variation in ability and competence, so simply trusting that people are highly qualified isn't very reliable.
The simplest way to illustrate the problem is to point to really basic stats errors which make it into huge portions of the literature. Basic errors which have made it past supervisors, past reviewers, past editors. Past many people with PhDs, and not one picked up on them.
(This is just an example, there are many many other basic errors made constantly in research)
http://www.badscience.net/2011/10/what-if-academics-were-as-dumb-as-quacks-with-statistics/
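One of the errors discussed at that link is treating "effect A is significant, effect B is not" as evidence that A and B differ, instead of directly testing the difference. A quick numerical sketch (the effect sizes and standard errors below are made-up illustrative numbers, not from the linked post):

```python
import math

# Hypothetical estimates from two groups (illustrative numbers only).
effect_a, se_a = 25.0, 10.0   # z = 2.5  -> "significant" at the usual threshold
effect_b, se_b = 10.0, 10.0   # z = 1.0  -> "not significant"

z_a = effect_a / se_a
z_b = effect_b / se_b

# The correct question: is the DIFFERENCE between the effects significant?
diff = effect_a - effect_b
se_diff = math.sqrt(se_a**2 + se_b**2)  # standard error of a difference
z_diff = diff / se_diff                 # ~1.06 -> not significant
```

So one effect clears the significance threshold and the other doesn't, yet the difference between them is nowhere near significant. Concluding "A works and B doesn't" from these numbers is exactly the kind of basic error that routinely survives review.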
It makes sense when you realize that many people simply ape their supervisors and the existing literature. When bad methods make it into a paper, people copy those methods without ever considering whether they're obviously incorrect.
I'm arguing we need more feedback rather than more filtering.
You're arguing that the new filtering will be more effective than the old filtering, and as proof, here are all the ways the old filtering method has failed.
But pointing out that filtering didn't work in the past is not a criticism of my argument that we need more feedback, such as objective post-publication reviews of articles. I never argued that the old filtering method works.
If you believe the old filtering method isn't a stringent filter, do you believe it wouldn't make much difference if we removed it and let anybody publish anywhere without peer review, as long as they preregistered their study? Would this produce an improvement?
I think you also need to contend with the empirical evidence from COMPare that preregistration (the new filtering method you support) hasn't been effective so far.
I think more stringent filtering can increase reliability, but doing so will also increase wastefulness. Feedback can increase reliability without increasing wastefulness.
Feedback from supervisors and feedback from reviewers are what the current system is mostly based on. We're already in a mostly-feedback system, but the feedback is disorganised and poorly standardised, and it tends to end very shortly after publication.
Some of the better journals operate blinded reviews, so that in theory "anybody" should be able to publish a paper if the quality is good, and that's a good thing.
COMPare implies that preregistration didn't solve all the problems, but other studies have shown that it has massively improved matters.