ChristianKl comments on Link: Evidence-Based Medicine Has Been Hijacked - Less Wrong

17 points · Post author: Anders_H 16 March 2016 07:57PM



You are viewing a single comment's thread.

Comment author: FrameBenignly 18 March 2016 08:44:31PM 0 points

The short general version of my argument is: feedback > filtering

I would agree that preregistration is one way to make p-values more useful. It may be the best way to determine what a researcher originally intended to measure, but it's not the only way to find out whether that was the only thing the researcher measured. I've found that asking questions often works.

If we're talking strictly about properly run RCTs, then I would agree that preregistration is close to free, relatively speaking. But that's because a properly conducted RCT is such a large undertaking that most filtering initiatives are small by comparison. RCTs aren't the only study design out there, though. They are the gold standard, yes, in that they are the most robust, but their major drawback is that they're expensive to conduct properly relative to the alternatives.

Science already has a pretty strong filter. Researchers need to spend eight years (and usually many more) after high school working towards a PhD. They then have to decide that what they're doing is the best way to analyze a problem, or, if they're still in grad school, their professor has to approve it and believe in it. Then two or more other people with PhDs who weren't involved in the research (an editor and peer reviewers) have to review what the researcher did and conclude that the research was properly conducted. I don't view this as principally a filtering problem. Filtering can improve quality, but it also narrows the range of ways research can be conducted. To me, the end result of excessive filtering is that everybody ends up doing RCTs for everything, which is extremely cost-inefficient and leads to everybody chasing funding. If nobody with less than a million dollars in funding can conduct a study, I think that's a problem.

Comment author: ChristianKl 18 March 2016 09:49:43PM 0 points

> Science already has a pretty strong filter.

If that's true, why are replication rates so poor?

> It may be the best way to determine what a researcher originally intended to measure, but it's not the only way to find out whether that was the only thing the researcher measured. I've found that asking questions often works.

You can ask questions, but how do you know whether the answers you get are right? It's quite easy for someone fitting a linear model to play around a bit with the specification and not even remember all the parameters they tested.
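To make that concrete, here is a small illustrative simulation (my own sketch, not from the thread): it draws pure noise, tries ten unrelated predictors against the same outcome, and keeps only the best-looking p-value. Even though no real effect exists, the "best" specification clears the 0.05 bar far more often than 5% of the time — and an analyst who only remembers the final model has no record of the other nine attempts.

```python
import math
import random

random.seed(42)

def corr_p_value(x, y):
    """Two-sided p-value for a Pearson correlation under the null,
    using the normal approximation via Fisher's z-transform."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n, n_specs, n_trials = 50, 10, 1000
false_positives = 0
for _ in range(n_trials):
    outcome = [random.gauss(0, 1) for _ in range(n)]
    # Try ten unrelated noise predictors; report only the best p-value.
    best_p = min(
        corr_p_value([random.gauss(0, 1) for _ in range(n)], outcome)
        for _ in range(n_specs)
    )
    if best_p < 0.05:
        false_positives += 1

print(f"nominal alpha: 0.05, observed false-positive rate: "
      f"{false_positives / n_trials:.2f}")
```

With ten independent looks at the data, the chance of at least one nominally significant result is roughly 1 - 0.95^10 ≈ 0.40 — which is exactly the kind of thing preregistration (or honest answers to "what else did you test?") is meant to surface.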

> Then two or more other people with PhDs who weren't involved in the research (an editor and peer reviewers) have to review what the researcher did

More often they don't review what the researcher did, but what the researcher claimed to have done.

Comment author: FrameBenignly 18 March 2016 10:30:16PM 0 points

> If that's true, why are replication rates so poor?

There is no feedback post-publication. Researchers are expected to judge the quality of a published study individually, or occasionally to ask colleagues in their department.

I don't get the impression that low replication rates are generally due to malice. I think it's a training and incentive problem most of the time. In that case, just asking should often work.

Science has very little feedback and lots of filtering at present. Preregistration is just more filtering. Science needs more feedback.

Comment author: ChristianKl 18 March 2016 10:37:10PM 0 points

What kind of feedback would you want to exist?