raisin comments on Rationality Quotes April 2014 - Less Wrong
Does anyone know how often this happens in statistical meta-analysis?
We can't know for certain; that's the nature of systematic biases. There's no way to tell whether all your trials are slanted in the same direction if the bias also appears in your high-quality studies.
On the other hand, we have fields such as homeopathy or telepathy (the Ganzfeld experiments) where meta-analyses that treat all studies roughly equally conclude that homeopathy works and telepathy exists, while meta-analyses that try to filter out low-quality studies conclude that homeopathy doesn't work and telepathy doesn't exist.
As a percentage? No. But qualitatively speaking, "often."
The most recent book I read discusses this particularly with respect to medicine, where the problem is especially pronounced: a majority of studies are conducted or funded by an industry with a financial stake in the results, and with considerable leeway to influence them even without committing formal violations of procedure. But even in fields where this is not the case, issues like non-publication of data will tend to make the available literature statistically unrepresentative: a large proportion of all studies conducted are never published, and the unpublished ones are much more likely to contain negative results.
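A toy simulation makes the selection effect concrete. Everything here is assumed for illustration (the true effect, the publication probabilities): even when the true effect is exactly zero, publishing positive results more readily than negative ones drags the mean of the published literature well away from the truth.

```python
import random
import statistics

random.seed(0)

# Assumed setup: the true effect is zero; each study reports a noisy estimate.
true_effect = 0.0
n_studies = 2000
all_results = [random.gauss(true_effect, 1.0) for _ in range(n_studies)]

def is_published(estimate):
    # Hypothetical publication rule: positive results almost always appear,
    # null/negative results only rarely do.
    return random.random() < (0.9 if estimate > 0 else 0.2)

published = [e for e in all_results if is_published(e)]

print(f"mean of all studies:       {statistics.mean(all_results):+.3f}")
print(f"mean of published studies: {statistics.mean(published):+.3f}")
```

The full set of studies averages out near zero, but the published subset shows a substantial apparent effect, despite no individual study being fraudulent or procedurally flawed.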
Fairly often. One strategy I've seen is to compare a meta-analysis to a later very large study (rare for obvious reasons when dealing with RCTs) and see how often the confidence interval is blown; usually much more often than it should be. (The idea is that the larger study gives a higher-precision result which serves as a 'ground truth' or oracle for the meta-analysis's estimate; and because it comes later, it cannot have been included in the meta-analysis, nor can it have led the meta-analysts into Millikan-style distortion of their results to get the 'right' answer.)
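The coverage check can be sketched in simulation. The numbers below are all assumptions for illustration (true effect, between-study spread, per-study standard error): a fixed-effect meta-analysis that ignores systematic between-study heterogeneity produces nominal 95% intervals that cover the later large trial's answer far less than 95% of the time.

```python
import math
import random

random.seed(1)

def fixed_effect_meta(estimates, ses):
    # Standard inverse-variance fixed-effect pooling, which assumes all
    # studies estimate the same underlying effect (i.e. no heterogeneity).
    weights = [1 / se**2 for se in ses]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    pooled_se = math.sqrt(1 / total)
    return pooled, pooled_se

n_meta = 2000       # number of simulated meta-analyses
true_effect = 0.5   # the answer a later, very large trial would recover
covered = 0

for _ in range(n_meta):
    estimates, ses = [], []
    for _ in range(10):  # ten small studies per meta-analysis
        # Each study's own true effect drifts around the shared truth
        # (systematic heterogeneity the fixed-effect model ignores).
        study_effect = random.gauss(true_effect, 0.3)
        se = 0.2
        estimates.append(random.gauss(study_effect, se))
        ses.append(se)
    pooled, pooled_se = fixed_effect_meta(estimates, ses)
    # Did the nominal 95% CI cover the large-trial 'oracle' answer?
    if abs(pooled - true_effect) <= 1.96 * pooled_se:
        covered += 1

print(f"nominal 95% CIs covered the answer {100 * covered / n_meta:.0f}% of the time")
```

Under these assumptions the intervals are badly overconfident: the pooled standard error shrinks with the number of studies, but the unmodeled between-study spread does not, so the CI is blown much more often than the nominal 5%.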
For example: LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F. "Discrepancies between meta-analyses and subsequent large randomized, controlled trials". N Engl J Med 1997;337:536-42.
(You can probably dig up more results looking through reverse citations of that paper, since it seems to be the originator of this criticism. And also, although I disagree with a lot of it, "Combining heterogeneous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses", Al Khalaf et al 2010.)
I'm not sure how much to trust these meta-meta analyses. If only someone would aggregate them and test their accuracy against a control.