PhilGoetz comments on Even if you have a nail, not all hammers are the same - Less Wrong

95 Post author: PhilGoetz 29 March 2010 06:09PM


Comment author: [deleted] 30 March 2010 01:14:57PM 8 points [-]

Very, very briefly (I'm preparing a very long blog post on this, but I want to post it when Dr Hickey, my uncle, releases his book on the subject, which won't be for a while yet): meta-analysis is essentially a method for magnifying the biases of the analyst. When collating the papers, nobody is blinded to anything, so it is very easy to drop papers that the people doing the analysis disagree with (approximately 1% or fewer of the papers that turn up in the initial searches end up being used in most meta-analyses, and these are hand-picked). On top of this, many meta-analyses include additional unpublished (and therefore unreviewed) data from the trials included in the analysis. You can easily see how this could cause problems, I'm sure. There are many, many problems of this nature. I'd strongly recommend that everyone do what I did (for a paper analysing these problems): go to the Cochrane or JAMA sites and read every meta-analysis published in a typical year, without any prior prejudice as to the worth or otherwise of the technique. If you can find a single one that appears to be good science, I'd be astonished...

Comment author: XFrequentist 31 March 2010 02:54:04AM 6 points [-]

When collating the papers, nobody is blinded to anything so it's very, very easy to remove papers that the people doing the analysis disagree with...

A good systematic review (a meta-analysis is the quantitative component of a systematic review, although the terms are often incorrectly used interchangeably) will define its inclusion criteria before the review begins. Papers are then screened independently by multiple reviewers to see whether they fit these criteria, in an attempt to limit bias in the choice of which to include. It shouldn't be quite as arbitrary as you imply.
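To make the independent-screening step concrete: each reviewer records an include/exclude decision per abstract, and agreement between reviewers is then quantified, commonly with Cohen's kappa, before disagreements are resolved. A minimal sketch, using entirely hypothetical screening decisions and only the standard library:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' include/exclude decisions.

    Chance-corrects raw agreement: 1.0 is perfect agreement,
    0.0 is no better than chance.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters decided independently,
    # each with their own marginal include/exclude rates.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical decisions on ten abstracts (1 = include, 0 = exclude).
reviewer_1 = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
reviewer_2 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(reviewer_1, reviewer_2), 3))
```

A kappa well below 1 at this stage is a signal that the inclusion criteria are ambiguous and should be tightened before screening continues.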

On top of this, many of them include additional unpublished (and therefore unreviewed) data from trials included in the analysis.

This is meant to counter publication bias, although it's fraught with difficulties. Your comment seems to imply that this practice deliberately introduces bias, which is not necessarily the case.
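For concreteness, the pooled estimate such added trials feed into is typically a fixed-effect inverse-variance weighted average, so an unpublished null result shifts the pooled effect in a predictable way. A minimal sketch with hypothetical effect sizes and standard errors (not data from any real review):

```python
def pooled_effect(effects, ses):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1 / se ** 2 for se in ses]          # precision weights
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return est, pooled_se

# Three hypothetical published trials (effect size, standard error).
effects = [0.40, 0.35, 0.50]
ses = [0.10, 0.15, 0.20]
print(pooled_effect(effects, ses))  # pooled estimate near 0.40

# Adding one hypothetical unpublished null trial pulls the pooled
# estimate toward zero -- which is the point of seeking such data,
# but also why its provenance matters.
print(pooled_effect(effects + [0.00], ses + [0.12]))
```

This is why the practice cuts both ways: omitting unpublished nulls inflates the pooled effect (publication bias), while the unpublished data itself has not passed peer review.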

Are you aware of the PRISMA statement? If so, can you suggest improvements to the recommended reporting of systematic reviews?