
gwern comments on Open Thread, April 27-May 4, 2014 - Less Wrong Discussion

0 Post author: NancyLebovitz 27 April 2014 08:34PM


Comment author: gwern 06 May 2014 02:44:26AM 3 points [-]

doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against

Why would you do that? If I got a p=0.1 result doing a meta-analysis, I wouldn't be surprised at all, since factors like random-effects modeling mean it takes a lot of data to turn in a positive result at the arbitrary threshold of 0.05. And as it happens, in some areas an alpha of 0.1 is acceptable: for example, because of the poor power of tests for publication bias, you can find respected people like Ioannidis using that particular threshold (I believe I last saw it in his paper on the binomial test for publication bias).
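To make the power point concrete, here is a quick Monte-Carlo sketch (all numbers hypothetical): a real but modest effect (d = 0.3) tested with small samples (n = 20 per arm) rarely clears the arbitrary p &lt; 0.05 threshold, so seeing p around 0.1 is unsurprising even when the effect exists.

```python
# Sketch with assumed parameters: true effect d = 0.3, n = 20 per arm,
# known unit variance (z-test rather than t-test, for simplicity).
import math
import random

random.seed(0)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def simulate_study(d=0.3, n=20):
    """One underpowered two-arm study; returns its p-value."""
    treat = [random.gauss(d, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(treat) / n - sum(ctrl) / n
    se = math.sqrt(2 / n)  # standard error of the mean difference
    return two_sided_p(diff / se)

pvals = [simulate_study() for _ in range(10_000)]
power = sum(p < 0.05 for p in pvals) / len(pvals)
print(f"power at alpha=0.05: {power:.2f}")  # roughly 0.15: most real effects missed
print(f"median p-value:      {sorted(pvals)[len(pvals) // 2]:.2f}")
```

With power this low, "failed to reach 0.05" is nearly the expected outcome of a true effect, not evidence against it.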

If people really acted that way, we'd see the odd phenomenon of beliefs falling with each successive meta-analysis of whether grapes cure cancer: p=0.15 (decreases belief that grapes cure cancer), p=0.10 (decreases further), p=0.07 (decreases further); then someone points out that random-effects is inappropriate because the studies show very low heterogeneity, the better-fitting fixed-effect analysis suddenly puts the p-value at 0.05, and everyone's beliefs radically flip as they go from 'grapes have been refuted and are quack alt medicine!' to 'grapes cure cancer! quick, let's apply to the FDA under a fast track'. Instead, we see people acting more like Bayesians...
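The quoted sequence of p-values can be made concrete. Treating 0.15, 0.10, 0.07 as two-sided results that all point the same direction and combining them with Stouffer's method (one illustrative choice of combination rule, not anything from the original comment) shows the cumulative evidence strengthening with each study, so each result should increase belief in the effect rather than decrease it:

```python
# Sketch: combine same-direction two-sided p-values via Stouffer's Z.
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

def stouffer(p_two_sided):
    """Combined one-sided p for same-direction two-sided p-values."""
    zs = [norm.inv_cdf(1 - p / 2) for p in p_two_sided]  # one-sided z-scores
    z_combined = sum(zs) / sqrt(len(zs))
    return 1 - norm.cdf(z_combined)

print(f"{stouffer([0.15]):.3f}")                # first study alone
print(f"{stouffer([0.15, 0.10]):.3f}")          # evidence accumulates
print(f"{stouffer([0.15, 0.10, 0.07]):.4f}")    # ~0.002: strong combined evidence
```

Three individually "non-significant" results in the same direction combine to a p-value well below 0.05, which is the intuition the naive count-it-as-evidence-against reading gets backwards.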

And simply having more statistical tools available and giving researchers more choices makes it easier for bias to creep in.

Is that a guess, or a fact based on meta-studies showing that Bayesian-using papers cook the books more than NHST users do with p-hacking, etc.?

Comment author: gwern 10 October 2014 02:10:38AM *  0 points [-]

everyone's beliefs radically flip as they go from 'grapes have been refuted and are quack alt medicine!' to 'grapes cure cancer! quick, let's apply to the FDA under a fast track'

Turns out I was overoptimistic, and in some cases people have done just that: interpreted a failure to reject the null (due to insufficient power, despite the data being evidence for an effect) as disproving the alternative, across a series of studies which all pointed the same way, only changing their minds when an individually big enough study came out. Hauer says this is exactly what happened with a series of studies on traffic mortalities.

(As if driving didn't terrify me enough, I now realize traffic laws and road-safety designs are being engineered by vulgarized NHST practitioners who apparently don't know how to patch the paradigm up with an emphasis on power or meta-analysis.)
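The Hauer-style failure mode is easy to reproduce with made-up numbers: three same-direction studies, each individually non-significant (so each gets misread as "no effect"), pool under a standard inverse-variance fixed-effect model to a clearly significant combined estimate. The effect sizes and standard errors below are hypothetical, chosen only to illustrate the pattern:

```python
# Sketch: inverse-variance fixed-effect pooling of three hypothetical
# studies that each "fail to reject" but all point the same way.
import math

def fixed_effect(estimates, ses):
    """Pooled estimate, pooled SE, and two-sided p (fixed-effect model)."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    z = pooled / pooled_se
    return pooled, pooled_se, math.erfc(abs(z) / math.sqrt(2))

estimates = [0.20, 0.18, 0.22]  # hypothetical effect sizes, same direction
ses = [0.12, 0.12, 0.12]        # each study alone: |z| < 1.96, so p > 0.05

for est, se in zip(estimates, ses):
    print(f"single study p = {math.erfc(abs(est / se) / math.sqrt(2)):.3f}")

pooled, pooled_se, p = fixed_effect(estimates, ses)
print(f"pooled estimate {pooled:.2f} (SE {pooled_se:.3f}), p = {p:.4f}")
```

Reading each study in isolation as a refutation, as the traffic-safety literature apparently did, throws away exactly the cumulative information the pooled analysis recovers.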