Douglas_Knight comments on Open Thread, April 27-May 4, 2014 - Less Wrong Discussion

0 Post author: NancyLebovitz 27 April 2014 08:34PM


Comment author: lmm 05 May 2014 06:20:57PM 3 points [-]

We live at a time when up to 70% of scientific research can't be replicated. Frequentism might not be to blame for all of that, but it does play its part. There are cases such as the Bem paper on porno-precognition, where frequentist techniques suggested the effect was real but analysing Bem's data with Bayesian methods suggested it was not.

It seems to me that there's a bigger risk from Bayesian methods. They're more sensitive to small effect sizes (doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against; doing a bayesian one it might be evidence for). If the prior isn't swamped then it's important, and we don't have good best practices for choosing priors; if the prior is swamped then the bayesianism isn't terribly relevant. And simply having more statistical tools available and giving researchers more choices makes it easier for bias to creep in.
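(To make the parenthetical concrete, here is a hypothetical sketch; the experiment, the numbers n=100 and k=57, and the point alternative theta=0.55 are all invented for illustration, not taken from any real study. The point is that a result which misses significance at p ≈ 0.1 can still yield a Bayes factor above 1 in favour of a small-effect hypothesis.)

```python
from scipy import stats

# Hypothetical experiment: 57 successes in 100 trials.
n, k = 100, 57

# Frequentist one-sided test of H0: theta = 0.5.
p_value = stats.binomtest(k, n, p=0.5, alternative='greater').pvalue
# p_value comes out around 0.1: non-significant, so a naive
# frequentist reading counts the study against the effect.

# Bayes factor comparing a small-effect point hypothesis
# H1: theta = 0.55 against H0: theta = 0.5 on the same data.
bf = stats.binom.pmf(k, n, 0.55) / stats.binom.pmf(k, n, 0.5)
# bf comes out around 2.5, i.e. the same data mildly favour
# the small-effect hypothesis over the null.
```

(Whether the study counts for or against thus hinges on the alternative you put probability on, which is exactly the "choosing priors" worry above.)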

Bayes' theorem is true (duh) and I'd accept that there are situations where bayesian analysis is more effective than frequentist, but I think it would do more harm than good in formal science.

Comment author: Douglas_Knight 13 May 2014 06:53:50AM 0 points [-]

doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against

No. The most basic version of meta-analysis is, roughly, that if you have two independent p=0.1 studies, the combined conclusion is something like p=0.01: the weak results reinforce each other rather than counting as evidence against.