Deleet comments on Original Research on Less Wrong - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (47)
That's for experimental statistical reports. Trying to do math runs into a different set of dangers.
You can easily beat "Most published research findings are false" by reporting Bayesian likelihood ratios instead of "statistical significance", or even just keeping statistical significance and demanding p < .001 instead of the ludicrous p < .05. It should only take <2.5 times as many subjects to detect a real effect at p < .001 instead of p < .05 and the proportion of false findings would go way down immediately. That's what current grantmakers and journals would ask for if they cared.
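The "<2.5 times as many subjects" figure can be checked with a short sketch. Assumptions not stated in the comment: a two-sided z-test and 80% power, under which required sample size scales with (z_{1-α/2} + z_{power})²; the exact ratio shifts with the test and power level chosen.

```python
from statistics import NormalDist

def sample_size_ratio(alpha_strict=0.001, alpha_loose=0.05, power=0.80):
    """Ratio of required sample sizes for detecting the same effect
    at a stricter vs. looser significance level, same power.
    For a two-sided z-test, n is proportional to
    (z_{1 - alpha/2} + z_{power})^2."""
    z = NormalDist().inv_cdf
    z_power = z(power)
    n_strict = (z(1 - alpha_strict / 2) + z_power) ** 2
    n_loose = (z(1 - alpha_loose / 2) + z_power) ** 2
    return n_strict / n_loose

print(round(sample_size_ratio(), 2))  # ≈ 2.18, under the claimed 2.5x
```

At 80% power the ratio comes out around 2.18, consistent with the comment's bound of 2.5.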
I have made a habit of ignoring p<.05 values when they are reported, unless it's a special case where getting more subjects is too difficult or impossible.* I normally go with p<0.01 results unless it's very easy to gather more subjects, in which case going with p<0.001 or lower is good.