Emily comments on Original Research on Less Wrong
That's for experimental statistical reports. Trying to do math runs into a different set of dangers.
You can easily beat "Most published research findings are false" by reporting Bayesian likelihood ratios instead of "statistical significance", or even just keeping statistical significance and demanding p < .001 instead of the ludicrous p < .05. It should take fewer than 2.5 times as many subjects to detect a real effect at p < .001 instead of p < .05, and the proportion of false findings would go way down immediately. That's what current grantmakers and journals would ask for if they cared.
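(A rough check on the "fewer than 2.5 times" figure: for a two-sided z-test, the required sample size scales with (z_{α/2} + z_{power})², so the ratio between the two thresholds does not depend on effect size. A minimal sketch, assuming a simple z-test at 80% power; the function name is illustrative:)

```python
from scipy.stats import norm

def n_ratio(alpha1=0.05, alpha2=0.001, power=0.80):
    """Ratio of required sample sizes for a two-sided z-test at two
    alpha levels.  n scales with (z_{alpha/2} + z_{power})^2, so the
    effect size cancels out of the ratio."""
    z_beta = norm.ppf(power)
    k1 = (norm.isf(alpha1 / 2) + z_beta) ** 2
    k2 = (norm.isf(alpha2 / 2) + z_beta) ** 2
    return k2 / k1

print(n_ratio())  # ~2.17: under 2.5x more subjects for p < .001 vs p < .05
```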
Not to disagree with the overarching point, but the use of "only" here is inappropriate under some circumstances. E.g., a neuropsychological study requiring participants with a particular kind of brain injury will find it extremely difficult and time-consuming to more than double its n. For this kind of study (presuming an insistence on working with p-values) it seems better to roll with the "ludicrous" p < .05 and rely on replication elsewhere for improved reliability, as in the sketch below. "Ludicrous" is too strong in fields with small effect sizes and small subject pools; they just need a much higher rate of replication.
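(To make the replication point concrete: under the simplifying assumptions that the studies are independent and the null hypothesis is true, two results at p < .05 already imply a joint false-positive rate below .001. A minimal sketch of that arithmetic; note it ignores the compounding power cost of requiring every replication to come out significant:)

```python
# Joint false-positive rate of k independent studies, each significant at
# level alpha, assuming the null is true in every study.  (A simplification:
# ignores publication bias and the power cost of needing all k to succeed.)
def joint_false_positive(alpha=0.05, k=2):
    return alpha ** k

print(joint_false_positive(k=2))  # 0.0025 -- two p < .05 results beat p < .001
print(joint_false_positive(k=3))  # 0.000125
```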