Eugine_Nier comments on How to Fix Science - Less Wrong

50 Post author: lukeprog 07 March 2012 02:51AM


Comment author: satt 03 March 2012 10:37:43PM *  12 points

There are many more problems with NHST and with "frequentist" statistics in general, but the central one is this: NHST does not follow from the axioms (foundational logical rules) of probability theory. It is a grab-bag of techniques that, depending on how those techniques are applied, can lead to different results when analyzing the same data — something that should horrify every mathematician.

The inferential method that solves the problems with frequentism — and, more importantly, follows deductively from the axioms of probability theory — is Bayesian inference.

But two Bayesian inferences from the same data can also give different results. How could this be a non-issue for Bayesian inference while being indicative of a central problem for NHST? (If the answer is that Bayesian inference is rigorously deduced from probability theory's axioms but NHST is not, then the fact that NHST can give different results for the same data is not a true objection, and you might want to rephrase.)
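For concreteness, here is a minimal sketch (not from the thread) of the point satt is raising, assuming a standard Beta-Binomial conjugate model: two analysts with different priors update on the same coin-flip data and end up with different posteriors.

```python
# Sketch: two analysts with different Beta priors update on the same
# coin-flip data and reach different posterior means.
from math import isclose

def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing
    `heads` heads and `tails` tails (Beta-Binomial conjugate update)."""
    a, b = prior_a + heads, prior_b + tails
    return a / (a + b)

# Same data for both analysts: 7 heads, 3 tails.
mean_uniform = posterior_mean(1, 1, 7, 3)    # flat Beta(1,1) prior
mean_skeptic = posterior_mean(50, 50, 7, 3)  # strong prior that the coin is fair

print(mean_uniform)  # 8/12 ≈ 0.667
print(mean_skeptic)  # 57/110 ≈ 0.518
```

Both computations follow deductively from the axioms given their inputs; the disagreement lives entirely in the choice of prior.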

Comment author: gwern 03 March 2012 11:33:36PM 8 points

By a coincidence of dubious humor, I recently read a paper on exactly this topic: how NHST is completely misunderstood and employed wrongly, and what can be improved! I was only reading it for a funny & insightful quote, but Jacob Cohen (as in 'Cohen's d'), on pp. 5-6 of "The Earth Is Round (p < .05)", tells us that we shouldn't seek to replace NHST with a "magic alternative" because "it doesn't exist". What we should do instead is focus on understanding the data with graphics and data-mining techniques; report confidence limits on effect sizes, which gives us various things I haven't looked up; and finally, place far more emphasis on replication than we currently do.

An admirable program; we don't have to shift all the way to Bayesian reasoning to improve matters. Incidentally, what Bayesian inferences are you talking about? I thought the usual proposals/methods involved principally reporting log odds, to avoid exactly the issue of people having varying priors and updating on trials to get varying posteriors.
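The "report log odds" proposal can be sketched as follows (a minimal illustration, not from the thread, for the simple two-hypothesis Bernoulli case): the log likelihood ratio between two fixed hypotheses depends only on the data, so each reader can combine it with their own prior afterwards.

```python
# Sketch: the log Bayes factor between two point hypotheses about a
# coin's bias is prior-free; readers apply their own priors afterwards.
from math import log

def log_bayes_factor(heads, tails, p1, p2):
    """log P(data | p=p1) - log P(data | p=p2) for coin-flip data.
    The binomial coefficient cancels in the ratio, so it is omitted."""
    return (heads * (log(p1) - log(p2))
            + tails * (log(1 - p1) - log(1 - p2)))

# 60 heads in 100 flips: evidence for p=0.6 over p=0.5, in natural-log units.
lbf = log_bayes_factor(60, 40, 0.6, 0.5)
print(lbf)  # ≈ 2.01 nats in favour of p=0.6
```

A reader with prior odds O for p=0.6 over p=0.5 multiplies O by exp(lbf) to get their posterior odds; no shared prior is needed.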

Comment author: Eugine_Nier 04 March 2012 01:00:24AM 5 points

I thought the usual proposals/methods involved principally reporting log odds, to avoid exactly the issue of people having varying priors and updating on trials to get varying posteriors.

This only works in extremely simple cases.

Comment author: Sam_Jaques 05 March 2012 04:00:27PM 1 point

Could you give an example of an experiment that would be too complex for log odds to be useful?

Comment author: Eugine_Nier 06 March 2012 02:25:38AM *  3 points

Any example where there are more than two potential hypotheses.

Note that, for example, "this coin is unbiased", "this coin is biased toward heads with p=.61", and "this coin is biased toward heads with p=.62" count as three different hypotheses for this purpose.
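A minimal sketch of this point (not from the thread): with three candidate hypotheses about the coin's bias, no single log-odds number summarizes the evidence, since there are three pairwise ratios; one must report a likelihood (or odds) for every hypothesis.

```python
# Sketch: with three hypotheses about a coin's bias, the evidence is a
# likelihood per hypothesis, not a single log-odds number.
from math import comb

def binom_likelihood(p, heads, n):
    """P(heads heads in n flips | bias p), the binomial likelihood."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

heads, n = 61, 100
for p in (0.50, 0.61, 0.62):
    print(p, binom_likelihood(p, heads, n))
# Each reader combines these three numbers with their own prior over the
# three hypotheses; a single scalar cannot carry all of this information.
```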

Comment author: Cyan 10 March 2012 03:33:05AM 2 points

This is fair as a criticism of log odds, but in the example you give one could avoid the issue of people having varying priors by simply reporting the value of the likelihood function. However, reporting the likelihood function fails to be a practical summary in the context of massive models with lots of nuisance parameters.
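This likelihood-reporting idea can be sketched as follows (an illustration under assumed names, not from the thread): tabulate the likelihood on a grid of bias values, which is prior-free and covers any number of simple hypotheses, though with many nuisance parameters the function becomes high-dimensional and this stops being practical.

```python
# Sketch: report the likelihood function itself, tabulated on a grid of
# bias values p. Prior-free, but impractical once the parameter space
# is high-dimensional (many nuisance parameters).
def likelihood_curve(heads, tails, grid):
    """Unnormalized coin-flip likelihood at each p in `grid`
    (the constant binomial factor is omitted)."""
    return {p: p**heads * (1 - p)**tails for p in grid}

grid = [i / 20 for i in range(1, 20)]  # p = 0.05 .. 0.95
curve = likelihood_curve(7, 3, grid)
best = max(curve, key=curve.get)
print(best)  # maximum-likelihood p on this grid: 0.7
```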