gwern comments on How to Fix Science - Less Wrong

50 Post author: lukeprog 07 March 2012 02:51AM


Comment author: gwern 03 March 2012 11:33:36PM 8 points [-]

By a coincidence of dubious humor, I recently read a paper on exactly this topic: how NHST (null hypothesis significance testing) is widely misunderstood and misapplied, and what can be done instead! I was only reading it for a funny & insightful quote, but Jacob Cohen (as in, 'Cohen's d'), on pp. 5-6 of "The Earth Is Round (p < 0.05)", tells us that we shouldn't seek to replace NHST with a "magic alternative" because "it doesn't exist". What we should do is focus on understanding the data with graphics and data-mining techniques; report confidence limits on effect sizes, which gives us various things I haven't looked up; and finally, place far more emphasis on replication than we currently do.

An admirable program; we don't have to shift all the way to Bayesian reasoning to improve matters. Incidentally, what Bayesian inferences are you talking about? I thought the usual proposals/methods involved principally reporting log odds, to avoid exactly the issue of people having varying priors and updating on trials to get varying posteriors.

Comment author: Eugine_Nier 04 March 2012 01:00:24AM 5 points [-]

I thought the usual proposals/methods involved principally reporting log odds, to avoid exactly the issue of people having varying priors and updating on trials to get varying posteriors.

This only works in extremely simple cases.

Comment author: Sam_Jaques 05 March 2012 04:00:27PM 1 point [-]

Could you give an example of an experiment that would be too complex for log odds to be useful?

Comment author: Eugine_Nier 06 March 2012 02:25:38AM *  3 points [-]

Any example where there are more than two potential hypotheses.

Note that, for example, "this coin is unbiased", "this coin is biased toward heads with p=.61", and "this coin is biased toward heads with p=.62" count as three different hypotheses for this purpose.
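A minimal sketch of the problem, using hypothetical data (62 heads in 100 flips, numbers chosen for illustration): with two hypotheses a single log odds ratio summarizes the evidence, but with three hypotheses there is a separate ratio for every pair, so no one number suffices.

```python
from math import comb, log

# Hypothetical data: 62 heads in 100 flips (illustrative only).
heads, flips = 62, 100

# The three hypotheses from the comment above.
hypotheses = {"fair p=.5": 0.5, "biased p=.61": 0.61, "biased p=.62": 0.62}

def likelihood(p):
    # Binomial likelihood of the observed data under heads-probability p.
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

liks = {name: likelihood(p) for name, p in hypotheses.items()}

# With three hypotheses there are three pairwise log likelihood ratios,
# not one summary number: the evidence is a function, not a scalar.
names = list(liks)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        lr = log(liks[names[i]] / liks[names[j]])
        print(f"log LR, {names[i]} vs {names[j]}: {lr:+.3f}")
```

Adding more candidate values of p only makes this worse: the number of pairwise ratios grows quadratically.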

Comment author: Cyan 10 March 2012 03:33:05AM 2 points [-]

This is fair as a criticism of log odds, but in the example you give, one could avoid the issue of people having varying priors by just reporting the value of the likelihood function. However, this likelihood-function-reporting idea fails to be a practical summary in the context of massive models with many nuisance parameters.
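In the one-parameter coin case, reporting the likelihood function is easy: tabulate it on a grid over the bias p, and each reader can multiply by their own prior. A sketch with the same hypothetical 62-heads-in-100-flips data:

```python
from math import log, exp

# Hypothetical data: 62 heads in 100 flips (illustrative only).
heads, flips = 62, 100

# Tabulate the log likelihood over a grid of bias values p = .01 .. .99.
grid = [i / 100 for i in range(1, 100)]
log_lik = [heads * log(p) + (flips - heads) * log(1 - p) for p in grid]

# Normalize to the maximum: the relative likelihood function in (0, 1].
m = max(log_lik)
rel_lik = [exp(ll - m) for ll in log_lik]

# Any reader's posterior over the grid is prior(p) * rel_lik(p), renormalized.
mle = grid[rel_lik.index(max(rel_lik))]
print(f"maximum-likelihood p = {mle:.2f}")  # 62/100, the sample frequency
```

This works because there is one parameter and no nuisance parameters; with a high-dimensional parameter, the grid (and hence the report) blows up exponentially, which is Cyan's point.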

Comment author: satt 04 March 2012 12:34:26AM *  4 points [-]

Incidentally, what Bayesian inferences are you talking about? I thought the usual proposals/methods involved principally reporting log odds, to avoid exactly the issue of people having varying priors and updating on trials to get varying posteriors.

I didn't have any specific examples in mind. But more generally, posteriors are a function of both priors and likelihoods. So even if one avoids using priors entirely by reporting only likelihoods (or some function of the likelihoods, like the log of the likelihood ratio), the resulting implied inferences can change if one's likelihoods change, which can happen by calculating likelihoods with a different model.
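The model dependence above can be made concrete with a small sketch: the same hypothetical data yield different log likelihood ratios for the same two hypotheses about a location parameter, depending on whether the noise is modeled as Normal or as heavier-tailed Student-t (all numbers here are illustrative assumptions, not from the discussion).

```python
from math import pi, log, lgamma

# Hypothetical observations and two hypotheses about the mean.
data = [0.1, 1.9, 0.8, 1.4, -0.2]
mu0, mu1 = 0.0, 1.0

def gauss_loglik(mu):
    # Model A: i.i.d. Normal(mu, 1).
    return sum(-0.5 * log(2 * pi) - 0.5 * (x - mu) ** 2 for x in data)

def t_loglik(mu, df=3):
    # Model B: i.i.d. Student-t with df degrees of freedom, centered at mu.
    c = lgamma((df + 1) / 2) - lgamma(df / 2) - 0.5 * log(df * pi)
    return sum(c - (df + 1) / 2 * log(1 + (x - mu) ** 2 / df) for x in data)

# Same data, same pair of hypotheses, different models:
# the log likelihood ratios (and so the implied inferences) differ.
print("Normal model log LR (mu1 vs mu0):", gauss_loglik(mu1) - gauss_loglik(mu0))
print("Student-t model log LR (mu1 vs mu0):", t_loglik(mu1) - t_loglik(mu0))
```

So even a "prior-free" report of likelihood ratios still bakes in the reporter's choice of sampling model.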