Eugine_Nier comments on The Statistician's Fallacy - Less Wrong

38 Post author: ChrisHallquist 09 December 2013 04:48AM




Comment author: Daniel_Burfoot 09 December 2013 08:35:00PM, 23 points

Essentially all scientific fields rely heavily on statistics.

This is true in a technical sense but misses a crucial distinction. Hard sciences (basically physics and its relatives) are far less vulnerable to statistical pitfalls because practitioners in those fields can generate effectively unlimited quantities of data by simply repeating experiments as many times as necessary. This makes statistical reasoning largely irrelevant: in the limit of infinite data, you don't need to do Bayesian updates, because the weight of the prior is insignificant compared to the weight of the observations. Rutherford, for example, did not bother to state a prior probability for the plum pudding model of the atom compared to the planetary model; he just amassed a bunch of experimental data and showed that the plum pudding model could not explain it. This large-data-generation ability is largely why physics has succeeded in spite of continuing debates and confusion about the fundamentals of statistical philosophy. Researchers in fields like economics, nutrition, and medicine simply cannot obtain data on the same scale that physicists can.
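The point about the prior being swamped can be made concrete with a small sketch. This is an illustrative Beta-Bernoulli example of my own (the numbers and names are made up, not from the comment): two analysts start with sharply opposed priors about a coin's bias, and conjugate updating shows they disagree after a handful of flips but converge once the data is abundant.

```python
# Illustrative sketch: Beta-Bernoulli conjugate updating, showing how
# a large sample swamps the prior. All names and numbers are hypothetical.

def posterior_mean(prior_a, prior_b, successes, trials):
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing
    `successes` heads in `trials` Bernoulli flips (conjugate update)."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

# Two strongly disagreeing priors about the coin's heads-probability.
skeptic = (50, 1)   # prior heavily favors heads
believer = (1, 50)  # prior heavily favors tails

# Small data (5 heads in 10 flips): the priors dominate, posteriors disagree.
small = [posterior_mean(a, b, 5, 10) for a, b in (skeptic, believer)]

# Large data (500,000 heads in 1,000,000 flips, true bias 0.5):
# both posteriors land on top of the data, priors all but forgotten.
large = [posterior_mean(a, b, 500_000, 1_000_000) for a, b in (skeptic, believer)]

print(small)  # far apart
print(large)  # nearly identical, close to 0.5
```

Nothing hinges on the Beta family specifically; any well-behaved prior gets overwhelmed the same way as the likelihood term grows.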

Comment author: Eugine_Nier 11 December 2013 05:02:54AM, -1 points

I suspect it's not so much the amount of data as the fact that the underlying causal structure tends to be much simpler.

With, e.g., biology you have the problem of the Harvard Law.