EHeller comments on The Statistician's Fallacy - Less Wrong

Post author: ChrisHallquist 09 December 2013 04:48AM




Comment author: Vaniver 11 December 2013 10:33:55PM 4 points

Ilya, I'm curious what your thoughts on Beautiful Probability are.

Personally, I flinch whenever I get to the "accursèd frequentists" line. But beyond that I think it does a decent job of arguing that Bayesians win the philosophy of statistics battle, even if they don't generate the best tools for any particular application. And so it seems to me that in ML or stats, where the hunt is mostly for good tools instead of good laws, having the right philosophy is only a bit of a help, and can be a hindrance if you don't take the 'our actual tools are generally approximations' part seriously.

In this particular example, it seems to me that ChrisHallquist has a philosophical difference with his stats professor, and so her not being Bayesian is potentially meaningful. I think that LW should tell statisticians that they shouldn't believe cell phones cause cancer, even if they shouldn't tell them what sort of conditional independence tests to use when they're running PC on a continuous dataset.

Comment author: EHeller 11 December 2013 11:34:20PM 0 points

While I'm not Ilya, I find the 'beautiful probability' discussion somewhat frustrating.

Sure, if we test hypotheses under different experimental specifications with the same small-sample data, we can get different results. However, starting from different priors, we can also get different results with that same data. Bayesianism won't let you escape the problem, which is ultimately a problem of data volume.
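To make the prior-sensitivity half of that concrete, here is a minimal sketch using a conjugate Beta prior on a coin's bias. The specific priors and counts are made up for illustration: on ten flips, a uniform prior and a prior peaked near 0.5 give visibly different posterior means; on a thousand flips with the same observed proportion, they nearly agree.

```python
# Beta-Binomial: with a Beta(a, b) prior and k successes in n trials,
# the posterior is Beta(a + k, b + n - k), with mean (a + k) / (a + b + n).

def posterior_mean(a, b, k, n):
    """Posterior mean of the success probability under a Beta(a, b) prior."""
    return (a + k) / (a + b + n)

# Same small dataset (7 successes in 10 trials), two priors:
# a uniform Beta(1, 1) vs. a Beta(10, 10) concentrated near 0.5.
small_flat = posterior_mean(1, 1, 7, 10)      # ~0.667
small_peaked = posterior_mean(10, 10, 7, 10)  # ~0.567

# Same observed proportion with 100x the data: the priors are swamped.
big_flat = posterior_mean(1, 1, 700, 1000)
big_peaked = posterior_mean(10, 10, 700, 1000)

print(abs(small_flat - small_peaked))  # ~0.10: the prior dominates
print(abs(big_flat - big_peaked))      # ~0.004: the data dominates
```

The disagreement between the two posteriors shrinks roughly as 1/n, which is the "problem of data volume" point: at small n your assumptions show through, whichever formalism you use.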

Comment author: alex_zag_al 13 December 2013 12:03:32AM 0 points

LW (including myself) is very influenced by ET Jaynes, who believed that for every state of knowledge, there's a single probability distribution that represents it. Therefore, you'd only get different results from the same data if you started with different knowledge.

It makes a lot of sense for your conclusions to depend on your knowledge. It's not a problem.

Finding the prior that represents your knowledge is a problem, though.

Comment author: EHeller 13 December 2013 12:50:38AM 1 point

I've read Jaynes (I used to spend long hours trying to explain to a true believer why I thought MaxEnt was a bad approach to out-of-equilibrium thermodynamics), but my point is that for small-sample data, assumptions will (of course) matter. For the frequentist, this means the experimental specification will lead to small changes in confidence intervals. For the Bayesian, this means the choice of prior will lead to small changes in credible intervals.
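The frequentist half can be made concrete with the classic stopping-rule example of the kind Beautiful Probability discusses (the numbers here are illustrative, and p-values are used rather than intervals for brevity): the same observed data, 9 heads and 3 tails, yields a different answer under H0: fair coin depending on whether n = 12 was fixed in advance or the experimenter tossed until the third tail.

```python
from math import comb

# Same observed data -- 9 heads, 3 tails -- under two experimental
# specifications, testing H0: p = 0.5.

# Design A: n = 12 tosses fixed in advance.
# One-sided p-value: P(at least 9 heads in 12 tosses).
p_fixed_n = sum(comb(12, k) for k in range(9, 13)) / 2**12

# Design B: toss until the 3rd tail (which arrived on toss 12).
# One-sided p-value: P(needing 12 or more tosses)
#                  = P(at most 2 tails in the first 11 tosses).
p_stop_rule = sum(comb(11, k) for k in range(0, 3)) / 2**11

print(round(p_fixed_n, 4))    # 0.073  -- not significant at 0.05
print(round(p_stop_rule, 4))  # 0.0327 -- significant at 0.05
```

A Bayesian posterior, by contrast, depends only on the likelihood of the observed sequence and so is identical under both designs; that is exactly the trade EHeller describes, with sensitivity to the design swapped for sensitivity to the prior.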

Neither is wrong, and neither is "the one true path": they are different, equally useful approaches to the same problem.