Douglas_Knight comments on Case study: abuse of frequentist statistics - Less Wrong

25 points | Post author: Cyan 21 February 2010 06:35AM


Comment author: Eliezer_Yudkowsky 21 February 2010 07:00:17PM 4 points

what we do is simply calculate P(E|~H) (techniques for doing this being of course the principal concern of statistics texts),

No no no. That would be a hundred times saner than frequentism. What you actually do is take the real data e-12 and put it into a giant bin E that also contains e-1, e-3, and whatever else you can make up a plausible excuse to include or exclude, and then you calculate P(E|~H). This is one of the key points of flexibility that enables frequentists to get whatever answer they like, the other being the choice of control variables in multivariate analyses.
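The point about the "giant bin E" can be made concrete with a toy example (not from the thread; the fair-coin null and the two event definitions are illustrative assumptions). With the very same observed data, the p-value P(E|~H) changes depending on which outcomes the analyst chooses to bin into E — here, a one-tailed versus a two-tailed event:

```python
from math import comb

def p_value(heads, flips, event):
    """P(E | ~H): probability of the chosen event E under the null
    hypothesis ~H that the coin is fair."""
    total = 2 ** flips
    # Tail probability of seeing at least `heads` heads.
    tail = sum(comb(flips, k) for k in range(heads, flips + 1))
    if event == "one-sided":
        # E = "at least this many heads"
        count = tail
    elif event == "two-sided":
        # E = "a result at least this extreme in either direction";
        # the fair-coin distribution is symmetric, so double the tail
        # (valid here because heads > flips / 2).
        count = 2 * tail
    return count / total

# Same data -- 12 heads in 16 flips -- two choices of the bin E:
print(p_value(12, 16, "one-sided"))  # ~0.038: "significant" at the 0.05 level
print(p_value(12, 16, "two-sided"))  # ~0.077: not significant
```

The data never change; only the definition of E does, and that choice alone moves the result across the conventional 0.05 threshold.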

See e.g. this part of the article:

The authors used what's called a Mann-Whitney U test, which, in simplified terms, aims to determine if two sets of data come from different distributions. The essential thing to know about this test is that it doesn't depend on the actual data except insofar as those data determine the ranks of the data points when the two data sets are combined. That is, it throws away most of the data, in the sense that data sets that generate the same ranking are equivalent under the test.
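The rank-invariance described in the quoted passage can be sketched directly. This uses the pairwise-count form of the U statistic (equivalent to the rank-sum form when there are no ties); the data sets are made up for illustration:

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic: the number of pairs (x, y) with x > y.
    It depends only on how the pooled values rank against each other,
    not on their magnitudes (assumes no ties)."""
    return sum(1 for x in xs for y in ys if x > y)

a1, b1 = [1, 2, 10], [3, 4, 5]
a2, b2 = [1, 2, 1000], [3, 4, 6]  # very different values, same pooled ranking

print(mann_whitney_u(a1, b1))  # 3
print(mann_whitney_u(a2, b2))  # 3 -- identical, despite 10 vs 1000
```

Because the two pairs of samples induce the same ranking when pooled, the test cannot distinguish them — exactly the sense in which it "throws away most of the data."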

Comment author: Douglas_Knight 22 February 2010 12:04:40AM 13 points

This seems to use "frequentist" to mean "statistics as actually practiced." It is unreasonable to compare the implementation of A to the ideal form of B. In particular, the problem with the Mann-Whitney test seems to me to be that the authors looked up a recipe in a cookbook without understanding it, which they could have done just as easily in a Bayesian cookbook.