jake987722 comments on Case study: abuse of frequentist statistics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Not necessarily better. Just more convenient for the thumbs up/thumbs down way of looking at evidence that scientists tend to like.
It's a convention. The point is to have a pre-agreed, low significance level so that testers can't screw with the result of a test by arbitrarily jacking the significance level up (if they want to reject a hypothesis) or turning it down (if they don't). The significance level has to be low to minimize the risk of a type I error.
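A minimal sketch of the fixed-threshold decision rule described above, assuming a two-sided z test and the conventional α = 0.05 (both illustrative choices, not specified in the thread):

```python
import math

ALPHA = 0.05  # agreed on before seeing any data, so it can't be gamed afterwards

def two_sided_p(z):
    """Two-sided p-value for a z statistic under a standard normal null."""
    return math.erfc(abs(z) / math.sqrt(2))

def decide(z, alpha=ALPHA):
    """Reject the null iff the p-value falls below the pre-registered threshold."""
    return "reject H0" if two_sided_p(z) < alpha else "fail to reject H0"
```

The point of fixing `ALPHA` before collecting data is exactly the one made above: the tester has no freedom to move the threshold after seeing the result.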
The certainty level is effectively communicated via the significance level and p-value itself. (And the use of a reject vs. don't reject dichotomy can be desirable if one wishes to decide between performing some action and not performing it based on some data.)
A frequentist can deal in likelihoods, for example by doing hypothesis tests of likelihood ratios. As for priors, a frequentist encapsulates them in parametric and sampling assumptions about the data. A Bayesian might give a low weight to a positive result from a parapsychology study because of their "low priors", but a frequentist might complain about sampling procedures or cherrypicking being more likely than a true positive. As I see it, the two say essentially the same thing; the frequentist is just being more specific than the Bayesian.
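As a concrete illustration of the likelihood-ratio testing mentioned above, here is a sketch of a test of H0: p = 0.5 for binomial (coin-flip) data, using Wilks' chi-square(1) approximation for the null distribution of the statistic; the coin example and the specific numbers are my own, not from the comment:

```python
import math

def coin_lrt(heads, n, p0=0.5):
    """Likelihood ratio test of H0: p = p0 for binomial data.

    Assumes 0 < heads < n so both log-likelihoods are finite.
    Returns (statistic, approximate p-value) via Wilks' chi-square(1)
    approximation for -2 log(likelihood ratio).
    """
    phat = heads / n  # maximum-likelihood estimate of p

    def loglik(p):
        # log binomial likelihood, up to a constant that cancels in the ratio
        return heads * math.log(p) + (n - heads) * math.log(1 - p)

    stat = 2 * (loglik(phat) - loglik(p0))
    pval = math.erfc(math.sqrt(stat / 2))  # chi-square(1) upper tail
    return stat, pval
```

For example, 60 heads in 100 flips gives a statistic near 4.0, just over the chi-square(1) critical value of 3.84, so the test rejects at the 5% level.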
No. P-values are not equivalent when they are calculated using different statistics, or even the same statistic but a different sample size. On the latter point see Royall, 1986.
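The sample-size point can be seen in a quick sketch: the same observed standardized mean difference yields very different p-values at different n (the 0.2-SD effect and the sample sizes here are illustrative assumptions):

```python
import math

def z_test_p(mean_diff, sd, n):
    """Two-sided p-value for testing mean = 0 with known sd and sample size n."""
    z = mean_diff / (sd / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

# An identical observed effect of 0.2 standard deviations:
#   n = 25  -> z = 1.0, p is large (not significant)
#   n = 400 -> z = 4.0, p is tiny (highly significant)
```

So a bare p-value, detached from the statistic and sample size that produced it, does not pin down a fixed amount of evidence against the null.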
I'd say the frequentist is using Bayesian reasoning informally; Jaynes discusses this exact problem from a Bayesian perspective at the beginning of Chapter 5 of his magnum opus.
Sorry. You are quite right, and I was sloppy. I had in mind the implicit idea that holding the choices of statistical test and data collection procedure constant, different p-values suggest how strongly one should reject the null hypothesis, and I should have made that explicit. It is absolutely true that if I just ask someone, "Test A gave me p = 0.008 and Test B gave me p = 0.4, which test's null hypothesis is worse off?", the correct answer is "how should I know?"
Yep. I think this is an example of the frequentist encapsulating what a Bayesian would call priors in their sampling assumptions.