Related to: Beyond Bayesians and Frequentists
Update: This comment by Cyan clearly explains the mistake I made - I forgot that an ordering of the hypothesis space is necessary for hypothesis testing to work. I'm not entirely convinced that NHST can't be recast in some "thin" theory of induction that may well change the details of the actual test, but I have no idea how to formalize this notion of a "thin" theory, and most of the commenters either 1) misunderstood my aim (my fault, not theirs) or 2) don't think it can be formalized.
I'm teaching an econometrics course this semester, and one of the things I'm trying to do is make sure that my students actually understand the logic of the hypothesis test. You can motivate it in terms of controlling false positives, but that sort of interpretation doesn't seem to be generally applicable. Another motivation is a simple deductive syllogism with a small but very important inductive component. I'm borrowing the idea from something we discussed in a course I had with Mark Kaiser - he called it the "nested syllogism of experimentation." I think it applies equally well to most or even all hypothesis tests. It goes something like this:
1. Either the null hypothesis or the alternative hypothesis is true.
2. If the null hypothesis is true, then the data has a certain probability distribution.
3. Under this distribution, our sample is extremely unlikely.
4. Therefore under the null hypothesis, our sample is extremely unlikely.
5. Therefore the null hypothesis is false.
6. Therefore the alternative hypothesis is true.
An example looks like this:
Suppose we have a random sample from a population with a normal distribution that has an unknown mean $\mu$ and unknown variance $\sigma^2$. Then:
1. Either $\mu = \mu_0$ or $\mu \neq \mu_0$, where $\mu_0$ is some constant.
2. Construct the test statistic $t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}$, where $n$ is the sample size, $\bar{x}$ is the sample mean, and $s$ is the sample standard deviation.
3. Under the null hypothesis, $t$ has a $t$ distribution with $n - 1$ degrees of freedom.
4. $P(|T| \geq |t|)$, where $T$ follows this $t_{n-1}$ distribution, is really small under the null hypothesis (e.g. less than 0.05).
5. Therefore the null hypothesis is false.
6. Therefore the alternative hypothesis is true.
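To make the example concrete, here's a minimal sketch of this two-sided one-sample t-test in Python. The sample values and the null value mu_0 = 0 are made up for illustration, and scipy is assumed to be available:

```python
import numpy as np
from scipy import stats

# Hypothetical data; both the sample and mu_0 are made up for illustration.
x = np.array([0.41, 1.12, -0.32, 0.87, 1.54, 0.23, 0.95, 1.31])
mu_0 = 0.0

n = len(x)
x_bar = x.mean()
s = x.std(ddof=1)  # sample standard deviation

# Step 2: the test statistic.
t_stat = (x_bar - mu_0) / (s / np.sqrt(n))

# Steps 3-4: under the null, t_stat follows a t distribution with n - 1
# degrees of freedom; compute the two-sided probability of a value at
# least as extreme as the one observed.
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

# Steps 5-6: the single inductive move.
if p_value < 0.05:
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}: reject the null, conclude mu != {mu_0}")
else:
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}: fail to reject the null")
```

The same computation is available in a single call as `scipy.stats.ttest_1samp(x, mu_0)`.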
What's interesting to me about this process is that it almost tries to avoid induction altogether. Only the move from step 4 to 5 seems anything like an inductive argument. The rest is purely deductive - though admittedly it takes a couple premises in order to quantify just how likely our sample was and that surely has something to do with induction. But it's still a bit like solving the problem of induction by sweeping it under the rug then putting a big heavy deduction table on top so no one notices the lumps underneath.
This sounds like it's a criticism, but actually I think it might be a virtue to minimize the amount of induction in your argument. Suppose you're really uncertain about how to handle induction. Maybe you see a lot of plausible-sounding approaches, but you can poke holes in all of them. So instead of trying to actually solve the problem of induction, you set out to come up with a process which is robust to alternative views of induction. Ideally, if one or another theory of induction turns out to be correct, you'd like it to do the least damage possible to any specific inductive inferences you've made. One way to do this is to avoid induction as much as possible, so that you prevent "inductive contamination" from spreading to everything you believe.
That's exactly what hypothesis testing seems to do. You start with a set of premises and keep deriving logical conclusions from them until you're forced to say "this seems really unlikely if a certain hypothesis is true, so we'll assume that the hypothesis is false" in order to get any further. Then you just keep on deriving logical conclusions with your new premise. Bayesians start yelling about the base rate fallacy in the inductive step, but they're presupposing their own theory of induction. If you're trying to be robust to inductive theories, why should you listen to a Bayesian instead of anyone else?
Now does hypothesis testing actually accomplish induction that is robust to philosophical views of induction? Well, I don't know - I'm really just spitballing here. But it does seem to be a useful steel man.
As a Bayesian, I'm very happy to see an attempted steelman of hypothesis testing. Too often I see Bayesian criticism of "frequentist" reasoning that no frequentist statistician would ever actually apply. Unfortunately, this is a failed steelman (even granting the first premise) -- the description of the process of hypothesis testing is wrong, and as a result the actual near-syllogism underlying hypothesis testing is not properly explained.
The first flaw with the description of the process is that it omits the need for some kind of ordering on the set of hypotheses, plus the need for a statistic -- a function from the sample space to a totally ordered set -- such that more extreme statistic values are more probable (in some sense, e.g., ordered by median or ordered by expected value) the further an alternative is from the null. This is not too restrictive as a mathematical condition, but it often involves throwing away relevant information in the data (basically, any time there isn't a sufficient statistic).
The second flaw is that the third and fourth steps of the syllogism should read something like "Under the null distribution, a statistic value as or more extreme than ours is extremely unlikely". Being able to say this is the point of the orderings discussed in the previous paragraph. Without the orderings, you're left talking about unlikely samples, which, as gjm pointed out, is not enough on its own to make the move from 4 to 5 even roughly truth-preserving. For example, that move would authorize the rejection of the null hypothesis "no miracle occurred" on these data.
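A toy illustration of why the ordering matters (a fair-coin example of my own, not from gjm's comment): under the null "the coin is fair", every specific sequence of 20 flips has probability 2^-20, so the naive "our sample is extremely unlikely" rule would reject the null no matter what sequence we observe. Ordering outcomes by a statistic - here, the number of heads - and asking for the probability of a value at least as extreme restores a sensible test:

```python
from scipy import stats

n_flips = 20
heads = 11  # hypothetical observed number of heads

# Naive rule: probability of the exact observed sequence under the
# fair-coin null. This equals 2**-20 for every possible sequence, so
# "our sample is extremely unlikely" holds no matter what we see.
p_exact_sequence = 0.5 ** n_flips
print(f"P(exact sequence) = {p_exact_sequence:.2e}")  # ~9.5e-07, always "significant"

# With an ordering: take the number of heads as the statistic and ask
# for the probability of a count at least as far from n/2 as observed.
center = n_flips // 2
deviation = abs(heads - center)
p_value = min(1.0, stats.binom.cdf(center - deviation, n_flips, 0.5)
              + stats.binom.sf(center + deviation - 1, n_flips, 0.5))
print(f"two-sided p-value for {heads} heads: {p_value:.3f}")  # ~0.824, unremarkable
```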
As to the actual reasoning underlying the hypothesis testing procedure, it's helpful to think about the kinds of tests students are given in school. An idealized (i.e., impractical) test would deeply probe a student's understanding of the course material, such that a passing grade would be (in logical terms) both a necessary and a sufficient signifier of an adequate understanding of the material. In practice, it's only feasible to test a patchwork subset of the course material, which introduces an element of chance. A student whose understanding is just barely inadequate (by some arbitrary standard) might get lucky and be tested mostly on material she understands; and vice versa. The further the student's understanding lies from the threshold of bare adequacy, the less likely the test is to pass or fail in error.
In a closely analogous fashion, a hypothesis test is a probe for a certain kind of inadequacy in the statistical model. The statistic is the equivalent of the grade, and the threshold of statistical significance is the equivalent of the standard of bare adequacy. And just as the standard of bare adequacy in the above metaphor is notional and arbitrary, the threshold of the hypothesis test need not be set in advance -- with the realized value of the statistic in hand, one can consider the entire class of hypothesis tests ex post facto. The p-value is one way of capturing this kind of information. For more on this line of reasoning, see the work of Deborah Mayo.
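A small sketch of that last point (the realized statistic and degrees of freedom here are hypothetical): with the statistic in hand, one can run through the entire family of fixed-level tests ex post, and the p-value is exactly the smallest level at which that family rejects:

```python
from scipy import stats

t_obs, df = 2.31, 7  # hypothetical realized statistic and degrees of freedom

# Ex post, consider the whole family of two-sided level-alpha t-tests:
# the level-alpha test rejects iff |t_obs| exceeds its critical value.
for alpha in (0.10, 0.05, 0.02, 0.01):
    critical = stats.t.ppf(1 - alpha / 2, df)
    verdict = "reject" if abs(t_obs) > critical else "retain"
    print(f"alpha = {alpha:.2f}: critical value {critical:.3f} -> {verdict}")

# The p-value summarizes the whole family: it is the smallest alpha at
# which the level-alpha test above rejects.
p_value = 2 * stats.t.sf(abs(t_obs), df)
print(f"p-value = {p_value:.4f}")  # here ~0.054: reject at 0.10, retain at 0.05
```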
Thanks for this comment. I was attempting to abstract away from the specific details of NHST and talk about the general idea, since in many particulars there is much to criticize, but it appears that I abstracted too much - the ordering of the hypothesis space (i.e., a monotone likelihood ratio as in Neyman-Pearson) is definitely necessary.