Related to: Beyond Bayesians and Frequentists
Update: This comment by Cyan clearly explains the mistake I made - I forgot that an ordering on the hypothesis space is necessary for hypothesis testing to work. I'm not entirely convinced that NHST can't be recast in some "thin" theory of induction that may well change the details of the actual test, but I have no idea how to formalize this notion of a "thin" theory, and most of the commenters either 1) misunderstood my aim (my fault, not theirs) or 2) don't think it can be formalized.
I'm teaching an econometrics course this semester, and one of the things I'm trying to do is make sure that my students actually understand the logic of the hypothesis test. You can motivate it in terms of controlling false positives, but that sort of interpretation doesn't seem to be generally applicable. Another motivation is a simple deductive syllogism with a small but very important inductive component. I'm borrowing the idea from something we discussed in a course I took with Mark Kaiser - he called it the "nested syllogism of experimentation." I think it applies equally well to most or even all hypothesis tests. It goes something like this:
1. Either the null hypothesis or the alternative hypothesis is true.
2. If the null hypothesis is true, then the data has a certain probability distribution.
3. Under this distribution, our sample is extremely unlikely.
4. Therefore under the null hypothesis, our sample is extremely unlikely.
5. Therefore the null hypothesis is false.
6. Therefore the alternative hypothesis is true.
An example looks like this:
Suppose we have a random sample from a population with a normal distribution that has an unknown mean $\mu$ and unknown variance $\sigma^2$. Then (a code sketch of these steps follows the list):
1. Either $\mu = \mu_0$ or $\mu \neq \mu_0$, where $\mu_0$ is some constant.
2. Construct the test statistic $t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}$, where $n$ is the sample size, $\bar{x}$ is the sample mean, and $s$ is the sample standard deviation.
3. Under the null hypothesis, $t$ has a Student's $t$ distribution with $n - 1$ degrees of freedom.
4. $P(|T| \geq |t|)$, where $T$ has the distribution from step 3, is really small under the null hypothesis (e.g. less than 0.05).
5. Therefore the null hypothesis is false.
6. Therefore the alternative hypothesis is true.
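Here is a minimal sketch of the procedure in Python; the data are simulated, and the sample, $\mu_0$, and the 0.05 cutoff are all just illustrative assumptions:

```python
# One-sample two-sided t-test, following steps 1-6 above.
# The data, mu_0, and alpha are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=1.0, size=30)  # pretend this is our random sample

mu_0 = 0.0    # hypothesized mean (step 1)
alpha = 0.05  # "extremely unlikely" cutoff (step 4)

n = sample.size
x_bar = sample.mean()
s = sample.std(ddof=1)                 # sample standard deviation
t = (x_bar - mu_0) / (s / np.sqrt(n))  # test statistic (step 2)

# Under the null hypothesis, t has a Student's t distribution
# with n - 1 degrees of freedom (step 3).
p_value = 2 * stats.t.sf(abs(t), df=n - 1)

if p_value < alpha:
    print(f"t = {t:.3f}, p = {p_value:.4f}: reject the null (steps 5-6)")
else:
    print(f"t = {t:.3f}, p = {p_value:.4f}: fail to reject the null")
```

(scipy's `stats.ttest_1samp(sample, mu_0)` computes the same statistic and p-value in one call.)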
What's interesting to me about this process is that it almost tries to avoid induction altogether. Only the move from step 4 to step 5 looks anything like an inductive argument. The rest is purely deductive - though admittedly it takes a couple of premises to quantify just how likely our sample was, and that surely has something to do with induction. But it's still a bit like solving the problem of induction by sweeping it under the rug and then putting a big heavy deduction table on top so no one notices the lumps underneath.
This sounds like a criticism, but actually I think it might be a virtue to minimize the amount of induction in your argument. Suppose you're really uncertain about how to handle induction. Maybe you see a lot of plausible-sounding approaches, but you can poke holes in all of them. So instead of trying to actually solve the problem of induction, you set out to come up with a process that is robust to alternative views of induction. Ideally, if one or another theory of induction turns out to be correct, you'd like it to do the least damage possible to any specific inductive inferences you've made. One way to do this is to avoid induction as much as possible, so that you prevent "inductive contamination" from spreading to everything you believe.
That's exactly what hypothesis testing seems to do. You start with a set of premises and keep deriving logical conclusions from them until you're forced to say "this seems really unlikely if a certain hypothesis is true, so we'll assume that the hypothesis is false" in order to get any further. Then you just keep on deriving logical conclusions with your new premise. Bayesians start yelling about the base rate fallacy in the inductive step, but they're presupposing their own theory of induction. If you're trying to be robust to inductive theories, why should you listen to a Bayesian instead of anyone else?
Now does hypothesis testing actually accomplish induction that is robust to philosophical views of induction? Well, I don't know - I'm really just spitballing here. But it does seem to be a useful steel man.
The details are hazy at this point, but by assigning a realistic probability to the "something else" hypothesis, you avoid making overconfident estimates of your other hypotheses in a multiple hypothesis testing scenario.
See the discussion of multiple hypothesis testing in Jaynes's PTTLOS, starting on pg. 98, and the punchline on pg. 105.
I think this is especially relevant to standard null hypothesis significance testing because the likelihood of the data under the alternative hypothesis is never calculated, so you don't even get a hint that your model might just suck, and instead conclude that the null hypothesis should be rejected.
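To see the mechanics, here is a toy sketch with invented priors and likelihoods (not Jaynes's actual example): give the catch-all a small prior, and when the data are nearly impossible under every named hypothesis, the catch-all absorbs the posterior mass instead of letting the least-bad named hypothesis win by default.

```python
# Posterior over two named hypotheses plus a low-prior "something else".
# All numbers are invented for illustration.
import numpy as np

priors = np.array([0.495, 0.495, 0.01])      # H0, H1, "something else"
likelihoods = np.array([1e-6, 2e-6, 1e-3])   # P(data | H); data fit neither H0 nor H1

posterior = priors * likelihoods
posterior /= posterior.sum()
print(posterior.round(3))  # ~[0.043, 0.086, 0.871]: the catch-all dominates
```

Drop the third hypothesis and the same arithmetic reports 2:1 odds in favor of H1, with no hint that both named models fit the data terribly.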
What is the likelihood of the "something else" hypothesis? I don't think this is really a general remedy.
Also, you can get the same thing in the hypothesis testing framework by doing two hypothesis tests, one of which is a comparison to the "something else" hypothesis and one of which is a comparison to the original null hypothesis.
Finally, while I forgot to mention this above, in most cases where hypothesis testing is applied, you actually are considering all possibilities, because you are doing something like P0 = "X <= 0"...