Related to: Beyond Bayesians and Frequentists
Update: This comment by Cyan clearly explains the mistake I made - I forgot that the ordering of the hypothesis space is necessary for hypothesis testing to work. I'm not entirely convinced that NHST can't be recast in some "thin" theory of induction that may well change the details of the actual test, but I have no idea how to formalize this notion of a "thin" theory, and most of the commenters either 1) misunderstood my aim (my fault, not theirs) or 2) don't think it can be formalized.
I'm teaching an econometrics course this semester, and one of the things I'm trying to do is make sure that my students actually understand the logic of the hypothesis test. You can motivate it in terms of controlling false positives, but that sort of interpretation doesn't seem to be generally applicable. Another motivation is a simple deductive syllogism with a small but very important inductive component. I'm borrowing the idea from something we discussed in a course I had with Mark Kaiser - he called it the "nested syllogism of experimentation." I think it applies equally well to most or even all hypothesis tests. It goes something like this:
1. Either the null hypothesis or the alternative hypothesis is true.
2. If the null hypothesis is true, then the data has a certain probability distribution.
3. Under this distribution, our sample is extremely unlikely.
4. Therefore under the null hypothesis, our sample is extremely unlikely.
5. Therefore the null hypothesis is false.
6. Therefore the alternative hypothesis is true.
An example looks like this:
Suppose we have a random sample of size $n$ from a population with a normal distribution that has an unknown mean $\mu$ and unknown variance $\sigma^2$. Then:
1. Either $\mu = \mu_0$ or $\mu \neq \mu_0$, where $\mu_0$ is some constant.
2. Construct the test statistic $t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}$, where $n$ is the sample size, $\bar{x}$ is the sample mean, and $s$ is the sample standard deviation.
3. Under the null hypothesis, $t$ has a $t$ distribution with $n - 1$ degrees of freedom.
4. $P(|t_{n-1}| \geq |t|)$ is really small under the null hypothesis (e.g. less than 0.05).
5. Therefore the null hypothesis is false.
6. Therefore the alternative hypothesis is true.
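For concreteness, here is a minimal sketch of steps 2-4 in Python. The sample, the random seed, and the choice $\mu_0 = 0$ are made up purely for illustration; SciPy's `ttest_1samp` would give the same answer in one call.

```python
# Minimal sketch of the one-sample t-test above; the data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=25)  # stand-in for our random sample
mu_0 = 0.0                                   # the constant in the null hypothesis

n = x.size
t = (x.mean() - mu_0) / (x.std(ddof=1) / np.sqrt(n))  # step 2: test statistic
p_value = 2 * stats.t.sf(abs(t), df=n - 1)            # steps 3-4: tail probability under the t distribution

print(f"t = {t:.3f}, p = {p_value:.4f}")
# Steps 5-6: if p_value is below 0.05, reject the null and accept the alternative.
```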
What's interesting to me about this process is that it almost tries to avoid induction altogether. Only the move from step 4 to step 5 seems anything like an inductive argument. The rest is purely deductive - though admittedly it takes a couple of premises to quantify just how likely our sample was, and that surely has something to do with induction. But it's still a bit like solving the problem of induction by sweeping it under the rug and then putting a big heavy deduction table on top so no one notices the lumps underneath.
This sounds like it's a criticism, but actually I think it might be a virtue to minimize the amount of induction in your argument. Suppose you're really uncertain about how to handle induction. Maybe you see a lot of plausible sounding approaches, but you can poke holes in all of them. So instead of trying to actually solve the problem of induction, you set out to come up with a process which is robust to alternative views of induction. Ideally, if one or another theory of induction turns out to be correct, you'd like it to do the least damage possible to any specific inductive inferences you've made. One way to do this is to avoid induction as much as possible so that you prevent "inductive contamination" spreading to everything you believe.
That's exactly what hypothesis testing seems to do. You start with a set of premises and keep deriving logical conclusions from them until you're forced to say "this seems really unlikely if a certain hypothesis is true, so we'll assume that the hypothesis is false" in order to get any further. Then you just keep on deriving logical conclusions with your new premise. Bayesians start yelling about the base rate fallacy in the inductive step, but they're presupposing their own theory of induction. If you're trying to be robust to inductive theories, why should you listen to a Bayesian instead of anyone else?
Now does hypothesis testing actually accomplish induction that is robust to philosophical views of induction? Well, I don't know - I'm really just spitballing here. But it does seem to be a useful steel man.
But it isn't theory-of-induction-free. It just pretends to be. There's a theory of induction right in there, where you correctly identify it, in step 4->5. It's no better, no more likely to be true, and no more robust, merely on account of being squashed up small and hidden.
You haven't truly minimized the "amount of induction" in the argument; only the "amount of induction that's easily visible", which I don't think is a parameter that deserves minimizing. You'd need just the same amount of induction if, say, instead of doing classical NHST you did Bayesian inference or maximum likelihood (= Bayesian inference where you pretend not to have priors) or something. You could squash it up just as small, too; you'd just need to make steps 1-4 more quantitative.
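To make that concrete, here is one rough sketch of what a more quantitative, Bayesian version of steps 1-4 might look like for the same one-sample problem: put a prior on $\mu$ under the alternative and compare marginal likelihoods. The prior width, the plug-in for the unknown variance, and the data are assumptions chosen purely for illustration.

```python
# Rough sketch of a Bayesian analogue of steps 1-4: compare H0 (mu = mu_0)
# against H1 (mu ~ Normal(mu_0, tau^2)) by their marginal likelihoods.
# tau and the plug-in sigma are illustrative assumptions, not a standard recipe.
import numpy as np
from scipy import stats

def bayes_factor_01(x, mu_0=0.0, tau=1.0):
    sigma = x.std(ddof=1)  # crude plug-in for the unknown standard deviation
    # Marginal likelihood under H0: mu is fixed at mu_0.
    m0 = np.exp(stats.norm.logpdf(x, loc=mu_0, scale=sigma).sum())
    # Marginal likelihood under H1: average the likelihood over the prior on mu.
    mus = np.linspace(mu_0 - 5 * tau, mu_0 + 5 * tau, 2001)
    log_lik = np.array([stats.norm.logpdf(x, loc=m, scale=sigma).sum() for m in mus])
    prior = stats.norm.pdf(mus, loc=mu_0, scale=tau)
    m1 = np.trapz(np.exp(log_lik) * prior, mus)
    return m0 / m1  # values below 1 favor the alternative

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=25)  # the same kind of hypothetical sample as above
print(bayes_factor_01(x))
```

The inductive commitment hasn't disappeared here either; it has just moved into the prior and into the rule for acting on the resulting odds.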
Consider a case -- they're not hard to find -- where you have a test statistic that's low-probability on any of your model hypotheses. Then the same logic as you've used says that you're "forced" to conclude that all your hypotheses are false -- even if it happens that one of them is right. (In practice your model is never exactly right, but never mind that.) To me, this shows that the whole enterprise is fundamentally non-deductive, and that trying to make it look as much as possible like a pure deduction is actively harmful.
Maybe a better way of phrasing what I'm trying to point out is that induction is isolated to a single step. Instead of working directly with probabilities, which require a theory of what probabilities are, NHST waves its hands a bit and treats the inductive step as deductive, but transparently so (once you lay out the deduction, anyway).
Your point about a test statistic that's low-probability on all possible model hypotheses is a good one - and it suggests that the details of hypothesis testing should change even if the general logic is kept. I doubt that t...