I have been exploring alternative approaches to statistics ever since I read about Halpern’s counter-example to Cox’s theorem and I came across this:

Could you please give a brief summary of Akaike statistics, with references, and explain why you think it is (or is not) a good foundation for statistics? Thanks!


Adam Bull

I wouldn't read too much into that quote; it seems pretty misleading to me. There are essentially two statistical paradigms, frequentism and Bayesianism, which you probably already know about.

The third item is referring to the likelihood principle, a philosophical position which states that inference should be based only on the likelihood. There are a small number of techniques in both frequentism and Bayesianism which have this property, but in general it's pretty restrictive, and I don't know of any practitioners who take it seriously.

You'll notice the fourth item is not referring to Akaike in general, but rather to the Akaike information criterion (AIC). This is a specific technique for model selection, which happens to depend only on the maximized likelihood and the number of free parameters in the model.
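To make that dependence concrete, here is a minimal sketch (my own illustration, not from the original discussion): the criterion is AIC = 2k − 2 ln L̂, where L̂ is the maximized likelihood and k the number of free parameters. The polynomial-fitting example below assumes Gaussian noise; all names and data in it are hypothetical.

```python
import numpy as np

def aic(max_log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln(L-hat).
    Lower is better; note it depends only on the maximized
    log-likelihood and the parameter count k."""
    return 2 * k - 2 * max_log_likelihood

def gaussian_mle_loglik(y, y_hat):
    """Maximized Gaussian log-likelihood of residuals, with the
    MLE of the noise variance plugged in."""
    n = y.size
    sigma2 = np.mean((y - y_hat) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

# Toy data: truly linear with a little noise (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

# Compare a simple and a more flexible polynomial model by AIC.
scores = {}
for degree in (1, 5):
    coefs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coefs, x)
    k = degree + 2  # polynomial coefficients plus the noise variance
    scores[degree] = aic(gaussian_mle_loglik(y, y_hat), k)
    print(f"degree {degree}: AIC = {scores[degree]:.2f}")
```

The higher-degree fit always achieves at least as high a likelihood, so the 2k penalty is what can make AIC prefer the smaller model; that penalty term is exactly the "dependence on model size" the answer describes.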

If you were the kind of person who enjoys armchair statistical philosophy, and has heard the likelihood principle is too restrictive, you might like to enlarge it to also allow dependence on the model size; you could then describe this as AIC-based statistics. It's not a term in common usage, and I think that principle is still too restrictive to be of much interest.