A large likelihood ratio? I have two likelihood functions -- at what parameter values should I evaluate each one when forming the ratio? Given that one model is nested in the other at the boundary of the parameter space (Gaussian errors versus Student-t errors with degrees of freedom fit to the data), what counts as a large enough likelihood ratio to prefer the more general model?
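To make the question concrete, here is a minimal sketch of the comparison being described, using SciPy. The usual convention is to evaluate each likelihood at its own maximum-likelihood estimates; the simulated data and all variable names here are illustrative assumptions, not part of the original question.

```python
import numpy as np
from scipy import stats

# Illustrative data: heavy-tailed, so the Student-t model should win.
rng = np.random.default_rng(0)
data = stats.t.rvs(df=3, size=1000, random_state=rng)

# Fit each model by maximum likelihood.
df_hat, loc_t, scale_t = stats.t.fit(data)   # Student-t: df, location, scale
loc_n, scale_n = stats.norm.fit(data)        # Gaussian: location, scale

# Maximized log-likelihoods, evaluated at each model's own MLEs.
ll_t = stats.t.logpdf(data, df_hat, loc=loc_t, scale=scale_t).sum()
ll_n = stats.norm.logpdf(data, loc=loc_n, scale=scale_n).sum()

# Likelihood-ratio statistic: 2 * (general minus restricted).
# Caveat: the Gaussian sits at the df -> infinity boundary of the
# Student-t family, so the standard chi-squared calibration from
# Wilks' theorem does not apply directly at that boundary.
lr_stat = 2 * (ll_t - ll_n)
print(lr_stat)
```

The boundary issue is exactly why "how large is large enough" has no off-the-shelf answer here; a parametric bootstrap under the Gaussian null is one hedge-free way to calibrate the statistic empirically.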
I still can't see the relevance of Bayesian statistics over frequentist statistics, and I take Less Wrong as evidence that this is a topic in need of clarification.
I'm looking for a historical narrative of the development of mathematics that tells me what mistake led to frequentism over Bayesianism, which is supposedly the correct view. Alternatively, you can just say "Read PT:TLOS!" if it's that silly of a question.