Model Comparison
Twenty coin flips yield 16 heads and 4 tails. Is the coin biased? Given data on 20,000 rolls of an imperfect die, can we deduce not just the die's bias, but the physical asymmetries that produce it? Given a set of x-y data, should we use a linear or a quadratic regression? These are questions of model comparison.
This sequence tackles model comparison from Bayesian first principles.
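As a taste of what "Bayesian first principles" means here, below is a minimal sketch of the coin question above. It compares a "fair" model against a "biased" model with an unknown heads-probability; the uniform prior on that probability is an illustrative assumption, not something fixed by the problem.

```python
# Bayes factor for the coin example: 16 heads, 4 tails in 20 flips.
# Model 1 ("fair"): p(heads) = 1/2 exactly.
# Model 2 ("biased"): p(heads) = theta, theta unknown, uniform prior (assumed).
from math import comb, exp, lgamma

heads, tails = 16, 4
n = heads + tails

# Marginal likelihood under the fair model: plain binomial probability.
p_data_fair = comb(n, heads) * 0.5**n

# Marginal likelihood under the biased model: integrate the binomial
# likelihood over theta, which gives C(n,h) * Beta(h+1, t+1).
# Log-gamma keeps the Beta function numerically stable.
log_beta = lgamma(heads + 1) + lgamma(tails + 1) - lgamma(n + 2)
p_data_biased = comb(n, heads) * exp(log_beta)

# A Bayes factor above 1 favors the biased model.
print(p_data_biased / p_data_fair)  # ~10.3: moderate evidence of bias
```

Note that the answer is not a hard yes/no: the data shift the odds toward "biased" by a factor of about ten, and the prior odds between the two models are still up to us.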
Outline:
- Very Short Introduction is exactly what it sounds like: it introduces the main idea and walks through a simple example, calculating the probability that a coin is biased given some data. Wolf's Dice is a similar but more in-depth example, which also sets the stage for later posts.
- In Wolf's Dice II, we try to figure out not just the biases of a die, but which physical asymmetries give rise to those biases. This example comes up again later, when we discuss cross-validation.
- The next three posts cover two methods for approximating Bayesian model comparison in practice: the Laplace approximation and the Bayesian information criterion (BIC). We also compare the two methods' performance. These three posts are mainly for people who want to implement Bayesian model comparison on larger-scale problems (e.g. in machine learning) and need to understand the approximation trade-offs; others will likely want to skip them. (A sketch of the Laplace approximation follows this list.)
- Finally, we compare Bayesian model comparison to cross-validation. We talk about the different questions each one answers, and when each should be used. We wrap up with some comments on what it means for two models to make different predictions, and why that matters in practice.
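To make the Laplace approximation concrete before those posts: it replaces the integral defining the marginal likelihood with a Gaussian integral around the posterior mode. The sketch below (again an illustrative toy, not the posts' code) applies it to the biased-coin model from the snippet above, where the exact answer is known.

```python
# Laplace approximation to the marginal likelihood of the biased-coin model,
# compared against the exact value. A uniform prior on theta is assumed.
import numpy as np
from math import comb

heads, tails = 16, 4
n = heads + tails

def log_joint(theta):
    # log[ likelihood * prior ]; the uniform prior contributes log(1) = 0.
    return np.log(comb(n, heads)) + heads*np.log(theta) + tails*np.log(1 - theta)

# Step 1: the posterior mode (closed-form for this model).
theta_map = heads / n

# Step 2: curvature, i.e. negative second derivative of log_joint at the mode.
curvature = heads/theta_map**2 + tails/(1 - theta_map)**2

# Step 3: Gaussian integral around the mode.
log_evidence = log_joint(theta_map) + 0.5*np.log(2*np.pi/curvature)

print(np.exp(log_evidence))  # ~0.0489
print(1 / (n + 1))           # exact marginal likelihood under a uniform prior: ~0.0476
```

The BIC arises from the same expansion by keeping only the pieces that grow with the amount of data: the maximized log likelihood, minus half a log(n) penalty per parameter.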