RobinZ comments on Simpson's Paradox - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (58)
Good post, thanks. One comment:
First, I assume you mean "aggregated", otherwise this statement doesn't make sense.
Second, I don't believe you. I say it's always smarter to use the partitioned data than the aggregate data. If you have a data set that includes the gender of the subject, you're always better off building two models (one for each gender) instead of one big model. Why throw away information?
There is a nugget of truth to your claim, which is that sometimes the partitioning strategy becomes impractical. To see why, consider what happens when you first partition on gender, then on history of heart disease. The number of partitions jumps from two to four, meaning there are fewer data samples in each partition. When you add a couple more variables, you will have more partitions than data samples, meaning that most partitions will be empty.
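To make the combinatorics concrete, here is a quick sketch (my own illustration, not part of the original comment): with k binary conditioning variables there are 2^k partitions, so a fixed-size data set soon leaves most partitions empty.

```python
# Illustrative sketch: partition counts grow exponentially in the number
# of binary conditioning variables, while the sample count stays fixed.
import random
from collections import Counter

random.seed(0)
n_samples = 100
for k in (1, 2, 5, 10):
    # Each sample gets a random combination of k binary attributes.
    data = [tuple(random.randint(0, 1) for _ in range(k))
            for _ in range(n_samples)]
    occupied = len(Counter(data))
    total = 2 ** k
    print(f"k={k:2d}: {total:5d} partitions, {occupied:4d} occupied, "
          f"{total - occupied:5d} empty")
```

With k = 10 there are already 1024 partitions for 100 samples, so at least 924 of them must be empty.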
So you don't always want to partition as finely as you could. Instead, you want to figure out how to combine the single-partition statistics corresponding to each condition (gender, history, etc.) into one large predictive model. This can be attacked with techniques like AdaBoost or MaxEnt.
Because, as von Neumann is supposed to have said, "with four parameters I can fit an elephant, and with five I can make him wiggle his trunk." Unless your data are good enough to support the existence of the extra factors, or you have other data available that do, a model fit to the lowest-level partitions is likely to capture more noise than reality.
Right, so the challenge is to incorporate as much auxiliary information as possible without overfitting. That's what AdaBoost does - if you run it for T rounds, the complexity of the model you get is linear in T, not exponential as it would be if you fit a separate model to each of the finest partitions.
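To illustrate the linear-in-T point, here is a minimal AdaBoost sketch (a hypothetical toy implementation, not anyone's production code): each of the T rounds adds one decision stump, so the final model has O(T) parameters no matter how many partitions the features could define.

```python
# Minimal AdaBoost with decision stumps: T rounds -> T weighted stumps,
# i.e. model complexity linear in T. Labels are assumed to be in {-1, +1}.
import numpy as np

def fit_stump(X, y, w):
    """Return the (error, feature, threshold, sign) of the best weighted stump."""
    best = None
    n, d = X.shape
    for j in range(d):
        for thresh in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thresh, sign, -sign)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thresh, sign)
    return best

def adaboost(X, y, T=10):
    n = len(y)
    w = np.full(n, 1.0 / n)          # uniform initial sample weights
    stumps = []
    for _ in range(T):
        err, j, thresh, sign = fit_stump(X, y, w)
        err = max(err, 1e-12)        # avoid division by zero for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= thresh, sign, -sign)
        w *= np.exp(-alpha * y * pred)   # upweight the misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thresh, sign))
    return stumps

def predict(stumps, X):
    score = sum(alpha * np.where(X[:, j] <= thresh, sign, -sign)
                for alpha, j, thresh, sign in stumps)
    return np.sign(score)
```

The stored model is just the list of T (weight, feature, threshold, sign) tuples, which is the sense in which the complexity grows linearly with the number of rounds.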
This is, in general, one of the advantages of Bayesian statistics: you can walk the line between aggregated and partitioned data with techniques that automatically perform partial pooling and share information between the levels of the analysis. (See pretty much anything written by Andrew Gelman; Bayesian Data Analysis is a great book covering his whole perspective.)
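As a toy illustration of partial pooling (my own sketch, not code from Gelman): each group's estimate is a precision-weighted compromise between its own mean and the grand mean, so small groups are shrunk heavily toward the pooled estimate while large groups mostly keep their own.

```python
# Toy partial pooling: shrink each group mean toward the grand mean in
# proportion to how little data the group has. tau2 is an assumed
# between-group variance, sigma2 an assumed within-group variance.
import numpy as np

def partial_pool(groups, tau2, sigma2):
    grand = np.mean([np.mean(g) for g in groups])  # mean of group means
    pooled = []
    for g in groups:
        n = len(g)
        # Precision-weighted compromise: weight -> 1 as n grows (trust the
        # group), weight -> tau2/(tau2+sigma2) as n shrinks (trust the pool).
        weight = tau2 / (tau2 + sigma2 / n)
        pooled.append(weight * np.mean(g) + (1 - weight) * grand)
    return pooled

# A group with one observation is pulled strongly toward the grand mean;
# a group with a hundred observations barely moves.
estimates = partial_pool([[10.0], [0.0] * 100], tau2=1.0, sigma2=1.0)
```

This is the same qualitative behavior a full hierarchical Bayesian model gives you, without having to choose between the fully aggregated and fully partitioned extremes by hand.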