Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
The Vapnik-Chervonenkis dimension also offers a way of filling in the details of the concept of "simple" appropriate to Occam's Razor. I've read about it in the context of statistical learning theory, specifically "probably approximately correct" (PAC) learning.
Having successfully tuned the parameters of your model to fit the data, how likely is it to fit new data — that is, how well does it generalise? The VC dimension comes with formulae that tell you. I've not been able to follow the field, but I suspect that VC dimension leads to worst-case estimates whose usefulness is harmed by their pessimism.
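One standard form of the bound (there are several variants; this is a sketch of the classic one, with the function name and sample sizes my own choices for illustration) says that with probability at least 1 − δ, the gap between test and training error of a classifier from a hypothesis class of VC dimension d, fit to n samples, is at most √((d·(ln(2n/d) + 1) + ln(4/δ)) / n):

```python
import math

def vc_generalization_gap(n, d, delta=0.05):
    """Upper bound on |test error - training error| from one classic
    VC bound: sqrt((d*(ln(2n/d) + 1) + ln(4/delta)) / n).
    Holds with probability >= 1 - delta over the draw of the sample."""
    return math.sqrt((d * (math.log(2 * n / d) + 1) + math.log(4 / delta)) / n)

# Even a modest VC dimension needs a lot of data before the bound bites:
for n in (1_000, 100_000, 10_000_000):
    print(n, round(vc_generalization_gap(n, d=10), 4))
```

Running this shows the pessimism: at d = 10 the bound only guarantees a gap under ~0.26 with a thousand samples, and it shrinks roughly like 1/√n, so practical models often generalise far better than the worst-case formula promises.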