passive_fist comments on Open thread, Dec. 29, 2014 - Jan 04, 2015 - Less Wrong Discussion
That paper doesn't seem to be arguing against Occam's razor. Rather, it seems to be making the more specific point that model complexity on the training data doesn't necessarily imply worse generalization error. I didn't read through the whole article, so I can't say whether the arguments hold up, but if you follow the procedure of updating your posterior as new data arrives, the point seems moot. Besides, the complexity-prior framework doesn't make that claim in the first place.
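To make the updating point concrete, here's a toy sketch (my own illustration, not from the paper): a complexity prior penalizes a model with a free parameter relative to a parameter-free one, yet Bayesian updating lets the posterior swing to the complex model once the data warrant it. The prior weight of 0.2 on the complex model is an arbitrary choice for the example.

```python
import math

def log_marginal_simple(heads, n):
    # Model A: fair coin, no free parameters.
    return n * math.log(0.5)

def log_marginal_complex(heads, n):
    # Model B: unknown bias theta with a uniform prior.
    # Marginal likelihood = heads! * tails! / (n + 1)!  (Beta integral)
    return (math.lgamma(heads + 1) + math.lgamma(n - heads + 1)
            - math.lgamma(n + 2))

def posterior_complex(heads, n, prior_complex=0.2):
    # Complexity prior: the one-parameter model B starts disfavored.
    la = math.log(1 - prior_complex) + log_marginal_simple(heads, n)
    lb = math.log(prior_complex) + log_marginal_complex(heads, n)
    m = max(la, lb)  # stabilize the exponentials
    return math.exp(lb - m) / (math.exp(la - m) + math.exp(lb - m))

# Balanced data: the simple fair-coin model keeps most of the posterior.
print(posterior_complex(5, 10))
# Heavily biased data: the likelihood overwhelms the complexity penalty.
print(posterior_complex(90, 100))
```

So a complexity prior never claims complex models generalize worse in any absolute sense; it just demands more evidence before favoring them, which updating supplies.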