Yes, I think that was better, because the ground truth is Kepler's third law and jimrandomh pointed out your method actually recaptures a (badly obfuscated and possibly overfit) variant of it.
"Dimensionality" is totally relevant in any approach to supervised learning. But it matters even without considering the bias/variance trade-off, etc.
Imagine that you have a high-dimensional predictor, of which one dimension completely determines the outcome and the rest are noise. Your shortest possible generating algorithm is going to have to pick out the relevant dimension. So as the dimensionality of the predictor increases, the algorithm length will necessarily increase, just for information-theoretic reasons.
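To put a rough number on that point: just naming which one of d dimensions is the relevant one costs about log2(d) bits, so the minimal description length grows with dimensionality even when only a single dimension matters. A minimal sketch (the function name is mine, purely illustrative):

```python
import math

def index_cost_bits(d: int) -> int:
    """Bits needed to single out one dimension among d equally likely candidates."""
    return math.ceil(math.log2(d))

# The cost of pointing at the relevant dimension grows logarithmically with d.
for d in (2, 1024, 10**6):
    print(d, index_cost_bits(d))
```

This is only the cost of locating the signal; any description of the mapping from that dimension to the outcome comes on top of it.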
Why is Will Newsome doing this? My model of him just broke.
I'm going with this commenter being Will. What do I win?