
IlyaShpitser comments on Open thread, Nov. 16 - Nov. 22, 2015 - Less Wrong Discussion

Post author: MrMind 16 November 2015 08:03AM


Comment author: IlyaShpitser 17 November 2015 09:53:54PM

I think I'm giving up on correcting "google/wikipedia experts"; it's just a waste of time, and a losing battle anyway. (I mean the GP, who wrote the following.)


"I get the impression that ML folks have to be much more careful about overfitting because their methods are not guaranteed to find the 'best' fit; they are heavily non-deterministic. As a result, an overfitted model has essentially no chance of extrapolating successfully beyond the training set. Traditional statistics doesn't have this problem: there, your model will still be optimal in some appropriate sense, no matter how poor your measures of fit are."

That said, the quoted claim does not make sense to me. Bias-variance tradeoffs are fundamental everywhere: a classical estimator can be exactly optimal in, say, the least-squares sense and still overfit, and an overfit model extrapolates badly no matter how it was fit.
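
To make that concrete, here is a minimal sketch (my own illustration, not from the comment) using NumPy's np.polyfit, which solves ordinary least squares exactly and deterministically. Even this textbook-classical, perfectly "optimal" fitting procedure overfits once the model class is too flexible, and the overfit model fails to extrapolate:

    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: 20 noisy samples of sin(2*pi*x) on [0, 1].
    x_train = np.linspace(0.0, 1.0, 20)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.3, size=x_train.shape)

    # Noise-free points just past the training range, to probe extrapolation.
    x_test = np.linspace(1.0, 1.3, 20)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (1, 3, 9):
        # np.polyfit is exact, deterministic least squares: "optimal" in that sense.
        coeffs = np.polyfit(x_train, y_train, deg=degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.3f}, extrapolation MSE {test_mse:.3g}")

In a typical run the degree-1 fit underfits (high bias), while the degree-9 fit has the lowest training error and a far larger error on the extrapolation points (high variance): exactly the failure mode the GP attributes specifically to non-deterministic ML methods.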