I've started learning Machine Learning (heh!), and upon reading the first chapter of the most famous textbook I was already gasping for air.
For someone like me who grew into probability with Jaynes' book, seeing in the first chapter that algorithms are trained on the same data multiple times (cross-validation) was... annoying, let's say (I actually screamed at the book).
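(If you haven't met it: "cross-validation" means something like the following numpy sketch - with k folds, every data point gets used for training k-1 times and for validation once. The data and the linear model here are made up, just to show the reuse that set me off.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                       # hypothetical features
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

k = 5
folds = np.array_split(rng.permutation(100), k)     # k disjoint validation folds

scores = []
for i in range(k):
    val_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # Ordinary least squares on the k-1 training folds
    w, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
    # Score on the held-out fold
    scores.append(np.mean((X[val_idx] @ w - y[val_idx]) ** 2))

print("per-fold validation MSE:", np.round(scores, 4))
print("mean:", np.mean(scores))
```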
Is there a sane textbook on machine learning? I don't demand one that starts from objective Bayesianism; that would be asking too much. But at least something that assumes Bayesianism as a foundation? Pretty please?
Eventually it makes sense, I promise. "Bayesianism" in the sense of keeping track of every hypothesis is computationally very expensive - modern algorithms keep track of only a tiny subset of hypotheses (only those representable by a neural network [or what have you], and even then only those reachable by gradient descent). This opens you up to the overfitting problem, where the simplest hypothesis in your space that fits the data perfectly actually carries very little information about the true external reality. You need some way of throwing away the parts ...
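If a toy example helps, here's a hypothetical numpy sketch (the degree-9 polynomial hypothesis space, the sine "external reality", and all the constants are made up for illustration, not taken from any textbook):

```python
import numpy as np

rng = np.random.default_rng(1)

def true_f(x):
    # The "true external reality" we are trying to learn
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(0, 1, 10)
y_train = true_f(x_train) + rng.normal(scale=0.2, size=10)  # noisy observations
x_test = np.linspace(0, 1, 200)

def poly_features(x, degree):
    # Columns are x^0, x^1, ..., x^degree
    return np.vander(x, degree + 1, increasing=True)

def fit(x, y, degree, ridge=0.0):
    # Minimize ||A w - y||^2 + ridge * ||w||^2 via an augmented least squares
    A = poly_features(x, degree)
    if ridge == 0.0:
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
    else:
        A_aug = np.vstack([A, np.sqrt(ridge) * np.eye(degree + 1)])
        y_aug = np.concatenate([y, np.zeros(degree + 1)])
        w, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return w

for ridge in (0.0, 1e-3):
    w = fit(x_train, y_train, degree=9, ridge=ridge)
    train_mse = np.mean((poly_features(x_train, 9) @ w - y_train) ** 2)
    test_mse = np.mean((poly_features(x_test, 9) @ w - true_f(x_test)) ** 2)
    print(f"ridge={ridge}: train MSE {train_mse:.2e}, test MSE {test_mse:.2e}")
```

The unregularized fit reproduces the ten training points almost exactly yet says almost nothing true about the underlying function; the small ridge penalty is one crude way of throwing away excess hypothesis complexity.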
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday and end on Sunday.