IlyaShpitser comments on Open Thread, Jun. 29 - Jul. 5, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm very tempted to argue that it is!
But what I wanted to convey is that it feels like I'm supposed to learn something that is manifestly inferior, in its logical foundation, to what is already known and available.
And maybe under the constraint of computational cost the finishing point of the Bayesian and the frequentist approach is the same, but where's the proof? Where's the place where someone says: "This is Bayesian machine learning, but it's computationally too costly. So by making these simplifying assumptions, we end up with frequentist machine learning."?
Instead, what I read are things like: "In practice, Bayesian optimization has been shown to obtain better results in fewer experiments than grid search and random search" (from here).
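To make the quoted claim concrete, here is a toy sketch of what "Bayesian optimization" means there: fit a Gaussian-process surrogate to the evaluations so far, then pick the next hyperparameter by maximizing expected improvement instead of sampling blindly. Everything below (the objective function, the kernel, the length-scale, the budgets) is my own illustrative assumption, not anything from the quoted paper.

```python
# Toy 1-D Bayesian optimization: GP surrogate + expected improvement (EI).
# All constants here are illustrative choices, not from the quoted source.
from math import erf
import numpy as np

def objective(x):
    # Toy function to minimize; its global minimum is about -0.5 near x = -0.36.
    return np.sin(3 * x) + x ** 2 - 0.7 * x

def rbf(a, b, length_scale=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Exact GP regression: posterior mean and std at query points Xs.
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks_T = rbf(X, Xs).T                      # shape (m, n)
    mu = Ks_T @ K_inv @ y
    # diag of the posterior covariance; prior variance rbf(x, x) = 1.
    var = np.clip(1.0 - np.sum((Ks_T @ K_inv) * Ks_T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def bayes_opt(n_init=3, n_iter=7, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(-2.0, 2.0, 401)       # candidate hyperparameter values
    X = rng.uniform(-2.0, 2.0, n_init)       # a few random initial evaluations
    y = objective(X)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        best = y.min()
        z = (best - mu) / sigma
        cdf = np.array([0.5 * (1.0 + erf(v / np.sqrt(2.0))) for v in z])
        pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
        ei = (best - mu) * cdf + sigma * pdf  # EI for minimization
        x_next = grid[np.argmax(ei)]          # evaluate where EI is largest
        X = np.append(X, x_next)
        y = np.append(y, objective(x_next))
    i = int(np.argmin(y))
    return X[i], y[i]

if __name__ == "__main__":
    x_best, y_best = bayes_opt()
    print(f"best x = {x_best:.3f}, f(x) = {y_best:.3f}")
```

The point of the comparison in the paper is just that each of these ten evaluations is chosen using the posterior over the objective, whereas grid and random search ignore everything learned from previous evaluations.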
Ok, thank you for your time.