IlyaShpitser comments on Open Thread, Jun. 29 - Jul. 5, 2015 - Less Wrong Discussion

Post author: Gondolinian, 29 June 2015 12:14AM (5 points)

Comment author: MrMind, 30 June 2015 09:07:19AM, 0 points

I'm very tempted to argue that it is!
But what I wanted to convey is that it feels like I'm supposed to learn something which is manifestly inferior, in its logical foundations, to what is already known and available.

And maybe under the constraint of computational cost the Bayesian and the frequentist approaches do end up in the same place, but where's the proof? Where's the place where someone says: "This is Bayesian machine learning, but it's computationally too costly, so by making such-and-such simplifying assumptions we end up with frequentist machine learning"?
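
The closest thing I know of to a statement like that is the textbook reduction of Bayesian linear regression to ridge regression (my example, not something anyone in this thread has pointed to): replace the full posterior with its mode, purely to save computation, and the Bayesian estimate collapses to a penalized frequentist one. With $y = Xw + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$ and prior $w \sim \mathcal{N}(0, \tau^2 I)$,

$$
\hat{w}_{\mathrm{MAP}} = \arg\max_w \big[ \log p(y \mid X, w) + \log p(w) \big] = \arg\min_w \Big[ \lVert y - Xw \rVert_2^2 + \tfrac{\sigma^2}{\tau^2} \lVert w \rVert_2^2 \Big],
$$

which is exactly ridge regression with penalty $\lambda = \sigma^2/\tau^2$; letting $\tau \to \infty$ (a flat prior) recovers ordinary least squares. I'd like to see the analogous derivation spelled out for the rest of machine learning.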

Instead, what I read are things like: "In practice, Bayesian optimization has been shown to obtain better results in fewer experiments than grid search and random search" (from here).
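
For what it's worth, that quoted claim is easy to poke at on a toy problem. Here is a minimal sketch, assuming scikit-optimize is installed (the objective function, bounds, and budget below are hypothetical, chosen only for illustration), that gives random search and GP-based Bayesian optimization the same evaluation budget:

```python
# Toy comparison: random search vs. Bayesian optimization on one
# hyperparameter, with an identical budget of function evaluations.
import numpy as np
from skopt import gp_minimize  # scikit-optimize

def objective(params):
    # Hypothetical validation loss with a single minimum near x = 0.3.
    x, = params
    return (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

bounds = [(0.0, 1.0)]
budget = 20

# Random search: sample the space uniformly, keep the best value found.
rng = np.random.default_rng(0)
best_random = min(objective([x]) for x in rng.uniform(0.0, 1.0, budget))

# Bayesian optimization: a Gaussian-process surrogate proposes each
# next evaluation point instead of sampling blindly.
result = gp_minimize(objective, bounds, n_calls=budget, random_state=0)

print("random search best:", best_random)
print("bayesian opt best :", result.fun)
```

Nothing like this proves the general claim, of course, but it is the shape of experiment the quoted source runs at scale.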

Comment author: IlyaShpitser, 30 June 2015 11:20:22AM, 2 points

> I'm very tempted to argue that it is!

Ok, thank you for your time.