I'm very tempted to argue that it is!
But what I wanted to convey is that it feels like I'm supposed to learn something which is manifestly inferior, in its logical foundation, to what is already known and available.
And maybe, under the constraint of computational cost, the end point of the Bayesian and the frequentist approaches is the same, but where's the proof? Where's the place where someone says: "This is Bayesian machine learning, but it's computationally too costly, so by making such-and-such simplifying assumptions we end up with frequentist machine learning"?
Instead, what I read are things like: "In practice, Bayesian optimization has been shown to obtain better results in fewer experiments than grid search and random search" (from here).
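To make the comparison concrete, here is a minimal sketch of the two baselines that claim is about, grid search and random search, tuning a single hyperparameter of a toy objective. The objective function and all the numbers are hypothetical stand-ins for a model's validation score; nothing here is from the quoted source.

```python
import random

# Toy 1-D "validation score" with its maximum at x = 0.3.
# This is an illustrative stand-in, not a real model.
def objective(x):
    return -(x - 0.3) ** 2

def grid_search(budget):
    # Evaluate the objective on an evenly spaced grid of `budget` points in [0, 1].
    candidates = [i / (budget - 1) for i in range(budget)]
    return max(candidates, key=objective)

def random_search(budget, seed=0):
    # Evaluate the objective at `budget` uniformly random points in [0, 1].
    rng = random.Random(seed)
    candidates = [rng.random() for _ in range(budget)]
    return max(candidates, key=objective)

best_grid = grid_search(20)
best_rand = random_search(20)
print(best_grid, best_rand)
```

Bayesian optimization differs from both in that it fits a probabilistic surrogate (typically a Gaussian process) to the evaluations seen so far and uses it to pick the next point, which is why it can need fewer evaluations; that part is omitted here because it doesn't fit in a few lines of dependency-free code.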
There is the probabilistic programming community, which uses clean tools (programming languages) to hand-construct models with many unknown parameters. They use approximate Bayesian methods for inference, and they are slowly improving the efficiency/scalability of those techniques.
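The kind of approximate inference those systems automate can be sketched by hand in a few lines. Below is a minimal Metropolis sampler for the bias of a coin under a uniform prior; the data (70 heads, 30 tails), step size, and chain length are all illustrative choices, not taken from any particular probabilistic programming library.

```python
import math
import random

def log_posterior(theta, heads, tails):
    # Uniform prior on (0, 1) plus a binomial likelihood, up to a constant.
    if not 0.0 < theta < 1.0:
        return float("-inf")
    return heads * math.log(theta) + tails * math.log(1.0 - theta)

def metropolis(heads, tails, steps=5000, seed=1):
    # Random-walk Metropolis: propose a small Gaussian step, accept it
    # with probability min(1, posterior ratio).
    rng = random.Random(seed)
    theta = 0.5
    samples = []
    for _ in range(steps):
        proposal = theta + rng.gauss(0.0, 0.1)
        log_accept = (log_posterior(proposal, heads, tails)
                      - log_posterior(theta, heads, tails))
        if rng.random() < math.exp(min(0.0, log_accept)):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis(heads=70, tails=30)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # drop burn-in
print(posterior_mean)
```

A probabilistic programming language lets you write only the model (the `log_posterior` part, or less) and supplies the sampler; the scalability work mentioned above is largely about replacing this kind of random-walk sampler with faster methods such as Hamiltonian Monte Carlo or variational inference.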
Then there is the neural net & optimization community, which uses general automated models. It is more 'frequentist' (or perhaps just ad hoc), but there are now some Bayesian inroads there too. That community has the most efficient/scalable learning methods, but it isn't a...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday and end on Sunday.