I recently started watching an interesting lecture by Michael Jordan on Bayesians and frequentists; he's a pretty successful machine learning expert who takes both views in his work. You can watch it here: http://videolectures.net/mlss09uk_jordan_bfway/. I found it interesting because his portrayal of frequentism is quite different from the standard portrayal on lesswrong. It isn't about whether probabilities are frequencies or beliefs; it's about trying to get a good model versus trying to get rigorous guarantees of performance across a class of scenarios. So I wonder why the meme on lesswrong is that frequentists think probabilities are frequencies; in practice the distinction seems to be more about how you approach a given problem. In fact, frequentists seem more "rational", since they're willing to use any tool that solves a problem instead of constraining themselves to methods that obey Bayes' rule.
In practice, it seems that while Bayes is the main tool for epistemic rationality, instrumental rationality should oftentimes be frequentist at the top level (with epistemic rationality, guided by Bayes, in turn guiding the specific application of a frequentist algorithm).
For instance, in many cases I should be willing, once I have a sufficiently constrained search space, to try different things until one of them works, without worrying about understanding why the specific thing I did worked (think shooting a basketball, or riffle shuffling a deck of cards). In practice, epistemic rationality seems most important for constraining the search space; after that, some sort of online learning algorithm can be applied to find the optimal action within that space (a rough sketch of this two-step idea follows below). Of course, this doesn't hold when you only get one chance at something, or when extreme precision is required, but neither is often the case in everyday life.
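Here is a minimal, hypothetical sketch of that two-step picture in Python: the candidate actions and their success rates are made-up illustrations, and the learner is a plain epsilon-greedy rule standing in for "some sort of online learning algorithm."

```python
import random

# Step 1 (epistemic): reasoning has already narrowed things down to a small
# set of candidate actions -- this is the "constrained search space".
candidate_actions = ["grip_a", "grip_b", "grip_c"]

# True success rates, unknown to the learner (made-up numbers for illustration).
true_success_rates = {"grip_a": 0.3, "grip_b": 0.7, "grip_c": 0.5}

# Step 2 (instrumental): an epsilon-greedy online learner tries actions and
# keeps track of how often each one works.
counts = {a: 0 for a in candidate_actions}
successes = {a: 0 for a in candidate_actions}
epsilon = 0.1  # fraction of trials spent exploring

def estimate(action):
    """Observed success rate so far (0 if the action hasn't been tried)."""
    return successes[action] / counts[action] if counts[action] else 0.0

for trial in range(1000):
    if random.random() < epsilon:
        action = random.choice(candidate_actions)          # explore
    else:
        action = max(candidate_actions, key=estimate)       # exploit best so far
    counts[action] += 1
    if random.random() < true_success_rates[action]:
        successes[action] += 1

print("Learned best action:", max(candidate_actions, key=estimate))
```

The point of the sketch is only that once the options are few, brute trial-and-error converges on the best one without any model of *why* it works.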
The main point of this thread is to raise awareness of the actual distinction between Bayesians and frequentists, and of why it's reasonable to be both, since lesswrong seems strongly Bayesian and there isn't even a good discussion of the fact that there are other methods out there.
From what I understand, applying Bayesian approaches in practical situations requires assumptions that have no formal justification, such as the choice of prior distribution or the local similarity of analogue measures (so that similar but not exact predictions can be informative). This changes the problem without necessarily solving it. It also doesn't address AI problems that aren't based on repeated experience, e.g. automated theorem proving. The advantage of statistical approaches such as SVMs is that they produce practically useful results with only a few parameters; combined with parameter search techniques, they can make fully automated predictions that often perform well experimentally. Regardless of whether Bayesianism is the law of inference, if such approaches cannot be applied automatically they are fundamentally incomplete and only as valid as the assumptions they are used with. If Bayesian approaches carry a fundamental advantage over these techniques, why is this not reflected in their practical performance on real-world AI problems such as face recognition?
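For concreteness, here is a small sketch of the "SVM plus automatic parameter search" workflow described above, using scikit-learn as an assumed, illustrative toolkit (the dataset and parameter grid are arbitrary choices, not anything from the original comment):

```python
# Sketch: an SVM whose few hyperparameters are chosen by a standard grid
# search, with no hand-specified prior or model structure from the user.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The only knobs are C and gamma; cross-validated grid search picks them.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.001, 0.0001]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Held-out accuracy:", search.score(X_test, y_test))
```

The appeal is exactly what the comment says: the whole pipeline runs end to end without the user supplying any distributional assumptions by hand.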
Oh and bring on the down votes you theory loving zealots :)
Bayesian approaches tend to be more powerful than other statistical techniques in situations where data is relatively scarce. This is partly because Bayesian approaches, being model-based, tend to have a richer structure that lets them take advantage of more of the structure of the data, and partly because Bayes allows for the explicit integration of prior assumptions and is therefore usually a more aggressive form of inference than most frequentist methods.
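A toy illustration of the small-data point, with made-up numbers: a beta-binomial model where the prior pulls the estimate toward prior knowledge, while the frequentist maximum-likelihood estimate is just the raw observed frequency. The Beta(5, 5) prior is an assumption chosen purely for the example.

```python
# Two heads in three flips (hypothetical data).
successes, trials = 2, 3

# Frequentist maximum-likelihood estimate: the raw observed frequency.
mle = successes / trials

# Bayesian posterior mean under an assumed Beta(5, 5) prior, which encodes
# a belief that the rate is probably near 0.5.
alpha_prior, beta_prior = 5.0, 5.0
posterior_mean = (alpha_prior + successes) / (alpha_prior + beta_prior + trials)

print(f"MLE estimate:            {mle:.3f}")             # 0.667
print(f"Bayesian posterior mean: {posterior_mean:.3f}")  # 0.538
```

With lots of data the two estimates converge; with only three observations the prior dominates, which is the sense in which the Bayesian approach extracts more from limited data (for better or worse, depending on the prior).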
I tried to find a good paper demonstrating this (called "learning...