
anon85 comments on Approximating Solomonoff Induction - Less Wrong Discussion

Post author: Houshalter, 29 May 2015 12:23PM (6 points)




Comment author: Wei_Dai, 03 June 2015 01:42:14AM, -1 points

> PAC learning, for instance, is fundamentally non-Bayesian. Saying that PAC learning approximates Bayesian inference is the same as saying that Bayesian inference approximates PAC learning. It's not a very meaningful statement.

I looked into PAC learning a bit when Scott Aaronson talked about it on his blog, and came to the following conclusion. 'Instead of saying “PAC-learning and Bayesianism are two different useful formalisms for reasoning about learning and prediction” I think we can keep just Bayesianism and reinterpret PAC-learning results as Bayesian-learning results which say that in some special circumstances, it doesn’t matter exactly what prior one uses. In those circumstances, Bayesianism will work regardless.'

Of course that was 7 years ago and I probably barely scratched the surface of the PAC learning literature even then. Are there any PAC learning results which can't be reinterpreted this way?
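[Editor's illustration of the claim above: in the conjugate Beta-Bernoulli setting, posteriors starting from very different priors become nearly identical once enough data arrives — the regime where "it doesn't matter exactly what prior one uses." This is a minimal sketch with made-up numbers, not anything from the original comment.]

```python
# Hedged sketch: with many i.i.d. coin flips, Bayesian posteriors from
# two very different Beta priors end up nearly identical.
import random

random.seed(0)
true_p = 0.7
flips = [1 if random.random() < true_p else 0 for _ in range(10_000)]
heads, tails = sum(flips), len(flips) - sum(flips)

# Two deliberately different Beta(a, b) priors (hypothetical choices).
priors = {"uniform": (1, 1), "skeptical": (50, 1)}

posterior_means = {}
for name, (a, b) in priors.items():
    # Beta-Bernoulli conjugacy: posterior is Beta(a + heads, b + tails),
    # whose mean is (a + heads) / (a + b + heads + tails).
    post_a, post_b = a + heads, b + tails
    posterior_means[name] = post_a / (post_a + post_b)

print(posterior_means)
# Both posterior means land close to true_p despite the different priors.
```

The prior's contribution (a, b) is fixed while the data terms (heads, tails) grow without bound, so the two posterior means converge to the empirical frequency.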

Comment author: anon85 03 June 2015 03:26:03AM 2 points [-]

PAC-learning has no concept of prior or even of likelihood, and it allows you to learn regardless. If by "Bayesianism" you mean "learning", then sure, PAC-learning is a type of Bayesianism. But I don't see why it's useful to view it that way (Bayes's rule is never used, for example).
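[Editor's illustration of the point above: a PAC-style learner for threshold concepts on [0, 1] via empirical risk minimization. Nothing in it is a prior, a likelihood, or an application of Bayes's rule; the guarantee is distribution-free. The target threshold and sample size are made-up values for the sketch.]

```python
# Hedged sketch: PAC-style learning of a threshold concept with no prior.
import random

random.seed(1)
true_threshold = 0.42  # unknown target concept: label = 1 iff x >= threshold

def sample(n):
    """Draw n labeled examples from the uniform distribution on [0, 1]."""
    xs = [random.random() for _ in range(n)]
    return [(x, 1 if x >= true_threshold else 0) for x in xs]

# Empirical risk minimization: output the smallest positively labeled
# point as the estimated threshold. No prior over hypotheses, no
# likelihood, no Bayes rule -- just a consistent hypothesis, with a
# Hoeffding-style guarantee that its error is small with high probability.
data = sample(5000)
positives = [x for x, y in data if y == 1]
h = min(positives)

# Generalization error here is the probability mass (under the uniform
# distribution) between the true and learned thresholds.
error = abs(h - true_threshold)
print(error)
```

With n samples the expected error shrinks on the order of 1/n, and the sample-complexity bound holds for any data distribution, which is the sense in which the framework never needs a prior.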