
anon85 comments on Approximating Solomonoff Induction - Less Wrong Discussion

Post author: Houshalter, 29 May 2015 12:23PM


Comment author: anon85 02 June 2015 09:07:11AM 0 points

Tell me, did Eliezer even address PAC learning in his writing? If not, I would say he's being overconfident and ignorant in claiming that Bayesian probability is all there is and everything else is a mere approximation.

Comment author: Manfred 03 June 2015 02:39:03AM 0 points

PAC-learning is definitely something we don't talk about enough around here, but I don't see what the conflict is with it being an approximation of Bayesian updating.

Here's how I see it: you're updating (approximately) over a limited space of hypotheses that might not contain the true hypothesis. The idea that the best model in your space can still be approximately correct is expressible on both Bayesian and frequentist grounds: the approximate update over models is equivalent to an approximate update over predictions when you expect the universe to be modelable, and the best model also has a good frequency of success over the long run if the real universe is drawn from a sufficiently nice distribution.
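To make the first half of that concrete, here is a minimal sketch (mine, not anything from the post) of Bayesian updating over a small, misspecified hypothesis space: the true coin bias is not among the candidate models, but the posterior still concentrates on the candidate closest to the data. All the numbers are made up for illustration.

```python
# Hypothetical sketch: Bayesian updating over a limited hypothesis space
# of coin biases. The "true" bias generating the data need not be in the
# space; the posterior concentrates on the closest available model.
hypotheses = [0.25, 0.5, 0.75]           # candidate values of P(heads)
posterior = [1 / 3, 1 / 3, 1 / 3]        # uniform prior over models

data = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]    # observed flips (1 = heads)

for flip in data:
    # Likelihood of this observation under each model
    likelihood = [h if flip == 1 else 1 - h for h in hypotheses]
    unnorm = [p * l for p, l in zip(posterior, likelihood)]
    z = sum(unnorm)
    posterior = [u / z for u in unnorm]  # Bayes's rule, renormalized

best = hypotheses[posterior.index(max(posterior))]
print(best, posterior)
```

With 7 heads in 10 flips, the posterior ends up favoring the 0.75 model, the best approximation available in the limited space.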

But I'm definitely a n00b at this stuff, so if you have other ideas (and reading recommendations) I'd be happy to hear them.

Comment author: anon85 03 June 2015 03:29:27AM 1 point

> Here's how I see it: you're updating (approximately) over a limited space of hypotheses that might not contain the true hypothesis. The idea that the best model in your space can still be approximately correct is expressible on both Bayesian and frequentist grounds: the approximate update over models is equivalent to an approximate update over predictions when you expect the universe to be modelable, and the best model also has a good frequency of success over the long run if the real universe is drawn from a sufficiently nice distribution.

The "update" doesn't use Bayes's rule; there's no prior; there's no concept of belief. Why should we still consider it Bayesian? I mean, if you consider any learning to be an approximation of Bayesian updating, then sure, PAC-learning qualifies. But that begs the question, doesn't it?
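To illustrate the point being made, here is a hypothetical sketch of PAC-style learning via empirical risk minimization over threshold classifiers on [0, 1]. Note what is absent: no prior, no Bayes's rule, no degrees of belief over hypotheses. The learner just picks the hypothesis with the lowest error on the sample, and the PAC guarantee is the frequentist statement that, with enough samples, this hypothesis is probably approximately correct. The target threshold and sample size are made up for the example.

```python
import random

# Hypothetical sketch of PAC-style learning: empirical risk minimization
# over a finite class of threshold classifiers. There is no prior and no
# Bayesian update -- just "pick the hypothesis with the fewest sample
# errors" plus a probabilistic guarantee about generalization.
random.seed(0)
true_threshold = 0.37                    # unknown target concept
sample = [(x, int(x >= true_threshold))
          for x in (random.random() for _ in range(500))]

candidates = [i / 100 for i in range(101)]   # finite hypothesis class

def empirical_error(t):
    """Fraction of sample points the threshold classifier t misclassifies."""
    return sum(int(x >= t) != y for x, y in sample) / len(sample)

# ERM: minimize observed error, with no notion of belief over hypotheses
best = min(candidates, key=empirical_error)
print(best, empirical_error(best))
```

Whether one wants to view this as an implicit approximation of a Bayesian update over a uniform prior, or as a genuinely different framework, is exactly the question under dispute above.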