A good nutshell description of the type of Bayesianism that many LWers think correct is objective Bayesianism with critical rationalism-like underpinnings. Where recursive justification hits bottom is particularly relevant. On my cursory skim, Albert only seems to be addressing "subjective" Bayesianism which allows for any choice of prior.
Albert seems to think the problem of the priors does Bayesianism in :-(
Popper seems outdated. Rejecting induction completely is not very realistic.
When a theory starts being inconsistent with the data you just throw it out. But what if you were wrong about the theory being inconsistent?
That's just not what you do. "Throwing out theories" is more strongly suggested by falsificationism. A Bayesian approach recognises that observations are uncertain and fallible. Observations inconsistent with a theory are strong negative evidence, but they don't really "falsify" a theory.
Here is Yudkowsky on the topic:
On the other hand, Popper's idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes' Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.
Popper "missed" confirmation by rejecting induction. He didn't get it, and now we know better.
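To make the point concrete, here is a minimal sketch (my own, not from the thread) of a single Bayes update in which an observation "inconsistent" with a theory is strong negative evidence but not a refutation; the prior and likelihood numbers are purely illustrative assumptions.

```python
def bayes_update(prior, p_obs_given_T, p_obs_given_not_T):
    """Posterior probability of theory T after one observation."""
    numerator = prior * p_obs_given_T
    return numerator / (numerator + (1 - prior) * p_obs_given_not_T)

# Theory T says the observation should almost never occur, but
# observations are fallible, so P(obs | T) is small yet nonzero.
prior = 0.5
p_obs_given_T = 0.01      # observation "inconsistent" with T
p_obs_given_not_T = 0.5   # the alternative explains it easily

posterior = bayes_update(prior, p_obs_given_T, p_obs_given_not_T)
print(round(posterior, 4))  # ≈ 0.0196: T is battered, not "falsified"
```

The posterior drops from 0.5 to about 0.02: the same rules as confirmation, just with much stronger evidence, which is Yudkowsky's point above.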
A related point is that science does not start with a set of observations or data.
Observations come second. Priors come first.
Solomonoff induction doesn't care about criticism, let alone recognise that criticism is pivotal in knowledge creation.
Critics might have a role to play for a resource-limited agent - for instance if they pointed out explanations that were short and were not yet receiving the proper consideration - or if they supplied more data.
Also some theories can't be refuted by empirical means, so what does Solomonoff Induction do about those?
It says to prefer the shorter one.
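As a toy sketch (mine, not from the thread) of what "prefer the shorter one" means: Solomonoff induction weights each hypothesis by 2 to the minus its program length in bits, so of two theories that fit the data equally well, the shorter one dominates the posterior. The hypothesis names and lengths below are invented for illustration.

```python
def solomonoff_style_weight(program_length_bits):
    """Occam-style prior weight: shorter programs get more mass."""
    return 2.0 ** (-program_length_bits)

# Two hypotheses consistent with the same data; lengths are assumptions.
hypotheses = {"short_theory": 40, "long_theory": 55}
weights = {h: solomonoff_style_weight(n) for h, n in hypotheses.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

best = max(posterior, key=posterior.get)
print(best)  # short_theory
```

Since neither hypothesis is refuted by the data, the empirically unresolvable choice between them is settled entirely by the length-based prior.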
If Solomonoff Induction does not discard theories inconsistent with the data, then this is wrong:
http://wiki.lesswrong.com/wiki/Solomonoff_induction
Whether it does or does not isn't important to the main argument here.
If consistent data makes a theory more probable, I might have expected a theory that has survived (non-empirical) criticism to b...
I have just rediscovered an article by Max Albert on my hard drive which I never got around to reading that might interest others on Less Wrong. You can find the article here. It is an argument against Bayesianism and for Critical Rationalism (of Karl Popper fame).
Abstract:
Any thoughts?