A good nutshell description of the type of Bayesianism many LWers consider correct is objective Bayesianism with critical-rationalism-like underpinnings. "Where Recursive Justification Hits Bottom" is particularly relevant here. On a cursory skim, Albert seems to address only "subjective" Bayesianism, which allows any choice of prior.
The article seems to think the problem of the priors does Bayesianism in :-(
Popper seems outdated. Rejecting induction completely is not very realistic.
I have read both Popper and Deutsch. Could you explain your comment about Deutsch?
You say human scientists do not face the same problem situation as Solomonoff Induction. But both are trying to create knowledge, right? In Solomonoff Induction it is assumed that all knowledge comes to us via our sensory organs as data streams, and that the task of the knowledge creator is to compress that data with the aim of making good predictions. This, it is held, is in some sense what scientists and all people do when they create knowledge, and it is what the ideal knowledge creator should do. Critical rationalism rejects the idea that all knowledge comes to us via the senses - that is empiricism - and it rejects the idea that theories are mere instruments for making predictions - that is instrumentalism.
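To make the compression-based picture of prediction concrete, here is a minimal toy sketch in Python. It is emphatically not Solomonoff Induction itself (which is incomputable); the "programs" are assumed, for illustration, to be just repeating bit patterns, and the prior penalizes pattern length by 2^-length, mimicking the universal prior's preference for shorter (more compressed) hypotheses:

```python
from itertools import product

def hypotheses(max_len):
    """Enumerate toy 'programs': each binary string p of length <= max_len
    stands for the hypothesis 'the data stream repeats p forever'.
    Prior weight 2**-len(p) mimics the universal prior's length penalty."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def predicts(pattern, data):
    """True if repeating `pattern` reproduces the observed prefix `data`."""
    return all(data[i] == pattern[i % len(pattern)] for i in range(len(data)))

def predict_next(data, max_len=6):
    """Weighted vote over all patterns consistent with the data so far.
    Shorter (better-compressing) patterns carry exponentially more weight,
    so they dominate the prediction of the next bit."""
    weight = {"0": 0.0, "1": 0.0}
    for p in hypotheses(max_len):
        if predicts(p, data):
            weight[p[len(data) % len(p)]] += 2.0 ** -len(p)
    return max(weight, key=weight.get)
```

Given the prefix "010101", the shortest consistent pattern is "01", so the weighted vote predicts "0" next. The critical-rationalist objection above is precisely that this picture treats theories as nothing more than such prediction machinery.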
You seem to think that predictive success can come without underlying explanations, as though explanations are optional. They are not. We can't just neglect explanations and assume we can get on with building an AI. That we cannot yet formalize our current knowledge about explanations in a nice piece of mathematics should not deter us from trying to learn more.
I wonder what they think of the discussion of the Oracle in The Fabric of Reality, ch. 1.
I have just rediscovered an article by Max Albert on my hard drive which I never got around to reading, and which might interest others on Less Wrong. You can find the article here. It is an argument against Bayesianism and in favour of Critical Rationalism (of Karl Popper fame).
Abstract:
Any thoughts?