Well, yes, it's not a perfect summary. I have no idea why they'd say Popper was working on Bayesianism - unless maybe "the problem" in that clause was the problem of induction, and something got lost in an edit.
But sometimes nitpicks aren't that important. Like, for example, it's spelled Vitanyi. But this isn't really a crushing refutation of your post (though it is a very convenient illustration). You shouldn't sweat this too much, because their textbook really is worth reading if you want to learn about algorithmic information theory.
You seem to be making an argument here against statements made in a particular book, and you provide a lot of quotes, but not quotes of the specific statements you are arguing against. You claim they say Solomonoff induction solves the problem of induction. It clearly doesn't in full generality: that would also mean it solves the problem of the criterion, epistemic circularity, and other formulations of what we might call the hard problem of epistemology (how do you go from knowing nothing to knowing something) in a justified way. On most accounts, Solomonoff induction is instead argued to formalize and address induction up to the limit of systematization, which seems more relevant to the rest of what you get at in your post.
So, uh, what exactly are you trying to argue here, other than that these coauthors made a mistake because you don't think they engaged enough with the literature on induction in the text of their book?
Doesn't Solomonoff induction at least make a step towards resolving epistemic circularity, since the Solomonoff prior dominates (I don't remember in exactly what sense) every probability distribution with the same or smaller support?
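For reference, the dominance usually cited here is multiplicative. A sketch of the standard formulation from algorithmic information theory (stated from memory, so worth checking against the textbook):

```latex
% Multiplicative dominance of the Solomonoff prior M:
% for every lower-semicomputable semimeasure \mu there is a
% constant c_\mu > 0, independent of x, such that
M(x) \;\ge\; c_\mu \, \mu(x) \qquad \text{for all finite strings } x,
% where c_\mu \approx 2^{-K(\mu)} shrinks with the complexity of \mu.
```

So M never assigns an observation sequence much less probability than any computable alternative prior does, up to that fixed constant.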
You seem to be making an argument here against statements made in a particular book, and provide a lot of quotes, but not quotes of the specific statements you are arguing against.
That's not an entirely bad thing. Addressing a specific text is better than addressing an imaginary statement of your opponent's position, a straw man. But it can still amount to weakmanning, which I think is your complaint.
In addition, the future always resembles the past in some respects and not others, so saying the future resembles the past is irrelevant to creating and assessing ideas.
But not all inductivists believe in a version of inductivism that supposedly generates theories or scientific knowledge.
It's also possible to accept a version of induction that deals purely with the probabilities of future observations based on past observations.
(Here, both claims are popular... but not equivalent. It is possible to reject the idea that Solomonoff inductors are actually generating theories, while accepting a probabilistic basis for induction in the second sense. As I keep pointing out, inductors of that kind can be shown to exist by construction.)
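One concrete example of such a constructed inductor is Laplace's rule of succession: it assigns probabilities to future observations from past counts without generating any theory at all. A minimal sketch (illustrative, not anyone's specific proposal):

```python
from fractions import Fraction

def laplace_rule(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: the probability that the next
    observation is a success, given `successes` out of `trials` so far."""
    return Fraction(successes + 1, trials + 2)

# Having seen the sun rise on 100 of 100 mornings, the rule assigns
# probability 101/102 to a sunrise tomorrow -- a prediction about
# future observations, with no theory of why the sun rises.
print(laplace_rule(100, 100))  # 101/102
```

It exists, it is computable, and it demonstrably updates probabilities of future observations from past ones; whatever else it fails to do, it is an inductor in the second sense.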
in some respects and not others
You could have credited inductivists with having detailed ideas about the distinction.
But not all inductivists believe in a version of inductivism that supposedly generates theories or scientific knowledge.
That version of inductivism isn't in Li and Vitanyi, who haven't even stated the problem described by critics of inductivism. Where is it?
"Bacon's method is an example of the application of inductive reasoning. However, Bacon's method of induction is much more complex than the essential inductive process of making generalizations from observations. Bacon's method begins with description of the requirements for making the careful, systematic observations necessary to produce quality facts. He then proceeds to use induction, the ability to generalize from a set of facts to one or more axioms." (WP)
But what is the point? Not many people are Baconians nowadays.
You said earlier:
But not all inductivists believe in a version of inductivism that supposedly generates theories or scientific knowledge.
What is the version of inductivism that generates no theories or scientific knowledge and what does it accomplish?
There are many kinds of knowledge and learning that are useful but fall short of scientific knowledge. It is useful to any organism to learn from experience, and many can, even simple ones. There are many useful things learning algorithms can do. My cellphone has predictive text, which is based on learning: yours probably does too.
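Predictive text of the kind mentioned can be sketched as a simple frequency-based bigram model. This is an illustrative toy, not any phone's actual implementation:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # cat
```

The point stands either way: this learns useful regularities from past observations without producing anything like scientific knowledge.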
So how does a poor summary of Hume = a refutation of Solomonoff induction?
Could I say something like: Alanf wants us to think he refuted a book, but he can't even spell the author's name right...
Ok, but what does this have to do with the capital of Italy?
In An Introduction to Kolmogorov Complexity and Its Applications, 4th edition, Li and Vitanyi claim that Solomonoff induction solves the problem of induction. In Section 5.1.4 they write:
What Hume actually wrote about induction was (Section IV, Part II, pp. 16-17):
What Hume wrote isn't that we can only use known data and methods. Rather, he said that no argument can prove that the future will resemble the past, so drawing conclusions about what will happen in the future from past data is illogical. He didn't say that the only possible form of induction is deduction.
In addition, the future always resembles the past in some respects and not others, so saying the future resembles the past is irrelevant to creating and assessing ideas.
Popper wasn't trying to solve the problem of how to make Bayesian induction work. He claimed that induction was impossible, not that he had a way of making it work by finding the right prior (Realism and the Aim of Science, Chapter I, Section 3, I):
Li and Vitanyi want us to think they can solve the problem of induction, but they can’t even summarise the arguments against their position accurately.