You've read the introduction to Bayes' theorem. You've read the introduction to Solomonoff induction. Both describe fundamental theories of epistemic rationality. But how do they fit together?
It turns out that it’s pretty simple. Let’s take a look at Bayes’ theorem.
For a review:

$$P(H_i \mid E) \;=\; \frac{P(E \mid H_i)\,P(H_i)}{\sum_j P(E \mid H_j)\,P(H_j)}$$
In terms of Solomonoff induction, the prior of each hypothesis is its weight by length, and the likelihood is 1 or 0 depending on whether the program's output matches the evidence:

$$P(H_i) = 2^{-\mathrm{Length}(H_i)}, \qquad P(E \mid H_i) = \begin{cases} 1 & \text{if } H_i \text{ outputs } E \\ 0 & \text{otherwise} \end{cases}$$
The denominator has the same meaning as the numerator, except summed over every possible hypothesis. This sum normalizes the probability in the numerator. Any hypothesis that does not match the data E exactly has P(E|Hi) = 0, so its term contributes nothing to the sum. If a hypothesis does output E exactly, then P(E|Hi) = 1, and that matching hypothesis contributes its weight to the normalizing sum in the denominator.
Let's see the formula with these things substituted. Here, the sum in the denominator runs over the set of hypotheses Hi that match the data:

$$P(H_i \mid E) \;=\; \frac{2^{-\mathrm{Length}(H_i)}}{\sum_{j:\, H_j \text{ matches } E} 2^{-\mathrm{Length}(H_j)}}$$
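To make the substituted formula concrete, here is a minimal sketch in Python. The hypothesis names, program lengths, and output strings below are invented for illustration; real Solomonoff induction enumerates all programs on a universal Turing machine.

```python
def solomonoff_posterior(hypotheses, evidence):
    """hypotheses: dict mapping name -> (program length in bits, output string).
    Returns the posterior P(H_i | E) for each hypothesis matching the evidence."""
    # Keep only hypotheses whose output starts with the observed evidence:
    # these have P(E | H_i) = 1; all others have P(E | H_i) = 0 and drop out.
    matching = {name: 2 ** -length
                for name, (length, output) in hypotheses.items()
                if output.startswith(evidence)}
    total = sum(matching.values())  # the normalizing sum in the denominator
    return {name: weight / total for name, weight in matching.items()}

# Hypothetical hypotheses (length in bits, predicted bit string):
hypotheses = {
    "all_zeros": (3, "000000"),
    "alternate": (5, "010101"),
    "all_ones":  (4, "111111"),
}

print(solomonoff_posterior(hypotheses, "0"))
# → {'all_zeros': 0.8, 'alternate': 0.2}
```

Note that "all_ones" vanishes from the posterior entirely, and the shorter matching program "all_zeros" gets the larger share of the probability, exactly as the weight 2^-Length dictates.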
In summary, Bayes’ theorem says that once we find all matching hypotheses, we can find each one’s probability by dividing its individual weight of $2^{-\mathrm{Length}(H_i)}$ by the total weight of all the matching hypotheses.
This is intuitive, and matches Bayes’ theorem both mathematically and philosophically. Updating occurs when you observe more bits of evidence E. Each new bit eliminates some of the hypotheses Hi, which shrinks the normalizing sum in the denominator and raises the posterior probability of every hypothesis that still matches.