Hi, my name is Jason; this is my first post. I have recently been reading about two subjects here, calibration and Solomonoff Induction, and reading them together has raised the following question:
How well-calibrated would Solomonoff Induction be if it could actually be calculated?
That is to say: if one generated priors on a whole bunch of questions based on information complexity measured in bits, and then took all the hypotheses that were assigned a 10% probability, would 10% of those actually turn out to be correct?
I don't immediately see why Solomonoff Induction should be expected to be well-calibrated. It appears to just be a formalization of Occam's Razor, which itself is just a rule of thumb. But if it turned out not to be well-calibrated, it would not be a very good "recipe for truth." What am I missing?
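To make the question concrete, here is a minimal sketch of what "well-calibrated" would mean operationally. The helper `calibration_at` is hypothetical, and the toy data is generated to be perfectly calibrated by construction; the point is just to show the check itself, namely collecting every prediction assigned ~10% probability and measuring how often those events actually occurred:

```python
import random

def calibration_at(predictions, target=0.10, tol=0.01):
    """Among (probability, outcome) pairs assigned roughly `target`
    probability, return the fraction whose outcome came true."""
    hits = [outcome for prob, outcome in predictions
            if abs(prob - target) <= tol]
    return sum(hits) / len(hits) if hits else None

# Toy data: a predictor that is calibrated by construction --
# each event assigned probability 0.10 really occurs 10% of the time.
random.seed(0)
preds = [(0.10, random.random() < 0.10) for _ in range(100_000)]
print(calibration_at(preds))  # close to 0.10
```

Asking whether Solomonoff Induction is well-calibrated is asking whether its complexity-based probabilities would pass this kind of check against reality.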
Viliam_Bur gives a great run-down of what's going on. For a more detailed introduction, though, see this post explaining Solomonoff Induction, or perhaps you'd prefer to jump straight to this paragraph (Solomonoff's Lightsaber), which explains why shorter (simpler) hypotheses are more likely under Solomonoff Induction.
To bridge that and what Viliam is saying: basically, if we consider all mathematically possible universes (encoded as bit sequences), then half the universes will start with a 1, and the other half will start with a 0. Then a quarter will start with each possible two-bit prefix, an eighth with each three-bit prefix, and so on; a hypothesis that fixes the first n bits therefore covers a 2^-n fraction of all universes, which is why simpler (shorter) hypotheses carry more prior probability.
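The counting argument above can be checked directly by brute force over short bitstrings. This is just an illustrative sketch (finite strings standing in for universes), not Solomonoff's actual construction over infinite sequences:

```python
from itertools import product

def prefix_fraction(prefix, n=10):
    """Fraction of all length-n bitstrings that begin with `prefix`."""
    strings = [''.join(bits) for bits in product('01', repeat=n)]
    matching = [s for s in strings if s.startswith(prefix)]
    return len(matching) / len(strings)

print(prefix_fraction('1'))     # 0.5    -- half the universes start with 1
print(prefix_fraction('00'))    # 0.25   -- a quarter start with 00
print(prefix_fraction('0110'))  # 0.0625 -- i.e. 2**-4
```

Each extra bit of specification halves the fraction of universes covered, which is the 2^-length prior in miniature.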