The short answer is, you can't. Solomonoff induction is not computable, and moreover depends on your model of computation. Asymptotically, the model of computation makes no difference, but for any finite set of examples, there is (for example) a universal Turing machine with short codes for those examples built in.
Practical methods of choosing among models work completely differently. One collection of such methods is called regularization.
Well that explains why I was struggling to find anything online!
Thanks for the link, I’ve been going through some of the techniques.
Using AIC, the penalty for each additional parameter is a factor of e in likelihood. For BIC the equivalent is a factor of √n, so the more samples you have, the more a complex model is penalised. For large n the two penalties diverge - are there principled methods for choosing which regularisation to use?
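To spell out where those factors come from, here's a minimal sketch using the standard definitions AIC = 2k − 2 ln L and BIC = k ln(n) − 2 ln L (the n = 10,000 sample size is just an illustrative choice):

```python
import math

def aic(k, log_likelihood):
    # AIC = 2k - 2 ln L; lower is better
    return 2 * k - 2 * log_likelihood

def bic(k, n, log_likelihood):
    # BIC = k ln(n) - 2 ln L; lower is better
    return k * math.log(n) - 2 * log_likelihood

# Express the per-parameter penalty as a likelihood factor:
# exp(delta/2) for one extra parameter at equal fit.
aic_factor = math.exp((aic(2, 0.0) - aic(1, 0.0)) / 2)        # e ~ 2.718
n = 10_000
bic_factor = math.exp((bic(2, n, 0.0) - bic(1, n, 0.0)) / 2)  # sqrt(n) = 100
print(aic_factor, bic_factor)
```

So an extra parameter has to improve the likelihood by a factor of e to break even under AIC, but by a factor of √n under BIC, which is why the two increasingly disagree as n grows.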
Solomonoff induction is not computable because its hypothesis space is infinite, but Bucky is only asking about a finite subset.
In practice no one honestly computes the complexity of models in any of the fields I am familiar with, such as physics, bio, chem etc. Sometimes they count the number of parameters/degrees of freedom, like in your example. In reality there is a dearth of models that explain observations and predict something new, interesting and testable, so the issue rarely arises.
Minimum message length fitting uses an approximation of K-complexity and gets used sometimes when people want to fit weird curves in a sort of principled way. But "real" Solomonoff induction is about feeding literally all of your sensory data into the algorithm to get predictions for the future, not just fitting curves.
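To illustrate the two-part-code idea behind MML/MDL-style fitting: you charge each model for describing itself plus for describing the data given the model. This is a toy sketch, not a real MML calculation; the 32-bits-per-parameter cost and the likelihood numbers are illustrative assumptions:

```python
def two_part_score(n_params, neg_log_likelihood_bits, bits_per_param=32):
    # Crude MDL-style stand-in for a complexity prior:
    # bits to state the model, plus bits to encode the data given the model.
    model_bits = n_params * bits_per_param
    return model_bits + neg_log_likelihood_bits

# A more complex curve must buy back its extra description length
# by compressing the data at least that many bits better.
simple = two_part_score(2, 1000.0)    # 64 + 1000 = 1064 bits
complex_ = two_part_score(3, 960.0)   # 96 + 960  = 1056 bits
print(simple, complex_)  # here the extra parameter pays for itself
```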
So I guess I'd say that it's possible to approximate K-complexity and use that in your prior for curve fitting, and people sometimes do that. But that's not necessarily going to be your best estimate, because your be...
First of all, this sounds weird - Solomonoff induction isn't about scoring two hypotheses. It includes something which does (something like) that, but it does it for all possible programs/hypotheses (which is one reason why people are saying it's uncomputable*), and it has a universal prior, and as it gets new evidence it updates that (infinite) prior probability distribution (which is the second reason why it's uncomputable*).
*a.k.a.: this takes literally forever.
AIXI is related. Being based on Solomonoff induction, it is also incomputable. However, there's AIXItl, which is an approximation (with memory and time constraints), and it's okay at something like first-person Pac-Man, so there probably is a usable approximation to Solomonoff induction. I don't know how useful it is, and I've never seen it used.
I was also interested in this question, as it implies different answers about the nature of Occam's razor. If the probability-from-complexity function diminishes quickly, the simplest outcome is the most probable. However, if this function has a fat tail, its median could be somewhere halfway to infinite complexity, which would mean that most true theories are incredibly complex.
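A toy calculation makes the contrast concrete. The two priors below (geometric 2^−k versus a fat-tailed 1/k^1.1, truncated at complexity 10,000) are illustrative assumptions, not canonical choices:

```python
# Median complexity under two toy priors over complexity k = 1..K.
K = 10_000
geom = [2.0 ** -k for k in range(1, K + 1)]       # fast-decaying prior
fat = [1.0 / k ** 1.1 for k in range(1, K + 1)]   # fat-tailed prior

def median_k(weights):
    # Smallest k at which half of the total prior mass is accumulated.
    total = sum(weights)
    running = 0.0
    for k, w in enumerate(weights, start=1):
        running += w
        if running >= total / 2:
            return k

print(median_k(geom))  # 1: the simplest hypothesis holds most of the mass
print(median_k(fat))   # much larger: the mass spreads into the tail
```

Under the fast-decaying prior the median-complexity hypothesis is the simplest one; under the fat-tailed prior the median sits well out in the tail, matching the commenter's worry.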
How do we choose the correct version of Occam's razor to use? As always, we use Occam's razor to give prior probabilities to each possibility (each version of Occam's razor), then update using real-world observations. There's a problem of circularity here, of course. I think that the version that humans intuitively use lies in a large region of the space of versions such that if you use one version from the region to choose a new version, and repeat this self-reflection, the process converges.
Solomonoff induction is generally given as the correct way to penalise more complex hypotheses when calculating priors. A great introduction can be found here.
My question is, how is this actually calculated in practice?
As an example, say I have 2 hypotheses:
A. The probability distribution of the output is given by the same normal distribution for all inputs, with mean μ and standard deviation σ.
B. The probability distribution of the output is given by a normal distribution depending on an input x with mean μ0+mx and standard deviation σ.
It is clear that hypothesis B is more complex (using an additional input [x], having an additional parameter [m] and requiring 2 additional operations to calculate) but how does one calculate the actual penalty that B should be given vs A?
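As a concrete (non-Solomonoff) sketch of how this comparison often gets done in practice, one can fit both hypotheses by maximum likelihood and compare BIC, which charges B's extra parameter a factor of √n. The synthetic data and parameter values below are illustrative assumptions:

```python
import math
import random

random.seed(0)

# Synthetic data generated under hypothesis B: y ~ Normal(mu0 + m*x, sigma).
mu0, m_true, sigma = 1.0, 0.5, 1.0
xs = [i / 10 for i in range(100)]
ys = [mu0 + m_true * x + random.gauss(0, sigma) for x in xs]
n = len(ys)

def normal_loglik(ys, means, s):
    return sum(-0.5 * math.log(2 * math.pi * s ** 2)
               - (y - mu) ** 2 / (2 * s ** 2)
               for y, mu in zip(ys, means))

# Hypothesis A: one mean for all inputs (2 params: mu, sigma).
mu_a = sum(ys) / n
sigma_a = math.sqrt(sum((y - mu_a) ** 2 for y in ys) / n)
ll_a = normal_loglik(ys, [mu_a] * n, sigma_a)

# Hypothesis B: mean depends linearly on x (3 params: mu0, m, sigma).
xbar, ybar = sum(xs) / n, sum(ys) / n
m_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
b_hat = ybar - m_hat * xbar
means_b = [b_hat + m_hat * x for x in xs]
sigma_b = math.sqrt(sum((y - mu) ** 2 for y, mu in zip(ys, means_b)) / n)
ll_b = normal_loglik(ys, means_b, sigma_b)

# BIC = k ln(n) - 2 ln L; lower is better.
bic_a = 2 * math.log(n) - 2 * ll_a
bic_b = 3 * math.log(n) - 2 * ll_b
print(bic_a, bic_b)  # B should win here: its extra parameter earns its penalty
```

Since the data really were generated with a slope, B's likelihood gain dwarfs the ln(n)/2-nat penalty for its extra parameter; on slope-free data the same comparison would favour A.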