You're saying a Solomonoff Inductor would be outperformed by a variant that weighted quick programs more favorably, I think. (At the very least, it makes approximations computable.)
Whether penalizing for space/time cost would substantially increase the standard model's complexity under that related metric is an interesting question; there's a good chance the penalty is large, since simulating QM seems to require exponential time. But for starters I'm fine with just an estimate of the Kolmogorov Complexity.
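To make the comparison concrete, here's a minimal sketch of one standard way to weight quick programs more favorably: Levin's Kt measure, which scores a program by its length in bits plus the log of its running time. The specific lengths and step counts below are made-up illustrative values, not estimates for any real physics simulation.

```python
import math

def kt_complexity(length_bits: int, runtime_steps: int) -> float:
    """Levin's Kt: program length in bits plus log2 of running time.
    The log-time term penalizes slow programs, which is what makes
    search over programs computable in practice."""
    return length_bits + math.log2(runtime_steps)

# Two hypothetical programs producing the same observations:
# a short but exponentially slow one, and a much longer fast one.
slow_short = kt_complexity(length_bits=500, runtime_steps=2**100)   # 500 + 100 = 600
fast_long = kt_complexity(length_bits=5000, runtime_steps=10**9)    # 5000 + ~30
print(slow_short, fast_long)
```

Note that under plain Kolmogorov Complexity the short program wins outright, while under Kt an exponential runtime adds a penalty linear in the exponent, which is exactly the kind of trade-off the QM question turns on.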
Well, I'm saying the possibility is worth considering. I'm hardly going to claim certainty in this area.
As for QM...
The metric I think makes sense is, roughly, observer-moments divided by CPU time. Simulating QM takes exponential time, yes, but there's an equivalent exponential increase in the number of observer-moments. So QM shouldn't have a penalty vs. classical.
On the flip side, this type of prior would heavily favor low-fidelity simulations, but I don't know whether that's any kind of strike against it.
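The observer-moments-per-CPU-time argument can be sketched as toy arithmetic. The identification of wavefunction branches with observer-moments is this comment's assumption, and the exact base-2 scaling is illustrative, but it shows why the two exponentials cancel:

```python
def observer_moments_per_step(n_qubits: int) -> float:
    """Toy model: simulating n entangled qubits costs ~2**n steps,
    but the simulated wavefunction contains ~2**n branches, each
    treated here as a potential observer-moment. The ratio is
    therefore constant in n."""
    cpu_steps = 2 ** n_qubits   # exponential simulation cost
    branches = 2 ** n_qubits    # exponential count of branches
    return branches / cpu_steps

# The metric is flat no matter how large the simulated system gets.
print([observer_moments_per_step(n) for n in (1, 10, 40)])
```

So on this toy model QM takes no penalty relative to classical physics: the per-step cost and the per-step yield of observer-moments grow at the same exponential rate.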
In the post Complexity and Intelligence, Eliezer says that the Kolmogorov Complexity (length of shortest equivalent computer program) of the laws of physics is about 500 bits:
Where did this 500 come from?
I googled around for estimates on the Kolmogorov Complexity of the laws of physics, but didn't find anything. Certainly nothing as concrete as 500.
I asked about it on the physics Stack Exchange, but haven't received any answers yet.
I considered estimating it myself, but doing that well involves a significant time investment. I'd need to learn the standard model well enough to write a computer program that simulated it (however inefficiently or intractably; it's the program length that matters, not its time or memory performance).
Based on my experience programming, I'm sure it wouldn't take a million bits; probably less than ten thousand. The demoscene does some pretty amazing things in 4096 bytes. But 500 sounds like a teeny tiny amount to mention offhand as enough to fit the constants, the forces, the particles, and the mathematical framework for doing things like differential equations. The fundamental constants alone are going to consume ~20-30 bits each.
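The ~20-30 bits-per-constant figure follows from how many bits it takes to pin down a dimensionless number to a given relative precision. A quick sketch, using the fine-structure constant's roughly 1-part-in-10^10 measurement precision as an illustrative input:

```python
import math

def bits_for_constant(relative_precision: float) -> int:
    """Bits needed to specify a dimensionless constant to a given
    relative precision: ceil(log2(1 / precision))."""
    return math.ceil(math.log2(1.0 / relative_precision))

# Specifying a constant to 1 part in 10^10 (fine-structure-constant
# territory) costs ~34 bits; a rougher six-significant-digit value
# costs ~20 bits, consistent with the ~20-30 bits-per-constant estimate.
print(bits_for_constant(1e-10), bits_for_constant(1e-6))
```

Multiply that by the couple dozen free parameters in the standard model and the constants alone plausibly eat most of a 500-bit budget, which is what makes the figure look so tight.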
Does anyone have a reference, or even a more worked-through example of an estimate?