The hypotheses in Solomonoff induction don't just make a binary assertion about whether an output is allowed or not. Exactly what they do output can differ based on the details of your formalization, however.
The simplest approach would be to require the hypotheses to give a definite output. Unpredictable or stochastic data then can't be fit by any single hypothesis, but you can have an exponentially large family of hypotheses parameterized by a noise variable. Some hypothesis within the family will then match the observations exactly, and that hypothesis receives the probability mass.
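As a toy illustration of that noise-parameterized family (my own sketch, with an arbitrary stand-in value for the model's prior weight, not any standard formalization): each member of the family hard-codes one possible noise string, so exactly one member reproduces any given observation sequence and keeps its share of the prior.

```python
from itertools import product

def coin_model(noise):
    """A deterministic hypothesis in the family: it simply replays
    its hard-coded noise string as its predictions."""
    return noise

n = 4  # length of the observation sequence

# One hypothesis per noise string: an exponentially large (2**n) family.
family = {bits: coin_model(bits) for bits in product("01", repeat=n)}

# Prior: the family shares one model; each member pays n extra bits
# for its noise string. w_model stands in for 2**-K(model).
w_model = 0.5  # arbitrary illustrative value
prior = {bits: w_model * 2**-n for bits in family}

observed = ("0", "1", "1", "0")

# Exactly one member of the family matches the observations and keeps
# its prior mass; every other member is eliminated.
survivors = {h: w for h, w in prior.items() if family[h] == observed}
print(survivors)  # {('0', '1', '1', '0'): 0.03125}
```

Note that the surviving mass, w_model * 2**-n, is exactly what a single fair-coin hypothesis of weight w_model would have assigned to the observed string.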
Another option is to allow the hypotheses to make probabilistic predictions, rather than deterministic predictions or binary allowed/not-allowed predictions. In that case, one adjusts each hypothesis's probability continuously using Bayes' theorem, rather than throwing out hypotheses that fail outright.
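A minimal sketch of that second approach (the hypotheses and priors here are made up for illustration): each hypothesis assigns a probability to the next bit, and its weight is multiplied by the likelihood of what was actually observed instead of being zeroed out.

```python
def bayes_update(weights, hypotheses, bit):
    """Multiply each hypothesis's weight by the probability it gave
    to the observed bit, then renormalize (Bayes' theorem)."""
    posterior = {}
    for name, p_one in hypotheses.items():
        likelihood = p_one if bit == 1 else 1 - p_one
        posterior[name] = weights[name] * likelihood
    total = sum(posterior.values())
    return {name: w / total for name, w in posterior.items()}

# Two probabilistic hypotheses about a bit stream: each gives P(next bit = 1).
hypotheses = {"fair_coin": 0.5, "biased_coin": 0.9}
weights = {"fair_coin": 0.5, "biased_coin": 0.5}  # illustrative prior

for bit in [1, 1, 1, 0, 1, 1]:
    weights = bayes_update(weights, hypotheses, bit)

print(weights)  # biased_coin ends up with most of the posterior mass (~0.79)
```

Neither hypothesis is ever eliminated; a single surprising bit merely shifts the weights rather than discarding a hypothesis forever.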
Since any probabilistic hypothesis can be turned into a family of deterministic hypotheses augmented with a stream of noise, and since any deterministic hypothesis is also a (degenerate) probabilistic hypothesis of similar complexity, the two approaches yield essentially the same predictions.
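To make that equivalence concrete, here's a small numerical check (my own construction, using an arbitrary bias of 3/4): expanding a biased-coin hypothesis into deterministic runs over uniform noise bits recovers exactly the probability the biased coin assigns to the observations.

```python
from itertools import product

def biased_model(noise):
    """Deterministic model: consumes 2 uniform noise bits per output bit,
    emitting 1 unless both noise bits are 0 (so P(1) = 3/4 under uniform noise)."""
    return tuple(0 if noise[i] == 0 and noise[i + 1] == 0 else 1
                 for i in range(0, len(noise), 2))

target = (1, 1, 0)          # observed sequence
n_noise = 2 * len(target)   # noise bits consumed by the model

# Sum the uniform weight 2**-n_noise over all noise strings whose
# deterministic run reproduces the observations...
family_mass = sum(2**-n_noise
                  for noise in product((0, 1), repeat=n_noise)
                  if biased_model(noise) == target)

# ...and compare with the likelihood the probabilistic hypothesis assigns directly.
direct = (3/4) * (3/4) * (1/4)
print(family_mass, direct)  # both 0.140625
```

The family's total surviving mass and the probabilistic hypothesis's likelihood agree, which is why the choice between the two formalizations doesn't change the resulting predictions.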
I appreciate Solomonoff's success in generalizing Occam's razor from merely selecting the simplest hypothesis/model to assigning a probability to each one.
But consider, for instance, the position (x, y, z) of some moving body. Its trajectory can be "fit" by a single loose inequality (say, bounding each coordinate within some astronomically wide interval) instead of by writing down Newton's actual laws (with very tight intervals for the constants). The inequality is simpler, so its Solomonoff probability would be greater (if I understand Solomonoff induction correctly), even though the model I just stated is apparently useless.
Or is SI meant to be used only for exact models? Then it might be completely useless, since on my worldview almost nothing in this world can be fit exactly.
Can you explain these issues, which most likely stem from my own misunderstanding?