There's a problem that has occurred to me which I haven't seen discussed anywhere: I don't think people actually want to assign zero probability to all hypotheses which are not Turing computable.

Consider the following hypothetical: we come up with a theory of everything that seems to explain all the laws of physics, but there's a single open parameter (say the fine structure constant). We compute a large number of digits of this constant, and someone notices that, when it is expressed in base 2, the nth digit seems to be 1 iff the nth Turing machine halts on the blank tape, for some fairly natural ordering of all Turing machines. If we confirm this for a large number of digits (not necessarily consecutive digits; obviously some of the 0s won't be confirmable, since we can never verify that a machine runs forever), shouldn't we consider the hypothesis that the digits really are given by this simple but non-computable function? But if our priors assign zero probability to all non-computable hypotheses, then this hypothesis must always be stuck at zero probability.
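As a minimal sketch of what that confirmation process could look like (the two callables are hypothetical placeholders: observed_digit(n) returns the measured nth binary digit of the constant, and run_machine(n, max_steps) simulates the nth machine in the chosen ordering on a blank tape, returning True iff it halts within the step budget):

```python
def check_digit(n, observed_digit, run_machine, step_budget=10**6):
    """Compare the nth observed digit with the halting behaviour of machine n."""
    digit = observed_digit(n)
    halted = run_machine(n, step_budget)
    if halted:
        # Halting is verifiable, so a 1-digit can be confirmed outright,
        # and a 0-digit paired with a halting machine refutes the hypothesis.
        return "confirmed" if digit == 1 else "refuted"
    # Non-halting is only semi-decidable: a 0-digit is never fully confirmed,
    # it just survives ever larger step budgets; a 1-digit stays undetermined.
    return "consistent so far" if digit == 0 else "undetermined"
```

The asymmetry in the return values is the point: the 1s can be nailed down, the 0s can only accumulate support.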
If the universe is finite, we could approximate this function with one that instead asks "halts within K steps", where K is some large number, but intuitively this seems like a more complicated hypothesis than the original one.
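A rough sketch of where that intuition comes from, reusing the hypothetical run_machine(n, max_steps) simulator from above (this is an intuition-level illustration, not a formal complexity count):

```python
def digit_exact(n):
    # "1 iff machine n ever halts": short to state, but no general
    # procedure can evaluate it for every n.
    raise NotImplementedError("uncomputable in general")

def digit_bounded(n, run_machine, K):
    # "1 iff machine n halts within K steps": computable, but the hypothesis
    # now has to carry the bound K, which costs roughly log2(K) extra bits
    # of description on top of the original rule.
    return 1 if run_machine(n, K) else 0
```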
I'm not sure what a reasonable prior that handles this sort of thing looks like. We presumably don't want a prior over an uncountable set of hypotheses. It might make sense to use something like the set of hypotheses which are describable in Peano arithmetic.
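As a sketch of why Peano arithmetic would be enough for the example above (the notation d_n and HaltsIn is mine): the halting-digit rule, though uncomputable, is definable by a single existential formula over a primitive recursive predicate,

$$d_n = 1 \iff \exists t\,\mathrm{HaltsIn}(n, t),$$

where HaltsIn(n, t) says the nth machine halts on the blank tape within t steps. There are only countably many arithmetic formulas, so a prior over hypotheses of this kind avoids the uncountability worry while still covering examples like this one.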
At this stage, an uncomputable universe is an extraordinary hypothesis that would require extraordinary evidence to support. A finite string of digits seems like poor evidence for uncomputability; it is much easier to believe that what we are seeing is a computable approximation. So I am inclined to side with Solomonoff induction here.
The uncomputable is always infinite. Our observations are always finite. I don't see how making such a leap could be justified.
So if a simple setup you could do at home with some liquid nitrogen, a laser, and a kumquat appeared to allow direct hypercomputation (it could instantaneously determine whether a particular Turing machine halted), and you were able to test this against all sorts of extraordinary examples, you would never come to the conclusion that it was a hypercomputation engine, instead building ever-more-complex computable models of what it was doing?