There's a problem that has occurred to me that I haven't seen discussed anywhere: I don't think people actually want to assign zero probability to all hypotheses which are not Turing computable.

Consider the following hypothetical: we come up with a theory of everything that seems to explain all the laws of physics, but it has a single open parameter (say the fine structure constant). We compute a large number of digits of this constant, and someone notices that, when it is expressed in base 2, the nth digit seems to be 1 iff the nth Turing machine halts on the blank tape, under some fairly natural ordering of all Turing machines. If we confirm this for a large number of digits (not necessarily consecutive digits; some of the 0s won't be confirmable, since no finite computation can verify that a machine never halts), shouldn't we consider the hypothesis that the digits really are given by this simple but non-computable function? But if our priors assign zero probability to all non-computable hypotheses, then this hypothesis must always be stuck with zero probability.
If the universe is finite, we could approximate this function with one that instead says "halts within K steps" for some large number K, but intuitively this seems like a more complicated hypothesis than the original one.
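Here's a rough sketch (in Python) of what that bounded version amounts to; the transition-table encoding, the toy machine, and the cutoff K are all just illustrative, and enumerating "the nth Turing machine" in some natural ordering is left out entirely:

```python
def halts_within(delta, steps, blank="_"):
    """Run a Turing machine, given as a transition table `delta`, on the
    blank tape for at most `steps` steps; True iff it reaches HALT."""
    tape, pos, state = {}, 0, "A"
    for _ in range(steps):
        if state == "HALT":
            return True
        write, move, state = delta[(state, tape.get(pos, blank))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return state == "HALT"

# Toy example: a two-state machine that writes two 1s and then halts.
tm = {
    ("A", "_"): ("1", "R", "B"),
    ("B", "_"): ("1", "R", "HALT"),
}

K = 10**6  # large but finite cutoff
print(halts_within(tm, K))  # True
```

The hypothesis in the hypothetical above is the K → ∞ limit of this predicate: a 1 digit can eventually be confirmed by exhibiting a halting run, but no finite K ever confirms a 0. And an essentially arbitrary cutoff K has to be written into the bounded hypothesis, which is one way to cash out the intuition that it is more complicated than the unbounded one.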
I'm not sure what a reasonable prior that handles this sort of thing would look like. We don't want an uncountable set of hypotheses. It might make sense to use something like the hypotheses describable in Peano arithmetic.
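One way to see why that would cover the example above: the halting-digit function is definable in the language of arithmetic even though it is not computable, and there are only countably many arithmetic formulas. Roughly:

```latex
% Sketch: the digit predicate is Sigma_1-definable.
% T(n, t) abbreviates a primitive recursive (hence arithmetically definable)
% predicate saying "the nth Turing machine halts on the blank tape within t steps".
\[
  d_n = 1 \;\iff\; \exists t \, T(n, t)
\]
```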
So if a simple setup you could do at home with some liquid nitrogen, a laser, and a kumquat appeared to allow direct hypercomputation (it could, say, instantaneously determine whether a particular Turing machine halted), and you were able to test this against all sorts of extraordinary examples, would you never conclude that it was a hypercomputation engine, instead building ever-more-complex computable models of what it was doing?
A hotline to a computable halt-finding machine seems much more plausible than something genuinely uncomputable, yes. We have no idea how anything uncomputable could possibly work. You should not give weight to the uncomputable while computable approximations exist.