"Hypercomputation" is a term coined by two philosophers, Jack Copeland and Dianne Proudfoot, to refer to allegedly computational processes that do things Turing machines are in principle incapable of doing. I'm somewhat dubious of whether any of the proposals for "hypercomputation" are really accurately described as computation, but here, I'm more interested in another question: is there any chance it's possible to build a physical device that answers questions a Turing machine cannot answer?
I've read a number of Copeland and Proudfoot's articles promoting hypercomputation, and they claim this is an open question. I've seen some indications that they're wrong about that, but my knowledge of physics and computability theory isn't strong enough to settle the question with confidence.
Some of the ways to convince yourself that "hypercomputation" might be physically possible seem like obvious confusions. For example: you convince yourself that some physical quantity is allowed to take any real-numbered value, notice that some reals are non-computable, and conclude that if only we could measure such a non-computable quantity, we could answer questions no Turing machine can answer. Of course, the idea of performing such a measurement is physically implausible even if you could find a non-computable physical quantity in the first place. And that mistake can be sexed up in various ways, for example by talking about "analog computers" and assuming "analog" means the machine has components that can take any real-numbered value.
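One way to see why the measurement step fails: any measurement with finite precision returns only finitely many digits, and a finite string of digits is always a (dyadic) rational, which is trivially computable. A minimal sketch, where `measure` is a hypothetical stand-in for reading off the first k binary digits of a quantity in [0, 1):

```python
from fractions import Fraction

def measure(true_value_bits, k):
    """Hypothetical finite-precision measurement: read only the first k
    binary digits of a quantity in [0, 1).  Whatever the 'true' value is,
    the outcome is the dyadic rational sum(b_i / 2^(i+1)), which is
    computable no matter what."""
    return sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(true_value_bits[:k]))

# Even if the underlying quantity were non-computable, every finite
# readout is a plain rational number:
bits = [1, 0, 1, 1, 0, 1, 0, 0]  # stand-in for the first digits of some real
print(measure(bits, 4))  # → 11/16
```

So extracting non-computable information from such a quantity would require infinite precision, not just the existence of the quantity.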
Points similar to the one I've just made exist in the literature on hypercomputation (see here and here, for example). But the critiques of hypercomputation I've found tend to focus on specific proposals. It's less clear whether the literature contains any good general argument that hypercomputation is physically impossible, for example because it would require infinite-precision measurements or something equally unlikely. It seems like it might be possible to make such an argument; I've read that the laws of physics are considered to be computable, but I don't understand well enough what that claim means to tell whether it entails that hypercomputation is physically impossible.
Can anyone help me out here?
From our perspective, the length to which it is computable is going to be arbitrary, and until we hit it, at each digit we will confront the same epistemic problem: "is the prefix of omega that seems to be embedded in our particular physics a finite, computable prefix of omega (in which case our physics is perfectly computable), or is it a genuinely uncomputable constant buried in our physics?"
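The underlying difficulty is that every finite prefix of a non-computable real is itself computable: there is always a trivial program that just prints it. A toy illustration (the bit string here is a made-up placeholder, not actual digits of omega):

```python
def program_for_prefix(prefix):
    """For any finite bit string there is a trivial program that outputs
    it, so a finite observed prefix of omega can never, by itself,
    certify that its source is non-computable."""
    return lambda: prefix

observed = "110100"  # hypothetical digits read off from some physical constant
p = program_for_prefix(observed)
print(p() == observed)  # a computable source reproduces everything seen so far
```

However many digits we check, the hypothesis "this is just a hard-coded finite string" remains consistent with all the evidence.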
This has been discussed in the past as the 'oracle hypercomputation' problem: suppose you found a device which claimed to be a halting oracle for Turing machines, and you test it out and it seems to be accurate on the n Turing machines you are able to run to the point where they either halt or repeat a state (and so provably loop). How much, if at all, do you credit its claim to be an oracle doing hypercomputation? After all, it could just be a random number generator, whose predictions will be completely correct 'by chance' with probability 1/2^n, or it could be using clever heuristics or something. What is one's prior for hypercomputation/oracles being possible, and how much does one update?
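The oracle-vs-chance part of that update can be made concrete. A minimal sketch, assuming just two hypotheses (a genuine oracle that is always right, and a uniform random guesser that is right on each of the n questions with probability 1/2) and ignoring the heuristics hypothesis:

```python
from fractions import Fraction

def posterior_oracle(prior, n):
    """Bayesian update after the device answers n halting questions
    correctly.  Hypotheses: 'genuine oracle' (correct with certainty)
    vs 'random guesser' (correct by chance with probability 1/2^n).
    prior = P(oracle) before any tests."""
    p_data_given_oracle = Fraction(1)
    p_data_given_random = Fraction(1, 2) ** n
    return (prior * p_data_given_oracle) / (
        prior * p_data_given_oracle + (1 - prior) * p_data_given_random
    )

# Even a tiny prior is driven close to 1 by a run of correct answers,
# though it never reaches 1, and the real problem is that 'clever
# heuristics' would fit the data just as well:
print(posterior_oracle(Fraction(1, 1000), 20))
```

The sketch shows why the random-guesser alternative washes out quickly, and why the live disagreement is really about the prior and the heuristics hypothesis, not the chance hypothesis.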
This was discussed on the advanced decision theory mailing list a while back, IIRC, and I don't think they came to any solid conclusion either way.