"Hypercomputation" is a term coined by two philosophers, Jack Copeland and Dianne Proudfoot, to refer to allegedly computational processes that do things Turing machines are in principle incapable of doing. I'm somewhat dubious of whether any of the proposals for "hypercomputation" are really accurately described as computation, but here, I'm more interested in another question: is there any chance it's possible to build a physical device that answers questions a Turing machine cannot answer?
I've read a number of Copeland and Proudfoot's articles promoting hypercomputation, and they claim this is an open question. I've seen some indications that they're wrong about that, but my knowledge of physics and computability theory isn't good enough to settle the question with confidence.
Some of the ways to convince yourself that "hypercomputation" might be physically possible seem like obvious confusions. For example, you might convince yourself that some physical quantity is allowed to take any real value, notice that some reals are non-computable, and conclude that if only we could measure such a non-computable quantity, we could answer questions no Turing machine can answer. Of course, performing such a measurement is physically implausible even if you could find a non-computable physical quantity in the first place. And that mistake can be sexed up in various ways, for example by talking about "analog computers" and assuming "analog" means the components can take any real-numbered value.
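To make the measurement worry concrete, here's a minimal sketch (my own illustration, with made-up numbers, not something from the hypercomputation literature): any real measurement has finite precision, so it only ever reveals finitely many digits of a quantity, and any finite digit string is also the prefix of some computable (indeed trivially printable) number.

```python
# Sketch: even if some physical quantity equalled a non-computable real,
# a finite-precision measurement only reveals finitely many digits, and
# any finite digit string could equally have come from a computable source.

from decimal import Decimal

def measure(true_value: Decimal, precision_digits: int) -> Decimal:
    """Idealised measurement: return the value rounded to a finite precision."""
    quantum = Decimal(1).scaleb(-precision_digits)  # 10 ** -precision_digits
    return true_value.quantize(quantum)

# Pretend this prefix came from a "non-computable" physical quantity.
reading = measure(Decimal("0.7182818284590452"), precision_digits=6)
print(reading)  # 0.718282

# A trivial (hence computable) program can reproduce the same observable
# output, so the measurement alone can't certify anything non-computable.
print(Decimal("0.718282") == reading)  # True
```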
Points similar to the one I've just made exist in the literature on hypercomputation (see here and here, for example). But the critiques of hypercomputation I've found tend to focus on specific proposals. It's less clear whether the literature contains any good general argument that hypercomputation is physically impossible because it would require infinite-precision measurements or something equally unlikely. It seems like such an argument might be possible: I've read that the laws of physics are considered to be computable, but I don't understand well enough what that means to tell whether it entails that hypercomputation is physically impossible.
Can anyone help me out here?
Your integral treats distance as unquantised; it is not clear that the true QM theory does this (cf. the Planck length). Moreover, implemented as a physical machine, your atoms are going to be bound together somehow, those bonds will be quantised, and then the average distance is itself quantised, because you are dealing with sums over states with a definite average interatomic distance: you can move the whole machine, but you can't move just a part of the machine with arbitrary precision; you have to go between specific combinations of quantised interatomic binding states. Finally, just because a theory can express some quantity mathematically doesn't mean that the quantity meaningfully exists in the modelled system. What are the physical consequences of the voltage being X rather than X+epsilon? If you (or any physical system you care to name) can't measure the difference, then it's not clear to me in what sense your machine is "outputting" that voltage.
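A rough back-of-the-envelope version of that last point, with assumed numbers purely for illustration: a bounded output range with a finite noise floor or resolution admits only finitely many distinguishable values, so any single reading carries only finitely many bits.

```python
# Sketch with assumed numbers: a physical "output" with bounded range and a
# finite resolution carries only finitely many bits per reading.

import math

voltage_range = 1.0   # volts, assumed full-scale output range
noise_floor = 1e-9    # volts, assumed smallest resolvable difference

distinguishable_levels = voltage_range / noise_floor
bits_per_reading = math.log2(distinguishable_levels)

print(f"{distinguishable_levels:.0f} levels ~ {bits_per_reading:.1f} bits per reading")
# About 1e9 levels, roughly 30 bits: a finite amount of information,
# which a Turing machine could just as well have printed.
```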
So basically, what you're asking for is a finite-length procedure that can tell an irrational-number output apart from a finite-description-length output? The trouble is, there's no such procedure, as long as a Turing machine can be made big enough to fool the finite-length procedure.
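Here's a small sketch of that fooling argument (my illustration, not the commenter's): any tester that only inspects the first n output digits can be matched by a finite lookup-table program that hard-codes exactly those digits.

```python
# Sketch: a finite test that only sees a finite digit prefix can be fooled by
# a lookup-table machine that hard-codes the same prefix.

def looks_irrational(digits: str) -> bool:
    """A toy finite-length test: check the prefix isn't periodic with a short period."""
    for period in range(1, len(digits) // 2 + 1):
        if digits == (digits[:period] * (len(digits) // period + 1))[:len(digits)]:
            return False
    return True

# First 20 decimal digits of sqrt(2) after the point (a genuinely irrational source).
sqrt2_prefix = "41421356237309504880"

# A "big enough" machine: its whole behaviour is a hard-coded table
# reproducing that same prefix, so it has a short finite description.
def lookup_table_machine(n: int) -> str:
    return sqrt2_prefix[:n]

print(looks_irrational(sqrt2_prefix))              # True
print(looks_irrational(lookup_table_machine(20)))  # True -- the finite test can't
# tell the finite-description-length machine from the irrational source.
```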
If you knew the size of the machine, though, you might be able to establish efficiency constraints and do a test.
As for the physics, I agree: fundamental quantization is possible, if untested. That's why I said things like "hypothesized-continuous." Though once...