"Hypercomputation" is a term coined by two philosophers, Jack Copeland and Dianne Proudfoot, to refer to allegedly computational processes that do things Turing machines are in principle incapable of doing. I'm somewhat dubious of whether any of the proposals for "hypercomputation" are really accurately described as computation, but here, I'm more interested in another question: is there any chance it's possible to build a physical device that answers questions a Turing machine cannot answer?
I've read a number of Copeland and Proudfoot's articles promoting hypercomputation, and they claim this is an open question. I've seen some indications that they're wrong about this, but my knowledge of physics and computability theory isn't enough to answer the question with confidence.
Some of the ways to convince yourself that "hypercomputation" might be physically possible seem like obvious confusions. For example, you convince yourself that some physical quantity is allowed to take any real value, notice that some reals are non-computable, and conclude that if only we could measure such a non-computable quantity, we could answer questions no Turing machine could answer. Of course, the idea of performing such a measurement is physically implausible even if you could find a non-computable physical quantity in the first place. And that mistake can be sexed up in various ways, for example by talking about "analog computers" and assuming "analog" means the machine has components that can take any real-numbered value.
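To make it vivid where the confusion hides, here's a toy sketch in Python. Everything in it is hypothetical: suppose some physical quantity had a binary expansion encoding the halting problem, and suppose a `measure_digit` device could read off its digits. Then "deciding" halting would be trivial, but only because all the non-computability has been smuggled into the assumed measuring device.

```python
# Toy sketch, not a proposal. Assume (hypothetically) a physical quantity X whose
# n-th binary digit is 1 exactly when the n-th Turing machine halts on empty input.

def measure_digit(n: int) -> int:
    """Hypothetical infinite-precision measurement of the n-th binary digit of X.
    No such device is known to be physically possible; this single step is where
    all the non-computability lives."""
    raise NotImplementedError("requires an infinite-precision physical measurement")

def halts(n: int) -> bool:
    """'Decides' whether the n-th Turing machine halts -- but only by deferring
    the entire problem to the magical measurement above."""
    return measure_digit(n) == 1
```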
Points similar to the one I've just made exist in the literature on hypercomputation (see here and here, for example). But the critiques of hypercomputation I've found tend to focus on specific proposals. It's less clear whether there are any good general arguments in the literature that hypercomputation is physically impossible because it would require infinite-precision measurements or something equally unlikely. It seems like it might be possible to make such an argument; I've read that the laws of physics are considered to be computable, but I don't have a good enough understanding of what that means to tell whether it entails that hypercomputation is physically impossible.
Can anyone help me out here?
I should clarify: Assumptions (1)-(4) in my comment were only supposed to be sufficient conditions for the impossibility of hypercomputation, not necessary conditions. The basic idea is this. If hypercomputation were physically possible, there would be some universe governed by our laws that contains a hypercomputational process. If the laws are computable, deterministic, and require only finite-precision input data, we could simulate this universe accurately on a computer, given its initial state. That would allow us to simulate a hypercomputational process on a Turing machine, which is impossible. Therefore hypercomputation is incompatible with those assumptions.
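To spell out the reasoning step, here's a minimal sketch. It assumes, purely hypothetically, that the laws can be packaged as a computable, deterministic transition function `step` acting on a finite-precision state description; given that, the whole simulation is an ordinary Turing-computable procedure.

```python
# Sketch of the simulation argument. `step` stands in for a computable,
# deterministic, finite-precision transition function that the physical laws
# are *assumed* to provide; nothing here is a claim about the actual laws.

from typing import Callable, TypeVar

State = TypeVar("State")  # a finite, exact description of the universe's state

def simulate(initial: State, step: Callable[[State], State], t: int) -> State:
    """Run the universe forward t discrete steps on an ordinary computer.
    If such a `step` exists, every process inside the simulated universe --
    including any alleged hypercomputer -- unfolds within a Turing-computable
    procedure, so it cannot answer anything a Turing machine couldn't."""
    state = initial
    for _ in range(t):
        state = step(state)
    return state
```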
The assumptions of determinism and finite precision are crucial for this argument because, without them, accurate simulation over an arbitrary length of time need not be possible. But there may be other arguments for the physical impossibility of hypercomputation that do not rely on these assumptions.
I do think assumptions (1) and (2) by themselves are insufficient, and the Malament-Hogarth setup is supposed to be an existence proof of this insufficiency. The laws of GR are computable, yet they seem to allow hypercomputation (perhaps the M-H setup is incompatible with some other fundamental law, as some of the posters are arguing, but that is irrelevant to this particular point).
ETA: It's also worth keeping in mind that non-deterministic, in this context, is not the same as probabilistic. If a spacetime is not globally hyperbolic, then field values on any spatial surface do not determine field values everywhere else. But this doesn't mean that the laws give you probabilities about what will happen outside the domain of dependence. The general relativistic laws are non-probabilistic.