"Hypercomputation" is a term coined by two philosophers, Jack Copeland and Dianne Proudfoot, to refer to allegedly computational processes that do things Turing machines are in principle incapable of doing. I'm somewhat dubious of whether any of the proposals for "hypercomputation" are really accurately described as computation, but here, I'm more interested in another question: is there any chance it's possible to build a physical device that answers questions a Turing machine cannot answer?
I've read a number of Copeland and Proudfoot's articles promoting hypercomputation, and they claim this is an open question. I've seen some indications that they're wrong about this, but my knowledge of physics and computability theory isn't strong enough to settle the question with confidence.
Some of the ways people convince themselves that "hypercomputation" might be physically possible seem like obvious confusions. For example: you convince yourself that some physical quantity is allowed to take any real-number value, notice that some reals are non-computable, and conclude that if only we could measure such a non-computable quantity, we could answer questions no Turing machine can answer. But such a measurement would demand infinite precision, which is physically implausible even if you could find a non-computable physical quantity in the first place. And that mistake can be sexed up in various ways, for example by talking about "analog computers" and assuming "analog" means having components that can take any real-numbered value.
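To spell out where the magic hides in that confused argument, here is a deliberately trivial sketch (my illustration, not from any hypercomputation proposal; `measure_bit` and the encoding are hypothetical). Suppose nature handed us a quantity whose n-th binary digit is 1 exactly when the n-th Turing machine halts, and suppose, implausibly, that we could measure any digit exactly:

```python
def halting_bit_oracle(machine_index, measure_bit):
    """Solve the halting problem, *given* a physically impossible primitive.

    measure_bit(n) is assumed to return the n-th binary digit of a
    non-computable real whose n-th bit is 1 iff the n-th Turing machine
    halts. All the non-computable work hides inside measure_bit; the
    rest is trivial post-processing.
    """
    return measure_bit(machine_index) == 1
```

Every ounce of hypercomputational power lives in the measurement primitive, which is precisely the step that the infinite-precision objection rules out.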
Points similar to the one I've just made exist in the literature on hypercomputation (see here and here, for example). But the critiques of hypercomputation I've found tend to focus on specific proposals. It's less clear whether the literature contains any good general argument that hypercomputation is physically impossible, say because it would require infinite-precision measurements or something equally unlikely. It seems like it might be possible to make such an argument; I've read that the laws of physics are considered to be computable, but I don't have a good enough understanding of what that means to tell whether it entails that hypercomputation is physically impossible.
Can anyone help me out here?
These are not the only options. Infinite sets have infinite proper subsets, so an object in a spatially infinite universe could have infinite size without taking up the entire universe. In a universe with an infinite amount of matter, a computational process could requisition an infinite proper subset of that matter as memory while still leaving plenty of matter to build other stuff. Or (if you don't take the cosmic censorship hypothesis as a constraint on physical possibility) you could have a naked singularity functioning as a white hole (the time reverse of a black hole, allowed by the time-reversal-invariant Einstein field equations) disgorging matter as needed for the computation.
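To make the set-theoretic point concrete, a standard textbook example (my addition): the even numbers form an infinite proper subset of the naturals, since

$$E = \{2n : n \in \mathbb{N}\} \subsetneq \mathbb{N}, \qquad n \mapsto 2n \text{ is a bijection from } \mathbb{N} \text{ onto } E,$$

so $E$ is just as infinite as $\mathbb{N}$, yet its complement (the odds) is also infinite. A memory bank occupying the analogue of $E$ would leave the analogue of the odds free for everything else.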
That said, I am concerned that making the computational device too large would significantly modify the background metric, so (as shminux pointed out) one can't glibly consider an M-H spacetime, put a massive (perhaps infinitely large) Turing machine in it, and still assume that it is the same spacetime. It's not obvious to me that it would be impossible to have a device of this sort in an M-H spacetime, but neither is it obvious that it would be possible (and FWIW, I would bet against the possibility).
I think the right response is that infinite memory is always an idealization in discussions of computability. When we talk about the Church-Turing thesis as limning the notion of "computable", we are ignoring spatial constraints. Computability is a pure mathematical concept, not an engineering concept. When a theorist says that X is computable, she is not committing herself to the claim that the universe contains physical resources sufficient for the construction of a computer that implements X. Why should this usual standard become more rigid when we consider M-H spacetimes?
Every finite state machine will either halt or start repeating itself in finite time. This is guaranteed: a deterministic machine has only finitely many states, so within finitely many steps it must either halt or revisit a state, and once it revisits a state it cycles forever. To figure out whether a particular machine halts, simply run it until it either halts or enters a state it has entered before. One of these will happen within finite time, so you will always be able to determine within finite time whether or not the machine halts. It's true that if you want a single halting oracle that works for finite state machines of arbitrary size, it cannot itself be a finite state machine; it would have to be a Turing machine. Is this what you mean by the halting problem for FSAs? If so, then I agree, but that is a different sort of problem from the halting problem for Turing machines. My point was just that in the case of FSAs (unlike Turing machines) there's no computation that is ruled out solely due to the lack of infinite runtime; allowing infinite runtime doesn't increase the power of FSAs.
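For concreteness, here is a minimal sketch of that decision procedure (my illustration, not from the original discussion; it assumes a deterministic machine that steps autonomously with no input stream, encoded as a dict of state transitions):

```python
def fsm_halts(transition, start, halting_states):
    """Decide whether a deterministic finite-state machine halts.

    transition maps each non-halting state to its unique successor.
    Since the state set is finite, the run must, within len(transition)
    steps, either reach a halting state or revisit a state; by
    determinism, a revisited state means the machine cycles forever.
    """
    seen = set()
    state = start
    while state not in halting_states:
        if state in seen:
            return False  # revisited a state: the machine loops forever
        seen.add(state)
        state = transition[state]
    return True

# 0 -> 1 -> 2 -> 1 -> ... cycles without ever reaching halting state 3
assert fsm_halts({0: 1, 1: 2, 2: 1}, 0, {3}) is False
# 0 -> 1 -> 2 -> 3 runs straight into the halting state
assert fsm_halts({0: 1, 1: 2, 2: 3}, 0, {3}) is True
```

The detector itself visits each state at most once, so its runtime is bounded by the number of states, which is exactly why no infinite runtime is needed to decide halting for FSAs.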