"Hypercomputation" is a term coined by two philosophers, Jack Copeland and Dianne Proudfoot, to refer to allegedly computational processes that do things Turing machines are in principle incapable of doing. I'm somewhat dubious of whether any of the proposals for "hypercomputation" are really accurately described as computation, but here, I'm more interested in another question: is there any chance it's possible to build a physical device that answers questions a Turing machine cannot answer?
I've read a number of Copeland and Proudfoot's articles promoting hypercomputation, and they claim this is an open question. I've seen some indications that they're wrong about this, but my knowledge of physics and computability theory isn't strong enough to settle the question with confidence.
Some of the ways to convince yourself that "hypercomputation" might be physically possible seem like obvious confusions. For example: you convince yourself that some physical quantity is allowed to take any real value, notice that some reals are non-computable, and conclude that if only we could measure such a non-computable quantity, we could answer questions no Turing machine could answer. Of course, actually performing such a measurement is physically implausible even if you could find a non-computable physical quantity in the first place. And that mistake can be sexed up in various ways, for example by talking about "analog computers" and assuming "analog" means having components that can take any real-numbered value.
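To make that confusion concrete, here's a toy sketch (my own illustration; the setup and names are hypothetical, not anything from Copeland and Proudfoot): suppose some measurable quantity's binary expansion happened to encode the halting set. Then "hypercomputation" would just be reading digits off a measurement, and all the implausibility is hiding inside that measurement.

```python
# Toy illustration (hypothetical setup): suppose a physical quantity's binary
# expansion encoded the halting set, i.e. digit i is 1 iff program i halts.
# Then "solving" the Halting Problem is just reading digits off a measurement.

def halts(i, measure_digit):
    """Decide whether program i halts, given an oracle that returns the i-th
    binary digit of the hypothetical non-computable quantity."""
    return measure_digit(i) == 1

# The catch: distinguishing digit i requires resolving differences of size
# 2**-(i+1), so the "device" needs unbounded measurement precision, and we
# would also need some reason to think nature encodes the halting set at all.
```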
Points similar to the one I've just made exist in the literature on hypercomputation (see here and here, for example). But the critiques of hypercomputation I've found tend to focus on specific proposals. It's less clear whether the literature contains any good general argument that hypercomputation is physically impossible because it would require infinite-precision measurements or something equally unlikely. It seems like it might be possible to make such an argument: I've read that the laws of physics are considered to be computable, but I don't have a good enough understanding of what that means to tell whether it entails that hypercomputation is physically impossible.
Can anyone help me out here?
The Turing model is pretty outdated, and doesn't really describe everything modern computers do.
However - you can run a simulation of a modern computer inside the Turing model (essentially running an interpreted, rather than compiled, language), which means, very roughly, that any problem that is provably undecidable in the Turing model is necessarily undecidable in the RASP model (the random-access stored-program model, which is much closer to how real computers work).
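To make the "interpreter" point concrete, here's a minimal toy sketch (my own; the instruction set and names are made up for illustration) of a random-access machine being simulated by nothing fancier than a step-by-step loop. That loop is exactly the kind of bookkeeping a Turing machine can carry out on its tape, just with a polynomial slowdown.

```python
# Toy interpreter for a tiny random-access machine. "Simulating" a machine
# with random-access memory is just this kind of interpretation loop, which
# a Turing machine can mimic on its tape.

def run_ram(program, memory, max_steps=10_000):
    """program: list of ('LOAD'|'STORE'|'ADD'|'JZ'|'HALT', arg) tuples.
    memory: dict mapping addresses to integers. Returns final memory."""
    acc, pc, steps = 0, 0, 0
    while steps < max_steps:
        op, arg = program[pc]
        if op == 'LOAD':
            acc = memory.get(arg, 0)
        elif op == 'STORE':
            memory[arg] = acc
        elif op == 'ADD':
            acc += memory.get(arg, 0)
        elif op == 'JZ':
            pc = arg if acc == 0 else pc + 1
            steps += 1
            continue
        elif op == 'HALT':
            return memory
        pc += 1
        steps += 1
    raise RuntimeError("step budget exceeded")

# Example: add memory[0] and memory[1] into memory[2].
prog = [('LOAD', 0), ('ADD', 1), ('STORE', 2), ('HALT', None)]
print(run_ram(prog, {0: 2, 1: 3}))   # {0: 2, 1: 3, 2: 5}
```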
In order to exceed the limitations of the Turing model, it isn't sufficient to do things differently from how a Turing machine does them; you must be capable of doing things Turing computers can't even simulate. An additional hardware logic gate doesn't cut it if you can create the same logic by mixing existing logic gates. You have to create logic which cannot be described in existing logic systems. (If anybody wants to try this, good luck. The -thought- makes my brain hurt.)
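Here's a small sketch of the "new gate doesn't cut it" point (my own example): NAND is already universal for Boolean logic, so any extra gate whose truth table you can write down is just a wiring of NANDs, and it extends nothing.

```python
# XOR built entirely out of NAND gates, showing that a "new" gate with a
# finite truth table adds no expressive power beyond the gates we have.

def nand(a, b):
    return not (a and b)

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Check against Python's own inequality on booleans.
assert all(xor(a, b) == (a != b) for a in (False, True) for b in (False, True))
```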
Note that the proposed solution to, for example, the Halting Problem is not in fact solving something that Turing computers can't, as the solution is effectively to run a Turing algorithm for an infinite number of steps. It's taking out one of the assumptions that went into the proof, that you don't -get- an infinite number of steps. And if anybody is taking this seriously, I pose a question: what happens when the cardinality of the infinite number of operations needed to solve the problem is greater than the cardinality of the infinite amount of time spent solving it?
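For concreteness, here's a toy sketch (my own, with made-up names) of why "just run it" only semi-decides halting: simulate a program for a finite number of steps. A "yes, it halted" answer is conclusive; exhausting the budget tells you nothing. The proposals for "solving" the Halting Problem amount to letting this budget be literally infinite.

```python
# Step-bounded halting check: True means "halted within the budget",
# None means "ran out of budget", which is not a "no", just "don't know".

def halts_within(program, budget):
    """program: a generator that yields once per simulated step and returns
    when the simulated machine halts."""
    for step, _ in enumerate(program):
        if step >= budget:
            return None
    return True

def collatz(n):
    # Toy "machine": nobody has proved this halts for every starting n.
    while n != 1:
        yield
        n = 3 * n + 1 if n % 2 else n // 2

print(halts_within(collatz(27), budget=1000))   # True: this one happens to halt
```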
ETA: -6? Seriously? I'm not sure exactly what's being downvoted here, but I assume it's my comment that the Turing model is outdated. Well, it is. Not going to apologize for it; computers haven't even vaguely resembled its common-memory-model in decades, and we regularly run algorithms which are far more efficient than the Turing model allows. Even RASP is hopelessly outdated at this point, but it is at least closer to what computers are actually doing. Given that the Church-Turing thesis has not in fact been proven (largely because those trying to prove it gave up on trying to define what exactly it meant), the Turing model largely persists because of its simplicity.
Taking the karma hit to clarify why I, at least, downvoted this comment. When you say the Turing model is outdated, you seem to be assuming that the model was originally intended as a physical model of how actual computers do (or should) work. But that was never its purpose. It was supposed to be a mathematical model that captures the intuitive notion of an effective procedure. All the talk of tapes and tape heads is just meant to aid understanding, and maybe that part is outdated, but the actual definition of a Turing machine itself can be given purely mathematically.
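For what it's worth, that purely mathematical definition (standard textbook material, nothing specific to this thread) fits in one line:

$$M = (Q, \Gamma, b, \Sigma, \delta, q_0, F), \qquad \delta : (Q \setminus F) \times \Gamma \to Q \times \Gamma \times \{L, R\}$$

where $Q$ is a finite set of states, $\Gamma$ a finite tape alphabet containing the blank symbol $b$, $\Sigma \subseteq \Gamma \setminus \{b\}$ the input alphabet, $\delta$ the transition function, $q_0 \in Q$ the initial state, and $F \subseteq Q$ the set of final states. No physical tape anywhere in sight.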