Qiaochu_Yuan comments on Can somebody explain this to me?: The computability of the laws of physics and hypercomputation - Less Wrong
I don't think computability is the right framework to think about this question. Mostly this is because, for somewhat silly reasons, there exists a Turing machine that answers any fixed finite sequence of questions: it's just the Turing machine that prints out the answers to those questions. "But which Turing machine is that?" Dunno; this is an existence proof. But the point is that you can't just say "you can't build a device that solves the halting problem," because for any fixed finite set of Turing machines you can trivially build such a device; you just can't recognize it once you've built it...
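To make the existence argument concrete, here is a minimal sketch. The machine names and their halting answers are hypothetical placeholders; the point is only that, for any fixed finite set of machines, a correct "halting oracle" is nothing more than a lookup table:

```python
# For any FIXED finite set of Turing machines, a device that answers
# the halting problem for exactly those machines is just a lookup table.
# The identifiers and answers below are illustrative assumptions; the
# existence proof tells us such a table exists, not which one it is.

HALTS = {
    "machine_A": True,   # assumed: machine_A halts
    "machine_B": False,  # assumed: machine_B runs forever
    "machine_C": True,   # assumed: machine_C halts
}

def finite_halting_oracle(machine_id):
    """Answer the halting problem for the fixed finite set above."""
    return HALTS[machine_id]

print(finite_halting_oracle("machine_B"))  # False
```

Of course, nothing in this sketch tells you how to fill in the table correctly, which is exactly the "you can't recognize it once you've built it" problem.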
It follows that in the framework of computability, to distinguish a "real halting oracle" from a rock with stuff scribbled on it, you need to ask infinitely many questions. And that's pretty hard to do.
So I think the correct framework for thinking about this problem is not computability theory but complexity theory. And here it genuinely is an open problem whether it's possible to build a physical computer (e.g. a quantum computer) that efficiently solves some problems a classical computer can't. Scott Aaronson has written about this subject; see, for example, NP-complete Problems and Physical Reality as well as Closed Timelike Curves Make Quantum and Classical Computing Equivalent.
In the framework of complexity theory, one can now also ask the following question: if someone built a device and claimed that it could solve some problems more efficiently than our devices can, how do we efficiently verify that their answers are right? This question is addressed by, for example, the framework of interactive proofs. As Scott Aaronson says, this allows us to make statements like the following:
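The simplest instance of efficient verification (simpler than a full interactive proof, but illustrating the same asymmetry) is checking an NP certificate: if a device claims a satisfying assignment for a Boolean formula, checking the claim takes time linear in the formula's size, even though finding it may not. The formula and assignments below are illustrative:

```python
# Verifying a claimed answer can be cheap even when finding it is hard.
# A CNF formula is a list of clauses; each clause is a list of literals
# (a positive int k means variable k is True, negative means False).

def verify_sat(clauses, assignment):
    """Check a claimed satisfying assignment in time linear in formula size."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

# Illustrative formula: (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: True, 3: False}))    # True
print(verify_sat(clauses, {1: False, 2: False, 3: False}))  # False
```

Interactive proofs extend this idea to claims that don't come with short certificates, by letting the verifier challenge the prover with random questions.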
Edit: I should clarify that when I said
there are actually two open problems here, one practical and one theoretical. The practical question is whether you can build quantum computers (and physics is relevant to this question). The theoretical question is whether a quantum computer, if you could build one, actually lets you solve more problems efficiently than you could with a classical computer; this is the question of whether P = BQP.
There's another issue also worth pointing out: the classical analog of BQP isn't P; it's BPP. It is widely believed that P = BPP, but if P ≠ BPP then the relevant question in your context will be whether BPP = BQP.
This isn't quite what this question asks. We know that some things are faster with a quantum computer than what can be done in the best-case classical situation; Grover's algorithm is one example. The question of whether P = BQP is one formulation of that for yes-or-no questions. But even this isn't quite accurate: for example, it could be that quantum computers allow some extremely high-degree polynomial speed-up even while P = BQP. But this is essentially a minor nitpick.
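For intuition about the Grover gap, one can just count queries: classical unstructured search over N items needs N queries in the worst case, while Grover's algorithm needs on the order of √N (about (π/4)·√N). The sketch below does not simulate a quantum computer; it only compares the two query counts:

```python
import math

def classical_search_queries(items, target):
    """Count queries a classical unstructured search makes to find target."""
    queries = 0
    for item in items:
        queries += 1
        if item == target:
            break
    return queries

N = 1_000_000
items = list(range(N))

worst_case = classical_search_queries(items, N - 1)        # target is last: N queries
grover_estimate = math.ceil((math.pi / 4) * math.sqrt(N))  # ~(pi/4) * sqrt(N)

print(worst_case)       # 1000000
print(grover_estimate)  # 786
```

A quadratic speed-up like this is consistent with P = BQP, which is why "P vs. BQP" only captures part of what "quantum computers are faster" might mean.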
I think Toby Ord actually has a good response to this objection:
Recall that a physical computer is technically a finite state machine (it doesn't have infinite memory). This is another reason I don't think Turing machines are a good formalism for talking about computation in the real world.
The former suggestion seems like the more important point here. While it's true that the hypercomputer's behavior can be simulated on a Turing machine, this is only true of the single answer given and received, not of the abstractly defined problem being solved. The hypercomputer still cannot be proven by a Turing machine to have knowledge of a fact that a Turing machine could not itself know.
And so the words "shaky ground" are used loosely here. The argument doesn't refute the theoretic "existence" of recursive computation any more than it refutes the existence of theoretic hypercomputation. That finite state machines are the only realizable form of Turing machine is hardly a point in their generalized disfavor.
solving is not the same as providing the answer
Yes, but the distinction between the two isn't captured by computability theory (unless you are allowed to pose infinitely many problems). Also, if the universe is spatially infinite, it can solve the halting problem in a deeply silly way, namely there could be an infinite string of bits somewhere, each a fixed distance from the next, that just hardcodes the solution to the halting problem.
This is obviously unsatisfying, but part of the reason it's unsatisfying is that even if such an infinite string of bits existed (and even if we somehow verified that it was trustworthy), it would constitute a horribly inefficient algorithm for solving the halting problem: moving at constant speed, it would take time exponential in the length of a Turing machine's code to reach the place in the universe where the bit determining whether that machine halts is stored. But this is a complexity consideration, not a computability consideration.
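The exponential-distance point can be made quantitative with a toy calculation. The assumptions here are mine: machines are enumerated by their binary encodings, one answer bit per machine, bits spaced at unit distance. Then a machine with an n-bit code sits at an index up to about 2^n, so reaching its bit at constant speed takes time exponential in n:

```python
# Toy model: the i-th bit of the infinite string answers "does the
# i-th Turing machine halt?", machines enumerated by binary encoding.
# A machine with an n-bit code then has index up to ~2^n, so at unit
# bit spacing and constant speed, travel time is exponential in n.

def worst_case_distance(code_bits):
    """Worst-case index of the answer bit for a machine with an n-bit code."""
    return 2 ** code_bits

for n in [10, 50, 100]:
    print(n, worst_case_distance(n))
```

Even a 100-bit machine description already puts the answer bit farther away than any physically traversable distance, which is the sense in which the hardcoded string is an algorithm only in name.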
Or one selected at random from an infinite class of problems.
That's why both computability theory and complexity theory require algorithms to have finite-sized source code.