I was planning to post this in the main area, but my thoughts are significantly less well-formed than I thought they were. I hope interested parties find it useful nonetheless.

In his 2011 Buhl Lecture at Carnegie Mellon, Scott Aaronson gives a remarkably clear and concise review of P, NP, other fundamentals of complexity theory, and their quantum extensions. In particular, beginning around the 46-minute mark, he gives a sequence of examples in which intuition from computational complexity would have accurately predicted physical results (and in some cases actually did, so it wasn't just hindsight bias).

In previous posts we have learned about Einstein's arrogance and Einstein's speed. This pattern of results flowing from computational complexity to physical predictions seems odd to me in that context. Here we use physical computers to derive abstractions about the limits of computation, and from there we successfully intuit the limits of physical computation (i.e. brains computing abstractions of the fundamental limits of brains computing abstractions...). At what point do we reach the stage where individual scientists can rationally know that results from computational complexity theory are more fundamental than traditional physics? It seems like a paradox wholly different from Einstein rationally knowing (from examining bits of theory-space evidence rather than traditional-experiment-space evidence) that relativity would hold true. In what sort of evidence space can physical brain computation yielding complexity limits count as bits of evidence factoring into expected physical outcomes (such as the exponential smallness of the spectral gap of NP-hard Hamiltonians from the quantum adiabatic theorem)?
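
(For concreteness, here is the rough scaling I have in mind; this is my own summary rather than something from the lecture, and the constants and the exact power of the gap vary between statements of the adiabatic theorem.)

```latex
% Rough form of the adiabatic runtime condition:
T \;\gtrsim\; \frac{\max_{s\in[0,1]} \,\lVert \partial_s H(s) \rVert}{g_{\min}^{2}},
\qquad
g_{\min} \;=\; \min_{s\in[0,1]} \bigl( E_1(s) - E_0(s) \bigr)
```

So if the minimum gap for hard instances of an NP-hard Hamiltonian closes exponentially, g_min = 2^(-Ω(n)), the required evolution time is T = 2^(Ω(n)): exactly what the complexity-theoretic intuition would have predicted.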

Maybe some contributors more well-versed in complexity theory can steer this in a useful direction.

[anonymous] · 13y

Roughly, the (physical) Church-Turing thesis says that any physical process is computable. This is a very well-established principle (as he says, quantum computation can't compute anything that Turing machines can't, so its discovery/invention didn't challenge the thesis). In the talk he gives lots of examples and evidence for a sharpened version, the "extended Church-Turing thesis", which says that efficiently realizable physical processes can be computed in polynomial time. It's true that quantum computation can compute some things more efficiently, but we don't have any reason to think it breaks this thesis yet.

I could physically build as many battery-powered NAND gates as I have space for and connect them up to solve a problem such as adding two n-bit numbers together: I would need O(n) gates to implement it. Similarly, I could (in principle) build battery-powered quantum gates, and for a few problems I would need asymptotically fewer quantum gates than classical (NAND) gates. For an abstract problem, one might conceivably find a way to physically calculate the answer (in terms of space and time) far better than its complexity class permits, better than classical or even quantum gates: that would smash the extended thesis. He shows a bunch of failed attempts (using soapy water, taking advantage of black holes, ...), and it's clear that we have no reason to assume such a thing exists.
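
To make the O(n) claim concrete, here is a minimal sketch (my own illustration, not from the talk) of an n-bit ripple-carry adder built entirely from NAND gates; the gate count grows linearly, at roughly 15 NANDs per bit:

```python
# Minimal sketch (illustration only): an n-bit ripple-carry adder built purely
# from NAND gates, with a counter showing that the gate count is O(n).

gate_count = 0

def nand(a, b):
    global gate_count
    gate_count += 1
    return 1 - (a & b)

def xor(a, b):                       # 4 NANDs
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):                      # 2 NANDs
    t = nand(a, b)
    return nand(t, t)

def or_(a, b):                       # 3 NANDs
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, cin):
    s1 = xor(a, b)
    return xor(s1, cin), or_(and_(a, b), and_(s1, cin))   # (sum, carry out)

def add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists; returns n+1 output bits."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

def to_int(bits):
    return sum(b << i for i, b in enumerate(bits))

for n in (8, 16, 32):
    gate_count = 0
    result = add([1] * n, [1] * n)               # (2^n - 1) + (2^n - 1)
    assert to_int(result) == 2 * (2 ** n - 1)
    print(f"n={n}: {gate_count} NAND gates")     # 120, 240, 480 -- linear in n
```

The point of the exercise is just that the physical resource count tracks the abstract circuit complexity; a quantum circuit would buy an asymptotic advantage only for special problems (e.g. factoring), not for addition.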

I think the human brain is a lot like the soapy water: it works well for trivial (in the sense of Zeilberger) problems like "designing a nuclear power plant" or "proving Fermat's Last Theorem", to the point that it's actually mystifying (we don't know how to make computers program from abstract specifications or prove theorems at anywhere close to a level competitive with humans), but it doesn't scale up: you can't view brain size as a variable, so it can't be modeled as a formal language or complexity class. A proper understanding of how brains, and intelligence in general, work would change that: either it would let us treat the (trans)human brain with brain size as a variable, or it could show us that a recursively self-improving AI would start failing after it got to a certain stage (just as the soapy water failed when there were too many pegs).

I think the view that complexity classes are (or may be) more fundamental than traditional physics is probably true in a vague sense, but it's also a slightly misleading way to state what's happening. It's probably similar to the situation with quantum mechanics and quantum electrodynamics (as mnielsen says in "postulates of quantum mechanics I"). QED (which describes physics) is built upon the postulates of QM (which is just the basic setup for anything "quantum"); we don't need to know anything about QED to do quantum computation, but to build a physical quantum computer we do.

In the talk he gives lots of examples and evidence for a sharpened version "the extended Church-Turing thesis" which says that polynomial time and space physical processes can be computed in polynomial time. It's true that quantum computation can compute some things more efficiently but we don't have any reason to think they break this thesis yet.

Can you elaborate on this a bit? There are problems that are suspected not to be in P (such as factoring) but that are in BQP. Since factoring is in BQP, we could create a physical process that factors numbers in polynomial time, and we would (conjecturally) be unable to simulate that process classically in polynomial time.
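
To spell out the asymmetry (my illustration, not something from the comment above): measured in the bit length n of the input, naive classical factoring takes exponential time, whereas Shor's algorithm runs in time polynomial in n on a quantum computer, and no known classical algorithm achieves polynomial time.

```python
# Illustration only: trial division finds a factor of N in ~sqrt(N) steps.
# In terms of input *size* n = number of bits, sqrt(N) ~ 2^(n/2), i.e.
# exponential time. Shor's algorithm does this in time polynomial in n on a
# quantum computer; even the best known classical algorithm (the number field
# sieve) is super-polynomial.

def trial_division(N):
    """Return a nontrivial factor of N, or None if N is prime."""
    d = 2
    while d * d <= N:          # up to ~2^(n/2) iterations for an n-bit N
        if N % d == 0:
            return d
        d += 1
    return None

print(trial_division(10007 * 10009))   # -> 10007; hopeless for a 2048-bit N
```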

[anonymous] · 13y

Thanks for pointing out my mistake: where I wrote the precise but wrong "polynomial time and space physical processes can be computed in polynomial time", Scott wrote the imprecise but not-wrong "efficiently realizable physical processes can be computed in polynomial time", and, as you demonstrated, mine doesn't make sense.

On the other hand, Scott wrote in lecture 14 of his Quantum Computing Since Democritus course:

the argument is that quantum computing must be impossible because it violates the Extended Church-Turing Thesis. That is, we know that quantum computing can't be possible (assuming BPP≠BQP), because we know that BPP defines the limit of the efficiently computable. So, we have this thesis, and quantum computing violates the thesis, so it must be impossible

More recently, there was a lot of discussion in the comments of his blog post about various different attempts at defining an "extended Church-Turing thesis". There doesn't seem to be a good consensus.

In what sort of evidence space can physical brain computation yielding complexity limits count as bits of evidence factoring into expected physical outcomes (such as the exponential smallness of the spectral gap of NP-hard Hamiltonians from the quantum adiabatic theorem)?

These sorts of questions are really tough. However, for most useful purposes it seems that we don't need to ask this (difficult) question. The main point is that our empirical investigation of the universe suggests that the physical laws don't allow us to do lots of efficient computation (most obviously, whatever the physical laws are, it seems that we can't easily use them to solve NP-complete problems in polynomial time, and certainly can't do so for #P). So most of the conclusions Scott arrives at come, in some sense, from empirical observations about the world around us. They just aren't a class of observations we're used to thinking of as observations of the physical world (they are much more abstract than something like "all electrons have the same rest mass").