While writing my article "Could Robots Take All Our Jobs?: A Philosophical Perspective" I came across a lot of people who claim (roughly) that human intelligence isn't Turing computable. At one point this led me to tweet something to the effect of, "where are the sophisticated AI critics who claim the problem of AI is NP-complete?" But that was just me being whimsical, not a serious claim.
A couple of times, though, I've heard people suggest that maybe we will need quantum computing to achieve human-level AI, though so far I've only heard this from interested amateurs (albeit ones with some real computing knowledge), never from an academic. Who else here has encountered this? Does anyone know of any academics who hold this view? Answers to the latter question especially would be valuable for version 2.0 of the article.
Edit: This very brief query may have given the impression that I'm more sympathetic to the "AI requires QC" idea than I actually am; see my response to gwern below.
Actually, protein phosphorylation (like many other biochemical and biophysical processes, such as ion channel gating) is based on quantum tunneling. It may well be irrelevant, since the timing of the process can probably be simulated well enough with pseudo-random numbers, but on the off chance that "true randomness" is required, a purely classical approach might be inadequate.
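To make the pseudo-random point concrete: the classical stand-in for tunneling-driven event timing is just a Poisson process, with exponentially distributed waiting times between events. A minimal sketch (the rate constant here is invented for illustration, not a measured value):

```python
import random

def simulate_event_times(rate_per_ms, duration_ms, seed=None):
    """Simulate the timing of stochastic biochemical events (e.g. channel
    gating or phosphorylation) as a Poisson process: waiting times between
    successive events are drawn from an exponential distribution.

    rate_per_ms is a hypothetical rate constant chosen for illustration.
    """
    rng = random.Random(seed)  # pseudo-random, not "true" randomness
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_ms)  # exponential waiting time
        if t > duration_ms:
            break
        times.append(t)
    return times

events = simulate_event_times(rate_per_ms=0.5, duration_ms=100.0, seed=42)
print(f"{len(events)} events in 100 ms")
```

If the brain only cares about the statistics of these timings, a pseudo-random generator like this is indistinguishable from the quantum process; the "inadequate" case would only arise if something downstream exploited the unpredictability itself.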
Holy crap, that comment (posted very quickly from a tablet, hence the typos) produced a long comment thread.
Yes, quantum tunneling goes on in a lot of biological processes because it happens in chemistry generally; there is nothing special about neurology there. I was mostly referring to writings I've seen where someone proposed that humans must be doing hypercomputation because we don't blow up at the Gödel incompleteness theorem (which made a cognitive scientist in my circle laugh, since we just don't actually deal in that kind of logic), and another that actu...