While writing my article "Could Robots Take All Our Jobs?: A Philosophical Perspective" I came across a lot of people who claim (roughly) that human intelligence isn't Turing computable. At one point this led me to tweet something to the effect of, "where are the sophisticated AI critics who claim the problem of AI is NP-complete?" But that was just me being whimsical; I wasn't really being serious.
A couple of times, though, I've heard people suggest that maybe we will need quantum computing to achieve human-level AI, though so far I've never heard this from an academic, only from interested amateurs (though ones with some real computing knowledge). Has anyone else here encountered this? Does anyone know of any academics who adopt this point of view? Answers to the latter question, especially, would be valuable for version 2.0 of the article.
Edit: This very brief query may have given the impression that I'm more sympathetic to the "AI requires QC" idea than I actually am; see my response to gwern below.
Why would QC be relevant? What quantum effects does the brain exploit? Or which classical algorithms key to AI tasks would benefit so enormously from running on a genuine quantum computer (as opposed to a quantum or quantum-inspired algorithm running on a classical computer) that they would make the difference between AI being possible and impossible?
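To make that last question concrete: the best-known generic quantum speedup, Grover's unstructured search, is only quadratic, needing roughly (π/4)·√N oracle queries where a classical search needs ~N. Here's a quick back-of-the-envelope sketch (plain Python, no quantum library; the problem sizes are illustrative, not tied to any actual AI workload):

```python
import math

def classical_queries(n):
    # Unstructured search over n items: worst case ~n queries.
    return n

def grover_queries(n):
    # Grover's algorithm: roughly (pi/4) * sqrt(n) oracle queries.
    return math.ceil((math.pi / 4) * math.sqrt(n))

for n in (10**6, 10**12):
    c, q = classical_queries(n), grover_queries(n)
    print(f"n={n:.0e}: classical ~{c} queries, Grover ~{q}, speedup ~{c / q:.0f}x")
```

Note what this does and doesn't show: a quadratic speedup changes running times, not computability. Anything a quantum computer can compute, a classical computer can too (possibly much more slowly), so "AI requires QC" would have to be a claim about practical feasibility rather than about possibility in principle.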
The thought is not that QC is actually likely to be necessary for AI. It's just that, with all the people saying AI is impossible (or saying things that make it sound like they think AI is impossible, without being quite so straightforward about it), it would be interesting to find people who think AI is [i]just hard enough[/i] to require QC.
My own view, though, is that AI is neither impossible nor would require anything like QC.
(Edit: if I had to ...