While writing my article "Could Robots Take All Our Jobs?: A Philosophical Perspective," I came across a lot of people who claim (roughly) that human intelligence isn't Turing computable. At one point this led me to tweet something to the effect of, "where are the sophisticated AI critics who claim the problem of AI is NP-complete?" But that was just me being whimsical; I wasn't really serious.
A couple of times, though, I've heard people suggest that we may need quantum computing to achieve human-level AI, though so far I've never heard this from an academic, only from interested amateurs (though ones with some real computing knowledge). Who else here has encountered this? Does anyone know of any academics who hold this view? Answers to the latter question especially could be valuable for a version 2.0 of the article.
Edit: This very brief query may have given the impression that I'm more sympathetic to the "AI requires QC" idea than I actually am; see my response to gwern below.
I don't think most people who say such things (that QC is necessary for AI) understand both what building blocks might be needed for AI and what quantum computers actually can and can't do better than classical computers (though perhaps a few do). It sounds like people randomly throwing together two awesome (but so far impractical) concepts they've heard about, hoping to produce an even more awesome statement. Like saying "to colonize Mars, we must first build room-temperature superconductors."
Please excuse the ridicule, but I don't see how large quantum computers are necessary for AI. They would certainly be helpful, but then, so would room-temperature superconductors...
It's the quantum syllogism:

1. I don't understand quantum mechanics.
2. I don't understand intelligence.
3. Therefore, intelligence must involve quantum mechanics.

(1. need not apply, e.g. if you are Roger Penrose, but it's still logically fallacious.)