While writing my article "Could Robots Take All Our Jobs?: A Philosophical Perspective," I came across a lot of people who claim (roughly) that human intelligence isn't Turing computable. At one point this led me to tweet something to the effect of, "Where are the sophisticated AI critics who claim the problem of AI is NP-complete?" But that was just me being whimsical; I wasn't entirely serious.
A couple of times, though, I've heard people suggest that maybe we will need quantum computing to do human-level AI, though so far I've heard this only from interested amateurs (albeit ones with some real computing knowledge), never from an academic. Has anyone else here encountered this? Does anyone know of academics who adopt this point of view? Answers to the latter question especially would be valuable for article version 2.0.
Edit: This very brief query may have given the impression that I'm more sympathetic to the "AI requires QC" idea than I actually am; see my response to gwern below.
Actually, its content has not changed; what has changed is how the content is presented. Note how "quantum computing" (or quantum this-or-that) is usually brought up as a mystifying secret ingredient for, e.g., consciousness/qualia (Penrose) or any other property that some would rather see remain mystical. Quantum computing is a real thing, but "AI is unfeasible until we do quantum X" pattern-matches too well to an arbitrary mystical roadblock.
Since the point was lost, with the humor apparently either disliked or taken at face value, I've decided to edit the comment so that the same point is made more clearly, accommodating those who just skim comments (and their vote counts) and are quick to judge before parsing.