While writing my article "Could Robots Take All Our Jobs?: A Philosophical Perspective" I came across a lot of people who claim (roughly) that human intelligence isn't Turing computable. At one point this led me to tweet something to the effect of, "where are the sophisticated AI critics who claim the problem of AI is NP-complete?" But that was just me being whimsical; I was mostly not-serious.

A couple of times, though, I've heard people suggest that maybe we will need quantum computing to do human-level AI. So far I've never heard this from an academic, only from interested amateurs (though ones with some real computing knowledge). Who else here has encountered this? Does anyone know of any academics who adopt this point of view? Answers to the latter question especially could be valuable for doing article version 2.0.

Edit: This very brief query may have given the impression that I'm more sympathetic to the "AI requires QC" idea than I actually am; see my response to gwern below.


To me it seems straightforward: Intelligence is magical. Classical computers are not magical. Quantum computing is magical. Therefore we need quantum computing for AI.

However, if after a few years quantum computing becomes non-magical, it will become obvious that we need something else.

1ikrase
Do they play Mass Effect? It's possible that they picked it up from sci-fi in which A) it's required or B) brains are considered quantum.
0Osiris
I am reminded of Asimov's "positronic brain" and how he came up with it. Perhaps the new goal of research in artificial intelligence should be coming up with new magical terms and explaining as little as possible. It could earn enough money and public interest to create an artificial person... The forms of intelligence I am familiar with (really only one kind, from a materials point of view) are not enough to discuss what is truly necessary for successful AI.
[-]gwern210

Why would QC be relevant? What quantum effects does the brain exploit? Or what classical algorithms which are key to AI tasks would benefit so enormously from running on a genuine quantum computer (as opposed to a quantum or quantum-inspired algorithm running on a classical computer) that they would make the difference between AI being possible and impossible?

9ChrisHallquist
1. No reason I know of.
2. None, in my opinion (and, I think, in the opinion of most neuroscientists).
3. None that I know of.

The thought is not that QC is actually likely to be necessary for AI, just that, with all the people saying AI is impossible (or saying things that make it sound like they think AI is impossible, without being quite so straightforward about it), it would be interesting to find people who think AI is *just hard enough* to require QC. My own view, though, is that AI is neither impossible nor would require anything like QC. (Edit: if I had to make a case that AI is likely to require QC, I might focus on brain emulation, citing the fact that quantum chemistry models increase exponentially in their computational demands as the number of atoms increases. In reality, I think we'd likely be able to find acceptable approximations for doing brain emulation, but maybe someone could take this kind of argument and strengthen it. At least, it would be somewhat less surprising to me than if the brain turned out to be a quantum computer in a stronger sense.)
4jsteinhardt
This post made me realize the following fun fact: if AI were in BQP but not in BPP, then that would provide non-negligible evidence for anthropics being valid.
5ESRogs
Could you flesh that out a bit? Is the idea that it's just one more case where a feature of our universe turns out to be necessary for consciousness?
3jsteinhardt
Yes, and a pretty weird feature at that (being in BQP but not P is pretty odd unless BQP was designed to contain the problem in the first place).
0ESRogs
Gotcha, thanks.
[-][anonymous]150

No serious neurologists actually consider quantum effects inside microtubules, or arrangements of phosphorylation on microtubules, or whatever, important for neuron function. The proponents are all either physicists or computer scientists who don't understand the biology. Nothing happens in neural activity or long-term potentiation or other processes that cannot be accounted for by chemical processes, even if we don't understand exactly the how of some of them. The open questions are mostly exactly how neurons are able to change their excitability and structure over time and how they manage to communicate in large-scale systems.

8Shmi
Actually, protein phosphorylation (like many other biochemical and biophysical processes, such as ion channel gating) is based on quantum tunneling. It may well be irrelevant, as the timing of the process can probably be simulated well enough with pseudo-random numbers, but on the off-chance that "true randomness" is required, a purely classical approach might be inadequate.
7Eliezer Yudkowsky
Quantum tunneling != quantum computing. Quantum 'randomness' != quantum computing. No one has ever introduced, even in principle, a cognitive algorithm that requires quantum 'randomness' as opposed to thermal noise.
5jsteinhardt
How could "true randomness" be required, given that it's computationally indistinguishable from pseudorandomness?
4shinoteki
If there were a feasible pseudorandom generator that is computationally indistinguishable from randomness, then randomness would indeed not be necessary. However, the existence of such a pseudorandom generator is still an open problem.
5Eliezer Yudkowsky
What? No it's not. There are no pseudo-random generators truly, ultimately indistinguishable in principle from the 'branch both ways' operation in quantum mechanics; the computations all have much lower Kolmogorov complexity after running for a while. But there are plenty of cryptographically strong pseudo-random number generators which could serve any possible role a cognitive algorithm could demand for a source of bits guaranteed not to be expectedly correlated with other bits playing some functional role, especially if we add entropy from a classical thermal noise source, the oracular knowledge of which would violate the second law of thermodynamics. This is not an open problem. There is nothing left to be confused about.
8Paul Crowley
A proof that any generator was indistinguishable from random, given the usual definitions, would basically be a proof that P != NP, so it is an open problem. However we're pretty confident in practice that we have strong generators.
5paulfchristiano
As a pedantic note: if you want to derandomize algorithms, it is necessary (and sufficient) to assume P/poly != E, i.e., that polynomial-size circuits cannot compute all functions computed by exponential-time computations. This is much weaker than P != NP, and is consistent with e.g. P = PSPACE. You don't have to be able to fool an adversary to fool yourself. This is sometimes sloganized as "randomness never helps unless non-uniformity always helps," since it is obvious that P << E and generally believed that P/poly is about as strong as P for "uniform" problems. It would be a big shock if P/poly were so much bigger than P.

But of course, in the worlds where you can't derandomize algorithms in the complexity-theoretic sense, you can still look up at the sky and use the whole universe to get your randomness. What this means is that you can exploit much of the stuff going on in the universe to do useful computation without lifting a finger, and since the universe is so astronomically much larger than the problems we care about, this is normally good enough. General derandomization is extremely interesting and important as a conceptual framework in complexity theory, but useless for actually computing things.
3Paul Crowley
Are you referring to this result? Doesn't seem to be identical to what you said, but very close.
5paulfchristiano
Yeah, I was using "derandomize" slightly sloppily (to refer to a 2^(n^epsilon) slowdown rather than a poly slowdown). The result you cite is one of the main ones in this direction, but there are others (I think you can find most of them by googling "hardness vs. randomness"). If poly-size circuits can't compute E, we can derandomize poly-time algorithms with 2^(m^c) complexity for any c > 0, and if 2^(m^c)-size circuits can't compute E for sufficiently small c, we can derandomize in poly time. Naturally there are other intermediate tradeoffs, but you can't quite get BPP = P from P/poly < E.
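Typeset, the two directions just stated read roughly as follows (a transcription of the claims above, not the papers' exact statements):

$$\mathsf{E} \not\subseteq \mathsf{P/poly} \;\Rightarrow\; \mathsf{BPP} \subseteq \bigcap_{c>0} \mathsf{DTIME}\!\left(2^{m^{c}}\right)$$

$$\mathsf{E} \not\subseteq \mathsf{SIZE}\!\left(2^{m^{c}}\right) \text{ for sufficiently small } c \;\Rightarrow\; \mathsf{BPP} = \mathsf{P}$$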
2Eliezer Yudkowsky
Can you refer me to somewhere to read more about the "usual definitions" that would make this true? If I know the Turing machine, I can compare the output to that Turing machine and be pretty sure it's not random after running the generator for a while. Or if the definition is just lack of expected correlation with bits playing a functional role, then that's easy to get. What's intermediate such that 'indistinguishable' randomness means P!=NP?

You don't sound like you're now much less confident you're right about this, and I'm a bit surprised by that!

I got the ladder down so I could get down my copy of Goldreich's "Foundations of Cryptography", but I don't quite feel like typing chunks out from it. Briefly, a pseudorandom generator is an algorithm that turns a small secret into a larger number of pseudorandom bits. It's secure if every distinguisher's advantage shrinks faster than the reciprocal of any polynomial function. Pseudorandom generators exist iff one-way functions exist, and if one-way functions exist then P != NP.
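In symbols, the standard definition runs: a generator G : {0,1}^n → {0,1}^ℓ(n) with ℓ(n) > n is pseudorandom if every probabilistic polynomial-time distinguisher D has negligible advantage, i.e. for every polynomial q and all sufficiently large n:

$$\mathrm{Adv}_G(D) \;=\; \left|\, \Pr_{s \sim U_n}\big[D(G(s)) = 1\big] \;-\; \Pr_{r \sim U_{\ell(n)}}\big[D(r) = 1\big] \,\right| \;<\; \frac{1}{q(n)}$$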

If you're not familiar with PRGs, distinguishers, advantage, negligible functions etc I'd be happy to Skype you and give you a brief intro to these things.

There are also intros available for free on Oded Goldreich's FoC website.

Here's my simplified intuitive explanation for people not interested in learning about these technical concepts. (Although of course they should!) Suppose you're playing rock-paper-scissors with someone, you're using a pseudorandom number generator, and P=NP. Then your opponent could do the equivalent of trying all possible seeds to see which one would reproduce your pattern of play, and then use that to beat you every time.
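A toy version of that attack (my own sketch, with an absurdly small seed space so the loop terminates quickly; real generators have seed spaces far beyond brute force, which is where the P=NP assumption comes in):

```python
import random

def play_sequence(seed, n):
    """First n rock-paper-scissors moves produced from a given seed."""
    rng = random.Random(seed)
    return [rng.choice("RPS") for _ in range(n)]

SECRET_SEED = 4242                         # known only to the player
observed = play_sequence(SECRET_SEED, 20)  # moves the opponent has seen

def recover_seed(observed, max_seed=100_000):
    """Brute-force the seed space until one seed replays the observed moves."""
    for candidate in range(max_seed):
        if play_sequence(candidate, len(observed)) == observed:
            return candidate
    return None

found = recover_seed(observed)
print(found)  # 4242 -- every future move is now predictable
```

With a 128-bit seed the same loop is hopeless for any classical adversary; P=NP is what would collapse that gap even for cryptographically strong generators.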

In non-adversarial situations (which may be what Eliezer had in mind) you'd have to be pretty unlucky if your cognitive algorithm or environment happens to serve as a distinguisher for your pseudorandom generator, even if it's technically distinguishable.

-1Eliezer Yudkowsky
Okay, makes sense if you define "distinguishable from random" as "decodable with an amount of computation polynomial in the randseed size". EDIT: Confidence is about standard cryptographically strong randomness plus thermal noise being sufficient to prevent expected correlation with bits playing a functional role, which is all that could possibly be relevant to cognition.
6Paul Crowley
Decoding isn't the challenge; the challenge is to make a guess whether you're seeing the output of the PRG or truly random output. Your "advantage" is:

Adv_PRG[Distinguisher] = P(Distinguisher[PRG[seed]] = "PRG") - P(Distinguisher[true randomness] = "PRG")
6JoshuaZ
Note that this is standard notation when one discusses pseudorandom generators. Hence Ciphergoth's comment about "the usual definitions."
0Eliezer Yudkowsky
(Nods.)
0ThisSpaceAvailable
For it to be an open problem, there would have to be no proof either way. Since Eliezer is claiming (or, at least, implying) that there is a proof that no indistinguishable PRNG exists, arguing that there is no proof that an indistinguishable PRNG exists doesn't show that it is an open problem.
0Luke_A_Somers
Quite. They seem to be agreeing that any PRNG can in principle be distinguished, and then Eliezer goes on to say that a mind is a place that will not be able to make that distinction - which ciphergoth didn't begin to address.
2jsteinhardt
You missed the key word "computationally". Of course a pseudorandom generator is a mathematically distinct object, but not in a way that the universe is capable of knowing about (at least assuming that there are cryptographic pseudorandom generators that are secure against quantum adversaries, which I think most people believe).
4[anonymous]
Holy crap, that comment (posted very quickly from a tablet, hence the typos) produced a long comment thread. Yes, quantum tunneling goes on in a lot of biological processes, because it happens in chemistry. There is nothing special about neurology there.

I was mostly referring to writings I've seen where someone proposed that humans must be doing hypercomputation because we don't blow up at the Gödel incompleteness theorem (which made a cognitive scientist in my circle laugh, because we just don't actually deal with the logic), and another, actually posted here, that proposed that digital information was somehow being stored in the pattern of phosphorylation of subunits of microtubules (which made multiple cell biologists laugh, because those structures are so often erased and replaced, and phosphorylation is ridiculously dynamic, moderated by the randomness of enzymes hitting substrates via diffusion, and not retained on any one molecule for long). In the end phosphorylation mostly serves to modify the electrical properties of the membranes and their ability to chemically affect and be affected by each other.

As for 'true randomness': we don't run on algorithms, we run on messy, noisy networks. If we must frame the way cells work in terms of simulating gross behavior, it's a whole lot more like noisy differential equations than discrete logic (a toy example below). I fail to see any circumstance in which you need quantum effects to make those behave as they usually do. On top of that, every single cell is a soup of trillions of molecules bouncing off each other at dozens of meters per second like lottery balls. If that's not close enough to 'true randomness', such that you somehow need quantum effects like the decay of atoms, what is?
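To make the "noisy differential equations" picture concrete, here is a minimal stochastic leaky integrate-and-fire sketch - a textbook toy model, not anything biologically faithful; every parameter below is invented for illustration:

```python
import random
import math

# Leaky integrate-and-fire with additive noise:
#   dV/dt = -(V - V_rest)/tau + I + noise
# Parameter values are illustrative, not fitted to any real neuron.
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -70.0  # mV
TAU, DT = 10.0, 0.1                              # ms
INPUT, NOISE_STD = 2.0, 1.5                      # arbitrary units

def simulate(steps=5000):
    v, spikes = V_REST, []
    for t in range(steps):
        noise = random.gauss(0.0, NOISE_STD) * math.sqrt(DT)
        v += DT * (-(v - V_REST) / TAU + INPUT) + noise
        if v >= V_THRESH:        # threshold crossing = spike
            spikes.append(t * DT)
            v = V_RESET          # reset after firing
    return spikes

print(len(simulate()), "spikes; thermal-style noise, no quantum ingredients")
```

The spike times jitter from run to run purely because of the injected classical noise, which is the point: nothing in the dynamics cares where the randomness comes from.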
3Luke_A_Somers
Even if the Mersenne Twister isn't good enough, you could still get a quantum noise generator hooked up. And that's basically a classical device; it certainly doesn't need any coherence.
2Dreaded_Anomaly
In case anybody needs one: ANU Quantum Random Numbers Server
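A sketch of pulling bytes from it; the endpoint and JSON shape below are from memory of the ANU docs and should be treated as assumptions that may have changed:

```python
import json
import urllib.request

# Endpoint and response format as remembered from the ANU docs;
# treat both as assumptions, not a verified API reference.
URL = "https://qrng.anu.edu.au/API/jsonI.php?length=16&type=uint8"

with urllib.request.urlopen(URL) as resp:
    payload = json.load(resp)

if payload.get("success"):
    print(payload["data"])  # 16 quantum-sourced bytes, e.g. [12, 201, ...]
```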
2Shmi
I suppose we ought to define what "classical" and "quantum" mean.
3DanielLC
It's a quantum effect, but it's one that's easily taken advantage of, as opposed to the crazy difficult stuff a quantum computer can do. As such, a computer that can do that can be considered classical. For that matter, transistors work by exploiting quantum effects. We still don't call them quantum computers.
1Luke_A_Somers
Thanks for the first paragraph. I came here to clarify this, but you beat me to it. More clearly: a quantum noise generator can have a design such that someone who only understands classical mechanics will understand, based on that design, that it is a noise generator. They just won't catch the detail that this noise has an additional property. The above statement may depend on the implementation, but I meant in principle, so there it is.
-1DanielLC
Someone who only understands classical mechanics will not understand a noise generator. Classical physics is deterministic, so noise generators are impossible.
1Luke_A_Somers
Only if you're omniscient. A noise generator is a way of controllably injecting your ignorance of some system into a particular channel.
0DanielLC
You don't need a quantum computer to exploit quantum effects for random number generation. I've heard it's common to do that by sending electricity backwards through a diode and amplifying it.

There's an overview of the "quantum mind" debate among academics (whether quantum effects play an important role in the function of the brain) in FHI's Whole Brain Emulation Roadmap (page 37). This isn't quite the same question you're asking (since even if the brain uses quantum computing, an AI may be able to avoid it through some kind of algorithmic workaround), but I'd guess that most supporters of the "quantum mind" hypotheses would also answer "yes" to your question.

5OrphanWilde
I think there's an important distinction to be drawn between human-level AI and human-like AI, as far as the "quantum mind" hypothesis and its relationship to quantum computing goes. It could be a necessary ingredient to consciousness while being unimportant for intelligence more generally.
1jsteinhardt
Really? I think it's plausible that quantum effects play an important role in the brain, but I'd be very surprised if that was actually an obstacle to AI.

Quantum effects or quantum computation? Technically our whole universe is a quantum effect, but most of it can't be regarded as doing information processing, and of the parts that do information processing, we don't yet know of any that are faster on account of quantum superpositions maintained against decoherence.

2jsteinhardt
I'm not sure where the line would be drawn; I think it's possible that neurons are getting speedups by exploiting quantum effects. I don't think the brain is using them to solve problems that aren't in P.
2Eliezer Yudkowsky
My understanding is that any speedup would be fairly implausible. I mean, isn't the whole lesson of l'affaire D-Wave that you need maintained quantum coherence, and that that requires quantum error-correction, which is why Scott Aaronson didn't believe the D-Wave claims? Or is that just an unusually crisp, human-programming way of doing things?
[-]TrE100

I don't think that most (perhaps any) of the people who say such things (that QC is necessary for AI) understand both what building blocks might be needed for AI and what quantum computers actually can and can't do better or worse than classical computers. It sounds like people throwing together two awesome (but so far impractical) concepts they've heard about, hoping for an even more awesome statement. Like "for colonizing Mars it's necessary that we build room-temperature superconductors first."

Please excuse the ridicule, but I don't see how large quantum computers are necessary for AI. They certainly are helpful, but then, room-temperature superconductors also are...

It's the quantum syllogism:

  1. I don't understand quantum.
  2. I don't understand consciousness.
  3. Therefore, consciousness involves quantum.

(1. need not apply, e.g., if you are Roger Penrose, but the syllogism is still logically fallacious.)

4Eliezer Yudkowsky
Penrose would claim not to understand how 'collapse' occurs.
2nigerweiss
When I was younger, I picked up 'The Emperor's New Mind' in a used bookstore for about a dollar, because I was interested in AI, and it looked like an exciting, iconoclastic take on the idea. I was gravely disappointed when it took a sharp right turn into nonsense right out of the starting gate.
[-]Shmi60

Anything a quantum computer can do, a classical computer can do, if slower.

6ChrisHallquist
Yes, I know. The point is that it seems to be generally accepted that some things (particularly, certain kinds of code breaking) are likely to become doable in a realistic amount of time only with quantum computing, so some people (I'm not one of them) might think AI is in a similar boat.
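For concreteness: in Shor's factoring algorithm, the only step that needs a quantum computer is order finding; the wrapper arithmetic is classical. Here is a toy sketch where brute force stands in for the quantum subroutine - fine for N = 15, hopeless at cryptographic sizes, which is exactly the point:

```python
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n). Brute force stands in for
    the quantum subroutine; classically this is the exponential part."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a=7):
    if gcd(a, n) != 1:              # lucky guess: a shares a factor with n
        return gcd(a, n), n // gcd(a, n)
    r = find_order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                 # bad choice of a; retry with another
    p = gcd(pow(a, r // 2) - 1, n)
    return p, n // p

print(shor_classical(15))  # -> (3, 5)
```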

We have natural intelligence made of meat, processing by ion currents in liquid. Ion currents in liquid have an extremely short decoherence time, way too short to compute with.
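The numbers usually cited here are Tegmark's estimates (quoted from memory, so treat them as approximate): decoherence times of roughly 10^-20 to 10^-13 seconds for neural degrees of freedom, against dynamical timescales of 10^-4 to 10^-3 seconds for neural firing, a mismatch of at least ten orders of magnitude:

$$\frac{\tau_{\mathrm{dyn}}}{\tau_{\mathrm{dec}}} \;\gtrsim\; \frac{10^{-3}\ \mathrm{s}}{10^{-13}\ \mathrm{s}} \;=\; 10^{10}$$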

Are you arguing with students of Deepak Chopra?

2jsteinhardt
While I doubt AI needs QC, I don't think this argument works. The same argument would seem to rule out birds exploiting quantum phenomena to navigate, yet they are thought to do so.
6JoshuaZ
There's a difference between exploiting quantum phenomena and using entanglement. There's a large set of quantum mechanical behavior which doesn't really add much computationally. (To some extent this is part of why we don't call our normal laptops quantum computers even though transistors and hard drives use quantum mechanics to work.)
5Luke_A_Somers
Precisely. That's why we shouldn't be calling our brains 'quantum' either... Or if we do, then that is in no way an argument against using our current off-the-shelf 'quantum' computers! Entanglement is what QM does that classical physics can't do directly (it can in simulation, of course). Everything else is just funny force laws.
2Luke_A_Somers
No, it doesn't. I addressed the ion current nature of nerve action potentials. Birds' directional sensing couples to such a system but is not made of it.
3Shmi
Then the discussion should be about the amount of computations required, not about classical vs quantum.

It's not possible to discuss "the amount of computations required" without specifying a model of computation. Chris is asking whether an AI might be much slower on a classical computer than a quantum computer, to the extent that it's practically infeasible unless large scale quantum computing is feasible. This is a perfectly reasonable question to ask and I think your objection must be due to an over-literal interpretation of his post title or some other misunderstanding.

5Shmi
I agree, there are more steps in between "AI is hard" and "we need QC". However, from what I understand, those who say "QC is required for AI" just use this "argument" (e.g. "AI is at least as hard as code breaking") as an excuse to avoid thinking about AI, not as a thoughtful conclusion from analyzing available data.

Who thinks quantum computing will be necessary for AI?

David Pearce for one:

The theory presented predicts that digital computers - and all inorganic robots with a classical computational architecture - will 1) never be able efficiently to perform complex real-world tasks that require that the binding problem be solved; and 2) never be interestingly conscious since they are endowed with no unity of consciousness beyond their constituent microqualia - here hypothesized to be the stuff of the world as described by the field-theoretic formalism of physics.

2davidpearce
Alas so. IMO a solution to the phenomenal binding problem (cf. http://cdn.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf) is critical to understanding the evolutionary success of organic robots over the past 540 million years - and why classical digital computers are (and will remain) insentient zombies, not unitary minds. This conjecture may be false; but it has the virtue of being testable. If / when our experimental apparatus allows probing the CNS at the sub-picosecond timescales above which Max Tegmark ("Why the brain is probably not a quantum computer") posits thermally-induced decoherence, then I think we'll get a huge surprise! I predict we'll find, not random psychotic "noise", but instead the formal, quantum-coherent physical shadows of the macroscopic bound phenomenal objects of everyday experience - computationally optimised by hundreds of millions of years of evolution, i.e. a perfect structural match. (cf. http://consc.net/papers/combination.pdf) By contrast, critics of the quantum mind conjecture must presumably predict we'll find just "noise".
0huh
Your first link appears to be broken. It seems possible that the OpenWorm project to emulate the brain of the nematode C. elegans on a classical computer may yield results prior to the advent of experimental techniques capable of "probing the CNS at ... sub-picosecond timescales." Would you consider a successful emulation of worm behavior evidence against the need for quantum effects in neuronal function, or would you declare it the worm equivalent of a P-Zombie?
4davidpearce
Huh, yes, in my view C. elegans is a P-zombie. If we grant reductive physicalism, the primitive nervous system of C. elegans can't support a unitary subject of experience. At most, its individual ganglia (cf. http://www.sfu.ca/biology/faculty/hutter/hutterlab/research/Ce_nervous_system.html) may be endowed with the rudiments of unitary consciousness. But otherwise, C. elegans can effectively be modelled classically. Most of us probably wouldn't agree with philosopher Eric Schwitzgebel. ("If Materialism Is True, the United States Is Probably Conscious" http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-130208.pdf) But exactly the same dilemma confronts those who treat neurons as essentially discrete, membrane-bound classical objects. Even if (rightly IMO) we take Strawsonian physicalism seriously (cf. http://en.wikipedia.org/wiki/Physicalism#Strawsonian_physicalism) then we still need to explain how classical neuronal "mind-dust" could generate bound experiential objects or a unitary subject of experience without invoking some sort of strong emergence.
0wedrifid
I think you're right. Mind you, I suspect saying that I disagreed per se would be being generous.
2davidpearce
Wedrifid, yes, if Schwitzgebel's conjecture were true, then farewell to reductive physicalism and the ontological unity of science. The USA is a "zombie". Its functionally interconnected but skull-bound minds are individually conscious; and sometimes the behaviour of the USA as a whole is amenable to functional description; but the USA is not a unitary subject of experience. However, the problem with relying on this intuitive response is that the phenomenology of our own minds seems to entail exactly the sort of strong ontological emergence we're excluding for the USA.

Let's assume, as microelectrode studies tentatively confirm, that individual neurons can support rudimentary experience. How can we rigorously derive bound experiential objects, let alone the fleeting synchronic unity of the self, from discrete, distributed, membrane-bound classical feature processors? Dreamless sleep aside, why aren't we mere patterns of "mind dust"?

None of this might seem relevant to ChrisHallquist's question. Computationally speaking, who cares whether Deep Blue, Watson, or Alpha Dog (etc.) are unitary subjects of experience? But anyone who wants to save reductive physicalism should at least consider why quantum mind theorists are prepared to contemplate a role for macroscopic quantum coherence in the CNS. Max Tegmark hasn't refuted quantum mind; he's made a plausible but unargued assumption, namely that sub-picosecond decoherence timescales are too short to do any computational and/or phenomenological work. Maybe so; but this assumption remains to be empirically tested. If all we find is "noise", then I don't see how reductive physicalism can be saved.
0timtyler
Really? A poll seems as though it would be in order. Maybe if it explained exactly what was meant by "conscious", there might even be a consensus on the topic.
4davidpearce
Tim, perhaps I'm mistaken; you know lesswrongers better than me. But in any such poll I'd also want to ask respondents who believe the USA is a unitary subject of experience whether they believe such a conjecture is consistent with reductive physicalism.
0elharo
Interesting project. I would consider such a result to be at least weak evidence against the need for quantum effects in neuronal function, maybe stronger. It would be still stronger evidence if the project managed to produce such an emulation on the same scale and energy budget as a worm brain. And strongest of all if they managed to hook the emulated brain up to an actual worm body and drive it without external input.

Quantum computers can be simulated on classical computers with exponential slowdown. So even if you think the human mind uses quantum computation, this doesn't mean that the same thing can't be done on a classical machine. Note also that BQP (the set of problems efficiently computable by a quantum computer) is believed (although not proven) not to contain any NP-complete problems.
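To see both halves of that claim in one place, here is a minimal sketch of a classical statevector simulator; the exponential cost is just the size of the state vector:

```python
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to `target` in an n-qubit state vector."""
    op = np.array([[1]], dtype=complex)
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                      # |000>
for q in range(n):
    state = apply_gate(state, H, q, n)
print(np.round(state, 3))           # uniform superposition over 8 basis states

# The catch: memory is 2^n complex numbers. At n = 50 that is
# 2**50 * 16 bytes, about 18 petabytes -- the "exponential slowdown".
```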

Note also that at a purely practical level, since quantum computers can do a lot of things better than classical computers and our certainty about their strength is much lower, trying to run an AI on a quantum computer is a really bad idea if you take the threat of AI going FOOM seriously.

6jsteinhardt
An exponential slowdown basically means that it can't be done. If you have an oracle for EXPTIME then you're basically already set for most problems you could want to solve.
3JoshuaZ
That is practically true; EXPTIME is one of the few classes we can show is large enough that we can even prove it properly contains P. But in context this isn't as bad as it looks. The vast majority of interesting things we can do on quantum computers take, in practice, much less than exponential time (look at factoring, for example). In fact, BQP actually lives inside PSPACE, so this shouldn't be that surprising. But practical issues aside, most of the arguments about using quantum computers to do AI or consciousness involve claims that they are fundamentally necessary. The fact that we can simulate them with sufficient slowdown demonstrates that at least that version of the thesis is false.

These people's objections are not entirely unfounded. It's true that there is little evidence the brain exploits QM effects (which is not to say that it is completely certain it does not). However, if you try to pencil in real numbers for the hardware requirements of a whole brain emulation, they are quite absurd. Assumptions differ, but it is possible that building a computational system with sufficient nodes to emulate all 100 trillion synapses would cost hundreds of billions to over a trillion dollars if you had to use today's hardware to do it (see the sketch below).
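A back-of-envelope version of that pencilling-in; every number below is an assumption, open to order-of-magnitude quibbles:

```python
# Every number below is an assumption for illustration, not a measurement.
synapses = 1e14              # ~100 trillion synapses (figure from the comment)
synapses_per_node = 1e5      # assumed capacity of one real-time compute node
cost_per_node = 1_000        # assumed dollars per node, hardware only

nodes = synapses / synapses_per_node        # 1e9 nodes
cost = nodes * cost_per_node                # ~$1e12
print(f"{nodes:.0e} nodes, ~${cost:,.0f}")  # 1e+09 nodes, ~$1,000,000,000,000
```

Shifting synapses_per_node by a couple of orders of magnitude is exactly what separates "merely expensive" from "absurd", which is why assumptions differ so much.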


3nigerweiss
Building a whole brain emulation right now is completely impractical. In ten or twenty years, though... well, let's just say there are a lot of billionaires who want to live forever, and a lot of scientists who want to be able to play with large-scale models of the brain. I'd also expect de novo AI to be capable of running quite a bit more efficiently than a brain emulation for a given amount of optimization power. There's no way simulating cell chemistry is a particularly efficient way to spend computational resources to solve problems.
-2GeraldMonroe
An optimal de novo AI, sure. Keep in mind that human beings have to design this thing, so the first version will be very far from optimal. I think it's a plausible guess that it will need on the order of the same hardware as an efficient whole brain emulator. And this assumption shows why all the promises made by past AI researchers have so far failed: we are still a factor of 10,000 or so away from having the hardware, even using supercomputers.

In principle, it should be quite possible to map a human brain, replace each neuron with a chip, and have a human-level AI. Such a design would not have the long-term adaptability of the human brain, but it'd pass a Turing test trivially. Obviously, the cost involved is prohibitive, but it should be a sufficient boundary case to show that QC is not strictly necessary. It may still be helpful, but I'm sufficiently skeptical of the viability of commercialized QC to believe that the first "real" AI will be built from silicon.

3Luke_A_Somers
Note for the downvoters of the above: I suspect you're downvoting because you think a complete hardware replacement of neurons would result in long-term adaptability. This is so, but it is not what was mentioned here - replacing each neuron with a momentarily equivalent chip that does not have the ability to grow new synaptic connections would provide consciousness but would run into long-term problems as described.
0Alsadius
Yeah, I was using the non-adaptive brain as a baseline reductio ad absurdum. Obviously, it's possible to do better - the computing power wasted in the above design would be monumental, and the human brain is not such a model of efficiency that you couldn't do better by throwing a few extra orders of magnitude at it. But it's something that even an AI skeptic should recognize as a possibility.
4TheOtherDave
If we're going to be picky, also the idea that only neurons are relevant isn't right; if you replaced each neuron with a neuron-analog (a chip or a neuron-emulation-in-software or something else) but didn't also replace the non-neuron parts of the cognitive system that mediate neuronal function, you wouldn't have a working cognitive system. But this is a minor quibble; you could replace "neuron" with "cell" or some similar word to steelman your point.
2nigerweiss
Yeah. The glia seem to serve some pretty crucial functions as information-carriers and network support infrastructure - and if you don't track hormonal regulation properly, you're going to be in for a world of hurt. Still, I think the point stands.

I don't think we'll need quantum computing specifically for AI.

I do think that it's possible, though, that we might need to make significant improvements in hardware before we can run anything like a human-level AI.

I begin to think that we should taboo the words "AI" and "intelligence" when talking about these subjects. It's not obvious to me that, for example, whole brain emulation and automated game playing have much in common at all. There are other forms of "AI" as well. Consequently we seem to be talking past each other as often as not.

2TheOtherDave
For a while I got into the habit of talking about systems that optimize their environment for certain values, rather than talking about intelligences (whether NI, AI, or AGI). I haven't found that it significantly alters the conversations I'm in, but I find it gives me more of a sense that I know what I'm talking about. (Which might be a bad thing, if I don't.)

There are a lot of things we simply don't know about the brain, and even more we don't know about consciousness and intelligence in the human sense. In many ways, I don't think we even have the right words to talk about this. Last I checked, scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain...

7Nisan
Skimming the article you linked, it looks like Penrose believes human mathematical intuition comes from quantum-gravitational effects. So on Penrose's view it might be possible that AGI requires a quantum-gravitational hypercomputer, not just a quantum computer.
3JoshuaZ
Note that according to Scott Aaronson (in his recent book), Penrose thinks that human minds can solve the Halting problem and conjectures that humans can even solve the Halting problem for machines with access to a Halting oracle.
3nigerweiss
Sure? No. Pretty confident? Yeah. The people who think microtubules and exotic quantum-gravitational effects are critical for intelligence/consciousness are a small minority of (usually) non-neuroscientists who are, in my opinion, allowing some very suspect intuitions to dominate their thinking. I don't have any money right now to propose a bet, but if it turns out that the brain can't be simulated on a sufficient supply of classical hardware, I will boil, shred, and eat my entire (rather expensive) hat. Daniel Dennett's papers on the subject seem to be making a lot of sense to me. The details are still fuzzy, but I find that having read them, I am less confused on the subject, and I can begin to see how a deterministic system might be designed that would naturally have behavior that would cause it to say the sorts of things about consciousness that I do.
1Baughn
If you find someone to bet against you, I'm willing to eat half the hat.
0[anonymous]
We could split it three ways, provided agreeing in principle despite doubting that an actual complete human brain will ever be simulated counts.
1DSherron
"More susceptible" is not the same as "susceptible". If it's bigger than an atom, we don't need to take quantum effects into account to get a good approximation, and moreover any effects that do happen are going to be very small and won't affect consciousness in a relevant way (since we don't experience random changes to consciousness from small effects). There's no need to accurately model the brain to perfect detail, just to roughly model it, which almost certainly does not involve quantum effects at all. Incidentally, there's nothing special about quantum randomness. Why should consciousness be related to splitting worlds in a special way? Once you drop the observer-focused interpretations, there's nothing related between them. If the brain needs randomness there are easier sources.
[-]iDante-10

There will be AI long before there are quantum computers.

There are already quantum computers. Just really small quantum computers.

1Flipnash
Therefore, AI has already arrived. /joke

My video games have had AI for decades. Awful AI, but not really any more awful than a quantum computer that successfully factors the number 15.

0JoshuaZ
So, DanielLC has pointed out that there are already quantum computers. A charitable interpretation of your statement might be that there will be AI long before there are general quantum computers powerful enough to do practical computations. Is this what you meant? If so, can you explain what leads you to this conclusion?

At the very least, I'm relatively certain that quantum computing will be necessary for emulations. It's difficult to say with AI because we have no idea what their cognitive load is like, considering we have very little information on how to create intelligence from scratch yet.

1JoshuaZ
Why?