Human brains are demonstrably not equivalent in power to each other, let alone AGIs. Try teaching an 85 IQ person quantum physics and tell me our brains are universal learning machines.
Some brains being broken in various ways is not evidence that other brains are not universal learning machines. My broken laptop is not evidence that all computers are not Turing machines.
I think this elides my query.
Even if it's true, it has no bearing on the question. The human range may be very wide, but it does not follow that it is so narrow that more powerful systems exist beyond it. It does not touch on "the class of problems that human civilisation can solve without developing human level general AI", which is my proxy for the class of problems the human brain can solve.
And the fact that some human can accomplish a task means the brain is capable of that task. The fact that other humans cannot accomplish the same task has no bearing on that.
There are problems that are too complex for humans to solve without the help of a computer. To the extent that it's possible for humans to develop AGI, you can say that if you allow help from computers, any problem that AGI can solve is by definition also a problem that humans can solve.
If you ask a question such as "Why is move X a better Go move than Y?", then today in some cases the only good answer is "Because AlphaGo says so". It might be possible for an AGI to definitively say that move X is better than Y and give the perfect Go move for any situation, but the reasoning might be too complex for humans to understand without falling back on "because the neural net says so".
You might argue that a neural net that can definitively say what the best Go move is doesn't count as AGI. But if you look at a real-world problem like making the perfect stock investment given the available data, which requires integrating a lot of different data sources, the complexity of that problem might require a neural net of AGI-level complexity.
If we are fundamentally non-universal, there are problems we cannot even describe. Fermat's Last Theorem cannot even be stated in Pirahã.
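For concreteness, the theorem in question can be written out in standard notation (my own rendering of the well-known statement, added purely for illustration):

```latex
% Fermat's Last Theorem: no positive integers a, b, c satisfy
% a^n + b^n = c^n for any integer exponent n greater than 2.
\[
  \forall\, a, b, c \in \mathbb{Z}^{+},\ \forall\, n \in \mathbb{Z}_{>2}:\quad
  a^{n} + b^{n} \neq c^{n}
\]
```

The comment's point, as I read it, is that a single line like this presupposes linguistic machinery (exact numbers, arbitrary exponents, quantification) that not every language provides.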
the question isn't what class of problems can be understood, it's how efficiently you can jump to correct conclusions, check them, and build on them. any human can understand almost any topic, given enough interest and enough willingness to admit error that they actually try enough times, fail, and see how to correct themselves. but for some, it might take an unreasonably long time to learn some fields, and they're likely to get bored before perseverance compensates for low efficiency at jumping to correct conclusions.
in the same way, a sufficiently strong ai is likely to be able to find cleaner representations of the same part of the universe's manifold of implications, and potentially render the implications in parts of possibility space much further away than a human brain could given the same context, actions, and outcomes.
in terms of why we expect it to be stronger: because we expect someone to be able to find algorithms that are able to model the same parts of the universe as advanced physics folks study, with the same or better accuracy in-distribution and/or out-of-distribution, given the same order of magnitude of energy burned as it takes to run a human brain. once the model is found it may be explainable to humans, in fact! the energy constraint seems to push it to be, though not perfectly. and likely the stuff too complex for humans to figure out at all is pretty rare - it would have to be pseudo-laws about a fairly large system, and would probably require seeing a huge amount of training data to figure it out.
semi-chaotic fluid systems will be the last thing intelligence finds exact equations for.
It's a bit of a strange question - why care if humans will solve everything that an AI will solve?
But ok.
Suppose you put an AI to solving a really big instance of a problem that it's really good at, so big of an instance that it takes an appreciable fraction of the lifespan of the universe to solve it.
In that case you already seem to be granting that it may take humans much longer to solve it, which I would assume could imply that humans run out of time or resources in the universe before they solve it.
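To make "an appreciable fraction of the lifespan of the universe" concrete, here is a standard back-of-envelope example of my own (the commenter does not name a specific problem): exhaustively searching a 128-bit key space at an optimistic 10^18 operations per second.

```latex
% Brute-force search of a 128-bit key space at an exascale rate:
\[
  \frac{2^{128}\ \text{operations}}{10^{18}\ \text{ops/s}}
  \approx 3.4 \times 10^{20}\ \text{s}
  \approx 1.1 \times 10^{13}\ \text{years}
\]
```

That is several hundred times the current age of the universe (roughly 1.4 × 10^10 years), so "running out of time" is not an exotic scenario for naive brute-force approaches.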
It's a bit of a strange question - why care if humans will solve everything that an AI will solve?
Because I started out convinced that human cognition is qualitatively closer to superintelligent cognition than it is to many expressions of animal cognition (I find the "human - ant dynamic" a very poor expression for the difference between human cognition and superintelligent cognition).
...But ok.
Suppose you put an AI to solving a really big instance of a problem that it's really good at, so big of an instance that it takes an appreciable fraction of the lifespan of the universe to solve it.
Strong upvote for the great question—I don't have a definite answer for you, and would potentially be willing to concede the "universality" of human brains by your definition. I'm not sure how much that changes anything, though. For all practical purposes, I think we're in agreement that, say, most complex computational problems can't be solved by humans within a reasonable timeframe, but could be solved by a sufficiently large superintelligence fairly quickly.
Faster clock cycles (5 GHz vs 0.1 - 2 GHz)
This is a typo; the source says "average firing rates of around 0.1Hz-2Hz", not GHz. This seems too low as a "clock speed", since obviously we can think way faster than 2 operations per second; my cached belief was 'order of 100 Hz'.
Thanks for pointing out the typo.
The cached belief is something I've repeatedly heard from Yudkowsky and Bostrom (or maybe I just reread/relistened to the same pieces from them), but as far as I'm aware, it has no proper citations.
I recall some mild annoyance at it not being substantiated. And I trust AI Impacts' judgment here more than Yudkowsky/Bostrom's.
This seems too low as a "clock speed", since obviously we can think way faster than 2 operations per second; my cached belief was 'order of 100 Hz'.
I think that's average firing rates. An average rate of < 1 thought per second doesn't actually seem implausible? Our burst cognitive efforts exceed that baseline ("overclocking"), but it tires us out pretty quickly.
[Previously]
Introduction
After reading and updating on the answers to my previous question, I am still left unconvinced that the human brain is qualitatively closer to a chimpanzee's (let alone an ant's or earthworm's) than it is to hypothetical superintelligences.
I suspect a reason behind my obstinacy is an intuition that human brains are "universal" in a sense that chimpanzee brains are not. So, you can't really have other engines of cognition that are more "powerful" than human brains (in the way a Turing Machine is more powerful than a Finite State Automaton), only engines of cognition that are more effective/efficient.
By "powerful" here, I'm referring to the class of "real world" problems that a given cognitive architecture can learn within a finite time.
Core Claim
Human civilisation can do useful things that chimpanzee civilisation is fundamentally incapable of:
There do not seem to be similarly useful things that superintelligences would be capable of but that humans are, in turn, fundamentally incapable of: useful things that we could never accomplish in the expected lifetime of the universe.
Superintelligences seem like they would just be able to do the things we are already — in principle — capable of, but more effectively and/or more efficiently.
Cognitive Advantages of Artificial Intelligences
I expect a superintelligence to be superior to humans quantitatively via:
- Faster clock cycles (5 GHz vs 0.1 - 2 Hz)[1]
(All of the above could potentially be several orders of magnitude differences vs the Homo sapiens brain, given sufficient compute.)
And qualitatively via:
Cognitive Superiority of Artificial Intelligence
I think the aforementioned differences are potent, and would confer on the AI a considerable advantage over humans:
For example:
Equivalent Power?
My intuition is that there will be problems that it would take human mathematicians/scientists/philosophers centuries to solve that such an AI can probably get done in reasonable time frames. That's powerful.
But it still doesn't feel as large as the chimp-to-human gap. It feels like the AIs can do things much quicker/more efficiently than humans; solve problems faster than we can.
It doesn't feel like the AI can solve problems that humans will never solve period[2], in the way that humans can solve many problems that chimpanzees will never solve period[3] (most of mathematics, physics, computer science, etc.).
It feels to me that the human brain — though I'm using human civilisation here as opposed to any individual human — is still roughly as "powerful" as this vastly superior engine of cognition. We can solve the exact same problems as superintelligences; they can just do it more effectively/efficiently.
I think the last line above is the main sticking point. Human brains are capable of solving problems that chimpanzee society will never solve (unless they evolve into a smarter species). I am not actually convinced that this much smarter AI can solve problems that humans will never solve.
Universality?
One reason the human brain would be equivalently powerful to a superintelligence would be that the human brain is "universal" in some sense (note that it would have to be a sense in which chimpanzee brains are not universal). If the human brain were capable of solving all "real world" problems, then of course there wouldn't be any other engines of cognition that were strictly more powerful.
I am not able to provide a rigorous definition of the sense of "universality" I mean here — but to roughly gesture in the direction of the concept I have in mind — it's something like "can eventually learn any natural "real world"[4] problem set/domain that another agent can learn".
Caveat
I think there's an argument that if there are (real world) problems that human civilisation can never solve[5] no matter what, we wouldn't be able to conceive of/imagine them. I think this is kind of silly, and I find myself distrustful/sceptical of that line of reasoning.
We have universal languages (our natural languages also seem universal), so a description of such problems should be presentable in such languages. Though perhaps the problem description is too large to fit in working memory. But even then, it can still be stored electronically.
But more generally, I do not think that "I can coherently describe a problem" implies "I can solve the problem". There are many problems that I can describe but not solve[6], and I don't expect this to be broadly different for humans. If there are problems we cannot solve, I would still expect that we are able to describe them. I welcome suggestions for problems that you think human civilisation can never solve, but it's not particularly my primary inquiry here.
[1] To be clear, I do not actually expect that the raw speed difference between CPU clock cycles and neuronal firing rates will straightforwardly translate into a speed-of-thought difference between human and artificial cognition (I expect a great many operations may be involved in a single thought, and I suspect intelligence won't just be that easy), but the sheer nine-order-of-magnitude difference does deserve consideration.
Furthermore, it needs to be stressed that the 0.1 - 2Hz figure is a baseline/average rate. Our maximum rate during periods of intense cognitive effort could well be significantly higher (this may be thought of as "overclocking").
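Spelling out the arithmetic behind the nine-orders-of-magnitude figure (my own restatement, using the generous 2 Hz end of the quoted firing-rate range):

```latex
% Ratio of a 5 GHz CPU clock to a 2 Hz average neuronal firing rate:
\[
  \frac{5 \times 10^{9}\ \text{Hz}}{2\ \text{Hz}} = 2.5 \times 10^{9} \approx 10^{9.4}
\]
```

Even against the commenters' "order of 100 Hz" figure, the gap would still be seven to eight orders of magnitude.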
[2] To be clear, when I say "humans will never solve", I am imagining human civilisation, not an individual human scientist. There are some problems that remained unsolved by civilisation for centuries. And while we may accelerate our solution of hard problems by developing thinking machines, I think we are only accelerating said solutions. I do not think there are problems that civilisation will just never solve if we never develop human level general AI.
[3] Assuming that the intelligence of chimpanzees is roughly held constant or only drifts within a narrow range across generations. Chimpanzees evolving to considerably higher levels of intelligence would not still be "chimpanzees" for the purpose of my questions.
[4] Though it may be better to replace "real world" with "useful". There may be some practical tasks that some organisms engage in, which the human brain cannot effectively "learn". But those tasks aren't useful for us to learn, so I don't feel they would be necessary for the notion of universality I'm trying to gesture at.
[5] In case it was not clear, for the purposes of this question, the problems that "human civilisation can solve" refer to those problems that human civilisation can solve within the lifetime of the universe without developing human level general AI.
[6] The list of unsolved problems in computer science, the list of unsolved problems in physics, ... provide many other examples.