The reason the human brain can get away with such a low "clock speed" is that intelligence is an embarrassingly parallel problem. Realtime constraints and the clock speed of a chip put a limit on how deep the stack of neural net layers can be, but no limit on how wide the neural net can be, and according to deep learning theory a sufficiently wide net is complete for all problems (in the sense of universal approximation).
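To make the asymmetry concrete, here is a back-of-envelope sketch (both numbers are illustrative assumptions, not measurements): a fixed reaction-time budget caps how many sequential layers you can afford, while width is bounded only by how much parallel hardware you can throw at each layer.

```python
# Back-of-envelope: a realtime budget caps depth, not width.
# Both numbers are illustrative assumptions.

reaction_budget_s = 0.3       # assume the system must respond within ~300 ms
time_per_layer_s = 1e-3       # assume ~1 ms per sequential layer

max_depth = reaction_budget_s / time_per_layer_s
print(f"Max sequential layers within the budget: {max_depth:.0f}")   # ~300

# Width is different: with enough parallel hardware, making each layer
# wider need not add to per-layer latency, so the realtime budget puts
# no comparable ceiling on it.
```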
We also haven't seen yet how big an impact neuromorphic architectures could have. It could be several orders of magnitude. Add in the ability of multiple intelligent units to work together just as humans do (but with less in-fighting), and it's hard to say just how much effective collective intelligence they could express.
Thanks for the reply. Do you have any position or intuitions on question 1 or 2?
Does more inference compute reduce inference time?
Meant to comment on this a while back but forgot. I have thought about this also and broadly agree that early AGI with 'thoughts' at GHz speeds is highly unlikely. Originally this expectation arose because pre-ML EY and the community broadly associated thoughts with CPU ops, but in practice thoughts are more like forward passes through the model.
As Connor Sullivan says, the reason brains can have low clock rates is that our intelligence algorithms are embarrassingly parallel, as is current ML. Funnily enough, for large models (and definitely if we were to run forward passes through NNs as large as the brain), inference latency is already within an OOM or so of the brain's (~100ms). Thanks to parallelisation you can distribute your forward pass across many GPUs to decrease latency, but you eventually get throttled by the networking overhead.
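As a rough illustration of that throttling, here is a deliberately crude latency model; the layer count, per-layer compute time, and per-hop interconnect latency are all assumed numbers, not benchmarks of any real system.

```python
# Deliberately crude latency model for a forward pass pipelined across GPUs.
# Layer count, per-layer compute, and per-hop latency are assumed numbers.

layers = 96                   # e.g. a large transformer (assumption)
compute_per_layer_s = 1e-3    # assumed per-layer compute time on a single GPU
comms_hop_s = 1e-5            # assumed per-hop interconnect latency

def forward_latency(num_gpus):
    """Total latency when the layer stack is split evenly across num_gpus."""
    compute = layers * compute_per_layer_s / num_gpus   # compute shrinks with more GPUs
    comms = (num_gpus - 1) * comms_hop_s                # each device boundary adds a hop
    return compute + comms

for n in (1, 8, 64, 512, 4096):
    print(f"{n:>5} GPUs: {forward_latency(n) * 1e3:8.2f} ms")
# Latency drops at first, bottoms out, then rises again once the
# networking term dominates the shrinking compute term.
```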
The brain, interestingly, achieves its relatively low latency by being highly parallel and shallow. The brain is not that many 'layers' deep. Even though each neuron is slow, the brain can perform core object recognition in <300ms with only about 10 synaptic transmissions from retina -> IT. Compare this to current ResNets, which are >>10 layers deep. The brain manages this through some combination of better architecture, a better inference algorithm, and adaptive compute that trades time for depth: you don't have to do all your thinking in one forward pass, but instead have recurrent connections so you can keep pondering and improving your estimates over multiple 'passes'.
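A minimal sketch of that adaptive-compute idea (sizes, dynamics, and the stopping threshold are all illustrative assumptions, not a model of the brain): one shallow recurrent circuit is applied repeatedly, so extra thinking time substitutes for extra depth.

```python
import numpy as np

# One shared shallow recurrent "layer"; all shapes and constants are assumptions.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 64))

def ponder(x, threshold=0.2, max_passes=50):
    """Keep refining the state until one of 10 classes clearly leads."""
    state = np.zeros(64)
    for n_passes in range(1, max_passes + 1):
        state = np.tanh(W @ state + x)          # reuse the same shallow circuit
        logits = state[:10]
        probs = np.exp(logits) / np.exp(logits).sum()
        if probs.max() > threshold:             # easy inputs stop early,
            break                               # hard ones get more passes
    return int(probs.argmax()), n_passes

label, passes = ponder(rng.normal(size=64))
print(f"class {label} after {passes} recurrent passes")
```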
Neuromorphic hardware can ameliorate some of these issues but not others. Potentially, it allows for much more efficient parallel processing and lets you replace a multi-GPU cluster with one really big neuromorphic chip. Theoretically this could enable forward passes at GHz speeds, but probably not within the next decade (technically, if you use pure analog or optical chips, you can get even faster forward passes!). Downsides are the unknown hardware difficulty of the more exotic designs and the usual on-chip data-movement costs. Energy intensity will also be huge at these speeds. Another bottleneck you run into in practice is simply the speed of encoding/decoding data at the analog-digital interface.
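On the energy point, a back-of-envelope calculation (the energy-per-pass figure is an assumed placeholder) shows why GHz-rate 'thoughts' imply enormous power draw:

```python
# Back-of-envelope power estimate for GHz-rate forward passes.
# The energy-per-pass figure is an assumed placeholder, not a measurement.

energy_per_pass_j = 1.0      # assume ~1 J per forward pass of a large model
passes_per_second = 1e9      # one "thought" per nanosecond

power_gw = energy_per_pass_j * passes_per_second / 1e9
print(f"Required power: ~{power_gw:.0f} GW")   # ~1 GW under these assumptions
# Even if energy per pass fell by three orders of magnitude,
# this would still be a megawatt-scale machine.
```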
Even on GPU clusters, early AGI can probably improve inference latency by a few OOMs, up to 100-1000s of forward passes per second, just from low-hanging hardware/software improvements. Additional benefits an AGI could have are:
1.) Batching. GPUs are great at handling batches rapidly. The AGI can 'think' about 1000 things in parallel, whereas the brain has to operate at batch size 1. Interestingly, this is also a potential limitation of a lot of neuromorphic hardware (see the throughput sketch after this list).
2.) Direct internal access to serial compute. Imagine you had a Python REPL in your brain that you could query and instantly get responses from. The same goes for instant internal database lookups.
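Here is the batching sketch referred to above: a toy comparison of batch-size-1 versus batched inference, where both latency figures are assumptions chosen only to show the shape of the effect.

```python
# Toy batching comparison; latency figures are assumptions, not benchmarks.

latency_batch_1_s = 0.020       # assumed forward-pass latency at batch size 1
latency_batch_1000_s = 0.100    # assumed latency at batch size 1000 (not 1000x slower,
                                # because the GPU was mostly idle at batch size 1)

throughput_b1 = 1 / latency_batch_1_s              # ~50 "thoughts" per second
throughput_b1000 = 1000 / latency_batch_1000_s     # ~10,000 "thoughts" per second

print(f"batch 1:    {throughput_b1:,.0f} items/s")
print(f"batch 1000: {throughput_b1000:,.0f} items/s")
```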
Strongly upvoted, I found this very valuable/enlightening. I think you should make this a top level answer.
"Just like the smartest humans alive only a thousand times faster" is actually presented as a conservative scenario to pump intuition in the right direction. It's almost certainly achievable by known physics, even if it would be very expensive and difficult for us to achieve directly.
An actual superintelligence will be strictly better than that, because its early iterations will design systems better than we can, and later iterations running on those systems will be able to design systems more effective than we can possibly imagine or even properly comprehend if they were explained to us. They might not be strictly faster, but speed is much easier to extrapolate and communicate than actual superhuman intelligence. People can grasp what it means to think much faster in a way that they fundamentally can't grasp what it means to actually be smarter.
So in a way, the actual question asked here is irrelevant. A speedup is just an analogy to try to extrapolate to something - anything - that is vastly more capable than our thought processes. The reality would be far more powerful still in ways that we can't comprehend.
"The reality would be far more powerful still in ways that we can't comprehend."
I am unconvinced by this.
I get your broader point though.
That said, I am still curious about how feasible speed superintelligences are in practice. I don't think it's an irrelevant question.
Disclaimer
I am very ignorant about machine learning.
Introduction
I've frequently heard suggestions that a superintelligence could dominate humans by thinking a thousand or million times faster than a human. Is this actually a feasible outcome for prosaic ML systems?
Why I Doubt Speed Superintelligence
One reason I think this might not be the case is that the "superpower" of a speed superintelligence is faster serial thought. However, I'm under the impression that we're already running into fundamental limits on serial processing speed and can't really make processors go much faster:
Of course the "clock rate" of the human brain is much slower, but it's not as though ML models are ever going to run on processors with significantly faster clock rates than today's. Even in 2062, we probably will not have any production processors with a >50 GHz base clock rate (it may well be considerably slower). Rising compute availability for ML will continue to be driven by parallel processing techniques.
GPT-30 would not have considerably faster serial processing than GPT-3. And I'm under the impression that "thinking speed" is mostly a function of serial processing speed?
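To make my own intuition explicit, here is the rough relation I have in mind; both numbers are made-up assumptions, included only to show the shape of the dependence.

```python
# Rough relation I'm gesturing at; both numbers are made-up assumptions.

clock_hz = 1.5e9                   # assumed processor clock rate
serial_steps_per_thought = 1e6     # assumed sequential (non-parallelisable) steps per forward pass

thoughts_per_second = clock_hz / serial_steps_per_thought
print(f"~{thoughts_per_second:,.0f} forward passes per second")   # ~1,500/s here
# "Thinking speed" could therefore rise either from faster clocks
# or from a shorter serial path per thought.
```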
Questions
The above said, my questions: