I often encounter confusion about whether the fact that neurons in the brain typically fire at frequencies of 1-100 Hz, while the clock frequency of a state-of-the-art GPU is on the order of 1 GHz, means that AIs think "many orders of magnitude faster" than humans. In this short post, I'll argue that this way of thinking about "cognitive speed" is quite misleading.
The clock speed of a GPU is indeed meaningful: there is a clock inside the GPU that provides a periodic signal at a frequency of ~ 1 GHz. However, the corresponding period of ~ 1 nanosecond does not correspond to the timescale of any useful computation done by the GPU. For instance, on the A100, a read/write access to the L1 cache takes ~ 30 clock cycles, and this number goes up to 200-350 clock cycles for the L2 cache. These latencies add up, together with other sources of delay such as kernel launch overhead, so that an A100 operating at its boost clock of 1.41 GHz has a latency of around ~ 4.5 microseconds before it can perform any matrix multiplication at all.
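To put that number in the GPU's own units, here is a back-of-envelope sketch; the ~ 4.5 microsecond figure is the empirical latency quoted above, not something derived from the clock speed:

```python
# Back-of-envelope: how many clock cycles fit into the ~ 4.5 microsecond
# minimum matrix multiplication latency on an A100?
clock_hz = 1.41e9            # A100 boost clock, 1.41 GHz
matmul_latency_s = 4.5e-6    # empirical minimum matmul latency quoted above

cycles = matmul_latency_s * clock_hz
print(f"~{cycles:,.0f} clock cycles")   # ~6,345: the smallest useful unit of
                                        # work spans thousands of clock periods
```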
The timescale for a single matrix multiplication gets longer if we also demand that the matrix multiplication achieve something close to the peak FLOP/s performance reported in the GPU datasheet. As the plot above shows, a matrix multiplication achieving good hardware utilization can't take much less than ~ 100 microseconds.
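As a rough sanity check on that figure, we can ask how long an idealized dense matmul would take even at 100% utilization. This is a sketch that assumes square n × n matrices and the A100's ~ 312 TFLOP/s half-precision tensor-core peak (an assumed spec, not a number taken from the post):

```python
# Time for an n x n x n matmul at peak throughput (idealized, 100% utilization).
# Assumes the A100's half-precision tensor-core peak of ~ 312 TFLOP/s.
peak_flops = 312e12

for n in [1024, 2048, 4096, 8192]:
    flop = 2 * n**3                 # multiply-adds in a dense square matmul
    t = flop / peak_flops
    print(f"n={n}: {t * 1e6:8.1f} microseconds")
# n=1024:      6.9 us  -> too small to keep the hardware busy
# n=4096:    440.5 us  -> matmuls large enough for good utilization
#                         naturally take on the order of 100+ us
```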
On top of this, state-of-the-art machine learning models today consist of many matrix multiplications and nonlinearities chained in a row. For example, a typical language model could have on the order of ~ 100 layers, with each layer containing at least 2 serial matrix multiplications for the feedforward layers[1]. If these were the only places where a forward pass incurred time delays, a sequential forward pass could not take less than (100 microseconds/matmul) * (200 matmuls) = 20 ms or so. At this speed, we could generate 50 sequential tokens per second, which is not too far from human reading speed. This is why you haven't seen LLMs served at per-token latencies much faster than this.
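Spelling out that arithmetic (a sketch using the post's round numbers, which are order-of-magnitude estimates rather than measurements):

```python
# Serial latency of one forward pass, using the post's round numbers.
layers = 100                  # order-of-magnitude depth of a large language model
serial_matmuls_per_layer = 2  # feedforward matmuls that must run back-to-back
matmul_time_s = 100e-6        # ~ 100 us for a well-utilized matmul (see above)

forward_pass_s = layers * serial_matmuls_per_layer * matmul_time_s
print(f"forward pass: {forward_pass_s * 1e3:.0f} ms")     # ~ 20 ms
print(f"sequential tokens/s: {1 / forward_pass_s:.0f}")   # ~ 50
```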
We can, of course, process many requests at once in these 20 milliseconds: the bound is not that we can generate only 50 tokens per second, but that we can generate only 50 sequential tokens per second, meaning that the generation of each token needs to know what all the previously generated tokens were. It's much easier to handle requests in parallel, but that has little to do with the "clock speed" of GPUs and much more to do with their FLOP/s capacity.
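To make the throughput-versus-latency distinction concrete, here is an illustrative sketch; the batch size is an arbitrary assumption:

```python
# Throughput vs. sequential speed: batching raises the former, not the latter.
forward_pass_s = 20e-3    # ~ 20 ms per serial forward pass (from above)
batch_size = 128          # arbitrary illustrative number of concurrent requests

sequential_tokens_per_s = 1 / forward_pass_s       # ~ 50, fixed by latency
total_tokens_per_s = batch_size / forward_pass_s   # ~ 6,400, limited by FLOP/s
print(f"sequential: {sequential_tokens_per_s:.0f} tokens/s")
print(f"aggregate across the batch: {total_tokens_per_s:,.0f} tokens/s")
```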
The human brain is estimated to perform the computational equivalent of around 1e15 FLOP/s. This is on par with NVIDIA's latest machine learning GPU (the H100), and the brain achieves it using only 20 W of power, compared to the 700 W drawn by an H100. In addition, each forward pass of a state-of-the-art language model today likely takes somewhere between 1e11 and 1e12 FLOP, so the computational capacity of the brain alone is sufficient to run inference on these models at speeds of 1k to 10k tokens per second. There is, in short, no meaningful sense in which machine learning models today think faster than humans do, though they are certainly much more effective at parallel tasks because we can run them on clusters of many GPUs.
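The 1k to 10k tokens per second figure is just the ratio of those two estimates (a sketch; both inputs are the rough estimates quoted above):

```python
# Tokens/s the brain's estimated compute budget could sustain if it were
# spent on LM inference. All inputs are rough order-of-magnitude estimates.
brain_flops = 1e15                    # estimated brain compute (FLOP/s)
flop_per_forward_pass = (1e11, 1e12)  # estimated FLOP per LM forward pass

for flop in flop_per_forward_pass:
    print(f"{flop:.0e} FLOP/token -> {brain_flops / flop:,.0f} tokens/s")
# 1e+11 FLOP/token -> 10,000 tokens/s
# 1e+12 FLOP/token -> 1,000 tokens/s
```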
In general, I think it's more sensible for discussions of cognitive capabilities to focus on throughput metrics such as training compute (units of FLOP) and inference compute (units of FLOP/token or FLOP/s). If all the AIs in the world are doing orders of magnitude more arithmetic operations per second than all the humans in the world (8e9 people * 1e15 FLOP/s/person = 8e24 FLOP/s is a big number!), we have a good case for saying that the cognition of AIs has become faster than that of humans in some important sense. However, simply comparing the clock speed of a GPU to the neuron firing frequency in the human brain and concluding that AIs think faster than humans is a sloppy argument that neglects how training and inference of ML models on GPUs actually work right now.
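For scale, the parenthetical arithmetic works out as follows; the H100-equivalent line additionally assumes the rough "one brain ≈ one H100" comparison from earlier, so it is illustrative rather than precise:

```python
# Aggregate human "compute" vs. H100-equivalents, using the post's estimates.
people = 8e9
brain_flops = 1e15    # estimated FLOP/s per brain
h100_flops = 1e15     # roughly on par with a brain, per the comparison above

total_human_flops = people * brain_flops
print(f"{total_human_flops:.0e} FLOP/s")                                    # 8e+24
print(f"~{total_human_flops / h100_flops:.0e} H100-equivalents to match")   # ~8e+09
```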
While attention and feedforward layers are sequential in the vanilla Transformer architecture, they can in fact be parallelized by adding the outputs of both to the residual stream instead of doing the operations sequentially. This optimization lowers the number of serial operations needed for a forward or backward pass by around a factor of 2, and I assume it's being used in this context.
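As an illustration of the parallel formulation, here is a minimal sketch with stand-in sublayers (simple linear maps rather than real attention and feedforward blocks); the point is the dataflow, not the math inside the sublayers:

```python
import numpy as np

# Stand-in sublayers: in a real Transformer these would be multi-head
# attention and a two-matmul feedforward block.
def attn(x): return x @ W_attn
def mlp(x):  return x @ W_mlp

d = 16
rng = np.random.default_rng(0)
W_attn = rng.normal(size=(d, d))
W_mlp = rng.normal(size=(d, d))
x = rng.normal(size=(1, d))

# Vanilla (sequential) block: the feedforward sublayer must wait for the
# attention output, so their matmuls run back-to-back.
h = x + attn(x)
y_sequential = h + mlp(h)

# Parallel block: both sublayers read the same input and their outputs are
# added to the residual stream, so they can be computed concurrently and the
# serial depth per layer is roughly halved.
y_parallel = x + attn(x) + mlp(x)
```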
Thanks. I’m not Eliezer so I’m not interested in litigating whether his precise words were justified or not. ¯\_(ツ)_/¯
I’m not sure we’re disagreeing about anything substantive here.
That’s probably not what I meant, but I guess it depends on what you mean by “task”.
For example, when a human is tasked with founding a startup company, they have to figure out, and do, a ton of different things, from figuring out what to sell and how, to deciding what subordinates to hire and when, to setting up an LLC and optimizing it for tax efficiency, to setting strategy, etc. etc.
One good human startup founder can do all those things. I am claiming that one AI can do all those things too, but at least 1-2 OOM faster, wherever those things are unconstrained by waiting-for-other-people etc.
For example: If the AI decides that it ought to understand something about corporate tax law, it can search through online resources and find the answer at least 10-100× faster than a human could (or maybe it would figure out that the answer is not online and that it needs to ask an expert for help, in which case it would find such an expert and email them, also 10-100× faster). If the AI decides that it ought to post a job ad, it can figure out where best to post it, and how to draft it to attract the right type of candidate, and then actually write it and post it, all 10-100× faster. If the AI decides that it ought to look through real estate listings to make a shortlist of potential office spaces, it can do it 10-100× faster. If the AI decides that it ought to redesign the software prototype in response to early feedback, it can do so 10-100× faster. If the AI isn’t sure what to do next, it figures it out, 10-100× faster. Etc. etc. Of course, the AI might use or create tools like calculators or spreadsheets or LLMs, just as a human might, when it’s useful to do those things. And the AI would do all those things really well, at least as well as the best remote-only human startup founder.
That’s what I have in mind, and that’s what I expect someday (definitely not yet! maybe not for decades!).