The amount of compute required to emulate the human brain depends on the level of detail we want to emulate.
Back in 2008, Sandberg and Bostrom proposed the following values:
| Level of emulation detail | FLOPS required to run the emulation in real time |
|---|---|
| Analog network population model | 10^15 |
| Spiking neural network | 10^18 |
| Electrophysiology | 10^22 |
| Metabolome | 10^25 |
| Proteome | 10^26 |
| States of protein complexes | 10^27 |
| Distribution of protein complexes | 10^30 |
| Stochastic behavior of single molecules | 10^43 |
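To get a feel for these numbers, here is a rough sketch comparing the table's estimates against a hypothetical machine. The 10^18 FLOPS figure (roughly an exascale supercomputer) is an assumption for illustration, not part of the table:

```python
import math

# Sandberg & Bostrom (2008) estimates, FLOPS for real-time emulation
EMULATION_FLOPS = {
    "Analog network population model": 1e15,
    "Spiking neural network": 1e18,
    "Electrophysiology": 1e22,
    "Metabolome": 1e25,
    "Proteome": 1e26,
    "States of protein complexes": 1e27,
    "Distribution of protein complexes": 1e30,
    "Stochastic behavior of single molecules": 1e43,
}

# Hypothetical hardware: ~exascale machine (an assumption, not from the table)
MACHINE_FLOPS = 1e18

for level, needed in EMULATION_FLOPS.items():
    # Slowdown factor relative to real time: >1 means slower than real time
    slowdown = needed / MACHINE_FLOPS
    print(f"{level}: ~10^{math.log10(slowdown):+.0f}x real time")
```

On this assumption, a spiking-neural-network emulation would run at roughly real time, while the single-molecule level would be ~25 orders of magnitude too slow.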
Today I've encountered an interesting piece of data on GPT-3 (source):
- GPT-3 requires ~10^15 FLOPs per inference.
- It took ~10^23 FLOPs to train. [Note: the training ran for some months; training it from zero in one second would require ~10^30 FLOPS.]
As far as I know, GPT-3 was the first AI whose range and quality of cognitive abilities are comparable to the human brain's (although it still falls far short of human level on many tasks).
Coincidentally(?), GPT-3 needs somewhere in the range of 10^15–10^30 FLOPS to operate at the brain's speed, which is roughly the same amount of compute needed to run a decent emulation of the human brain.
The space of possible compute requirements is almost unbounded (e.g. 10^100 FLOPS and beyond). Yet both intelligences land in the same relatively narrow window of 10^15–10^30 FLOPS (assuming the brain emulation doesn't need molecule-level detail).
Is it a coincidence, or is there something deeper going on here?
This could be important both for understanding the human brain and for predicting how far we are from true AGI.
There was a recent post estimating that GPT-3 is equivalent to about 175 bees. There is also a comment there asserting that a human is about 140k bees.
I would be very interested if someone could explain where this huge discrepancy comes from. (One estimate equates synapses with parameters, while this one is based on FLOPS, but that shouldn't produce such a large difference.)
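For scale, the gap between the two bee-based figures works out to roughly three orders of magnitude, which can be put next to the table's 28-order spread (all numbers are from the post and table above):

```python
import math

# Figures from the bee post and the comment on it
gpt3_in_bees = 175
human_in_bees = 140_000

# Ratio between the two estimates, in orders of magnitude
ratio = human_in_bees / gpt3_in_bees
print(f"human/GPT-3 ratio: {ratio:.0f} (~10^{math.log10(ratio):.1f})")

# Spread of the brain-emulation estimates in the table: 10^15 to 10^43 FLOPS
table_spread_oom = math.log10(1e43) - math.log10(1e15)
print(f"emulation-table spread: 10^{table_spread_oom:.0f}")
```

A factor of ~800 (about 10^2.9) is small next to the 28 orders of magnitude separating the coarsest and finest emulation levels.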
Author of the post here: I don't think there's a huge discrepancy; a factor of 140k/175 is clearly within the range of uncertainty of the estimates here!
That being said, the bee post really shouldn't be taken too seriously: one synapse is not exactly one float16 or int8 parameter, etc.