I always feel like this question is missing something, because it forgets the biggest cost for evolution: deploying the algorithms in the real world and seeing what effects they have.
That is, even if you had all the compute necessary to simulate human brains for all of evolutionary time, you would also need to figure out the effects of the brains on the real world in order to know their usefulness in practice. Doing this directly requires compute on the order of the real world itself, though obviously in practice a lot of this can be optimized away.
Interesting, my first reaction was that evolution doesn't need to "figure out" the extended phenotype (= "effects on the real world"). It just blindly deploys its algorithms, and natural selection does the optimization.
But I think what you're saying is, the real world is "computing" which individuals die and which ones reproduce, and we need a way to quantify that computational work. You're right!
FLOPs = floating point operations (a count; FLOPS with a capital S conventionally means floating point operations per second).
FLOPz = the same thing, I think (it's used as if it's in the FLOPs unit).
I don't remember the sources for everything. If you want to get a more accurate estimate I recommend re-running with your own numbers.
Here are some estimates of brain compute.
Here's an estimate of the mass of a human brain.
Here's an estimate of current animal biomass.
Here are brain-to-body mass ratios for different species. Here's an estimate of the composition of animal biomass, which should help figure out which brain-to-body mass ratios to use.
Here's a Quora question about changes in Earth biomass over time.
(I think if you spent some time on these estimates they'd turn out different from the numbers in the model; we did this mostly as a rough order-of-magnitude check over the course of a couple of hours, finding that evolution will not be simulable with foreseeable compute.)
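For concreteness, here is a minimal back-of-the-envelope sketch in Python of the kind of model described above. Every constant is a placeholder assumption picked for illustration, not a number from the original model, so treat it as showing the structure of the calculation rather than as an estimate.

```python
# Back-of-envelope: total FLOPs performed by animal brains over evolutionary history.
# All constants below are placeholder assumptions for illustration only.

ANIMAL_BIOMASS_KG = 2e12              # assumed total animal biomass
BRAIN_TO_BODY_MASS = 0.002            # assumed average brain-to-body mass ratio
HUMAN_BRAIN_MASS_KG = 1.4             # approximate mass of a human brain
FLOPS_PER_HUMAN_BRAIN = 1e15          # assumed compute of one human brain, FLOP/s
EVOLUTION_DURATION_S = 1e9 * 3.15e7   # ~1 billion years of animal life, in seconds

# Scale brain compute by brain mass (a crude proxy for number of brain-equivalents).
total_brain_mass_kg = ANIMAL_BIOMASS_KG * BRAIN_TO_BODY_MASS
all_brain_flops_per_s = (total_brain_mass_kg / HUMAN_BRAIN_MASS_KG) * FLOPS_PER_HUMAN_BRAIN
total_flops = all_brain_flops_per_s * EVOLUTION_DURATION_S

print(f"Total brain mass on Earth:  ~{total_brain_mass_kg:.1e} kg")
print(f"All-brain compute rate:     ~{all_brain_flops_per_s:.1e} FLOP/s")
print(f"Total FLOPs over ~1 Gyr:    ~{total_flops:.1e} FLOP")
```

Swapping in your own numbers for the constants is exactly the "re-running with your own numbers" suggested above.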
Great links, thank you!!
So your focus was specifically on the compute performed by animal brains.
I expect total brain compute is dwarfed by the computation inside cells (transcription & translation), which in turn is dwarfed by the computation done by non-organic matter to implement natural selection. I had totally overlooked this last part!
Non-brain matter is most of the compute for a naive physics simulation; however, it's plausible that it could be sped up a lot, e.g. the interiors of rocks are pretty static and similar to each other, so maybe they can share a lot of computation. For brains it would be harder to speed up the simulation without changing the result a lot.
A common misconception is to envision genomic evolution as a matter of single-base substitutions (e.g. an adenine becoming a guanine).
The truth is, most of the real leaps in genome evolution come from structural variation, ranging from whole-genome/chromosome duplications to microdeletions/microduplications fostered by repeated sequences interspersed in the genome (mostly due to [retro-]transposons), which act as hotspots for copy/paste/delete events.
And these repeated sequences differ across time and across species.
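To make that contrast concrete, here is a toy sketch, purely illustrative and not a model of real mutation rates or repeat-mediated recombination, of a single-base substitution versus a segmental duplication on a short genome string:

```python
import random

# Toy illustration only: a point substitution changes one position, while a
# structural variant (here, a segmental duplication) rewrites a whole region.

BASES = "ACGT"

def point_substitution(genome: str) -> str:
    """Replace one randomly chosen base with a different base."""
    i = random.randrange(len(genome))
    new_base = random.choice([b for b in BASES if b != genome[i]])
    return genome[:i] + new_base + genome[i + 1:]

def segmental_duplication(genome: str, length: int = 4) -> str:
    """Duplicate a random segment in place (a crude stand-in for copy/paste events)."""
    start = random.randrange(len(genome) - length)
    segment = genome[start:start + length]
    return genome[:start + length] + segment + genome[start + length:]

random.seed(0)
g = "ACGTACGTGGCCTTAA"
print("original:    ", g)
print("substitution:", point_substitution(g))    # one position changed
print("duplication: ", segmental_duplication(g))  # a whole segment repeated
```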
This is a poorly thought-out question.
Evolution implies a direction of travel driven by selection pressure, e.g., comparative fitness within an environment.
A sequence of random processes that are not driven by some selection pressure is just, well, random.
What is the metric for computational effort?
Are you actually interested in computational resources consumed, or percentage of possibilities explored?
The main reason for past discussions of this question has been to put an upper bound on the amount of compute necessary to create AGI: "if evolution could create humans with X yottaflops total, then we can certainly create an AGI with at most X yottaflops, if only by literally simulating, molecule by molecule, the evolution of humanity". Basically, the worst possible biological anchor estimate. (Personally, I think it's so vacuous an upper bound as to not have been worth the energy which has already been put into thinking about it.)
Hmm how would you define "percentage of possibilities explored"?
I suggested several metrics, but I am actively looking for additional ones, especially for the epigenome and for communication at the individual level (e.g. chemical signals between fungi and plants, animal calls, human language).
I’m looking for estimates of the total compute available to evolution:
Total number of cell divisions
Total number of times a DNA nucleotide was transcribed to RNA
Total number of times an RNA codon was translated to an amino acid
Total number of base pairs in all genomes
Total entropy of the set of all genomes
I’d like to identify other significant sources of compute available to evolution (epigenome? lateral gene transfer? interspecies communication?) and how to quantify them.
I’m looking for estimates of total compute over all 4 billion years of Earth life, and also how compute varied with time.
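For example, here is a minimal sketch of how one of these quantities, total cell divisions (and the base pairs replicated along the way), might be roughed out. All the constants are placeholder assumptions of mine, not sourced estimates, so the point is the structure of the calculation rather than the output:

```python
# Rough order-of-magnitude sketch: total cell divisions over Earth's history,
# and the base pairs replicated along the way. Every constant is a placeholder
# assumption chosen only to illustrate the shape of the calculation.

SECONDS_PER_YEAR = 3.15e7
YEARS_OF_LIFE = 4e9              # ~4 billion years of life on Earth
AVG_LIVING_CELLS = 1e30          # assumed average number of living cells at any time
AVG_DIVISION_INTERVAL_S = 1e5    # assumed mean time between divisions (~1 day)
AVG_GENOME_BP = 5e6              # assumed average genome size (mostly prokaryotes)

total_seconds = YEARS_OF_LIFE * SECONDS_PER_YEAR
total_divisions = (AVG_LIVING_CELLS / AVG_DIVISION_INTERVAL_S) * total_seconds
total_bp_replicated = total_divisions * AVG_GENOME_BP

print(f"Total cell divisions:        ~{total_divisions:.1e}")
print(f"Total base pairs replicated: ~{total_bp_replicated:.1e}")
```

The same skeleton (an average standing population, a rate per individual, and a duration, possibly varying over time) should work for the transcription, translation, and genome-size metrics listed above.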
Grateful for any leads!