I’m looking for estimates of the total compute available to evolution:

Total number of cell divisions

Total number of times a DNA nucleotide was transcribed to RNA

Total number of times an RNA codon was translated to an amino acid

Total number of base pairs in all genomes

Total entropy of the set of all genomes

I’d like to identify other significant sources of compute available to evolution (epigenome? lateral gene transfer? interspecies communication?) and how to quantify them.

I’m looking for estimates of total compute over all 4 billion years of Earth life, and also how compute varied with time.
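
For concreteness, here is the shape of the back-of-the-envelope calculation I have in mind for two of these quantities; every constant below is a crude placeholder assumption I'd want to replace with sourced estimates:

```python
# Rough Fermi sketch for two of the quantities listed above.
# Every constant here is a placeholder assumption, not a sourced estimate.

N_PROKARYOTES_NOW = 5e30           # order of magnitude of living prokaryotic cells (assumed)
GENOME_SIZE_BP = 5e6               # typical bacterial genome length in base pairs (assumed)
DIVISIONS_PER_CELL_PER_YEAR = 10   # crude average turnover rate (assumed)
YEARS_OF_LIFE = 4e9                # ~4 billion years of life on Earth

# Total base pairs in all genomes alive right now (prokaryotes only):
total_bp_now = N_PROKARYOTES_NOW * GENOME_SIZE_BP

# Total cell divisions over all of history, pretending today's population
# size and turnover rate held the whole time (they didn't):
total_divisions = N_PROKARYOTES_NOW * DIVISIONS_PER_CELL_PER_YEAR * YEARS_OF_LIFE

print(f"base pairs in genomes alive now: ~{total_bp_now:.0e}")
print(f"total cell divisions ever:       ~{total_divisions:.0e}")
```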

Grateful for any leads!

Answer by tailcalled

I always feel like this question is missing something, because it forgets the biggest cost for evolution: deploying the algorithms in the real world and seeing what effects they have.

That is, even if you had all the compute necessary to simulate human brains for all of evolutionary time, you would also need to figure out the effects of the brains on the real world in order to know their usefulness in practice. Doing this directly requires compute on the order of magnitude of the real world, though obviously in practice a lot of this can be optimized away.

Interesting! My first reaction was that evolution doesn't need to "figure out" the extended phenotype (= "effects on the real world"); it just blindly deploys its algorithms, and natural selection does the optimization.

But I think what you're saying is, the real world is "computing" which individuals die and which ones reproduce, and we need a way to quantify that computational work. You're right!

tailcalled
I should add: I think it is the hardest-to-compute aspects of this that are the most important to the evolution of general intelligence. With a "reasonable" compute budget, you could set up a gauntlet of small tasks that challenge your skills in various ways. However, this could probably be Goodharted relatively "easily". But the real world isn't some closed, short-term system; it tests you on effects that take years or even decades to become relevant just as hard as it tests you on effects that become relevant immediately. And that is something I think we will have a hard time optimizing our AIs against.
redbird
You're saying AI will be much better than us at long-term planning? It's hard to train for tasks where the reward is only known after a long time (e.g. how would you train for climate prediction?)
tailcalled
No, I'm saying that AI might be much worse than us at long-term planning, because evolution has selected us on the basis of very long chains of causal effects, whereas we can't train AIs on the basis of such long chains of causal effects.
tailcalled
Yep.
Comments

Here's a model made by Median Group estimating total brain compute in evolutionary history. Sorry it's not better documented; it was put together quickly, on the fly.

What are FLOPz and FLOPs?

What sources did you draw from to estimate the distributions?

FLOPs = floating point operations per second.

FLOPz = the same thing, I think (it's used as if it's in the FLOPs unit).

I don't remember the sources for everything. If you want to get a more accurate estimate I recommend re-running with your own numbers.

Here are some estimates of brain compute.

Here's an estimate of the mass of a human brain.

Here's an estimate of current animal biomass.

Here are brain-to-body mass ratios for different species. Here's an estimate of the composition of animal biomass, which should help figure out which brain-to-body-mass numbers to use.

Here's a Quora question about changes in Earth biomass over time.

(I think if you spent some time on these estimates they'd turn out different from the numbers in the model; we did this mostly to check rough order of magnitude over the course of a couple hours, finding that evolution will not be simulable with foreseeable compute.)
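
Roughly, an estimate like this combines those inputs as follows; this is a minimal sketch with placeholder constants, not the actual figures or structure of the model, so re-run it with your own sourced numbers:

```python
# Minimal sketch of a total-brain-compute estimate.
# Placeholder constants only; replace them with sourced values.

SECONDS_PER_YEAR = 3.15e7

ANIMAL_BIOMASS_KG = 2e12            # present-day animal biomass (assumed)
BRAIN_TO_BODY_MASS_RATIO = 2e-3     # biomass-weighted average brain fraction (assumed)
FLOP_PER_SEC_PER_KG_BRAIN = 1e15    # brain compute per kg of neural tissue (assumed)
YEARS_WITH_BRAINS = 6e8             # ~600 million years since early nervous systems (assumed)

total_brain_mass_kg = ANIMAL_BIOMASS_KG * BRAIN_TO_BODY_MASS_RATIO
flop_per_sec_all_brains = total_brain_mass_kg * FLOP_PER_SEC_PER_KG_BRAIN

# Holds today's biomass fixed over time; a fuller model would vary
# biomass and brain fraction with time, as the links above suggest.
total_flop = flop_per_sec_all_brains * YEARS_WITH_BRAINS * SECONDS_PER_YEAR

print(f"compute rate of all brains:      ~{flop_per_sec_all_brains:.0e} FLOP/s")
print(f"total over evolutionary history: ~{total_flop:.0e} FLOP")
```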

Great links, thank you!!

So your focus was specifically on the compute performed by animal brains.

I expect total brain compute is dwarfed by the computation inside cells (transcription & translation), which in turn is dwarfed by the computation done by non-organic matter to implement natural selection. I had totally overlooked this last part!

Non-brain matter is most of the compute for a naive physics simulation, but it's plausible that it could be sped up a lot; e.g., the interiors of rocks are pretty static and similar to each other, so maybe they can share a lot of computation. For brains it would be harder to speed up the simulation without changing the result a lot.

Awesome!!! Exactly the kind of thing I was looking for.

A common misconception is to envision genomic evolution as a matter of single-base substitutions (i.e., an adenine becoming a guanine, etc.).

The truth is, most real leaps in genome evolution come from structural variation, ranging from whole-genome and chromosome duplications to microdeletions/microduplications fostered by repeated sequences interspersed in the genome (mostly due to [retro-]transposons), which act as hotspots for copy/paste/delete events.

And these repeated sequences aren't distributed equally across time or species.

This is a poorly thought-out question.

Evolution implies a direction of travel driven by selection pressure, e.g., comparative fitness within an environment. A sequence of random processes that is not driven by some selection pressure is just, well, random.

What is the metric for computational effort? Are you actually interested in computational resources consumed, or in the percentage of possibilities explored?

The main reason for past discussions of this question has been to upper-bound the amount of compute necessary to create AGI: "if evolution could create humans with X yottaflops total, then we can certainly create an AGI with <= X yottaflops, if only by literally simulating, molecule by molecule, the evolution of humanity". Basically, the worst possible biological-anchor estimate. (Personally, I think it's so vacuous an upper bound as to not have been worth the energy that has already been put into thinking about it.)

AGI timeline is not my motivation, but the links look helpful, thanks!

Hmm, how would you define "percentage of possibilities explored"?

I suggested several metrics, but I am actively looking for additional ones, especially for the epigenome and for communication at the individual level (e.g. chemical signals between fungi and plants, animal calls, human language).

Chemical space, https://en.wikipedia.org/wiki/Chemical_space, is one candidate for a metric of the possibilities.

 

The book "Chemical Evolution: Origins of the Elements, Molecules and Living Systems" by  Stephen F. Mason might well contain the kinds of calculations you are looking for.