In the strongest sense, neither the human brain analogy nor the evolution analogy really applies to AI. They apply only in a weaker sense, where you know you're working with an analogy and are, ideally, tracking some more detailed model behind the scenes.
The best argument for treating human development as a stronger analogy than evolutionary history is that present-day AIs work more like human brains than they work like evolution. See, e.g., papers finding that a linear function can translate some concepts between brain scans and the internal layers of an LLM, or the extremely close correspondence between ConvNet features and neurons in the visual cortex. In contrast, I predict it's extremely unlikely that you'll find a nontrivial correspondence between the internals of an AI and evolutionary history, the trajectory of ecosystems, or anything similar.
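To make the "linear function between representations" idea concrete, here is a minimal sketch of the kind of analysis those papers do, using purely synthetic data: two systems that linearly encode the same underlying concepts admit a good linear map between them, which is measured on held-out stimuli. All names, dimensions, and noise levels here are illustrative assumptions, not taken from any actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two representation spaces (all data is synthetic):
# rows are stimuli, columns are "voxels" / hidden units.
n_stimuli, brain_dim, llm_dim = 200, 50, 64

# Pretend both systems linearly encode the same 10 underlying concepts, plus noise.
concepts = rng.normal(size=(n_stimuli, 10))
brain_acts = concepts @ rng.normal(size=(10, brain_dim)) + 0.1 * rng.normal(size=(n_stimuli, brain_dim))
llm_acts = concepts @ rng.normal(size=(10, llm_dim)) + 0.1 * rng.normal(size=(n_stimuli, llm_dim))

# Fit a linear map brain -> LLM by least squares on half the stimuli...
train, test = slice(0, 100), slice(100, 200)
W, *_ = np.linalg.lstsq(brain_acts[train], llm_acts[train], rcond=None)

# ...and score it on the held-out half with R^2.
pred = brain_acts[test] @ W
ss_res = np.sum((llm_acts[test] - pred) ** 2)
ss_tot = np.sum((llm_acts[test] - llm_acts[test].mean(axis=0)) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"held-out R^2 of linear brain->LLM map: {r2:.2f}")
```

The point of the sketch is the contrast: a shared linear code makes this map easy to find, whereas there's no analogous set of "activations" of evolutionary history to regress against.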
Of course, just because they work more like human brains after training doesn't necessarily mean they learn similarly - and they don't! In some ways AI is better (backpropagation is great, but it's basically impossible to implement in a brain); in other ways AI is worse (biological neurons are far smarter than artificial 'neurons'). Don't take the analogy too literally. But most of the human brain (the neocortex) already learns its 'weights' from experience over a human lifetime, in a way that's not all that different from self-supervised learning if you squint.
> If you are making an argument about how much compute it takes to find an intelligent mind, you have to look at how much compute was used by all of evolution.
Just to make sure I fully understand your argument, is this paraphrase correct?
"Suppose we have the compute theoretically required to simulate the human brain at a granularity adequate for reproducing its intelligence (which might be the level of cells rather than, say, the atomic level). Even so, one has to consider the compute required to actually arrive at such a simulation, which could be much larger, since the human brain was built by the full history of the universe."
(My personal view is that the opposite is true: recent evidence suggests we can Pareto-exceed human intelligence while remaining very far from the compute required to simulate a brain. An idea I've seen floating around here is that natural selection built our brain somewhat randomly, with a reward function that valued producing offspring, so a lot of its architecture is irrelevant to intelligence.)
My take is that it is irrelevant, so I want to hear opposing viewpoints.
The really simple argument for its irrelevance is that evolution used far more compute to produce human brains than the compute inside a single human brain. If you are making an argument about how much compute it takes to find an intelligent mind, you have to look at how much compute was used by all of evolution. (This includes the compute to simulate the environment, which Ajeya Cotra's bioanchors report wrongly ignores.)
What am I missing?