Yesterday, I wrote:

Science has a very exact idea of the capabilities of evolution. If you praise evolution one millimeter higher than this, you're not "fighting on evolution's side" against creationism. You're being scientifically inaccurate, full stop.

In this post I describe some well-known inefficiencies and limitations of evolutions. I say "evolutions", plural, because fox evolution works at cross-purposes to rabbit evolution, and neither can talk to snake evolution to learn how to build venomous fangs.

So I am talking about limitations of evolution here, but this does not mean I am trying to sneak in creationism. This is standard Evolutionary Biology 201. (583 if you must derive the equations.) Evolutions, thus limited, can still explain observed biology; in fact the limitations are necessary to make sense of it. Remember that the wonder of evolutions is not how well they work, but that they work at all.

Human intelligence is so complicated that no one has any good way to calculate how efficient it is. Natural selection, though not simple, is simpler than a human brain; and correspondingly slower and less efficient, as befits the first optimization process ever to exist. In fact, evolutions are simple enough that we can calculate exactly how stupid they are.

Evolutions are slow. How slow? Suppose there's a beneficial mutation which conveys a fitness advantage of 3%: on average, bearers of this gene have 1.03 times as many children as non-bearers. Assuming that the mutation spreads at all, how long will it take to spread through the whole population? That depends on the population size. A gene conveying a 3% fitness advantage, spreading through a population of 100,000, would require an average of 768 generations to reach universality in the gene pool. A population of 500,000 would require 875 generations. The general formula is

  • Generations to fixation = 2 ln(N) / s

where N is the population size, and (1 + s) is the fitness. (If each bearer of the gene has 1.03 times as many children as a non-bearer, s = 0.03.)

Thus, if the population size were 1,000,000—the estimated population in hunter-gatherer times—then it would require 2763 generations for a gene conveying a 1% advantage to spread through the gene pool.
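(For readers who want to check the arithmetic, here is a minimal sketch of the formula above; the function name is just illustrative, not something from the cited references.)

```python
import math

def generations_to_fixation(population_size, s):
    """Approximate generations for a beneficial allele with fitness (1 + s)
    to spread from a single copy through the whole population."""
    return 2 * math.log(population_size) / s

print(round(generations_to_fixation(100_000, 0.03)))    # ~768 generations
print(round(generations_to_fixation(500_000, 0.03)))    # ~875 generations
print(round(generations_to_fixation(1_000_000, 0.01)))  # ~2763 generations
```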

This should not be surprising; genes have to do all their own work of spreading. There's no Evolution Fairy who can watch the gene pool and say, "Hm, that gene seems to be spreading rapidly—I should distribute it to everyone." In a human market economy, someone who is legitimately getting 20% returns on investment—especially if there's an obvious, clear mechanism behind it—can rapidly acquire more capital from other investors; and others will start duplicate enterprises. Genes have to spread without stock markets or banks or imitators—as if Henry Ford had to make one car, sell it, buy the parts for 1.01 more cars (on average), sell those cars, and keep doing this until he was up to a million cars.

All this assumes that the gene spreads in the first place. Here the equation is simpler and ends up not depending at all on population size:

  • Probability of fixation = 2s

A mutation conveying a 3% advantage (which is pretty darned large, as mutations go) has a 6% chance of spreading, at least on that occasion. Mutations can happen more than once, but in a population of a million with a copying fidelity of 10^-8 errors per base per generation, you may have to wait a hundred generations for another chance, and then it still has only a 6% chance of fixating.
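(Putting the two numbers in that paragraph together, here is a rough back-of-the-envelope sketch; the framing in terms of "attempts" is an illustrative assumption of this example, not a formula from the references.)

```python
population_size = 1_000_000
mutation_rate = 1e-8      # errors per base per generation
s = 0.03                  # selective advantage of the mutation

p_fix = 2 * s                                            # ~6% chance per new copy that arises
copies_per_generation = population_size * mutation_rate  # ~0.01 new copies per generation
generations_per_attempt = 1 / copies_per_generation      # ~100 generations between attempts
expected_attempts = 1 / p_fix                            # ~17 attempts before one takes hold
print(generations_per_attempt * expected_attempts)       # ~1700 generations of waiting, on average
```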

Still, in the long run, an evolution has a good shot at getting there eventually. (This is going to be a running theme.)

Complex adaptations take a very long time to evolve. First comes allele A, which is advantageous of itself, and requires a thousand generations to fixate in the gene pool. Only then can another allele B, which depends on A, begin rising to fixation. A fur coat is not a strong advantage unless the environment has a statistically reliable tendency to throw cold weather at you. Well, genes form part of the environment of other genes, and if B depends on A, B will not have a strong advantage unless A is reliably present in the genetic environment.

Let's say that B confers a 5% advantage in the presence of A, no advantage otherwise. Then while A is still at 1% frequency in the population, B only confers its advantage 1 out of 100 times, so the average fitness advantage of B is 0.05%, and B's probability of fixation is 0.1%. With a complex adaptation, first A has to evolve over a thousand generations, then B has to evolve over another thousand generations, then A* evolves over another thousand generations... and several million years later, you've got a new complex adaptation.
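(The same arithmetic, spelled out: while A is rare, B's average advantage - and hence its fixation odds - are scaled down by A's frequency. The helper below is illustrative only.)

```python
def b_fixation_odds(s_with_A, freq_A):
    """B confers s_with_A only when A is present, so its average advantage
    is s_with_A * freq_A, and its fixation probability is roughly twice that."""
    s_effective = s_with_A * freq_A
    return s_effective, 2 * s_effective

for freq_A in (0.01, 0.5, 1.0):
    print(freq_A, b_fixation_odds(0.05, freq_A))
# e.g. at freq_A = 0.01: average advantage ~0.0005 (0.05%), fixation odds ~0.001 (0.1%)
```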

Then other evolutions don't imitate it. If snake evolution develops an amazing new venom, it doesn't help fox evolution or lion evolution.

Contrast all this to a human programmer, who can design a new complex mechanism with a hundred interdependent parts over the course of a single afternoon. How is this even possible? I don't know all the answer, and my guess is that neither does science; human brains are much more complicated than evolutions. I could wave my hands and say something like "goal-directed backward chaining using combinatorial modular representations", but you would not thereby be enabled to design your own human. Still: Humans can foresightfully design new parts in anticipation of later designing other new parts; produce coordinated simultaneous changes in interdependent machinery; learn by observing single test cases; zero in on problem spots and think abstractly about how to solve them; and prioritize which tweaks are worth trying, rather than waiting for a cosmic ray strike to produce a good one. By the standards of natural selection, this is simply magic.

Humans can do things that evolutions probably can't do period over the expected lifetime of the universe. As the eminent biologist Cynthia Kenyon once put it at a dinner I had the honor of attending, "One grad student can do things in an hour that evolution could not do in a billion years." According to biologists' best current knowledge, evolutions have invented a fully rotating wheel on a grand total of three occasions.

And don't forget the part where the programmer posts the code snippet to the Internet.

Yes, some evolutionary handiwork is impressive even by comparison to the best technology of Homo sapiens. But our Cambrian explosion only started, we only really began accumulating knowledge, around... what, four hundred years ago? In some ways, biology still excels over the best human technology: we can't build a self-replicating system the size of a butterfly. In other ways, human technology leaves biology in the dust. We got wheels, we got steel, we got guns, we got knives, we got pointy sticks; we got rockets, we got transistors, we got nuclear power plants. With every passing decade, that balance tips further.

So, once again: for a human to look to natural selection as inspiration on the art of design, is like a sophisticated modern bacterium trying to imitate the first awkward replicator's biochemistry. The first replicator would be eaten instantly if it popped up in today's competitive ecology. The same fate would accrue to any human planner who tried making random point mutations to their strategies and waiting 768 iterations of testing to adopt a 3% improvement.

Don't praise evolutions one millimeter more than they deserve.

Coming tomorrow: More exciting mathematical bounds on evolution!


1 Graur, D. and Li, W.H. 2000. Fundamentals of Molecular Evolution, 2nd edition. Sinauer Associates, Sunderland, MA.

2 Haldane, J. B. S. 1927. A mathematical theory of natural and artificial selection. IV. Proc. Camb. Philos. Soc. 23:607-615.

68 comments

Eliezer,

Your posts on evolution are fantastic. I hope there will be many more of them.

Eliezer, it certainly seems that you got over your "writer's molasses". Congrats!

This is an excellent post. I have revised my opinion on evolution.

The same fate would accrue to any human planner who tried making random point mutations to their strategies and waiting 768 iterations of testing to adopt a 3% improvement.

The counterargument here is that it might be worth doing if each iteration of testing is cheap enough. With enough computing power, one might very well be able to do that much simulation cheaply and quickly. (One reason genetic algorithms work is that they're simple enough that we understand how they work, even if their outputs are unpredictable. We aren't capable of modeling human creativity yet, so throwing enough computing power at a much dumber algorithm that we know works will still give amazingly good results.)

Nice calculations!

But don't these calculations establish a lower bound on how complex or adaptive genetic evolution is? But not an upper bound?

It would seem that using the same approach on a nervous system would lead one to calculate the adaptiveness of a dendrite - or less. Uh, what is a part of nervous system operation that seems comfortably "understood" to the same extent as AGTC operations? Whatever part that is, would, in a fair comparison, be what could be compared to the mechanism these calculations describe. Yes?

Anyway, isn't it premature to assert, "Natural selection, though not simple, is simpler than a human brain", given the current understanding of either?

And, please, let's not go too far along the road of "Look how smart we are! Evolution didn't produce diddly, while, in only 4 hundred years we have produced millions of My Little Pony dolls." Evolution produced cow pies, which we are still struggling with, after all. :)

Speculation of what nervous systems and genetic evolution do in common sure seems like fertile ground, though. It would be interesting to know, for instance, what's both necessary and sufficient to describe both.

"According to biologists' best current knowledge, evolutions have invented a fully rotating wheel on a grand total of three occasions"

Eliezer, what do you mean by this? I think my brain is not working today or something cause this seems like it's either a joke (which I do not get) or a reference to something in biology (which I am not aware of).

Other than that bit of confusion, this was a fantastic post. I think the last few things you've written on evolution should be required reading in every biology class (especially high school ones). So many well intentioned people have a severe misunderstanding of evolution and what you've written I think can clear it up.

The same fate would accrue to any human planner who tried making random point mutations to their strategies and waiting 768 iterations of testing to adopt a 3% improvement.

If your engineers are struggling to produce even 1% improvements in your design, and the benefits of even a marginal improvement are sufficiently large, it might well be worth trying such a strategy. The more complex and humanly-incomprehensible the system is, the more likely that such a strategy would yield bonuses that rational analysis couldn't reach, as well.

Rationality is unspeakably powerful, but it's not everything.

"evolutions have invented a fully rotating wheel on a grand total of three occasions"

Eliezer, what do you mean by this?

Here's one


we can't build a self-replicating system the size of a butterfly

I must have missed when we built any self-replicating systems at all...

A well-equipped machine shop, paired with a smelter, and some stacks of blueprints inscribed in iron plates, could probably be used to produce all its component parts from naturally-occurring materials. Of course, it would be dependent on human crafters and technicians... but there are species of insect which can only reproduce parasitically, so that's not an automatic disqualification.

Rational thought needs a knowledge base; given that, it can outperform evolution. When the knowledge base is lacking and improving it is difficult, then an evolutionary strategy may be the best course. Lots of examples of genetic algorithms accomplishing what rational design couldn't (with the current knowledge base) at TalkOrigins.

The Mantis Shrimp (http://en.wikipedia.org/wiki/Mantis_shrimp) forms a crude wheel to maneuver on land.

But I still can't think of any examples of wheels in nature that use axles and are large enough to be more than a free floating rotating object. Maybe this seems an arbitrary threshold, but I think usually when we marvel at the wheel, we're marvelling at axles, and their ability to support weight and radically reduce friction when moving big heavy things, all while holding the object basically level. While the cellular turbines that power us are pretty fascinating in their own right, it'd be interesting if anyone could think of biological wheels with axles that were a bit bigger. So far, I can't think of any outside of science fiction.*

*Pullman's "The Amber Spyglass" even gives a somewhat plausible evolutionary background for his axled and wheeled Mulefa. Any others?


Great to see more thoughts on evolution from you Eliezer - good stuff.

The three known times evolutions invented a freely rotating wheel are: ATP synthase, the bacterial flagellum, and an obscure third example discovered recently which I forget.

But don't these calculations establish a lower bound on how complex or adaptive genetic evolution is? But not an upper bound?

Those are average cases, not lower bounds. (It would be very surprising to see it happen either ten times faster or ten times slower.) Tomorrow we will discuss upper bounds.

Everyone: There's a lot of hype surrounding genetic algorithms. DO NOT GET YOUR INFORMATION FROM BUSINESS BOOKS PRAISING THE VALUE OF CHAOS. Read AI textbooks instead. Genetic algorithms are okay (human-competitive) at simultaneously optimizing 37 different criteria using some kind of single shape that can be continuously deformed. They're okay at designing algorithms with clearly defined success criteria that run fast most of the time in 37 lines of code. They suck like a vacuum cleaner at designing anything larger than that - defeated by the same exponential explosion that consumes most AI algorithms. Most genetic algorithms are not biologically realistic - the ones that do straight beam search, straight hill-climbing, typically do as well or better than the ones that try to imitate sexual reproduction. Remember that it took billions of years of evolution before the Cambrian explosion. Our genetic algorithms haven't gotten to the level of multicellular organisms or sex yet.

Back in my undergrad days, a fellow student of mine implemented a genetic algorithm on a field-programmable gate array with the intention of performing computations. Once he got the thing working in the first place, it took him half a semester to get it able to pass the 7 bits from the 7 input channels to the 7 output channels, in order. He didn't have time left over to try anything more complicated.

So, yeah.

Well, genetic algorithms work by making assumptions about the problem space, mainly that better solutions are very likely to be found close to other good solutions. If the assumption is not true or only weakly true, then of course it isn't going to work. Like if beneficial mutations are extremely rare or practically non-existent.

My point is that it depends entirely on the problem and how it's represented. Some problems work really well for GAs, and some don't at all.
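(A minimal sketch of that point, using the straight hill-climbing mentioned upthread rather than a full genetic algorithm; the two toy fitness functions are illustrative assumptions. On the smooth landscape, where good solutions really are near other good solutions, the loop reliably reaches the optimum; on the needle-in-a-haystack landscape it goes nowhere.)

```python
import random

def hill_climb(fitness, length=40, steps=2000):
    """Mutate one bit at a time, keep the change if it doesn't hurt."""
    genome = [random.randint(0, 1) for _ in range(length)]
    for _ in range(steps):
        candidate = genome[:]
        candidate[random.randrange(length)] ^= 1   # single point mutation
        if fitness(candidate) >= fitness(genome):
            genome = candidate
    return fitness(genome)

smooth = lambda g: sum(g)        # partial credit: nearby genomes have similar fitness
needle = lambda g: int(all(g))   # all-or-nothing: no gradient to climb

print(hill_climb(smooth))   # reliably 40 (or very close): the assumption holds
print(hill_climb(needle))   # almost always 0: closeness carries no information
```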

woah. thanks for the links about the evolved wheels. that's pretty awesome stuff right there hah.

Computing power is also increasing exponentially.

What fraction of the population must have the gene for it to be considered "fixated"; absolutely each and every member?

Remember that it took billions of years of evolution before the Cambrian explosion.

That would be a powerful argument, if biological evolution had a telos that included the production of complex multicellular organisms. Sadly, it doesn't. Astronomically speaking, rock-eating, single-celled creatures are probably far better equipped for survival in this universe than we are.

Evolution seems to have you outnumbered.

What fraction of the population must have the gene for it to be considered "fixated"; absolutely each and every member?

In the equations, yes, I believe that's what's being calculated.

If this seems extreme, consider a complex machine like an eye, which probably has at least 100 genes, maybe 1000 if you count the supporting visual areas of the brain, and imagine that each gene is independently at 99% frequency in the population.

But yes, you could overlap to some degree in the evolution of complex machines; there'd be significant pressure for B once A was 50% frequent. I don't know off the top of my head how to calculate time to 50% frequency.
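(Two quick illustrative calculations on those points - the independence arithmetic for the "99% frequency" example, and, as an assumption of this sketch rather than anything endorsed in the thread, the standard deterministic shortcut for time to 50% frequency.)

```python
import math

# If each of the machine's genes is independently at 99% frequency:
print(0.99 ** 100)    # ~0.37: a 100-gene machine is fully intact in about a third of individuals
print(0.99 ** 1000)   # ~4e-5: a 1000-gene machine essentially never is

# One textbook shortcut for "time to 50%": treat selection deterministically,
# dp/dt = s*p*(1-p), which integrates to the expression below.
def generations_to_reach(s, p_start, p_target):
    return (1 / s) * math.log(p_target * (1 - p_start) / (p_start * (1 - p_target)))

# Starting from one copy in 100,000 with s = 0.03 (the post's example):
print(generations_to_reach(0.03, 1 / 100_000, 0.5))   # ~384, roughly half the 768-generation fixation time
```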

Probability of fixation = 2s

there are assumptions that go into this calculation, of course. the most important ones are that the mutation is dominant and that the population size is constant. if the population size is decreasing, the fixation probability goes down, and vice versa. deviation from dominance as well decreases the fixation probability.

This should not be surprising; genes have to do all their own work of spreading. There's no Evolution Fairy who can watch the gene pool and say, "Hm, that gene seems to be spreading rapidly - I should distribute it to everyone." In a human market economy, someone who is legitimately getting 20% returns on investment - especially if there's an obvious, clear mechanism behind it - can rapidly acquire more capital from other investors; and others will start duplicate enterprises. Genes have to spread without stock markets or banks or imitators - as if Henry Ford had to make one car, sell it, buy the parts for 1.01 more cars (on average), sell those cars, and keep doing this until he was up to a million cars.

Fantastic paragraph in a really interesting piece.

Then other evolutions don't imitate it. If snake evolution develops an amazing new venom, it doesn't help fox evolution or lion evolution.

The only nitpick would be the possible spread of genes through horizontal gene transfer, but in mammals that seems like it would be trivial in most any sense of the word.

It's a good post, but it didn't get into the effect of the millions/billions of experiments (organisms) going on simultaneously. So while it might take a long time for a particular improvement to get fixed, if there is selection pressure, something will change to meet it. There are lots of simultaneous pathways to the end (reproduction).

While I stand by my first comment, in the interests of appropriate uncertainty I couldn't resist submitting for thought this quote from one of Eliezer's favorite papers, The Psychological Foundations of Culture:

"The fact that evolution is not a process that works by "intelligence" cuts both ways, however. Precisely because modifications are randomly generated, adaptive design solutions are not precluded by the finite intelligence of any engineer. Consequently, evolution can contrive subtle solutions that only a superhuman, omniscient engineer could have intentionally designed."

Yes, and Tooby and Cosmides ended up on the wrong side of the argument with Tversky and Kahneman about to what degree biases are contextually adaptive rather than reflective of computational limits.

I've noticed that none of my heroes, not even Douglas Hofstadter or Eric Drexler, seem to live up to my standard of perfection. Always sooner or later they fall short. It's annoying, you know, because it means I have to do it.

Yes, and Tooby and Cosmides ended up on the wrong side of the argument with Tversky and Kahneman about to what degree biases are contextually adaptive rather than reflective of computational limits.

That seems reasonable.

In your opinion, are they right about to what degree human intelligence is dominated by domain-specific modules, and that that's a consequence of combinatorial explosion and the frame problem? Since reading it yesterday some of my barely conscious assumptions about intelligence have evaporated and I've started seeing words like "intelligence" and "learning" to be acting as curiosity-stoppers in many contexts. Thanks.

In your opinion, are they right about to what degree human intelligence is dominated by domain-specific modules, and that that's a consequence of combinatorial explosion and the frame problem?

They're certainly righter than the Standard Social Sciences Model they criticize, but swung the pendulum slightly too far in the new direction. Human beings are capable of learning a tremendously wider range of nonancestral tasks than chimpanzees, and precisely due to the combinatorial explosion, this cannot be itself explained by postulating any amount of domain-specific modularity. The brain is modular, and some of these modules are certainly domain-specific, but the key modularity is the orthogonalization of intelligence into architectural components like memory and category formation, not domain-specific procedures. The heart is not a specialized organ for running down prey, it's a specialized organ for pumping blood.

In a sense, my paper "Levels of Organization in General Intelligence" can be seen as a reply to Tooby and Cosmides on this issue; though not, in retrospect, a complete one.

...we only really began accumulating knowledge, around... what, four hundred years ago?

Surely longer than that...what am I missing?

Wonderful essay, but when you write, "There's no Evolution Fairy who can watch the gene pool and say, "Hm, that gene seems to be spreading rapidly - I should distribute it to everyone", aren't you leaving out the classic evolution fairy, Cupid? Mate attractiveness is not usually random, and Cupid does seem to usually use some kind of fitness testing. I grant fully that a potential mate may not recognize the newly evolved trait as an advantage, but potential mates usually can perceive secondary indicators such as better health, higher group status, more active pursuit of mating, or a Mercedes.

Or am I missing something?

Wonderful essay, but when you write, "There's no Evolution Fairy who can watch the gene pool and say, "Hm, that gene seems to be spreading rapidly - I should distribute it to everyone", aren't you leaving out the classic evolution fairy, Cupid?

What Eliezer is saying here is that biological adaptations cannot get fixed in a population by "becoming common knowledge" and being universally adopted, the way innovations in the business and engineering world spread. Even an artificial breeding program has to work within these constraints: there's a limit to how much reproductive skew you can supply.

Depends on how you think about and define a 1% advantage. You are using the biological definition, which is that having the gene gives you 1% more offspring on average. If, however, my genes make me 1% faster than everyone else, that is a 100% advantage in winning the race, which can lead to a large advantage in reproductive success. In this way a gene that generates a minor performance advantage can spread rather quickly.

IMO, "evolution" is singular for good reason - the ever-deepening mutual symbiosis of the biosphere. Species evolution simply does not take place in isolation. Anyway, "evolutions" seems to me to serve little purpose - except for making biologists splutter indignantly.

I cover the Gene A / Gene B business in: http://alife.co.uk/essays/species_unity/

...and the evolution is stupid business in: http://alife.co.uk/essays/evolution_sees/

for a human to look to natural selection as inspiration on the art of design, is like a sophisticated modern bacterium trying to imitate the first awkward replicator's biochemistry ...

Indeed evolution is a weak, slow, dumb process, but the key point is that it has operated over a staggering time frame and thus it has in fact created an extraordinary number of design brilliancies, many of which exceed their technological counterparts. Quick examples are low-power flight and the human brain. Although it's important to note that evolution operates using simple repetition of trial-and-error routines, moving away from failure and not towards success, I'd hardly discard the insights from that process in favor of non-evolutionarily derived activities.

Great post once again. There is one thing I wonder about: AFAIK there is in our DNA a huge amount (my biology teacher in high school said 90%; I didn't double-check his figure) of code that is not "active" - genes that are not used by the body to synthesize proteins. To my programmer mind they look like C source code that was disabled at compile time (say with a #ifdef SYMBOL ... #endif block, with SYMBOL not defined at compile time).

Would that somehow mitigate the fact that if gene B requires gene A, then gene B would disappear unless gene A is spread? Doesn't that allow some genes to sit there dormant, to be activated (by a later mutation) and given a new try later on, either when some other gene is present or when the environment changes? Or have I misunderstood that part about "non-active DNA"?


2s

TYPE ERROR. Consider fitness=.75.

Why has no one pointed this out? Am I missing something?

The formula is an approximation which is accurate for small values of s. Which is the domain we care about, since you don't get huge fitness gains from a single random mutation.


What transformation would make the formula correct? Like does it actually output odds? Or is it one of those convenient linearizations that melts down if you go too far?

Is there a formula for the approximate error?

I haven't found the full text of the paper it was derived in, but the discussion I did find says that it's a matter of approximating assumptions that were necessary to make the analysis tractable in the first place (to someone without a computer, since it was 1927), not a summary of a more complex closed-form solution. So yes, convenient linearizations. The more general case has probably been analyzed since then, but I wouldn't know where to look.
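(For what it's worth, the fuller expression usually quoted for this - a diffusion-theory result generally attributed to Kimura - stays between 0 and 1 for any s; the sketch below is offered as an aside, not as something asserted in the thread.)

```python
import math

def p_fix_linearized(s):
    """The 2s rule of thumb discussed above."""
    return 2 * s

def p_fix_diffusion(s, N):
    """Diffusion-approximation fixation probability for a single new mutant
    (the form usually attributed to Kimura); reduces to ~2s for small s and large N."""
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

for s in (0.001, 0.03, 0.5, 0.75):
    print(s, p_fix_linearized(s), p_fix_diffusion(s, 1_000_000))
# Note how the linearized value passes 1.0 for s >= 0.5 while the fuller formula does not.
```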

Generations to fixation = 2 ln(N) / s
Probability of fixation = 2s

Are there domain limits for 's' in those equations?

It seems to me that if you have an s that is negative, you get a nonsensical answer for both equations. Though I suppose that equation one's answer can be interpreted to mean that it would take -[Generations to fixation] generations for a bad, but ubiquitous mutation to get winnowed down to one individual.

But with the second equation, a negative s leads to a less than zero chance of fixation and an s higher than 50% leads to a greater than 100% chance of fixation.


"we got guns, we got knives, we got pointy sticks; we got rockets,"

From now on I'm going to read everything EY writes in the voice of Private Hudson from Aliens.

[This comment is no longer endorsed by its author]

A gene conveying a 3% fitness advantage, spreading through a population of 100,000, would require an average of 768 generations to reach universality in the gene pool.

Generations to fixation = 2 ln(N) / s = 2 ln(100000)/1.03 = 22.36 != 768

I'm confused.


(1+s), not s, is the fitness; s=0.03, not 1.03.

Thank you.


Mutations conveying fitness advantage are tricky things in that there's always a trade-off between resources to build self and resources to reproduce (r/K strategies), and before you know what is the specific species' strategy, you can't decide if a (possible) 3% increase in offspring is going to change anything. (How do you measure it, anyway? Increased speed of replication? Increased probability of offspring survival to adulthood? It should be different for, say, always-free-living things and adults-forming-colonies things.)

Do you apply your model to asexually reproducing organisms, too? To parthenogenetic reproduction?

It is also unclear where you say 'gene' and mean 'a new gene', and not 'a different allele of the same gene' (which seems to be easier to do - insert a nucleotide here or cut it there, etc.)

Mutations can be somatic, and only part of the offspring will get them (if pieces of the 'mother' fall off and regrow).

And a gene (allele?) that goes on its merry way through 768 generations might get to be wide-spread through pure accident, if the organisms are sufficiently 'complex' and 'large' - it would take a lot of time, and populations don't usually stay the same size that long.

Also, I don't see why there is any reason to consider 'evolution' an optimizer rather than a simple diversifier. Everything that is not prohibited by thermodynamics gets a shot.

Humans can do things that evolutions probably can't do period over the expected lifetime of the universe.

This does beg the question, How, then, did an evolutionary process produce something so much more efficient than itself?

(And if we are products of evolutionary processes, then all our actions are basically facets of evolution, so isn't that sentence self-contradictory?)

The evolutionary process produced humans, and humans can create certain things that evolution wouldn't have been able to produce without producing something like humans to indirectly produce those things. Your question is no more interesting than, "How could humans have built machines so much faster at arithmetic than themselves?" Well, humans can build calculators. That they can't be the calculators that they create doesn't demand an unusual explanation.

Well, humans can build calculators. That they can't be the calculators that they create doesn't demand an unusual explanation.

Yes, but don't these articles emphasise how evolution doesn't do miracles, doesn't get everything right at once, and takes a very long time to do anything awesome? The fact that humans can do so much more than the normal evolutionary processes can marks us as a rather significant anomaly.

Not really. Birds can fly better than evolution can. As far as intelligence goes, we're far from the only animals who can make tools. Since this typically takes less than a year, they're already faster than most versions of the mindless process called evolution.

'I say "evolutions", plural, because fox evolution works at cross-purposes to rabbit evolution, and neither can talk to snake evolution to learn how to build venomous fangs.'

Interestingly, as we're getting better at analyzing genomes, we're discovering that this isn't strictly true.  Rabbit and fox cross-pollinating with snake would be a bit of a stretch maybe, but there are actually a number of what we once thought to be entirely separate lines of evolution which genetic testing has revealed to be true-breeding hybrids between a set of nearby species.

Also, it's looking like viruses can play a fairly substantial role in picking up genes from one species and ferrying them around to others.

Of course, all of that will pale in comparison to genetic engineering once we finish sorting that out.

"Science has a very exact idea of the capabilities of evolution."

Very exact? That's a pretty bold claim. And what do we mean by capabilities? Are we referring to biological evolution on Earth or the algorithm of evolution in a broader sense?

How fast mutations spread doesn't apply to the latter. That is a tunable parameter in artificial evolution (which usually centers around not letting good mutations monopolize the population too quickly). The same can be said about what mutations occur (they don't have to be random in an artificial environment) or how "breeding" works. These and other parameters are biological accidents on Earth, not part of the core algorithm.

If you want to say "it's inefficient to mimic nature" then you should base that on the performance of humans who mimic nature not on nature itself, those are two different things. Evolutionary algorithms often work well in practice (including in situations where there is not an alternative that comes close), hence the more relevant evidence does not corroborate your hypothesis.