Do you remember the Bill Nye–Ken Ham debate of 2014? I'm relying on my memories from when I first watched it, so apologies if I get something wrong.
I wish someone more talented than me would write something that draws parallels between Kinds vs. Species and Narrow AI vs. General AI, if the comparison is actually accurate and doesn't create confusion.
My impression is that Kinds share a property with Narrow AI: when people talk about how Narrow AI can't do X (presumably due to perceived technical limitations), I'm reminded of Ken Ham saying something along the lines of "Kinds can't evolve X" (presumably for faith-preserving reasons).
We're shown the dog Kind producing new dogs with novel traits over short periods of time, yet still within the same Kind, since those traits remain similar and recognizable to human intuition as having dog origins. We're also shown fruit flies, with their short life cycle, producing generations with accumulating genetic drift over human-observable timescales, yet still belonging to the same Kind. On this view there's a boundary, a line that can't be crossed, where one Kind might beget a different Kind, where an almost-chicken might lay the first chicken egg.
Ken Ham never seems to internalize that small changes over short periods of time predict large changes over long periods of time. I wonder if a similar analogy might be drawn with Narrow AI, which seems to generalize its capabilities as computing power and algorithmic efficiency increase.
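To make the "small changes compound" point concrete, here's a minimal toy model (my own sketch, nothing from the debate, and deliberately simplistic about genetics): a single trait value that drifts by a small random amount each generation. The same process that does almost nothing over ten generations produces large divergence over a million, with no barrier anywhere in between:

```python
import random

def drift(generations: int, step: float = 0.01, seed: int = 0) -> float:
    """Accumulate a tiny random change to a trait value each generation."""
    rng = random.Random(seed)
    trait = 0.0
    for _ in range(generations):
        trait += rng.gauss(0, step)  # small change over a short period
    return trait

print(f"after 10 generations:        {drift(10):+.3f}")         # barely moves
print(f"after 1,000,000 generations: {drift(1_000_000):+.3f}")  # wanders far
```

The expected displacement grows like step × √generations, so small per-generation changes mathematically entail large long-run changes. To get Ham's picture instead, you'd have to add a boundary to the model by hand; the process itself doesn't supply one.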
If it’s fine for me to enter the discussion, it seems to me that:
A very effective narrow AI is one that can solve certain closed-ended problems very effectively but can’t generalise.
Since agents are necessarily limited in the number of factors they can account for in their calculations, open-ended problems are fundamentally closed-ended problems with influxes of more-or-less undetermined data that affect which solutions are viable (so we can’t easily compute, at least initially, how that data will reshape the space of possible actions). But some open-ended problems involve so many possible factors (like ‘solving the economy and increasing growth’) that the space of possible actions a general system (like a human) could conceivably take to solve one of them effectively IS, at the very least, the space of all possible actions a narrow AI needs to consider to solve the problem as effectively as a human would.
At that point, a “narrow AI that can solve an open-ended problem” is at least as general as an average human. And if the number of possible actions it can take keeps increasing, it becomes even more general than the average human.
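Here’s a toy sketch of that argument (my own framing, with made-up names and a stand-in objective, not a serious model of either AI or economics): a closed-ended solver searches a fixed action set, while an open-ended solver runs the same loop over a stream of action batches it didn’t know about up front. Once the stream can contain anything a human planner might consider, the “narrow” solver’s effective search space is no smaller than the human’s:

```python
from typing import Callable, Iterable

def solve_closed(actions: list[str], score: Callable[[str], float]) -> str:
    """Closed-ended problem: the whole action space is known up front."""
    return max(actions, key=score)

def solve_open(batches: Iterable[list[str]], score: Callable[[str], float]) -> str:
    """Open-ended problem: the same search, but undetermined actions keep arriving."""
    best, best_score = None, float("-inf")
    for batch in batches:              # influx of more-or-less undetermined data
        for action in batch:
            if score(action) > best_score:
                best, best_score = action, score(action)
    return best

score = len  # placeholder objective; any scoring function would do here
print(solve_closed(["tweak interest rates", "subsidise chip fabs"], score))
print(solve_open([["tweak interest rates"],
                  ["subsidise chip fabs", "overhaul education policy"]], score))
```

The sketch is just bookkeeping: solve_open is solve_closed with the enumeration moved inside the loop. If the batches eventually span every action a general agent would weigh, calling the system “narrow” describes its history, not its remaining limitations.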
Kinds and species are fundamentally the same thing.