I like this way of expressing it. Thanks for sharing.
I think it's the same core thing I was pointing at in "We're already in AI takeoff", though it takes the metaphor in the opposite direction. I was arguing that it's right to view memes as alive for the same reason we view trees and cats as alive. Grey seems to be arguing to set the question aside and just look at the function. Same intent, opposite approaches.
I think David Deutsch's article "The Evolution of Culture" is masterful at describing this approach to memetics.
This is an interesting view on AI, but I don't share it: IMO the evolutionary/memetic aspect of AI is way overplayed compared to the other factors that make AI powerful.
A big reason for that is that there will be higher-level bounds on what exactly gets selected for. In particular, one big difference between the code AI runs on and genetic code is error correction: basically all AI code can be error-corrected far more thoroughly than genetic code, whereas genetic code sits in a narrow band of reliability where random mutations are frequent enough to drive evolution, yet not so frequent that organisms outright collapse within seconds or minutes.
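To make that "narrow band" point concrete, here is a toy mutation/selection sketch (my own illustration in Python, unrelated to how any AI system is trained): bit-string genomes are selected toward an arbitrary target while every offspring is mutated per bit. With no mutation, adaptation stalls at the best random starting point; at a small rate, selection climbs; at a high rate, offspring get scrambled faster than selection can retain gains. All parameters are arbitrary.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 50, 100, 200

def fitness(genome):
    # Fraction of bits matching an arbitrary all-ones "target".
    return sum(genome) / GENOME_LEN

def evolve(mutation_rate, rng):
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Truncation selection: the fitter half each leave two mutated offspring.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]
        pop = [
            [1 - bit if rng.random() < mutation_rate else bit for bit in parent]
            for parent in parents
            for _ in range(2)
        ]
    return sum(fitness(g) for g in pop) / POP_SIZE  # mean population fitness

rng = random.Random(0)
for rate in (0.0, 0.005, 0.05, 0.4):
    print(f"per-bit mutation rate {rate:<5} -> mean fitness {evolve(rate, rng):.2f}")
```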
Another reason is that effective AI architectures can't be found through simulated evolution, since that would use far too much compute for training to work (we forget that evolution had, as a lower bound, on the order of 10^46 to 10^48 FLOP available to get to humans).
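For a sense of scale, a quick back-of-the-envelope comparison; the frontier-training-run figure below is my own rough assumed order of magnitude, not a number from the comment or the podcast:

```python
# Back-of-the-envelope only. The 1e46-1e48 FLOP range is the lower bound quoted
# above for evolution getting to humans; 1e25 FLOP for a large modern training
# run is an assumed, illustrative order of magnitude, not a sourced figure.
evolution_flop_low, evolution_flop_high = 1e46, 1e48
assumed_frontier_run_flop = 1e25

low = evolution_flop_low / assumed_frontier_run_flop
high = evolution_flop_high / assumed_frontier_run_flop
print(f"Re-running evolution would take ~{low:.0e} to {high:.0e} times "
      "the compute of such a training run.")
```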
A better analogy is human within-lifetime learning.
I basically agree with Steven Byrnes's case against the evolution analogy, and think that evolutionary analogies are very overplayed in the popular press:
The 'evolutionary pressures' being discussed by CGP Grey are not the direct gradient descent used to train an individual model. Instead, he is referring to the whole set of incentives we as a society put on AI models, similar to memes: there is no gradient descent on memes.
(Apologies if you already understood this, but it seems your post and Steven Byrnes's post focus on the training of individual models.)
Fair enough on the difference between the societal-level incentives on AI models and the selection incentives on individual AI models.
My main current response is that I think those incentives are fairly weak predictors of the variance in outcomes, compared to non-evolutionary forces, at this time.
However, I do think this has interesting consequences for AI governance (since one of the effects is to make societal-level incentives more relevant relative to non-evolutionary forces).
This actually was a new way of thinking about it, or at least of articulating it, for me. Thanks for the link!
In my post "A path to Human Autonomy", I describe AI and bioweapons as being in a special category of self-replicating threats. If autonomous self-replicating nanotech were developed, it would also be in this category.
Humanity has a terrible track record of handling self-replicating agents that we hope to deploy for a specific purpose. For example:
In episode 158 of the Cortex podcast, CGP Grey gives their high-level reason for being worried about AI.
My one-line summary: AI should not be compared to nuclear weapons but to biological weapons or memes, which evolve under whatever implicit evolutionary pressures exist, leading to AIs that are good at surviving and replicating.
The perspective is likely already known to many in the community, but I had not heard it before. Interestingly, there have actually been experiments in which random strings of code were put in an environment where they interact, and self-replicating code appeared. See the Cognitive Revolution podcast episode on 'Computational Life: How Self-Replicators Arise from Randomness', with Google researchers Ettore Randazzo and Luca Versari.
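For a rough sense of what such a setup looks like, here is a minimal Python sketch (my own construction, not the actual experiment): a "soup" of random byte strings, where random pairs are concatenated onto one tape and executed by a tiny self-modifying interpreter with two data heads and copy instructions. The instruction set is invented for illustration, not the language used in the real experiments, and at this toy scale you should mostly expect noise rather than replicators; the real runs are vastly longer.

```python
import random
from collections import Counter

def run(tape, max_steps=500):
    """Execute the tape in place; program and data share the same bytes."""
    n = len(tape)
    ip = h0 = h1 = 0  # instruction pointer and two data heads
    for _ in range(max_steps):
        if ip >= n:
            break
        op = chr(tape[ip])
        if op == '<':   h0 = (h0 - 1) % n
        elif op == '>': h0 = (h0 + 1) % n
        elif op == '{': h1 = (h1 - 1) % n
        elif op == '}': h1 = (h1 + 1) % n
        elif op == '+': tape[h0] = (tape[h0] + 1) % 256
        elif op == '-': tape[h0] = (tape[h0] - 1) % 256
        elif op == '.': tape[h1] = tape[h0]   # copy byte from head0 to head1
        elif op == ',': tape[h0] = tape[h1]   # copy byte from head1 to head0
        elif op == '[' and tape[h0] == 0:     # jump forward past matching ']'
            depth, j = 1, ip
            while depth and j < n - 1:
                j += 1
                depth += {'[': 1, ']': -1}.get(chr(tape[j]), 0)
            ip = j
        elif op == ']' and tape[h0] != 0:     # jump back to matching '['
            depth, j = 1, ip
            while depth and j > 0:
                j -= 1
                depth += {']': 1, '[': -1}.get(chr(tape[j]), 0)
            ip = j
        ip += 1

PROG_LEN, SOUP_SIZE, INTERACTIONS = 64, 128, 20_000
rng = random.Random(0)
soup = [bytearray(rng.randrange(256) for _ in range(PROG_LEN)) for _ in range(SOUP_SIZE)]

for step in range(INTERACTIONS):
    i, j = rng.sample(range(SOUP_SIZE), 2)
    tape = soup[i] + soup[j]          # the pair interacts on a shared tape
    run(tape)
    soup[i], soup[j] = tape[:PROG_LEN], tape[PROG_LEN:]
    if step % 5_000 == 0:
        biggest = Counter(bytes(p) for p in soup).most_common(1)[0][1]
        print(f"interaction {step}: largest cluster of identical programs = {biggest}/{SOUP_SIZE}")
```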
I quote the relevant part of the podcast below, but I recommend listening to it, since the emotion and delivery are impactful. It is from 1:22:00 onwards.