Epistemic status: The idea here has likely been articulated before and I simply haven't noticed it, but it might be worth pointing out again.
Foom describes the idea of a rapid AI takeoff caused by an AI's ability to recursively improve itself. Most discussions of Foom assume that each successive iteration of improved models can, in principle, be developed and deployed in a short amount of time. Current LLMs require huge amounts of data and compute to train. Even if GPT-4 or similar models were able to improve their own architecture, they would still need to be trained from scratch using that new architecture. This would take a long time and could not easily be done without people noticing. The most extreme Foom scenarios, in which models advance many generations in under 24 hours, therefore seem unlikely in the current LLM training paradigm.
There could be paths towards Foom with current LLMs that don't require new, improved models to be trained from scratch:
- A model might figure out how to adjust its own weights in a targeted way. This would essentially mean that the model has solved interpretability. It seems unlikely to me that it is possible to get to this point without running a lot of compute-intensive experiments.
- It's conceivable that the recursive self-improvement that leads to Foom doesn't happen at the level of the base LLM, but at a level above it, where multiple copies of a base model are called in a way that results in emergent behavior or agency, similar to what Auto-GPT is trying to do. I think this approach can potentially go a long way, but it might ultimately be limited by how smart the base model is.
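The second path above can be made concrete with a toy sketch of an Auto-GPT-style outer loop: the base model's weights never change, and any added capability lives in the scaffolding that calls it. Everything here is hypothetical illustration, not Auto-GPT's actual design; `call_base_model` is a stub standing in for a real LLM API, and the `DECOMPOSE:` prompt convention is invented for the example:

```python
def call_base_model(prompt: str) -> str:
    """Stub for the frozen base LLM; a real system would call an LLM API here."""
    if prompt.startswith("DECOMPOSE:"):
        # Toy decomposition of a goal into subtasks.
        return "gather facts; draft answer"
    # Toy "result" of executing a subtask.
    return f"done({prompt})"


def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Auto-GPT-style loop: repeated calls to one unchanged base model."""
    memory: list[str] = []
    # Ask the base model itself to break the goal into subtasks.
    subtasks = call_base_model(f"DECOMPOSE: {goal}").split("; ")
    for task in subtasks[:max_steps]:
        # Each subtask is just another call to the same frozen model;
        # the scaffolding adds structure, not new weights.
        memory.append(call_base_model(task))
    return memory
```

The point of the sketch is where the ceiling sits: no matter how elaborate the orchestration loop gets, every step bottoms out in a call to the same fixed base model, which is why the base model's intelligence bounds what this kind of recursion can achieve.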
Insofar as it is necessary to train a new model with hundreds of billions of parameters from scratch in order to make real progress toward AGI, there is an upper limit on how fast recursive self-improvement can proceed.
I upvoted your post because I think this is a useful discussion to have, although I am not inclined to use the same argument, and my position is more conditional. I learned this lesson from the time I played with GPT-3, which seemed to me like a safe pathway toward AGI, but I failed to anticipate how all the guardrails meant to scale back deployment were overrun by other concerns, such as profits. It is like taking a safe pathway and incrementally making it more dangerous over time. In the future, I expect something similar to happen with GPT-4, e.g. people developing hardware to put it directly on a box/device and selling it in stores: not just as a service, but as a tool where the hardware is patented and marketed. For now, it looks like training is the bottleneck for deployment, but I don't expect this to stay that way, as there are many incentives to bring training costs down. Also, I think one should be careful about using flaws of the current architecture as an argument against the path toward self-improvement. There could be a corresponding architecture design that provides a cheaper work-around. The basic problem is that we only see a limited number of options, while the world processes in parallel many more options than are available to any single person.