Hello Zvi,
I don't agree with you on every point, but I find your writing extremely high-quality and informative. Keep up the great work!
AutoGPT is an excellent demonstration of the point. If you had asked someone on this forum five years ago whether AGI might be a series of next-token predictors strung together, with modular cognition occurring in English, they would have called you insane.
Yet if that is how we get something close to AGI, it seems like a best-case scenario, since interpretability is solved by default and you can measure alignment progress very easily.
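For concreteness, here's a minimal sketch of the pattern I mean (not AutoGPT's actual code, and `call_model` is a hypothetical stand-in for whatever LLM API you use): a bare next-token predictor called in a loop, with the agent's entire intermediate state kept as plain English strings.

```python
def call_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> None:
    memory: list[str] = []  # the agent's entire state, as English text
    for step in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            "Previous thoughts:\n" + "\n".join(memory) + "\n"
            "What should you do next? Answer in one sentence, "
            "or say DONE if the goal is complete."
        )
        thought = call_model(prompt)
        # Because the chain of reasoning is literal English text,
        # logging and reading it is the interpretability story:
        # there is no opaque internal state to probe.
        print(f"[step {step}] {thought}")
        if "DONE" in thought:
            break
        memory.append(thought)
```

If something like this is the shape of early AGI, "inspect the transcript" gets you most of what interpretability research is trying to buy the hard way.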
Reality is weird in very unexpected ways.
Your model assumes a lot about the nature of AGI. Sure, if you jump directly to “we’ve created coherent, agential, strategic strong AGI, what happens now?” you end up with a lot of default failure modes. But the cruxes of disagreement are what AGI actually looks like in practice and what the circumstances around its creation are.
Is it agential? Does it have strategic planning capabilities that it tries to act on in the real world? Current systems don’t look like this.
Is it coherent? Even if it has the capability to strategically plan, is it able to act on those plans coherently?
“AI regulation will make the problem worse” seems like a very strong statement that is unsupported by your argument. Even in your scenario where large training runs are licensed, this will make training runs more expensive and generally slow things down, particularly if it prevents smaller AI companies from pushing the frontier of research.
To take your example of GDPR, the draft version of the EU's AI Act seems so convoluted that complying with it will cost companies a lot of money and make investing in small AI startups riskier.