This is basically a version of Tyler Cowen's argument (summarized here: https://marginalrevolution.com/marginalrevolution/2025/02/why-i-think-ai-take-off-is-relatively-slow.html) for why AGI won't change things as quickly as we think: once intelligence is no longer a bottleneck, the other constraints become much more binding. Once programmers are no longer the bottleneck, we'll be bound by the friction that exists elsewhere within companies.
Yes, except that as soon as AI can replace those other sources of friction, we'll have a fairly explosive takeoff. He thinks these sources of friction will persist indefinitely; I think they are barriers only for now. The engine for radical takeoff won't be traditional processes adopting the models in individual roles — it will be new business models developed to take advantage of the technology.
Much like early TV was just videos of people putting on plays, and it took time for people to realize the potential — but once they did, they didn't make plays that were better suited to TV, they made something that actually used the medium well. And what using AI well would mean, in the context of business, is cutting out human delays, inputs, and required oversight. Which is worrying for several reasons!