The first practical steam engine was built by Thomas Newcomen in 1712. It was used to pump water out of mines.

“Old Bess,” London Science Museum. Photo by the author.

An astute observer might have looked at this and said: “It’s clear where this is going. The engine will power everything: factories, ships, carriages. Horses will become obsolete!”

This person would have been right—but they might have been surprised to find, two hundred years later, that we were still using horses to plow fields.

Sacaton Indian Reservation, early 1900s. Library of Congress.

In fact, it took about a hundred years for engines to be used for transportation, in steamships and locomotives, both invented in the early 1800s. It took more than fifty years just for engines to be widely used in factories.

What happened? Many factors, including:

  • The capabilities of the engines needed to be improved. The Newcomen engine created reciprocal (back-and-forth) motion, which was good for pumping but not for turning (e.g., grindstones or sawmills). In fact, in the early days, the best way to use a steam engine to run a factory was to have it pump water upstream in order to add flow to a water wheel! Improvements from inventors like James Watt allowed steam engines to generate smooth rotary motion.
  • Efficiency was low. Newcomen engines consumed an enormous amount of still-relatively-expensive energy for the work they produced, so they could only be used profitably where energy was cheap (e.g., at coal mines!) and where the work was high-value. Watt engines were much more efficient, owing mainly to the separate condenser, and later engines improved efficiency even further.
  • Steam engines were heavy. The first engines were therefore stationary; a Newcomen engine might be housed in a small shed. Even Watt’s engine was too heavy for a locomotive. High-pressure technology was needed to shrink the engine to the point where it could propel itself on a vehicle.
  • Better fuels were needed. Steam engines consumed dirty coal, which belched black smoke, often full of nasty contaminants like sulfur. Coal is a solid fuel, meaning it has to be transported in bins and shoveled into the firebox. In the late 1800s, more than 150 years after Newcomen, the oil industry began, creating a refined liquid fuel that could be pumped instead of shoveled and that gave off much less pollution.
  • Ultimately, a fundamental platform shift was required. Steam engines never became light enough for widespread adoption on farms, where heavy machinery would damage the soil. The powered farm tractor only took off in the early 20th century with the adoption of the internal combustion engine, which had a superior power-to-weight ratio.

Not only did the transition take a long time, it produced counterintuitive effects. At first, the use of draft horses did not decline: it increased. Railroads provide long-haul transportation, but not the last mile to farms and houses, so while they substitute for some usage of horses, they are complementary to much of it. An agricultural census from 1860 commented on the “extraordinary increase in the number of horses,” noting that paradoxically “railroads tend to increase their number and value.” A similar story has been told about how computers, at first, increased the demand for paper.

Engines are not the only case of a relatively slow transition. Electric motors, for instance, were invented in the late 1800s, but didn’t transform factory production until about fifty years later. Part of the reason was that to take advantage of electricity, you can’t just substitute a big central electric motor in place of a steam or gas engine. Instead, you need to redesign the entire factory and all the equipment in it to use a decentralized set of motors, one powering each machine. Then you need to take advantage of that to change the factory layout: instead of lining up machines along a central power shaft as in the old system, you can now reorganize them for efficiency according to the flow of materials and work.

All of these transitions may have been inevitable, given the laws of physics and economics, but they took decades or centuries from the first practical invention to fully obsoleting older technologies. The initial models have to be improved in power, efficiency, and reliability; they start out suitable for some use cases and only later are adapted to others; they force entire systems to be redesigned to accommodate them.

At Progress Conference 2024 last weekend, Tyler Cowen and Dwarkesh Patel discussed AI timelines, and Tyler seemed to think that AI would eventually lead to large gains in productivity and growth, but that it would take longer than most people in AI are anticipating, with only modest gains in the next few years. The history of other transitions makes me think he is right. I think we already see the pattern fitting: AI is great for some use cases (coding assistant, image generator) and not yet suitable for others, especially where reliability is critical. It is still being adapted to reference external data sources or to use tools such as the browser. It still has little memory and scant ability to plan or to fact-check. All of these things will come with time, and most if not all of them are being actively worked on, but they will make the transition gradual and “jagged.” As Dario Amodei suggested recently, AI will be limited by physical reality, the need for data, the intrinsic complexity of certain problems, and social constraints. Not everything has the same “marginal returns to intelligence.”

I expect AI to drive a lot of growth. I even believe in the possibility of it inaugurating the next era of humanity, an “intelligence age” to follow the stone age, agricultural age, and industrial age. Economic growth in the stone age was measured in basis points; in the agricultural age, fractions of a percent; in the industrial age, single-digit percentage points—so sustained double-digit growth in the intelligence age seems not-crazy. But also, all of those transitions took a long time. True, they were faster each time, following the general pattern that progress accelerates. But agriculture took thousands of years to spread, and industry (as described above) took centuries. My guess is the intelligence transition will take decades.

6 comments

The difference between AI and all other tech is that with all other tech, the transition work was bottlenecked by humans. It was humans who had to make the technology more efficient and integrate it into the economy. With a sufficiently advanced agentic AI, you can just ask it to "integrate into the economy pls" and it will get the job done. That's why AIs want to be agentic.

Will AI companies solve the problems on the way to robust agency, and if so, how fast? I think the correct answer is "I don't know; nobody knows." Maybe the last breakthrough is brewing right now in the basement of SSI.

Yeah, I think a genuinely large difference between the AI transition and other transitions is that for at least some applications of AI (a set that will expand over time), you can remove the bottleneck of humans needing to integrate the new tech. The corrected conclusion to the post is that this is why humans want tool AIs to be autonomous.

That said, I don't think the transition will literally be as fast as "someone finds the secret in a basement at SSI," but yes, this cuts the transition time from decades to months or years (which is both slow and also wildly fast).

Another way to conceive of this is that it takes a certain number of competence-adjusted engineer-hours to integrate a novel technology into existing processes.

If AI is able to supply the engineer-hours for its own integration, it seems clear that this would change the wall-clock time of the integration.

If the first thing AI is integrated into is automating AI R&D, then the AI's competence will rise as an output of the very process being integrated, which further accelerates the process.

The result is dramatic changes over a few months or a couple of years.
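To make that feedback loop concrete, here is a minimal toy sketch in Python. It is purely illustrative and not from the post or the comments: the engineer-hour budget, the human supply rate, the AI hours-per-unit-of-competence, and the assumption that each completed AI-R&D project doubles competence are all made-up numbers, chosen only to show the shape of the dynamic.

```python
# Toy model (illustrative; all numbers are assumptions, not data):
# an integration project requires a fixed budget of competence-adjusted
# engineer-hours. Humans supply hours at a constant rate; AI supplies hours
# in proportion to its competence. If the first projects are AI R&D itself,
# each completed project raises AI competence, so later projects finish faster.

def months_to_integrate(budget_hours, human_rate, ai_competence, ai_rate=160):
    """Wall-clock months to supply `budget_hours`, given engineer-hours per
    month from humans plus AI (AI hours scale with its competence)."""
    supply_per_month = human_rate + ai_competence * ai_rate
    return budget_hours / supply_per_month

ai_competence = 1.0   # arbitrary starting level
human_rate = 1_000    # engineer-hours per month from humans (made up)
budget = 50_000       # engineer-hours per integration project (made up)

for project in range(1, 6):
    months = months_to_integrate(budget, human_rate, ai_competence)
    print(f"project {project}: {months:5.1f} months at competence {ai_competence:.1f}")
    ai_competence *= 2  # assume each completed AI-R&D project doubles competence
```

Under these assumptions, the same fixed engineer-hour budget gets supplied in less and less wall-clock time, which is the compounding that turns decades into months or years in the comment above.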

Also, whether or not AI is integrated into the economy is kind of a side note if you are facing the possibility of an agent far smarter than any human that has ever lived, one that can also parallelize copies of itself and run at hundreds of times human speed. So even discussing integration into the economy as relevant presumes a plateau of capability at approximately human level. What grounds do we have for expecting that?

I'm going to ignore all "AI is different" arguments for the sake of this comment, even though I agree with some of them. Let's assume I grant all your points. The agricultural revolution took a couple of millennia. The industrial revolution took a couple of centuries. And now, the AI revolution will take decades.

This means I can equivalently restate your conclusion as, "Human activity will lose almost all economic value by the time my newborn niece would have finished grad school." This is certainly slower than many timeline predictions today, but it's hardly "slow" by most standards, and is in fact still faster than the median timelines of most experts as of 5 years ago.

Of course, one of the important facts about these past transitions is that each petered out after bootstrapping civilization far enough to start the next one that's 10x faster. So, if the world in 2047 is 1000x richer and moving at AGI speeds compared to today, then the next 1000x change should take a few years, and the next one after that a few months. This still implies "singularity by 2050." We'd probably have about an extra decade to ensure our survival, though, which I would agree is great.

Seconding quetzal_rainbow’s comment. Another way to put it is:

  • If your reference class is “integrating a new technology into the economy”, then you’d expect AI integration to unfold over decades.
  • …But if your reference class is “integrating a new immigrant human into the economy—a human who is already generally educated, acculturated, entrepreneurial, etc.”, then you’d expect AI integration to unfold over years, months, even weeks. There’s still on-the-job training and so on, for sure, but we expect the immigrant human to take the initiative to figure out for themselves where the opportunities are and how to exploit them.

We don’t have AI that can do the latter yet, and I for one think that we’re still a paradigm-shift away from it. But I do expect the development of such AI to look like “people find a new type of learning algorithm” as opposed to “many many people find many many new algorithms for different niches”. After all, again, think of humans. Evolution did not design farmer-humans, and separately design truck-driver-humans, and separately design architect-humans, etc. Instead, evolution designed one human brain, and damn, look at all the different things that that one algorithm can figure out how to do (over time and in collaboration with many other instantiations of the same algorithm etc.).

How soon can we expect this new paradigm-shifting type of learning algorithm? I don’t know. But paradigm shifts in AI can be frighteningly fast. Like, go back a mere 12 years ago, and the entirety of deep learning was a backwater. See my tweet here for more fun examples.

I definitely agree. No matter how useful something will end up being, or how simple it seems the transition will be, it always takes a long time because there is always some reason it wasn't already being used, and because everyone has to figure out how to use it even after that.

For instance, maybe it will become a trend to replace dialogue in videogames with specially trained LLMs (on a per-character basis, or just trained to keep the characters properly separate). We could obviously do it right now, but what is the likelihood of any major trend toward that in even five years? It seems pretty unlikely. Fifteen? Maybe. Fifty? Probably a successor technology trying to replace them. (I obviously think AI in general will go far slower than its biggest fans/worriers think.)