The first practical steam engine was built by Thomas Newcomen in 1712. It was used to pump water out of mines.

“Old Bess,” London Science Museum. Photo by the author

An astute observer might have looked at this and said: “It’s clear where this is going. The engine will power everything: factories, ships, carriages. Horses will become obsolete!”

This person would have been right—but they might have been surprised to find, two hundred years later, that we were still using horses to plow fields.

Sacaton Indian Reservation, early 1900s. Library of Congress

In fact, it took about a hundred years for engines to be used for transportation, in steamships and locomotives, both invented in the early 1800s. It took more than fifty years just for engines to be widely used in factories.

What happened? Many factors, including:

  • The capabilities of the engines needed to be improved. The Newcomen engine created reciprocal (back-and-forth) motion, which was good for pumping but not for turning (e.g., grindstones or sawmills). In fact, in the early days, the best way to use a steam engine to run a factory was to have it pump water upstream in order to add flow to a water wheel! Improvements from inventors like James Watt allowed steam engines to generate smooth rotary motion.
  • Efficiency was low. Newcomen engines used an enormous amount of still-relatively-expensive energy for the work they generated, so they could be used profitably only where energy was cheap (e.g., at coal mines!) and where the work was high-value. Watt engines were much more efficient, owing mainly to the separate condenser. Later engines improved efficiency even more.
  • Steam engines were heavy. The first engines were therefore stationary; a Newcomen engine might be housed in a small shed. Even Watt’s engine was too heavy for a locomotive. High-pressure technology was needed to shrink the engine to the point where it could propel itself on a vehicle.
  • Better fuels were needed. Steam engines consumed dirty coal, which belched black smoke, often full of nasty contaminants like sulfur. Coal is a solid fuel, meaning it has to be transported in bins and shoveled into the firebox. In the late 1800s, more than 150 years after Newcomen, the oil industry began, creating a refined liquid fuel that could be pumped instead of shoveled and that gave off much less pollution.
  • Ultimately, a fundamental platform shift was required. Steam engines never became light enough for widespread adoption on farms, where heavy machinery would damage the soil. The powered farm tractor only took off with the invention of the internal combustion engine in the early 20th century, which had a superior power-to-weight ratio.

Not only did the transition take a long time, it produced counterintuitive effects. At first, the use of draft horses did not decline: it increased. Railroads provided long-haul transportation, but not the last mile to farms and houses, so while they substituted for some uses of horses, they were complementary to many others. An agricultural census from 1860 commented on the “extraordinary increase in the number of horses,” noting that paradoxically “railroads tend to increase their number and value.” A similar story has been told about how computers, at first, increased the demand for paper.

Engines are not the only case of a relatively slow transition. Electric motors, for instance, were invented in the late 1800s, but didn’t transform factory production until about fifty years later. Part of the reason was that to take advantage of electricity, you can’t just substitute a big central electric motor in place of a steam or gas engine. Instead, you need to redesign the entire factory and all the equipment in it to use a decentralized set of motors, one powering each machine. Then you need to take advantage of that to change the factory layout: instead of lining up machines along a central power shaft as in the old system, you can now reorganize them for efficiency according to the flow of materials and work.

All of these transitions may have been inevitable, given the laws of physics and economics, but they took decades or centuries from the first practical invention to fully obsoleting older technologies. The initial models have to be improved in power, efficiency, and reliability; they start out suitable for some use cases and only later are adapted to others; they force entire systems to be redesigned to accommodate them.

At Progress Conference 2024 last weekend, Tyler Cowen and Dwarkesh Patel discussed AI timelines, and Tyler seemed to think that AI would eventually lead to large gains in productivity and growth, but that it would take longer than most people in AI are anticipating, with only modest gains in the next few years. The history of other transitions makes me think he is right. I think we already see the pattern fitting: AI is great for some use cases (coding assistant, image generator) and not yet suitable for others, especially where reliability is critical. It is still being adapted to reference external data sources or to use tools such as the browser. It still has little memory and scant ability to plan or to fact-check. All of these things will come with time, and most if not all of them are being actively worked on, but they will make the transition gradual and “jagged.” As Dario Amodei suggested recently, AI will be limited by physical reality, the need for data, the intrinsic complexity of certain problems, and social constraints. Not everything has the same “marginal returns to intelligence.”

I expect AI to drive a lot of growth. I even believe in the possibility of it inaugurating the next era of humanity, an “intelligence age” to follow the stone age, agricultural age, and industrial age. Economic growth in the stone age was measured in basis points; in the agricultural age, fractions of a percent; in the industrial age, single-digit percentage points—so sustained double-digit growth in the intelligence age seems not-crazy. But also, all of those transitions took a long time. True, they were faster each time, following the general pattern that progress accelerates. But agriculture took thousands of years to spread, and industry (as described above) took centuries. My guess is the intelligence transition will take decades.
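To make those growth regimes concrete, here is a rough sketch of how doubling times shrink across eras. The per-era rates are illustrative assumptions chosen to match the text's "basis points / fractions of a percent / single digits / double digits" framing, not historical measurements:

```python
import math

def doubling_time(growth_rate):
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

# Illustrative per-era growth rates (assumptions for the sake of the
# arithmetic, not measured data):
eras = [
    ("stone age", 0.0010),          # ~10 basis points
    ("agricultural age", 0.0030),   # fractions of a percent
    ("industrial age", 0.0300),     # single digits
    ("intelligence age?", 0.2000),  # hypothetical double digits
]
for name, g in eras:
    print(f"{name:>18}: doubles every {doubling_time(g):,.0f} years")
```

At these rates the doubling time falls from roughly seven centuries to a few years, which is the sense in which each transition is faster than the last even though each still takes many doublings to play out.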

16 comments

Seconding quetzal_rainbow’s comment. Another way to put it is:

  • If your reference class is “integrating a new technology into the economy”, then you’d expect AI integration to unfold over decades.
  • …But if your reference class is “integrating a new immigrant human into the economy—a human who is already generally educated, acculturated, entrepreneurial, etc.”, then you’d expect AI integration to unfold over years, months, even weeks. There’s still on-the-job training and so on, for sure, but we expect the immigrant human to take the initiative to figure out for themselves where the opportunities are and how to exploit them.

We don’t have AI that can do the latter yet, and I for one think that we’re still a paradigm-shift away from it. But I do expect the development of such AI to look like “people find a new type of learning algorithm” as opposed to “many many people find many many new algorithms for different niches”. After all, again, think of humans. Evolution did not design farmer-humans, and separately design truck-driver-humans, and separately design architect-humans, etc. Instead, evolution designed one human brain, and damn, look at all the different things that that one algorithm can figure out how to do (over time and in collaboration with many other instantiations of the same algorithm etc.).

How soon can we expect this new paradigm-shifting type of learning algorithm? I don’t know. But paradigm shifts in AI can be frighteningly fast. Like, go back a mere 12 years, and the entirety of deep learning was a backwater. See my tweet here for more fun examples.

gwern:

Maybe a better framing would be the economic perspective from Hanson's growth paper: "is AI a complement or is it a substitute?" Does AI assist a human worker (or a human organization), making them more productive, functioning as simply a kind of tool (or 'capital') which multiplies their labor; or does it replace that human worker/organization? When it's the former, it may indeed take a very long time; but the latter can happen instantly.

No one can force a freelance artist to learn to use Photoshop or how best to use some snazzy new feature, and artists will be learning the ins and outs of their new technologies and workflows for many decades to come, slowly becoming more productive as digital illustration tools complement their labor. Their employers, on the other hand, can replace them potentially within minutes of the next big Midjourney upgrade.*

More historically, in colonization, a group of settlers may simply arrive literally overnight in their wagons and set up a new town (eg. a gold rush boomtown), and begin replacing the local indigenous peoples, without any sort of centuries-long gradual '+2% local per capita GDP growth per year until convergence' using only the original local indigenous people's descendants.

* A personal example: when I wanted fancier dropcaps for Gwern.net, I was contacting human artists, trying to figure out how much it would cost and what the workflow was, how many thousands of dollars & months of back-and-forth a good dropcap set might require, and whether I would have to settle instead for something like 1 custom dropcap per essay. When Midjourney became reasonably adequate at v5 & DALL-E at 3, I didn't spend decades working with artists to integrate AI into their workflow and complement their labor... I substituted AI for artists: stopped my attempt to use them that night, and never looked back. When I made 10 dropcaps for this year's Halloween theme (the 'purple cats' got particularly good feedback because they're adorable), that was something I could never have done with humans: it would be colossally expensive and enormously time-consuming just for a special holiday mode that is visible a few hours out of the year. At this point, I'm not sure how many artists or font designers I would want to use even if they were free, because going without them means I don't have to deal with folks like Dave, or have one of my projects delayed or killed by artists, or handle the hassle of all the paperwork and payments, and I get other benefits like extremely rapid iteration & exploration of hundreds of possibilities without wearing out anyone's patience.

IMO, a lot of basic cruxes for differing views on the impact of AI in the 21st century ultimately depend on the question "Can AI be a substitute for the majority of economically relevant tasks a human does, and then become a substitute for any new industry?"

If the answer is yes, a lot of the more radical worldviews are on the table. If the answer is no, then I'd probably agree with a lot of the more moderate views on AI impacts.

Indeed, I'd argue that "AI as a substitute for basically all economically relevant human tasks" should replace the AGI notion often thrown around, since it's clearer and provides fewer opportunities for motte-and-baileys and other bad arguments.

Worldwide sentiment is pretty against immigration nowadays. Not that it will happen, but imagine if anti-immigration sentiment could be marshalled into a worldwide ban on AI development and deployment. That would be a strange, strange timeline.

Does the median immigrant ‘integrate into the economy’ to any notable extent in months or weeks?

I can easily imagine someone who already has high rank, reputation, merit, etc., in their home country doing so: say, immigrating, quickly landing a managing director position at JP Morgan Chase, and actually overseeing some important desk within a short timeframe.

But that is the 99.99th+ percentile of immigration.

Most people need to eat something [citation needed] and it's hard to eat if you don't work.

How does this relate to the degree of integration into an economy?

You can eat just fine in any developed country via picking up odd jobs here and there. But clearly a managing director at JP Morgan overseeing an important desk is at a qualitatively different level.

Okay, I don't understand what you mean by "degree of integration". If we lived in a world where an immigrant could have a "high degree of integration" within months, what would we observe?

The difference between AI and all other tech is that with all other tech, the transition was bottlenecked by humans. It was humans who had to make the technology more efficient and integrate it into the economy. With sufficiently advanced agentic AI, you can just ask it to "integrate into economy pls" and it will get the job done. That's why AIs want to be agentic.

Will AI companies solve the problems on the way to robust agency, and if so, how fast? I think the correct answer is "I don't know; nobody knows." Maybe the last breakthrough is brewing right now in the basement of SSI.

Yeah, I think a genuinely large difference between the AI transition and other transitions is that for at least some applications of AI, you can remove the bottleneck of humans needing to integrate new tech which will expand over time, and the corrected conclusion to the post is this is why humans want tool AIs to be autonomous.

That said, I don't think the transition will literally be as fast as "someone finds the secret in a basement at SSI," but yes, this cuts the transition time from decades to months or years (which is both slow and also wildly fast).

Another way to conceive of this is that it takes a certain number of competence-adjusted engineer hours to perform an integration of a novel technology into existing processes.

If AI is able to supply the engineer-hours for its own integration, it seems clear that this would change the wall-clock-time of the integration.

If the first thing AI is integrated into is automating AI R&D, then the AI's competence will rise as an output of the very process being integrated, which further accelerates that process.

The result is dramatic changes over a few months or couple of years.

Also, whether or not AI is integrated into the economy is kind of a side-note if you are facing the possibility of an agent far smarter than any human that has ever lived, and also able to parallelize copies of itself and run at 100s of times human speed. So even discussing integration into the economy as relevant presumes a plateau of capability at approximately human-level. What grounds do we have for expecting that?

I'm going to ignore all "AI is different" arguments for the sake of this comment, even though I agree with some of them. Let's assume I grant all your points. The agricultural revolution took a couple of millennia. The industrial revolution took a couple of centuries. And now, the AI revolution will take decades.

This means I can equivalently restate your conclusion as, "Human activity will lose almost all economic value by the time my newborn niece would have finished grad school." This is certainly slower than many timeline predictions today, but it's hardly "slow" by most standards, and is in fact still faster than the median timelines of most experts as of 5 years ago.

Of course, one of the important facts about these past transitions is that each petered out after bootstrapping civilization far enough to start the next one that's 10x faster. So, if the world in 2047 is 1000x richer and moving at AGI speeds compared to today, then the next 1000x change should take a few years, and the next one after that a few months. This still implies "singularity by 2050." We'd probably have about an extra decade to ensure our survival, though, which I would agree is great.

The problem is that accepting this argument involves ignoring how AI keeps on blitzing past supposed barrier after barrier. At some point, a rational observer needs to be willing to accept that their max likelihood model is wrong and consider other possible ways the world could be instead.

There are also many ways the max likelihood model could be consistent with very rapid near-term change, too.

One is that, like in past transitions, the faster growth isn't an exponential: it gets faster and then eventually peters out, like any s-curve. If you look at the world from 1700 to now, the industrial revolution is a sum of many individual such curves, but even so, the fastest years/decades of global growth were ~50x faster than the slowest. If you shorten 1000x growth down to a couple of decades and assume a similar distribution of growth rates, then it matters a whole lot whether 2024 is year 1, year 5, or what. We could be 7 years into a two-decade transition that began with the transformer architecture, or two decades into a fifty-year transition that started with some other machine-learning advance, and those would be consistent with both the OP and "Things are about to move ridiculously fast."

In other words: sustained faster-than-population economic growth didn't show up in Britain until a century or so after the industrial revolution began, peak global growth came a century or so after that, and in recent years the largest remaining countries have been catching up even faster, even while growth in the UK, US, and EU is slower than past peaks. If this were transitional year 7 of 20, and peak growth in the industrial revolution was 5-10%/yr, and this transition is 10x faster, then it's plausible to expect 1-year economic doubling times in each of several years between now and the early 2030s.
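The arithmetic behind that last step can be checked directly. Taking the comment's own assumptions (peak industrial-revolution growth of 5-10%/yr, and a transition roughly 10x faster, i.e. 50-100%/yr), the implied doubling times are:

```python
import math

def doubling_time(growth_rate):
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

# 5-10%/yr: the comment's figure for peak industrial-revolution growth.
# 50-100%/yr: the same rates scaled 10x, per the comment's assumption.
for g in (0.05, 0.10, 0.50, 1.00):
    print(f"{g:>4.0%}/yr -> doubles every {doubling_time(g):.1f} years")
```

At 100%/yr growth the doubling time is exactly one year, and even at 50%/yr it is under two, which is where the "1-year economic doubling times" figure comes from.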

The OP seems to assume we're in year 1 or so out of 20-50, and that the most significant or fastest changes will happen near the end of that window. I'm not quite sure why I should agree with those assumptions.

Abe:

This argument seems to be one by analogy: steam engine : industrial revolution :: ??? : machine learning. But as you can see, there's a term in the analogy I don't understand. Is ??? ChatGPT? LLMs? Transformers? AlexNet? The internet? Digital computers? Something that hasn't yet been invented?

I definitely agree. No matter how useful something will end up being, or how simple it seems the transition will be, it always takes a long time because there is always some reason it wasn't already being used, and because everyone has to figure out how to use it even after that.

For instance, maybe it will become a trend to replace dialogue in videogames with specially trained LLMs (on a per character basis, or just trained to keep the characters properly separate). We could obviously do it right now, but what is the likelihood of any major trend toward that in even five years? It seems pretty unlikely. Fifteen? Maybe. Fifty? Probably a successor technology trying to replace them. (I obviously think AI in general will go far slower than its biggest fans / worriers think.)