1. Intro

Before vaccine-induced neo-techno-optimism was all the rage, it was fashionable and popular (in some circles) to bemoan our era as one of disturbing technological stagnation. Almost a decade ago, economists like Paul Krugman and Larry Summers started lamenting the secular stagnation in economic growth. More recently, a small cohort has argued pretty convincingly that technological progress - the fundamental driver of growth and improved standards of living - is slowing down.


Just a few days ago, Jason Crawford wrote a nice summary of the evidence. Less rigorous but perhaps more impactful have been the bounty of pithy aphorisms from technologists like Peter Thiel:

  • They promised us flying cars and all we got was 140 characters.
  • You could say that all these gadgets and devices, they dazzle us but they also distract us from the ways in which our larger surroundings are strangely old. So we run cell phones while we’re riding in a 19th-century subway system in New York. San Francisco, the housing stock looks like it’s from the 50s and 60s, it’s mostly quite decrepit and incredibly hard to change these sort of things. So you have bits making progress, atoms are strangely very stuck.

2. The Problem

My purpose here isn’t to debunk the claims that technological progress is slower than it used to be, than it ought to be, or than many think it is. After writing a paper on the topic, I tend to agree with Cowen and Thiel. One thing, though, rubs me the wrong way. Quite often, techno-pessimist papers and articles point to things like a slowdown in Moore’s Law as evidence for technological stagnation.

Don’t take my word for it. From Cowen and Southwood’s paper:

Still, the exact same data used to illustrate Moore’s Law now suggest that Moore’s Law definitely is slowing down. And that is evidence for a scientific slowdown in what arguably has been the world’s single most dynamic sector, namely information processing.

From Are Ideas Getting Harder to Find?:

The number of researchers required today to achieve the famous doubling every two years of the density of computer chips is more than 18 times larger than the number required in the early 1970s.

From Isolated Demands for Rigour in New Optimism:

But wait a minute, Intel is no longer the most Moore’s Law-relevant company. Their 7nm process was delayed to 2022, and they no longer lead the pack.

Instead, TSMC is now one of only two fabs (including Samsung) able to keep up with Moore’s Law. (For what it’s worth, they also manufactured the Apple M1 chip.) This is their R&D data from Bloom, with the last few years added.

What concerns me is that these authors select Moore’s Law for analysis because it is a salient demonstration of extraordinary past technological achievement. In other words, it isn’t merely a random draw from the countless plausible metrics of scientific progress (say, cost of shipping one kilogram from New York to L.A. or the proportion of infants who live to 100). There’s a term for this fallacy: selection bias.

3. Why it’s misleading

If we look at the progress within various scientific fields and industries over time, there will be plenty of variation; in any given decade, some fields will dramatically improve their understanding of the world - perhaps through something like one of Kuhn’s Scientific Revolutions - while others toil away for slow, marginal advances. More dramatically, some disciplines come into existence (computer science), while others fizzle away as their fundamental assumptions are exposed as useless or incorrect (alchemy, astrology).

However, successes and disappointments are unlikely to be equally salient. People remember how their lives were changed by the radio, automobile, or smartphone, but don’t automatically pay attention to the science and technology that aren’t causing much change.

That’s why so much of the techno-pessimist literature has to use thought experiments (“Go into a room and subtract off all of the screens. How do you know you’re not in 1973, but for issues of design?” from Crawford’s “Technological stagnation”) and the like to remind us of all the progress that isn’t happening. People naturally notice Facebook, Uber, AlphaGo, and rapid vaccine production, but have to be explicitly reminded that things like physical infrastructure and transportation are largely no better than they were a few decades ago.

This asymmetry has a few interesting consequences. First, it likely causes us to intuitively overestimate current technological progress before we do the more systematic analyses like those I’ve cited. Second, it means that more rigorous and systematic analyses, which tend to compare the rates of change in various metrics between past and present, are prone to do exactly the opposite.

Here’s Why

Any comparison of current to past technological progress is likely to select the most salient, well-known indicators of scientific and technological progress. For the reason I just described, these indicators are likely to be associated with the most successful, rapidly-advancing fields and industries.

There’s no better example than Moore’s Law. In the last 50 years or so, the (literally) exponential rise in transistor density has enabled computation to transition from an academic novelty to a core component of virtually every facet of modern life.

The timing here isn’t coincidental; the most successful and influential academics and entrepreneurs, like Cowen (59) and Thiel (53), grew up while Moore’s Law - and all the progress it represents - was in full effect.

Of course, this sort of selection bias sets you up for disappointment thanks to regression to the mean. Whether we’re talking about NFL teams, mutual funds, or scientific fields, the most successful units today are probably going to look a little more average tomorrow.
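To see why selecting on past success almost guarantees apparent decline, here is a toy simulation (all numbers made up for illustration). Each hypothetical "field" has a stable underlying growth rate plus period-to-period noise; if we pick the fields with the best *observed* past growth - the Moore's Laws of their day - their average growth in the next period is reliably lower, even though nothing about the underlying rates changed:

```python
import random

random.seed(0)

# Each field has a stable true growth rate (% per year) plus noise
# in what we actually observe each period.
n_fields = 1000
true_rates = [random.gauss(2.0, 1.0) for _ in range(n_fields)]
past = [r + random.gauss(0.0, 2.0) for r in true_rates]    # observed, period 1
future = [r + random.gauss(0.0, 2.0) for r in true_rates]  # observed, period 2

# Select the top 5% of fields by observed past growth.
ranked = sorted(range(n_fields), key=lambda i: past[i], reverse=True)
top = ranked[: n_fields // 20]

past_avg = sum(past[i] for i in top) / len(top)
future_avg = sum(future[i] for i in top) / len(top)
print(f"top fields, past growth:    {past_avg:.2f}")
print(f"same fields, future growth: {future_avg:.2f}")
```

The selected fields' future growth falls well below their past growth - not because progress stalled, but because part of what got them selected was luck that doesn't repeat.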

The cost of solar power has been plummeting recently, making it cost-competitive with fossil fuels. This trend, though, is not representative of other forms of energy - including climate-friendly ones like wind and nuclear.

I’m not sure whether this trend has a name yet, but let’s call it Noore’s Law. In 40 years, the next generation of economists might ruefully point to the flatlining cost of solar energy as evidence for technological stagnation. It’s not that this is incorrect - it’s that failing to consider it in the context of other, more normal metrics (like the costs of wind and nuclear energy) assigns Noore’s Law undue importance.

4. Not Just Transistors

I’ve used Moore’s Law to illustrate my point so far because it is such a perfect example of a metric pre-selected to show technological slowdown. Tons of authors refer to it, its timing is perfect, and it clearly represents an especially impactful and successful past industry.

That said, there are plenty of other examples. Both “Is the rate of scientific progress slowing down?” and “Are Ideas Getting Harder to Find?”, for instance, find that crop-yield growth rates (such as bushels of wheat per acre) are declining, even though the number of researchers involved is growing.

Agricultural productivity has been a central concern of human civilization for thousands of years, so this does seem a less arbitrary choice than Moore’s Law. Nonetheless, it seems very likely to me that such a metric appears to be a good measure of scientific progress precisely because crop yields have increased so dramatically over the last few hundred years.

I doubt this is nefarious or intentional distortion. Instead, we simply think of those things which have dramatically increased in the recent past as the kinds of things that are supposed to keep increasing.

And an exception to the rule

Let’s take another example that, while indeed selected for past success in this way, is nonetheless evidence for a technological resurgence: vaccines.

From Noah Smith’s article:

Vaccines are perhaps the most dramatic medical success of recent history. Diseases that used to kill thousands of people have become a thing of the past, all thanks to an extremely cheap and widely-available technology. So, rates of communicable disease, direct rates of vaccination, or a more qualitative assessment of vaccine development speed/impressiveness/quality are all pretty salient examples of past technological achievement.

Ex ante, we should expect analyzing these metrics over time to be biased toward showing technological slowdown. However, regression to the mean is a tendency, not a law. Sometimes, data points that are already far from the mean at time t get even further from it at t+1.

Vaccines are a case in point. Matt Yglesias, in “Some optimism about America's Covid response,” summarizes the neo-techno-optimist take:

And what’s particularly great about these new vaccines is they’re the fastest we’ve ever seen developed (the previous record was four to five years), and they’re based on a whole new kind of vaccine technology. So we’re not only getting new vaccines but new ways to make vaccines, which is really cool.

I’m nowhere near knowledgeable enough to assess whether the recent COVID vaccines indeed represent significant progress, but I’ll take Yglesias at his word. Even though vaccination seems to show evidence in the direction opposite that of Moore’s Law and agricultural productivity, all three are selected for analysis (and thus biased) because they show past technological progress.

5. The mandatory concession paragraph

None of this is to say that we shouldn’t look at things like Moore’s Law. These metrics are misleading when decontextualized for precisely the same reason they have been, and continue to be, important for society. Likewise, using them as evidence weakens but does not negate one’s argument. In fact, I tend to agree with the techno-pessimists that tech progress is slower than it was in the 19th and 20th centuries.

Nonetheless, a good evaluation of the state of science and tech needs to account for the selection bias I’ve described in order to draw more rigorous, robust, and accurate conclusions about the state of our society.

10 comments

The decline in solar costs is known as Swanson's Law.

Thank you! Should have known someone would have beaten me to it.

I understand your argument that there's a systematic bias from tracking progress on relatively narrow metrics. If progress is uneven across different areas at different times, then the areas that saw progress in the recent past may not be the same areas in which we see progress today.

You don't seem to make any suggestions on what would be a better metric to use. But to me it seems like the simplest solution is just to use broader metrics. For example, instead of tracking the cost of installing solar panels, we could measure the total cost of our electric grid (perhaps including environmental concerns such as carbon emissions as one part of that cost).

Along those lines, the broadest metrics we have are macroeconomic statistics such as GDP per capita. The arguments I've seen for stagnation (mostly from Jason Crawford or Tyler Cowen) already use the recent observed slowdown in GDP growth extensively.

If we see the same trend across most areas and most levels of metrics (both narrow, specific use cases and overall summary statistics) - isn't that strong evidence in favor of the stagnation hypothesis?

Or do you think there are no reliable metrics for measuring progress as a whole?

Basically agree with this suggestion: broader metrics are more likely to be unbiased over time. Even the electric grid example, though, isn't ideal because we can imagine a future point where going from $0.0001 to $0.000000001 per kilowatt-hour, for example, just isn't relevant. 

Total factor productivity and GDP per capita are even better, agreed. 

While a cop-out, my best guess is that a mixture of qualitative historical assessments (for example, asking historians, entrepreneurs, and scientists to rank decades by degree of progress) and using a variety of direct and indirect objective metrics (ex. patent rates, total factor productivity, cost of energy, life expectancy) is the best option. Any single or small group of metrics seems bound to be biased in one way or another. Unfortunately, it's hard to figure out how to weight and compare all of these things. 

While a cop-out, my best guess is that a mixture of qualitative historical assessments (for example, asking historians, entrepreneurs, and scientists to rank decades by degree of progress) and using a variety of direct and indirect objective metrics (ex. patent rates, total factor productivity, cost of energy, life expectancy) is the best option. 

Patent rates aren't an objective measure of innovation. Cutting down on the number of trivial patents might very well mean increased, not decreased, innovation.

I meant objective in the sense that the metric itself is objective, not that it is necessarily a good indicator of innovation. Yes, you're right. I do like Cowen and Southwood's method of only looking at patents registered in all of the U.S., Japan, and E.U.

The subjects making the judgment here seem to be bureaucrats in the patent office. I don't see how that's substantially more objective than historians making judgments.

Fair point, but you'd have to think that the tendencies of the patent officers changed over time in order to foreclose that as a good metric. 

I do think that standards of what is a trivial invention change over time. There are court cases that invalidate certain patents and then patent officers change their patent giving to not give out the kind of patents that are likely to be declared invalid. Laws also change.

They promised us HK-Aerials and all we got were Predators.