
This piece starts to make the case that we live in a remarkable century, not just a remarkable era. Previous pieces in this series talked about the strange future that could be ahead of us eventually (maybe 100 years, maybe 100,000).

Summary of this piece:

  • We're used to the world economy growing a few percent per year. This has been the case for many generations.
  • However, this is a very unusual situation. Zooming out to all of history, we see that growth has been accelerating; that it's near its historical high point; and that it's faster than it can be for all that much longer (there aren't enough atoms in the galaxy to sustain this rate of growth for even another 10,000 years).
  • The world can't just keep growing at this rate indefinitely. We should be ready for other possibilities: stagnation (growth slows or ends), explosion (growth accelerates even more, before hitting its limits), and collapse (some disaster levels the economy).

The times we live in are unusual and unstable. We shouldn't be surprised if something wacky happens, like an explosion in economic and scientific progress, leading to technological maturity. In fact, such an explosion would arguably be right on trend.

For as long as any of us can remember, the world economy has grown[1] a few percent per year, on average. Some years see more or less growth than other years, but growth is pretty steady overall.[2] I'll call this the Business As Usual world.

In Business As Usual, the world is constantly changing, and the change is noticeable, but it's not overwhelming or impossible to keep up with. There is a constant stream of new opportunities and new challenges, but if you want to take a few extra years to adapt to them while you mostly do things the way you were doing them before, you can usually (personally) get away with that. In terms of day-to-day life, 2019 was pretty similar to 2018, noticeably but not hugely different from 2010, and hugely but not crazily different from 1980.[3]

If this sounds right to you, and you're used to it, and you picture the future being like this as well, then you live in the Business As Usual headspace. When you think about the past and the future, you're probably thinking about something kind of like this:

Business As Usual

I live in a different headspace, one with a more turbulent past and a more uncertain future. I'll call it the This Can't Go On headspace. Here's my version of the chart:

This Can't Go On[4]

Which chart is the right one? Well, they're using exactly the same historical data - it's just that the Business As Usual chart starts in 1950, whereas This Can't Go On starts all the way back in 5000 BC. "This Can't Go On" is the whole story; "Business As Usual" is a tiny slice of it.

Growing at a few percent a year is what we're all used to. But in full historical context, growing at a few percent a year is crazy. (It's the part where the blue line goes near-vertical.)

This growth has gone on for longer than any of us can remember, but that isn't very long in the scheme of things - just a couple hundred years, out of thousands of years of human civilization. It's a huge acceleration, and it can't go on all that much longer. (I'll flesh out "it can't go on all that much longer" below.)

The first chart suggests regularity and predictability. The second suggests volatility and dramatically different possible futures.

One possible future is stagnation: we'll reach the economy's "maximum size" and growth will essentially stop. We'll all be concerned with how to divide up the resources we have, and the days of a growing pie and a dynamic economy will be over forever.

Another is explosion: growth will accelerate further, to the point where the world economy is doubling every year, or week, or hour. A Duplicator-like technology (such as digital people or, as I’ll discuss in future pieces, advanced AI) could drive growth like this. If this happens, everything will be changing far faster than humans can process it.

Another is collapse: a global catastrophe will bring civilization to its knees, or wipe out humanity entirely, and we'll never reach today's level of growth again.

Or maybe something else will happen.

Why can't this go on?

A good starting point would be this analysis from Overcoming Bias, which I'll give my own version of here:

  • Let's say the world economy is currently getting 2% bigger each year.[5] This implies that the economy would be doubling in size about every 35 years.[6]
  • If this holds up, then 8200 years from now, the economy would be about 3*10^70 times its current size.
  • There are likely fewer than 10^70 atoms in our galaxy,[7] which we would not be able to travel beyond within the 8200-year time frame.[8]
  • So if the economy were 3*10^70 times as big as today's, and could only make use of 10^70 (or fewer) atoms, we'd need to be sustaining multiple economies as big as today's entire world economy per atom. (A quick numeric sketch of this follows below.)
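For concreteness, here's a minimal back-of-envelope version of that calculation (Python). The 2% rate and 8200-year horizon come from the bullets above; the ~10^70 atom figure is the rough bound from footnote 7:

```python
import math

growth_rate = 0.02        # ~2% real growth per year (see footnote 5)
years = 8200              # time horizon used in the argument above
atoms_in_galaxy = 1e70    # rough upper bound on atoms in our galaxy (footnote 7)

doubling_time = math.log(2) / math.log(1 + growth_rate)
growth_factor = (1 + growth_rate) ** years

print(f"Doubling time: ~{doubling_time:.0f} years")                         # ~35 years
print(f"Growth over {years} years: ~{growth_factor:.1e}x")                  # ~3e70
print(f"World-economies per atom: ~{growth_factor / atoms_in_galaxy:.1f}")  # ~3
```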

8200 years might sound like a while, but it's far less time than humans have been around. In fact, it's less time than human (agriculture-based) civilization has been around.

Is it imaginable that we could develop the technology to support multiple equivalents of today's entire civilization, per atom available? Sure - but this would require a radical degree of transformation of our lives and societies, far beyond how much change we've seen over the course of human history to date. And I wouldn't exactly bet that this is how things are going to go over the next several thousand years. (Update: for people who aren't convinced yet, I've expanded on this argument in another post.)

It seems much more likely that we will "run out" of new scientific insights, technological innovations, and resources, and the regime of "getting richer by a few percent a year" will come to an end. After all, this regime is only a couple hundred years old.

(This post does a similar analysis looking at energy rather than economics. It projects that the limits come even sooner. It assumes 2.3% annual growth in energy consumption (less than the historical rate for the USA since the 1600s), and estimates this would use up as much energy as is produced by all the stars in our galaxy within 2500 years.[9])

Explosion and collapse

So one possible future is stagnation: growth gradually slows over time, and we eventually end up in a no-growth economy. But I don't think that's the most likely future.

The chart above doesn't show growth slowing down - it shows it accelerating dramatically. What would we expect if we simply projected that same acceleration forward?

Modeling the Human Trajectory (by Open Philanthropy’s David Roodman) tries to answer exactly this question, by “fitting a curve” to the pattern of past economic growth.[10] Its extrapolation implies infinite growth this century. Infinite growth is a mathematical abstraction, but you could read it as meaning: "We'll see the fastest growth possible before we hit the limits."

In The Duplicator, I summarize a broader discussion of this possibility. The upshot is that a growth explosion could be possible, if we had the technology to “copy” human minds - or something else that fulfills the same effective purpose, such as digital people or advanced enough AI.

In a growth explosion, the annual growth rate could hit 100% (the world economy doubling in size every year) - which could go on for at most ~250 years before we hit the kinds of limits discussed above.[11] Or we could see even faster growth - we might see the world economy double in size every month (which we could sustain for at most 20 years before hitting the limits[12]), or faster.
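A quick check of the arithmetic behind those two scenarios (Python), reusing the ~5.4*10^69 atom estimate from footnote 7:

```python
atoms_in_galaxy = 5.4e69   # rough estimate from footnote 7

# Doubling every year for 250 years:
yearly = 2.0 ** 250
print(f"2^250 = {yearly:.1e}  (~{yearly / atoms_in_galaxy:.0e}x the atom count)")    # ~1.8e75

# Doubling every month for 20 years (240 months):
monthly = 2.0 ** 240
print(f"2^240 = {monthly:.1e}  (~{monthly / atoms_in_galaxy:.0f}x the atom count)")  # ~1.8e72
```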

That would be a wild ride: blindingly fast growth, perhaps driven by AIs producing output beyond what we humans could meaningfully track, quickly approaching the limits of what's possible, at which point growth would have to slow.

In addition to stagnation or explosive growth, there's a third possibility: collapse. A global catastrophe could cut civilization down to a state where it never regains today's level of growth. Human extinction would be an extreme version of such a collapse. This future isn't suggested by the charts, but we know it's possible.

As Toby Ord’s The Precipice argues, asteroids and other "natural" risks don't seem likely to bring this about, but there are a few risks that seem serious and very hard to quantify: climate change, nuclear war (particularly nuclear winter), pandemics (particularly if advances in biology lead to nasty bioweapons), and risks from advanced AI.

With these three possibilities in mind (stagnation, explosion and collapse):

  • We live in one of the (two) fastest-growth centuries in all of history so far. (The 20th and 21st.)
  • It seems likely that this will at least be one of the ~80 fastest-growing centuries of all time.[13]
  • If the right technology comes along and drives explosive growth, it could be the #1 fastest-growing century of all time - by a lot.
  • If things go badly enough, it could be our last century.

So it seems like this is a quite remarkable century, with some chance of being the most remarkable. This is all based on pretty basic observations, not detailed reasoning about AI (which I will get to in future pieces).

Scientific and technological advancement

It’s hard to make a simple chart of how fast science and technology are advancing, the same way we can make a chart for economic growth. But I think that if we could, it would present a broadly similar picture as the economic growth chart.

A fun book I recommend is Asimov's Chronology of Science and Discovery. It goes through the most important inventions and discoveries in human history, in chronological order. The first few entries include "stone tools," "fire," "religion" and "art"; the final pages include "Halley's comet" and "warm superconductivity."

An interesting fact about this book is that 553 out of its 654 pages take place after the year 1500 - even though it starts in the year 4 million BC. I predict other books of this type will show a similar pattern,[14] and I believe there were, in fact, more scientific and technological advances in the last ~500 years than in the previous several million.[15]

In a previous piece, I argued that the most significant events in history seem to be clustered around the time we live in, illustrated with this timeline. That was looking at billions-of-years time frames. If we zoom in to thousands of years, though, we see something similar: the biggest scientific and technological advances are clustered very close in time to now. To illustrate this, here's a timeline focused on transportation and energy (I think I could've picked just about any category and gotten a similar picture).

So as with economic growth, the rate of scientific and technological advancement is extremely fast compared to most of history. As with economic growth, presumably there are limits at some point to how advanced technology can become. And as with economic growth, from here scientific and technological advancement could:

  • Stagnate, as some are concerned is happening.
  • Explode, if some technology were developed that dramatically increased the number of "minds" (people, or digital people, or advanced AIs) pushing forward scientific and technological development.[16]
  • Collapse due to some global catastrophe.

Neglected possibilities

I think there should be some people in the world who inhabit the Business As Usual headspace, thinking about how to make the world better if we basically assume a stable, regular background rate of economic growth for the foreseeable future.

And some people should inhabit the This Can’t Go On headspace, thinking about the ramifications of stagnation, explosion or collapse - and whether our actions could change which of those happens.

But today, it seems like things are far out of balance, with almost all news and analysis living in the Business As Usual headspace.

One metaphor for my headspace is that it feels as though the world is a set of people on a plane blasting down the runway:

We're going much faster than normal, and there isn't enough runway to do this much longer ... and we're accelerating.

And every time I read commentary on what's going on in the world, people are discussing how to arrange your seatbelt as comfortably as possible given that wearing one is part of life, or saying how the best moments in life are sitting with your family and watching the white lines whooshing by, or arguing about whose fault it is that there's a background roar making it hard to hear each other.

If I were in this situation and I didn't know what was next (liftoff), I wouldn't necessarily get it right, but I hope I'd at least be thinking: "This situation seems kind of crazy, and unusual, and temporary. We're either going to speed up even more, or come to a stop, or something else weird is going to happen."

Thanks to María Gutiérrez Rojas for the graphics in this piece, and Ludwig Schubert for an earlier timeline graphic that this piece's timeline graphic is based on.



Comments

GDP growth is measured in money, a measure of value. Value does not have to be backed by a proportional amount of matter (or energy, space or time) because we can value things as much as we like - more than some constant times utilon per gram second.

Suppose I invent an algorithm that solves a hard problem and sell it as a service. The amount people will be willing to pay for it - and the amount the economy grows - is determined by how much people want it and how much money there is, but nobody cares how many new atoms I used to implement it. If I displace older, less efficient algorithms, then I produce value while reducing the number of atoms (or watts) backing the economy!

Material goods and population size can't keep growing forever, but value can. Many recent developments that produced a lot of value, like radio, computing, and the Internet, didn't do it by using proportionally more atoms. An algorithm is a convenient example but this applies to non-digital services just as much.

This is not a novel argument, but I can't recall its source or name.

As a concrete example, let's imagine that sending an email is equivalent to sending a letter. Let's ignore the infrastructure required to send emails (computers, satellites, etc) vs. letters (mail trucks, post offices, etc), and assume they're roughly equal to each other. Then the invention of email eliminated the vast majority of letters, and the atoms they would have been made from.

Couple this with the fact that emails are more durable, searchable, instantaneous, free, legible, compatible with mixed media, and occupy only a minuscule amount of physical real estate in the silicon of the computer, and we can see that emails not only reduce the number of atoms needed to transmit a letter, but also produce a lot more value.

In theory, we might spend the next several thousand years not only finding ways to pack more value into fewer atoms, but also enhancing our ability to derive value from the same good or service. Perhaps in 10,000 years, checking my email will be a genuine pleasure!

In fact, come to think of it, this is the thesis of More from Less by Andrew McAfee, who points out that in numerous categories of material products, we've seen global GDP growing while using fewer material resources, in both relative and absolute terms.

Edit: though see multiple 1-star reviews from non-anonymous Amazon reviewers with economics PhDs who say the core premise of McAfee's book is incorrect. Sounds like there is better research out there than he presents in this book.

An alternative point of view is in Decoupling Debunked, which seems to feed into the degrowth literature. It makes me worry that both McAfee's book and this piece will suffer from the same issues we find when we look for a consensus viewpoint among economists on the effect of the minimum wage.

A more optimistic 2020 peer reviewed article on decoupling, "A systematic review of the evidence on decoupling of GDP, resource use and GHG emissions, part II: synthesizing the insights", claims:

We find that relative decoupling is frequent for material use as well as GHG and CO2 emissions but not for useful exergy, a quality-based measure of energy use. Primary energy can be decoupled from GDP largely to the extent to which the conversion of primary energy to useful exergy is improved. Examples of absolute long-term decoupling are rare, but recently some industrialized countries have decoupled GDP from both production- and, weaklier, consumption-based CO2 emissions.

There's a few one-star Amazon reviews for the book that suggest McAfee's data is incorrect or misleading. Here's a quote from one of them, which seems like a solid counterargument to me:


"However, on the first slide on page 79, he notes that the data excludes impact from Import/export of finished goods. Not raw materials but finished goods. He comments that Net import is only 4% of GDP in the US. Here he makes a (potentially) devastating error – (potentially) invalidating his conclusion.

While Net imports is indeed around 4% of GDP, the gross numbers are Exports at approx. +13% and Imports at approx. -17%. So any mix difference in finished goods in Export and Import, can significantly change the conclusion. It so happens that US is a major Net importer of finished goods e.g. Machinery, electronic equipment and autos (finished goods, with materials not included above in the consumption data). Basically, a big part of US’ consumption of cars, washing machines, computers etc. are made in Mexico, China etc. They contain a lot of materials, not included in the graphs, upon which he builds his conclusion/thesis. So quite possibly, there is no de-coupling."

Thanks very much for pointing this out. I hadn't seen these rebuttals before.

I still think the argument holds in this case, because even computer software isn't atom-less. It needs to be stored, or run, or something somewhere.

I don't doubt that you could drastically reduce the number of atoms required for many products today. For example, you could in future get a chip in your brain that makes typing without a keyboard possible. That chip is smaller than a keyboard, so it represents lots of atoms saved. You could go further and have that chip be an entire futuristic computer suite: by reading and writing your brain's inputs and outputs directly, it could replace the keyboard, mouse, monitors, speakers, and entire desktop, plus some extra stuff, like also acting as a VR headset, or video game console, or whatever. Let's say you manage to squeeze all that into a single atom. Cool. That's not enough. For this growth to go on for those ~8000 years, you'd need that single-atom brain chip to be as valuable as everything on Earth today - and the same for every other atom in the galaxy.

I think at some point, unless the hottest thing in the economy becomes editing humans to value specific atoms arbitrary amounts (which sounds bad, even if it would work), you can't get infinite value out of things. I'm not even sure human minds are capable of valuing things infinitely. I think even with today's economy, you'd start to hit some asymptotes (i.e. if one person had everything in the world, I'm not sure what they'd do with it all. I'm also not sure they'd actually value it any more than if they just had 90% of everything, except maybe the value of saying "I have it all", which wouldn't be represented in our future economy).

And still, the path to value per atom has to come from somewhere, and in general it's going to be making stuff more useful, or smaller, but there's only so useful a single atom can be, and there's only so small a useful thing can be. (I imagine some math on the number of ways you could arrange a set of particles, multiplied by the number of ways a particular arrangement could be used, as an estimate. But a quick guess says that neither of those values are infinite, and, I expect that number to be dominated by ways of arranging particles, not by number of uses, considering that even software on a computer is actually different arrangements of the electrons.)

So I guess that's the heart of it to me, there's certainly a lot more value we can squeeze out of things, but if there's not literally infinite, it will run out at some point, and that ~8000 year estimate is looking pretty close to whatever the limit is, if it's not already over it.

Please see my other reply here. Yes, value is finite, but the number of possible states of the universe is enormously large, and we won't explore it in 8000 years. The order of magnitude is much bigger.

(Incidentally, our galaxy is ~ 100,000 light years across; so even expanding to cover it would take much longer than 8000 years, and that would be creating value the old-fashioned way by adding atoms, but it wouldn't support continued exponential growth. So "8000 years" and calculations based off the size of the galaxy shouldn't be mixed together. But the order-of-magnitude argument should work about as well for the matter within 8000 light-years of Earth.)

In much the same way, estimates of value and calculations based on the number of permutations of atoms shouldn't be mixed together. There being a googolplex possible states in no way implies that any of them have a value over 3 (or any other number). It does not, by itself, imply that any particular state is better than any other - let alone that any particular state should have value proportional to the total number of states possible.

Restricting yourself to atoms within 8000 light years, instead of the galaxy, just compounds the problem as well, but you noted that yourself. The size of the galaxy wasn't actually a relevant number, just a (maybe) useful comparison. It's like when people say that chess has more possible board states than there are atoms in the observable universe times the number of seconds since the Big Bang. It's not that there's any specifically useful interaction between atoms and seconds and chess, it's just to recognize the scale of the problem.

Value is not obviously bounded by atoms, yes. However, GDP measures production of value. And, the entities producing value are made of atoms. Today these entities are humans. In the future, they might be something much more efficient. However, it seems at least plausible that their efficiency (i.e. rate of value production per atom) is somehow bounded by physics.

The rate of value production per atom can be bounded by physics. But the amount of value ascribed to the thing being produced is only strictly bounded by the size of the number (representing the amount of value) that can be physically encoded, which is exponential in the number of atoms, and not linear.

size of the number (representing the amount of value) that can be physically encoded, which is exponential in the number of atoms

The natural numbers that can be physically encoded are not bounded by an exponent of the number of bits if you don't have to be able to encode all smaller numbers as well in the same number of bits. If you define a number, you've encoded it, and it's possible to define very large numbers indeed.
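A toy illustration of this point (Python; the specific numbers are just examples, not anything from the thread):

```python
# With n bits you can enumerate every integer below 2**n.
# But a short *definition* can pin down a single number far beyond that range.
googol = 10 ** 100
print(googol.bit_length())   # ~333 bits to store a googol positionally
# Yet the 15-character expression "10 ** 10 ** 100" defines a googolplex,
# a number that would need roughly 3e100 bits to write out in binary.
```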

Great point, thanks!

To me, just ascribing more value to things without anything material about the situation changing sounds like inflation, not real growth.

The configuration does change, it's just that the change is not about the amount of matter. If there are configurations absurdly more positively or negatively valuable than others, that just makes the ordinary configurations stop being decision relevant, once discerning the important configurations becomes feasible.

So, you imagine that the rate at which new "things" are produced hits diminishing returns, but every new generation of things is more valuable than the previous generation s.t. exponential growth is maintained. But, I think this value growth has to hit a ceiling pretty soon anyway, because things can only be so valuable. Arguably, nothing is so valuable that you can be Pascal-mugged into paying 1000 USD for someone promising to produce it by magic. Hence, the maximally valuable thing is worth no more than 1000 USD divided by the tiny probability that a Pascal mugger is telling the truth. I admit that I don't know how to quantify this, but it does point at a limit to such growth.

you imagine that the rate at which new "things" are produced hits diminishing returns

The rate at which new atoms (or matter/energy/space more broadly) are added will hit diminishing returns, at the very least due to speed of light.

The rate at which new things are produced won't necessarily hit diminishing returns because we can keep cannibalizing old things to make better new things. Often, re-configurations of existing atoms produce value without consuming new resources except for the (much smaller) amount of resources used to rearrange them. If I invent email which replaces post mail I produce value while reducing atoms used.

this value growth has to hit a ceiling pretty soon anyway, because things can only be that much valuable

Eventually yes, but I don't think they have to hit a ceiling soon, e.g. in a timeframe relevant to the OP. Maybe it's probable they will, but I don't know how to quantify it. The purely physical ceiling on ascribable value is enormously high (other comment on this and also this).

Like you, I don't know what to make of intuition pumps like your proposed Pascal's Ceiling of Value. Once you accept that actual physics don't practically limit value, what's left of the OP is a similar-looking argument from incredulity: can value really grow exponentially almost-forever just by inventing new things to do with existing atoms? I don't know that it will keep growing, but I don't see a strong reason to think it can't, either.

I think it's more than an argument from incredulity.

Let's try another angle. I think that most people would prefer facing a 10^-6 probability of death to paying 1000 USD. I also think there's nothing so good that a typical person would accept some probability p of everyone dying to get it with the remaining probability 1-p. Moreover, a typical person is "subutilitarian" (i.e. considers N people dying at most N times as bad as themself dying). Hence, subjective value is bounded by a fixed (if large) number of USD. Combined with physics, this limits GDP growth on a relevant timeframe.

I think that most people would prefer facing a 10^-6 probability of death to paying 1000 USD.

The sum of 1000 USD comes from the average wealth of people today. Using (any) constant here encodes the assumption that GDP (per-capita wealth times population) won't keep growing.

If we instead suppose a purely relative limit, e.g. that a person is willing to pay a 1e-6 part of their personal wealth to avoid a 1e-6 chance of death, then we don't get a bound on total wealth.

Let U(x) denote the utility of a person with wealth x, U_max the maximal utility of a person (i.e. the limit of U(x) as x grows without bound), and x_0 the median wealth of a modern person. My argument establishes that U_max - U(x_0) is bounded by a large but fixed multiple of U(x_0) - U(x_0 - 1000 USD).
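One way to spell out the chain of inequalities being gestured at, in LaTeX; the symbols p (the probability of everyone dying in the hypothetical gamble), N (the number of people), and the U_death / U_everyone-dies terms are my own shorthand and are left symbolic:

```latex
% Willingness to pay at most 1000 USD to avoid a 10^{-6} chance of death:
U(x_0) - U_{\text{death}} \;\le\; 10^{6}\,\bigl(U(x_0) - U(x_0 - 1000\,\text{USD})\bigr)

% No prize is worth accepting a probability p of everyone dying:
U_{\max} - U(x_0) \;\le\; \frac{p}{1-p}\,\bigl(U(x_0) - U_{\text{everyone dies}}\bigr)

% Subutilitarianism: N people dying is at most N times as bad as one's own death:
U(x_0) - U_{\text{everyone dies}} \;\le\; N\,\bigl(U(x_0) - U_{\text{death}}\bigr)

% Chaining the three:
U_{\max} - U(x_0) \;\le\; \frac{p}{1-p}\,N \cdot 10^{6}\,\bigl(U(x_0) - U(x_0 - 1000\,\text{USD})\bigr)
```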

But, can we translate this to a bound on GDP? I'm not sure.

Part of the problem is, how do we even compare GDPs in different time periods? To do this, we need to normalize the value of money. Standard ways of doing this in economics involve using "universally valuable" goods such as food. But, food would be worthless in a future society of brain emulations, for example.

I propose using computational resources as the "reference" good. In the hypothetical future society you propose, most value comes from non-material goods. However, these non-material goods are produced by some computational process. Therefore, buying computational resources should always be marginally profitable. On the other hand, the total amount of computational resources is bounded by physics. This seems like it should imply a bound on GDP.

I propose using computational resources as the "reference" good.

I don't understand the implications of this, can you please explain / refer me somewhere? How is the GDP measurement resulting from this choice going to be different from another choice like control of matter/energy? Why do we even need to make a choice, beyond the necessary assumption that there will still be a monetary economy (and therefore a measurable GDP)?

In the hypothetical future society you propose, most value comes from non-material goods.

That seems very likely, but it's not a necessary part of my argument. Most value could keep coming from material goods, if we keep inventing new kinds of goods (i.e. new arrangements of matter) that we value higher than past goods.

However, these non-material goods are produced by some computational process. Therefore, buying computational resources should always be marginally profitable. On the other hand, the total amount of computational resources is bounded by physics. This seems like it should imply a bound on GDP.

There's a physical bound on how much computation can be done in the remaining lifetime of the universe (in our future lightcone). But that computation will necessarily take place over a very very long span of time.

For as long as we can keep computing, the set of computation outputs (inventions, art, simulated-person-lifetimes, etc) each year can keep being some n% more valuable than the previous year. The computation "just" needs to keep coming up with better things every year instead of e.g. repeating the same simulation over and over again. And this doesn't seem impossible to me.

The nominal GDP is given in units of currency, but the value of currency can change over time. Today's dollars are not the same as the dollars of 1900. When I wrote the previous comment, I thought that's handled using a consumer price index, in which case the answer can depend on which goods you include in the basket. However, actually real GDP is defined using something called the GDP deflator which is apparently based on a variable "basket" consisting of those goods that are actually traded, in proportion to the total market value traded in each one.

AFAIU, this means GDP growth can theoretically be completely divorced from actual value. For example, imagine there are two goods, A and B, s.t. during some periods A is fashionable and its price is double the price of B, whereas during other periods B is fashionable and its price is double the price of A. Assume also that every time a good becomes fashionable, the entire market switches to producing almost solely this good. Then, every time the fashion changes the GDP doubles. It thus continues to grow exponentially while the real changes just circle periodically around the same place. (Let someone who understands economics correct me if I misunderstood something.)

Given the above, we certainly cannot rule out indefinite exponential GDP growth. However, I think that the OP's argument that we live in a very unusual situation can be salvaged by using a different metric. For example, we can measure the entropy per unit of time produced by the sum total of human activity. I suspect that for the history so far, it tracks GDP growth relatively well (i.e. very slow growth for most of history, relatively rapid exponential growth in modern times). Since the observable universe has finite entropy (due to the holographic principle), there is a bound on how long this phenomenon can last.

There's some discussion of this in a followup post.

"Many recent developments that produced a lot of value, like radio, computing, and the Internet, didn't do it by using proportionally more atoms."

There are vacuum electronic tube production facilities (early 20th century onward), many billion-dollar semiconductor factories (late 1970s onward), and piles and piles of electronic waste that say this isn't true.

By "proportionately more" I meant more than the previous economic-best use of the same material input, which the new invention displaced (modulo increasing supply). For example, the amount of value derived by giving everyone (every home? every soldier? every car?) a radio is much greater than any other value the same amount of copper, zinc etc. could have been used for before the invention of radio. We found a new way to get more value from the same material inputs.

For material outputs (radio sets, telegraph wire, computers), of course material inputs are used. But the amount of value we get from the inputs is not really related to, or bounded by, the amount of input material. A new way of using material can have an arbitrarily high value-produced-to-materials-consumed ratio.

I'll run with your example of semiconductor factories. A factory costs between $1-20 billion to build. The semiconductor industry has a combined yearly revenue of $500 billion (2018). Doesn't sound like a huge multiplier so far.

But then consider that huge amounts of modern technology (= value) require semiconductors as an input. The amount of semiconductor industry inputs, and material waste byproducts, was similar in 1990 and 2020 (same order of magnitude). But the amount of value enabled by using those semiconductors was enormously larger in 2020. Whole new markets were created thanks to the difference in capability between 1990 semiconductors ($100 per megabyte DRAM) and 2020 ($0.003 per MB). Smartphones, PCs, modern videogames, digital video and audio, digital cameras, most of the way the Internet and Web are used today; but also all modern devices with chips inside, from cars to satellites; the list is almost endless.

All of these require extra inputs besides semiconductors, and those inputs cost time and money. But the bill of materials for a 2020 smartphone is smaller and cheaper than that of an early 1990 cellphone, while the value to the owner is much greater. (A lot of the value comes from software and digital movies and music, which don't consume atoms in the relevant sense, because they can be copied on demand.)

Thank you for clarifying the definition you're using for "proportionately more".

Two points come to mind:

  • The material waste products of the electronics ecosystem have shifted between the 1990s and now from mass/toxic atoms (cathode-ray tubes/lead, mercury) to less mass but more rare(r) earth elements such as indium and cobalt.[1] The problem of "this can't go on" may not be limited by the total of all atoms but by the total of electronically important elements that can be mined "sustainably" on Earth. All atoms are not equal. As you're probably aware, "rare earth" is not always about the total amount of atoms of said element in the earth but about how the element is dispersed (or not) and, thus, how "easily" it can be mined ("easily" includes physical as well as political impediments[2]). Electronic waste stream efforts are very likely to shift from dealing with mass/toxicity to harvesting the rare earth elements from electronic waste. I can imagine the trade-off graph between all of the costs of more pit mines in more politically diverse areas for harvesting virgin rare earth elements vs. harvesting electronic waste. I can't imagine either being anywhere close to all of the atoms on Earth, much less the entire universe. Orders of magnitude seem likely, but I could be persuaded otherwise.
  • The idea of "modern technology (= value)" seems to presume that the value is only positive. When I see that kind of blanket statement about technology I am reminded of the 2012 cover of The MIT Technology Review with Buzz Aldrin saying "You promised me Mars colonies. Instead, I got Facebook". No argument from me that use of atom-light applications is valued in the stock market. No argument from me regarding the excitement/"value" of blockchain and its use of more electricity than many countries. Humans used to be pretty thrilled about tulips, too. Maybe the downsides of modern technology, including the exploitation of human nature wrt self-image (Instagram), in-group/out-group (Facebook), metabolic balance (Ultra-Processed Food), and attention (video games), fall into the stagnation/collapse buckets of the OP.

The second point plays into the first: the value modern technology extracts from exploiting human nature diverts it from going off-planet to get more electronically important atoms.

I hope the two links can be followed. I'm new to this commenting tool. I'm open to advice if I've linked incorrectly (or inelegantly).

The OP's argument is general: it says essentially that (economic) value is bounded linearly by the number of atoms backing the economy. Regardless of how the atoms are translated to value. This is an impossibility argument. My rebuttal was also general, saying that value is not so bounded.

Any particular way of extracting value, like electronics, usually has much lower bounds in practice than 'linear in the amount of atoms used' (even ignoring different atomic elements). So yes, today's technology that depends on 'rare' earths is bounded by the accessible amount of those elements.

But this technology is only a few decades old. The economy has been growing at some % a year for much longer than that, across many industries and technological innovations that have had very different material constraints from each other. And so, while contemporary rare-earth-dependent techniques won't keep working forever, the overall trend of economic growth could continue far beyond any one technology's lifespan, and for much longer than the OP projects.

Technology and other secular change doesn't always increase value; often it is harmful. My argument is that the economy can keep growing for a long time, not that it necessarily will, or that all (or even most) changes over time are for the best. And GDP is not a good measure of human wellbeing to begin with; we're measuring dollars, not happiness, and when I talk about "utility" I mean the kind estimated via revealed preferences.

This post is excellent. The airplane runway metaphor hit home for me and I think it will help me explain my worries about exponential growth to other people more clearly than graphs, so thanks for writing it up!

I think if you're already onboard with "people made of software" then this part goes through with much less difficulty?

Is it imaginable that we could develop the technology to support multiple equivalents of today's entire civilization, per atom available? Sure - but this would require a radical degree of transformation of our lives and societies, far beyond how much change we've seen over the course of human history to date.

Have you read Diaspora or Permutation City? Or heck, even just maybe Excession? Dragon's Egg is kinda fun but (via the same data that leads to the Fermi Problem) can't be true because if femtotechnological biology is real then... it already happened.

Maybe you have read this stuff, and it's just that you're writing for an audience with a limited imagination? It's hard for me to figure it out.

And every time I read commentary on what's going on in the world, people are discussing how to arrange your seatbelt as comfortably as possible given that wearing one is part of life, or saying how the best moments in life are sitting with your family and watching the white lines whooshing by, or arguing about whose fault it is that there's a background roar making it hard to hear each other.

Are you writing to and for those people? Or are you like me, trying to figure out where the cockpit (or the escape hatch) is and whether anyone has their hands on the wheel at all?

I think the general claim this post makes is

  • incredibly important
  • well argued
  • non-obvious to many people

I think there's an objection here that value != consumption of material resources, hence the constraints on growth may be far higher than the author calculates. Still, the article is great

I think there's an objection here that value != consumption of material resources, hence the constraints on growth may be far higher than the author calculates. Still, the article is great.

IMO, I disagree here; I do think nearly all the value came from material consumption.

I think of all the posts that Holden has written in the last two years, this is the one that I tend to refer to by far the most, in-particular the "size of economy" graph.

I think there are a number of other arguments that lead you to roughly the same conclusion ("that whatever has been happening for the last few centuries/millennia has to be an abnormal time in history, unless you posit something very cyclical"), that other people have written about (Luke's old post about "there was only one industrial revolution" is the one that I used to link for this the most), but I think this post has a more minimal set of assumptions, and more directly argues for this claim.

Although I think the assumption that economic growth demands endlessly increasing material consumption is flawed, it seems natural to imagine that even a maximally efficient economy must use a nonzero number of atoms on average to produce an additional utilon. There must, therefore, be a maximal level of universal utility, which we can approach to within some distance in a finite number of doublings. Since we have enormous amounts of time available, and are also contending with a shrinking amount of access to material resources over time, it seems natural to posit that an extremely long-lived species could reach a point at which the economy simply cannot grow at the same rate.

The timeline you establish here by extrapolating present trends isn't convincing to me, but I think the basic message that "this can't go on" is correct. It seems to me that this insight is vastly more important to understand the context of our century than any particular estimate of when we might reach the theoretical limit of utility.

In the limit you are correct: if a utility function assigns a value to every possible arrangement of atoms, then there is some maximum value, and you can't keep increasing value forever without adding atoms because you will hit the maximum at some point. An economy can be said to be "maximally efficient" when value can't be added by rearranging its existing atoms, and we must add atoms to produce more value.

However, physics provides very weak upper bounds on the possible value (to humans) of a physical system of given size, because the number of possible physical arrangements of a finite-sized system is enormous. The Bekenstein bound is approximately 2.6e43 * M * R (mass times radius) bits per kg * m. Someone who understands QM should correct me here, but just as an order-of-magnitude-of-order-of-magnitude estimation, our galaxy masses around 1e44 Kg with a radius of 1e18 meters, so its arrangement in a black hole can contain up to 2.6e105 bits of information.

Those are bits; the number of states is 2^(2.6e105). That is much, much bigger than the OP's 3e70; we can grow the per-atom value of the overall system state by a factor much bigger than 3e70.
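A quick numeric check of that estimate, using the same round numbers as the comment above (Python):

```python
import math

bekenstein_coeff = 2.6e43   # approximate Bekenstein bound, bits per (kg * m)
mass_kg = 1e44              # rough galaxy mass used in the comment above
radius_m = 1e18             # radius figure used in the comment above

bits = bekenstein_coeff * mass_kg * radius_m
states_exponent = bits * math.log10(2)

print(f"Information bound: ~{bits:.1e} bits")            # ~2.6e105
print(f"Number of states: ~10^({states_exponent:.1e})")  # ~10^(7.8e104)
```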

Of course this isn't a tight argument and there are lots of other things to consider. For example, to get the galaxy into some valuable configuration, we'd need to "use up" part of the same galaxy in the process of changing the configuration of the rest. But from a purely physical perspective, the upper bound on value per atom is enormously high.

ETA: replaced mind-boggling numbers with even bigger mind-boggling numbers after a more careful reading of Wikipedia.

That’s a nice conceptual refinement. It actually swings me in the other direction, making it seem plausible that humans might not have nearly enough time to find the optimum arrangement in their expected lifespan and that this might be a central question.

One possibility is that there is a maximal value tile that is much smaller than “all available atoms” and can be duplicated indefinitely to maximize expected value. So perhaps we don’t need to explore all combinations of atoms to be sure that we’ve achieved the limit of value.

in their expected lifespan

Or even in the expected lifetime of the universe.

perhaps we don’t need to explore all combinations of atoms to be sure that we’ve achieved the limit of value.

That's a good point, but how would we know? We would need to prove that a given configuration is of maximal (and tile-able) utility without evaluating the (exponentially bigger) number of configurations of bigger size. And we don't (and possibly can't, or shouldn't) have an exact (mathematical) definition of a Pan-Human Utility Function.

However, a proof isn't needed to make this happen (for better and for worse). If a local configuration is created which is sufficiently more (universally!) valuable than any other known local configuration, neighbors will start copying it and it will tile the galaxy, possibly ending progress if it's a stable configuration - even if this configuration is far from the best one possible locally (let alone globally).

In practice, "a wonderful thing was invented, everyone copied it of their own free will, and stayed like that forever because human minds couldn't conceive of a better world, leaving almost all possible future value on the table" doesn't worry me nearly as much as other end-of-progress scenarios. The ones where everyone dies seem much more likely.

Indeed. I think that a serious search for an answer to these questions is probably best left for the "Long Reflection."

Footnotes

1. If you have no idea what that means, try my short economic growth explainer.

2. Global real growth has generally ranged from slightly negative to ~7% per year.

3. I'm skipping over 2020 here since it was unusually different from past years, due to the global pandemic and other things.

4. For the historical data, see Modeling the Human Trajectory. The projections are rough and meant to be visually suggestive rather than using the best modeling approaches.

5. This refers to real GDP growth (adjusted for inflation). 2% is lower than the current world growth figure, and using the world growth figure would make my point stronger. But I think that 2% is a decent guess for "frontier growth" - growth occurring in the already-most-developed economies - as opposed to total world growth, which includes “catchup growth” (previously poor countries growing rapidly, such as China today).

To check my 2% guess, I downloaded this US data and looked at the annualized growth rate between 2000-2020, 2010-2020, and 2015-2020 (all using July since July was the latest 2020 point). These were 2.5%, 2.2% and 2.05% respectively.
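A minimal sketch of the annualization being done here (Python); the index values below are made-up placeholders, not the actual FRED data:

```python
def annualized_growth(start, end, years):
    """Compound annual growth rate between two real-GDP observations."""
    return (end / start) ** (1 / years) - 1

# Placeholder real-GDP index values, for illustration only:
gdp_2000, gdp_2020 = 100.0, 164.0
print(f"{annualized_growth(gdp_2000, gdp_2020, 20):.2%}")   # ~2.50%
```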

6. 2% growth over 35 years is (1 + 2%)^35 = 2x growth.

7. Wikipedia's highest listed estimate for the Milky Way's mass is 4.5*10^12 solar masses, each of which is about 2*10^30 kg; a hydrogen atom has a mass of about 1.67*10^-27 kg. (4.5*10^12 * 2*10^30)/(1.67*10^-27) =~ 5.4*10^69.

8. Wikipedia: "In March 2019, astronomers reported that the mass of the Milky Way galaxy is 1.5 trillion solar masses within a radius of about 129,000 light-years." I'm assuming we can't travel more than 129,000 light-years in the next 8200 years, because this would require far-faster-than-light travel.

9. This calculation isn't presented straightforwardly in the post. The key lines are "No matter what the technology, a sustained 2.3% energy growth rate would require us to produce as much energy as the entire sun within 1400 years" and "The Milky Way galaxy hosts about 100 billion stars. Lots of energy just spewing into space, there for the taking. Recall that each factor of ten takes us 100 years down the road. One-hundred billion is eleven factors of ten, so 1100 additional years." 1400 + 1100 = 2500, the figure I cite. This relies on the assumption that the average star in our galaxy offers about as much energy as the sun; I don't know whether that's the case.
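A rough reconstruction of that arithmetic (Python). The ~18 TW current consumption figure and the solar luminosity are my own assumed round numbers; only the 2.3% growth rate and the 100-billion-star count come from the post:

```python
import math

growth = 0.023              # 2.3% annual growth in energy use (from the post)
current_power_w = 1.8e13    # ~18 TW current world energy use (assumed round figure)
sun_output_w = 3.8e26       # solar luminosity in watts (assumed round figure)
stars_in_galaxy = 1e11      # ~100 billion stars (from the post)

years_to_sun = math.log(sun_output_w / current_power_w) / math.log(1 + growth)
years_sun_to_galaxy = math.log(stars_in_galaxy) / math.log(1 + growth)

print(f"Years until we'd need the Sun's entire output: ~{years_to_sun:.0f}")         # ~1350
print(f"Further years to the whole galaxy's output: ~{years_sun_to_galaxy:.0f}")     # ~1100
```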


10. There is an open debate on whether Modeling the Human Trajectory is fitting the right sort of shape to past historical data. I discuss how the debate could change my conclusions here.

11. 250 doublings would be a growth factor of about 1.8*10^75, over 10,000 times the number of atoms in our galaxy.

12. 20 years would be 240 months, so if each one saw a doubling in the world economy, that would be a growth factor of about 1.8*10^72, over 100 times the number of atoms in our galaxy.

13. That’s because of the above observation that today’s growth rate can’t last for more than another 8200 years (82 centuries) or so. So the only way we could have more than 82 more centuries with growth equal to today’s is if we also have a lot of centuries with negative growth, a la the zig-zag dotted line in the "This Can't Go On" chart.

14. This dataset assigns significance to historical figures based on how much they are covered in reference works. It has over 10x as many "Science" entries after 1500 as before; the data set starts in 800 BC. I don't endorse the book that this data set is from, as I think it draws many unwarranted conclusions from the data; here I am simply supporting my claim that most reference works will disproportionately cover years after 1500. 

15. To be fair, reference works like this may be biased toward the recent past. But I think the big-picture impression they give on this point is accurate nonetheless. Really supporting this claim would be beyond the scope of this post, but the evidence I would point to is (a) the works I'm referencing - I think if you read or skim them yourselves you'll probably come out with a similar impression; (b) the fact that economic growth shows a similar pattern (although the explosion starts more recently; I think it makes intuitive sense that economic growth would follow scientific progress with a lag).

16. The papers cited in The Duplicator on this point specifically model an explosion in innovation as part of the dynamic driving explosive economic growth.