Disclaimer: This is a collection of some simple stories for exponential growth. I've tried to list the main ones, but I might well have missed some, and I welcome feedback.

The topic of whether and why growth trends are exponential has been discussed on LessWrong before. For instance, see the previous LessWrong posts Why are certain trends so precisely exponential? and Mathematical simplicity bias and exponential functions. The purpose of this post is to explore some general theoretical reasons for expecting exponential growth, and the assumptions that these models rely on. I'll look at economic growth, population dynamics, and technological growth.

TL;DR

  1. Exponential growth (or decay) arises from a situation where the rate of change of a quantity is proportional to its current level. This can be modeled by either a continuous differential equation or a discrete difference equation.
  2. Feedback based on proportionality is usually part of the story, but could occur directly for the measured variable or in a hidden variable that affects the measured variable.
  3. In a simplified balanced economic growth model, growth is exponential because the addition to capital stock in a given year is proportional to output in that year, depreciation rate is constant, and output next year is proportional to capital stock this year.
  4. In a simple population dynamics model, growth is exponential under the assumption that the average number of kids per person stays constant.
  5. An alternative story of exponential growth is that performance is determined by multiplying many quantities, and we can work to make proportional improvements in the quantities one after the other. This can explain roughly exponential growth but not close-to-precise exponential growth.
  6. Stories of intra-industry or inter-industry coordination can explain a more regular exponential growth pattern than one might otherwise expect.

#1: Exponential growth arises from the rate of change being proportional to the level

Here is a brief mathematical introduction for people who have a basic knowledge of calculus. Suppose we're trying to understand how a quantity x (this could be the national GDP of a country, or the price of 1 GB of NAND flash, or any other indicator) changes as a function of time t. Exponential growth means that we can write:

x = Ca^t

where C > 0, a > 1 (exponential decay would mean 0 < a < 1). More conventionally, it is written in the form:

x = Ce^(kt)

where C > 0, k > 0 (exponential decay would mean k < 0). The two forms are related as follows: a = e^k.

The key feature of the exponential function is that for any t, the quotient x(t + 1)/x(t) is a constant independent of t (the constant in question being a). In other words, the proportional gain is the same over all time periods.

Exponential growth arises as the solution to the (continuous, ordinary, first-order first-degree) differential equation:

dx/dt = kx

This says that the instantaneous rate of change is proportional to the current value.

We can also obtain exponential growth as the solution to the discrete analogue, a difference equation:

Δx = (a - 1)x

where Δx denotes the difference x(t + 1) - x(t) (the discrete derivative of x with respect to t). What this says is that the discrete change in x is proportional to x.

To summarize, exponential growth arises as a solution to both continuous and discrete equations where the rate of change is proportional to the current level. The mathematical calculations work somewhat differently, but otherwise, the continuous and discrete situations are qualitatively similar for exponential growth.
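To make this concrete, here is a small numerical sketch (all values are made up for illustration) confirming that iterating the discrete rule reproduces the closed form x = Ca^t, and that it matches the continuous solution x = Ce^(kt) when a = e^k:

```python
import math

C = 100.0   # initial level x(0); illustrative
a = 1.05    # per-period growth factor, i.e., 5% growth per period
k = math.log(a)

x = C
for t in range(1, 11):
    x = a * x                          # discrete feedback: change proportional to level
    closed_form = C * a**t             # x = C * a^t
    continuous = C * math.exp(k * t)   # x = C * e^(kt), with a = e^k
    assert abs(x - closed_form) < 1e-9 * closed_form
    assert abs(x - continuous) < 1e-9 * continuous
    print(t, round(x, 2))
```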

#2: Feedback based on proportionality is usually part of the story, but the phenomenon could occur in a visible or hidden process

The simplest story for why a particular indicator grows exponentially is that the growth rate is determined directly in proportion to the value at a given point in time. One way of framing this is that there is feedback from the level of the indicator to its rate of change. To get a good story for exponential growth, therefore, we need a good story for why the feedback should take the form of direct proportionality, rather than some other functional form.

However, we can imagine a subtly different story of exponential growth. Namely, the indicator itself is not the root of the phenomenon at all, but simply a reflection of other hidden variables, and the phenomenon of exponential growth is happening at the level of these hidden variables. For instance, visible indicator x might be determined as being 0.82y^2 for a hidden variable y, and it might be that the variable y is the one that experiences feedback from its level to its rate of change. I believe this is conceptually similar to (though not mathematically the same as) hidden Markov models.
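Here is a minimal sketch of that hidden-variable story, using the illustrative relationship x = 0.82y^2 from above (the parameter values are made up): the feedback happens at the level of y, yet the visible indicator x also comes out exponential, with twice the growth rate.

```python
import math

k = 0.03     # growth rate of the hidden variable y; illustrative
y0 = 50.0    # initial value of y; illustrative

for t in range(6):
    y = y0 * math.exp(k * t)   # hidden variable with exponential feedback
    x = 0.82 * y**2            # visible indicator, as in the example above
    # x(t) = 0.82 * y0**2 * e^(2kt): also exponential, with rate 2k
    print(t, round(y, 2), round(x, 2))
```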

One LessWrong comment offered this sort of explanation: perhaps the near-perfect exponential growth of US GDP, and its return to an earlier trend line after deviation during some years, suggests that population growth is the hidden variable that drives long-run trends in GDP. The question of whether economic growth should revert to an earlier trend line after a shock is a core question of macroeconomics with a huge but inconclusive literature; see Arnold Kling's blog post titled Trend vs. Random Walk.

#3: A bare-bones model of balanced economic growth (balanced growth version of Harrod-Domar model)

Let's begin with a very basic model of economic growth. This is not to be applied directly in the understanding of real-world economies. Rather, it's meant to give us a crude idea of where exponentiality comes from.

In this model, an economy produces a certain output Y in a given year (Y changes from year to year). The economy consumes part of the output, and saves the rest of it to add to its capital stock K. Suppose the following hold:

  1. The fraction of output produced that is converted to additional capital stock is constant from year to year (i.e., the propensity to save is constant).
  2. The (fractional) rate of depreciation of capital stock (i.e., the fraction of capital stock that is lost every year due to depreciation) is constant.
  3. The amount of output produced in a given year is proportional to the capital stock at the end of the previous year, with the constant of proportionality not changing across years.

We have two variables here, output and capital stock, linked by proportionality relationships between them and between their year-on-year changes. When we work out the algebra, we'll discover that both variables grow exponentially in tandem.
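Here is a minimal sketch of that algebra (the parameter values are made up for illustration). Writing s for the savings rate, d for the depreciation rate, and v for the output-to-capital ratio, capital follows K(t+1) = K(t) + sY(t) - dK(t) = (1 + sv - d)K(t), so both capital and output grow by the constant factor 1 + sv - d each year:

```python
s = 0.25    # propensity to save (assumption 1); illustrative
d = 0.05    # depreciation rate (assumption 2); illustrative
v = 0.40    # output per unit of capital (assumption 3); illustrative

K = 1000.0  # initial capital stock
prev_Y = None
for year in range(10):
    Y = v * K                              # output proportional to capital stock
    if prev_Y is not None:
        print(year, round(Y / prev_Y, 6))  # constant ratio: 1 + s*v - d = 1.05
    prev_Y = Y
    K = K + s * Y - d * K                  # add savings, subtract depreciation
```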

The above describes a balanced growth model, where the shape and nature of the economy do not change. It just keeps growing in size, with all the quantities growing together in the same proportion. Economies may initially be far from a desirable steady state, or may be stuck in a low-savings steady state. Also note that if the rate of depreciation of capital stock exceeds the rate at which new capital stock is added, the economy will decay exponentially rather than grow.

If you're interested in actual models of economic growth used in growth theory and development economics, read up on the Harrod-Domar model and its variants such as the Ramsey-Cass-Koopmans model, AK model, and Solow-Swan model. For questions surrounding asymptotic convergence, check out the Inada conditions.

#4: Population dynamics

The use of exponential models for population growth is justified under the assumption that the number of children per woman who survive to adulthood remains constant. Assume a 1:1 sex ratio, and assume that women have an average of 3 kids who survive to adulthood. In that case, with every generation, the population multiplies by a factor of 3/2 = 1.5. After n generations, the population would be (1.5)^n times the original population. This is of course a grossly oversimplified model, but it covers the rationale for exponential growth. In practice, the number of surviving children per woman varies over time due to a combination of fertility changes and changes in age-specific mortality rates.
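A minimal sketch of this generational model, using the same illustrative numbers as above:

```python
kids_per_woman = 3.0                  # surviving children per woman, as in the text
growth_factor = kids_per_woman / 2    # 1:1 sex ratio: half the population is women

population = 1_000_000                # illustrative starting population
for generation in range(1, 6):
    population *= growth_factor         # multiply by 1.5 per generation
    print(generation, int(population))  # after n generations: (1.5)^n times original
```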

The dynamics are even simpler to understand for bacteria in a controlled environment such as a petri dish. Bacteria are unicellular organisms and they reproduce by binary fission: a given bacterium splits into two new bacteria. As long as there are ample resources, a bacterium may split into two after an average interval of 1 hour. In that case, we expect the number of bacteria in the petri dish to double every hour.

#5: A large number of factors that multiply together to determine the quantity

Here is a somewhat different story for exponential growth that a number of people have proposed independently. In a recent comment, Ben Kuhn wrote:

One story for exponential growth that I don't see you address (though I didn't read the whole post, so forgive me if I'm wrong) is the possibility of multiplicative costs. For example, perhaps genetic sequencing would be a good case study? There seem to be a lot of multiplicative factors there: amount of coverage, time to get one round of coverage, amount of DNA you need to get one round of coverage, ease of extracting/preparing DNA, error probability... With enough such multiplicative factors, you'll get exponential growth in megabases per dollar by applying the same amount of improvement to each factor sequentially (whereas if the factors were additive you'd get linear improvement).

Note that in order for this growth to come out as close to exponential, it's important that the marginal difficulty, or time, or cost, of addressing the factors is about the same. For instance, if the overall indicator we are interested in is a product pqrs, it may be that in a given year, we can zero in on one of the four factors and reduce that by 5%, but it doesn't matter which one.

A slightly more complicated story is that the choice of what factor we can work on at a given stage is constrained, but the best marginal choices at all stages are roughly as good in proportional terms. For instance, maybe, for our product pqrs, the best way to start is by reducing p by 5%. But once we are done with that, next year the best option is to reduce q by 5%. And then, once that's done, the lowest-hanging fruit is to reduce r by 5%. This differs subtly from the previous story in that the decision of what factor to work on at the current margin is forced from outside, but the proportional rate of progress still stays constant.
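Here is a small sketch of the multiplicative story (the factor values are made up for illustration). Because multiplication is commutative, reducing any one factor by 5% reduces the whole product by 5%, so working on the factors one after the other yields a steady exponential decline:

```python
factors = {"p": 8.0, "q": 3.0, "r": 5.0, "s": 2.0}  # illustrative values

def total_cost(f):
    cost = 1.0
    for value in f.values():
        cost *= value
    return cost

names = list(factors)
for year in range(8):
    target = names[year % len(names)]  # work on one factor at a time, in rotation
    factors[target] *= 0.95            # a 5% proportional improvement to that factor
    # the product falls by exactly 5% regardless of which factor was improved
    print(year, target, round(total_cost(factors), 4))
```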

However, in the real world, it's highly unlikely that the proportional gains stay exactly constant. I mean, if we can reduce p by 5% in the first year and q by 5% in the second year, what really gets in the way of reducing both together? Is it just a matter of throwing more money at the problem?

By the way, one example of rapid progress that does seem to closely hew to the multiplicative model is the progress made on linear programming algorithms. Linear programming involves a fair number of algorithms within algorithms. For instance, solving certain types of systems of linear equations is a major subroutine invoked in the most time-critical component of linear programming.

My overall conclusion is that multiplicative stories are good for explaining why growth is roughly exponential, but they are not strong enough by themselves to explain a very precise exponential growth trend. However, when combined with stories about regularization, they could explain what a priori seems like an unexpectedly precise exponential.

#6: The story of coordination and regularization

Some people have argued that Moore's law (and similar computing paradigms) has held for such long periods of history because of explicit industry roadmaps such as the International Technology Roadmap for Semiconductors. I believe that a roadmap cannot bootstrap the explanation for growth being exponential. If roadmaps could dictate reality so completely, why didn't the roadmap decide on even faster exponential growth, or perhaps superexponential growth? No, the reason for exponential growth must come from some more fundamental factors.

But explicit or implicit roadmaps and industry expectations can explain why progress was so close to being precisely exponential. I offer one version of the story.

In a world where just one company is involved with research, manufacturing, and selling to the public, the company would try to invest according to what it expected consumer demand to be (see my earlier post for more on this). Since there aren't strong reasons to believe that consumer needs grow exponentially, nor are there good reasons to believe that progress at resolving successive barriers is close to precisely exponential, an exponential growth story here would be surprising.

Suppose now that the research and manufacturing processes are handled by different types of companies. Let's also suppose that there are many different companies competing at the research level and many different companies competing at the manufacturing level. The manufacturing companies need to make plans for how much to produce and how much raw material to keep handy for the next year, and these plans require having an idea of how far research will progress.

Since no individual manufacturer controls any individual researcher, and since the progress of individual research companies can be erratic, the best bet for manufacturers is to make plans based on estimates of how far researchers are expected to go, rather than on any individual research company's promise. And a reasonable way to make such an estimate is to have an industry-wide roadmap that serves a coordinating purpose. Manufacturers have an incentive to follow the roadmap, because deviating in either direction might result in them having factories that don't produce the right sort of stuff, or have too much or too little capacity. The research companies also have incentives to meet the targets, and in particular, to neither overshoot nor undershoot too much. The reasons for not undershooting are obvious: they don't want to be left behind. But why not overshoot? Since the manufacturers are basing their plans on the technology they expect, a research company overshooting might result in technologies that aren't ready for implementation, so the advantage is illusory. On the other hand, the costs of overshooting (in terms of additional expenditures on research) are all too real.

Thus, the benefits of coordination between different parts of the "supply chain" (in this case, the ideas and the physical manufacturing) lead to greater regularization of the growth trend than one would otherwise expect. If there are reasons to believe that growth is roughly exponential (the multiplicative story could be one such reason), then this could lead to it being far more precisely exponential.

The above explanation is highly speculative and I don't have strong confidence in it.

PS on algorithm improvement

  • If the time taken for an algorithm is described as a sum of products, then only the factors of the summands that dominate in the big-oh sense matter. For simplicity, let's assume that the time taken is a sum of products that are all of the same order as one another.
  • To improve by a given constant of proportionality the time complexity of an algorithm where the time taken is a sum of products that are of the same order of magnitude, one strategy is to improve each summand by that constant of proportionality. Alternatively, we could improve some summands by a lot more, in which case we'd have to determine the overall improvement as the appropriate weighted average.
  • To improve a particular summand by a particular constant of proportionality, we may improve any one factor of that summand by that constant of proportionality. Or, we may improve all factors of that summand by constants that together multiply to the desired constant of proportionality. (See the sketch after this list.)
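Here is a small numerical sketch of these points (all numbers are made up for illustration): the total time is a sum of two products of comparable size, improving one factor of one summand improves that summand by the same constant, and the overall improvement is the weighted average across summands:

```python
# Total time as a sum of products of comparable magnitude; all values illustrative.
a, b = 100.0, 50.0   # e.g., outer-loop steps times inner-loop steps
c, d = 80.0, 60.0    # a second summand of the same order

T_before = a * b + c * d   # 5000 + 4800 = 9800

a *= 0.5                   # improve one factor of the first summand by 2x
T_after = a * b + c * d    # 2500 + 4800 = 7300

# Overall speedup is a weighted average of the summands' improvements,
# not the full 2x: here about 1.34x.
print(T_before, T_after, round(T_before / T_after, 4))
```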
Comments
V_V:

For simplicity, let's assume that the time taken is a sum of products that are all of the same order as one another.

You are assuming that the algorithm runs in polynomial time w.r.t. the input dimensions. That's a strong assumption.

To improve a particular summand by a particular constant of proportionality, we may improve any one factor of that summand by that constant of proportionality. Or, we may improve all factors of that summand by constants that together multiply to the desired constant of proportionality.

What do you mean by "improve any one factor"? In a complexity function, the variables represent the relevant dimensions of the input. How do you "improve" them?

And more generally, you seem to be under the impression that algorithms can be improved indefinitely, as evidenced by the fact that you are mentioning algorithm improvement in a post about exponential growth.
They can't: complexity is bounded from below, which means that for each problem there exists a maximally efficient algorithm. And similarly, once you specify the details of the hardware, there exists a maximally efficient program for that specific hardware. Once you get there you can't improve any further.

Thanks for your thoughts.

You are assuming that the algorithm runs in polynomial time w.r.t. the input dimensions. That's a strong assumption.

I'm not assuming that.

In fact, my analysis isn't asymptotic at all, but rather, for fixed (but large) input sizes. In the asymptotic setting, moving to a new algorithm can yield an improvement that is no longer in the same order equivalence class (I could have elaborated more on this, but this was already a PS).

What do you mean by "improve any one factor"? In a complexity function, the variables represent the relevant dimensions of the input. How do you "improve" them?

I meant "factor" in the "part of a multiplication" sense. For instance, let's say you have an algorithm that has an outer loop A that calls an inner loop B every step. Then if a and b are the number of steps respectively, the total time is ab. Now, reducing either a or b by a given factor would reduce the product. That reduction by a factor could be through the identification of steps that turn out to be unnecessary.

And more generally, you seem to be under the impression that algorithms can be improved indefinitely, as evidenced by the fact that you are mentioning algorithm improvement in a post about exponential growth.

I am not under this impression. Sorry for the confusion.

Algorithms can't be improved indefinitely, nor can population or the economy. But we can still talk of exponential growth over a range of time as we chip off at the obvious issues.

V_V:

I'm not assuming that.

"let's assume that the time taken is a sum of products"

This is the definition of a polynomial, although you might have intended it to be a polynomial in something other than the input dimensions. In that case, I'm not sure I've understood clearly what you are arguing.

Sorry for my lack of clarity.

I was making the point that if the run time can be broken down in a form where it's expressed as a sum of products, where the summands are the times taken for some sub-algorithms, then we can attempt to tweak the sub-algorithms to reduce the time taken for the individual tasks.

The time taken for the sub-algorithm may or may not be polynomial in the input dimensions.

V_V:

Ok.