Warning: This is a somewhat long-winded post with a number of loosely related thoughts and no single, cogent thesis. I have included a TL;DR after the introduction, listing the main points. All corrections and suggestions are greatly appreciated.
It's commonly known, particularly to LessWrong readers, that in the world of computer-related technology, key metrics have been doubling fairly quickly, with doubling times ranging from 1 to 3 years for most metrics. The most famous paradigmatic example is Moore's law, which predicts that the number of transistors on integrated circuits doubles approximately every two years. The law itself stood up quite well until about 2005, but one of its implications, based on Dennard scaling, broke down after that (see here for a detailed overview of the breakdown by Sebastian Nickel). Another similar proposed law is Kryder's law, which looks at the doubling of hard disk storage capacity. Chapters 2 and 3 of Ray Kurzweil's book The Singularity is Near go into detail regarding this technological acceleration (for an assessment of Kurzweil's prediction track record, see here).
One of the key questions facing futurists, including those who want to investigate the Singularity, is the question of whether such exponential-ish growth will continue for long enough for the Singularity to be achieved. Some other reasonable possibilities:
- Growth will continue for a fairly long time, but slow down to a linear pace, so that we won't have to worry about the Singularity for a very long time.
- Growth will continue but converge to an asymptotic value (well below the singularity threshold) beyond which improvements aren't possible. Therefore, growth will progressively slow down but still continue as we come closer and closer to the asymptotic value.
- Growth will come to a halt, because there is insufficient demand at the margin for improvement in the technology.
Ray Kurzweil strongly adheres to the exponential-ish growth model, at least for the duration necessary to reach computers that are thousands of times as powerful as humanity (that's what he calls the Singularity). He argues that although individual paradigms (such as Moore's law) eventually run out of steam, new paradigms tend to replace them. In the context of computational speed, efficiency, and compactness, he mentions nanotechnology, 3D computing, DNA computing, quantum computing, and a few other possibilities as candidates for what might take over once Moore's law is exhausted for good.
Intuitively, the assumption of continued exponential growth has struck me as wrong. I hasten to add that I'm mathematically literate, so it's certainly not the case that I fail to appreciate the nature of exponential growth; in fact, I believe my skepticism is rooted in the fact that I do understand exponential growth. I do think the issue is worth investigating, both from the angle of whether the continued improvements are technologically feasible, and from the angle of whether there will be sufficient incentives for people to invest in achieving the breakthroughs. In this post, I'll go over the economics side of it, though I'll include some technology-side considerations to provide context.
TL;DR
I'll make the following general points:
- Industries that rely on knowledge goods tend to have long-run downward-sloping supply curves.
- Industries based on knowledge goods exhibit experience curve effects: what matters is cumulative demand rather than demand in a given time interval. The irreversibility of creating knowledge goods creates a dynamic different from that in other industries.
- What matters for technological progress is what people investing in research think future demand will be like. Bubbles might actually be beneficial if they help lay the groundwork of investment that is helpful for many years to come, even though the investment wasn't rational for individual investors.
- Each stage of investment requires a large enough number of people with just the right level of willingness to pay (see the PS for more). A diverse market, with people at various intermediate stages of willingness to pay, is crucial for supporting a technology through its stages of progress.
- The technological challenges involved in improving price-performance tradeoffs may differ for the high, low, and middle parts of the market for a given product. The more similar these challenges are, the faster progress is likely to be (because the same research helps with all the market segments together).
- The demand-side story most consistent with exponential technological progress is one where people's desire for improvement in the technologies they are using is proportional to the current level of those technologies. But this story seems inconsistent with the facts: people's appetite for improvement probably declines once technologies get good enough. This creates problems for the economic incentive side of the exponential growth story.
- Some exponential growth stories require a number of technologies to progress in tandem. Progress in one technology helps facilitate demand for another complementary technology in this story. Such progress scenarios are highly conjunctive, and it is likely that actual progress will fall far short of projected exponential growth.
#1: Short versus long run for supply and demand
In the short run, supply curves are upward-sloping and demand curves are downward-sloping. In particular, this means that when the demand curve expands (more people want to buy the item at the same price), that causes an increase in price and an increase in quantity traded (rising demand creates shortages at the current price, motivating suppliers to increase supply and also charge more money, given the competition between buyers). Similarly, if the supply curve expands (more of the good gets produced at the same price), that causes a decrease in price and an increase in quantity traded. These are robust empirical observations that form the bread and butter of microeconomics, and they're likely true in most industries.
In the long run, however, things become different, because people can reallocate their fixed costs. The more important the allocation of fixed costs is in determining the short-run supply curve, the greater the difference between the short-run supply curves that result from different choices of fixed cost allocation. In particular, if there are increasing returns to scale on fixed costs (for instance, a factory that produces a million widgets costs less than 1000 times as much as a factory that produces a thousand widgets) and fixed costs contribute a large fraction of production costs, then the long-run supply curve might end up being downward-sloping. An industry where the long-run supply curve is downward-sloping is called a decreasing cost industry (see here and here for more). (My original version of this para was incorrect; see CoItInn's comment and my response below it for more.)
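To make the "decreasing cost industry" idea concrete, here is a toy calculation in Python. The cost parameters are made up purely for illustration; the sublinear scaling of fixed costs with planned capacity is the assumption doing all the work.

```python
# Toy model: fixed costs scale sublinearly with planned capacity, so the
# long-run average cost per unit falls as the anticipated quantity grows.
def long_run_average_cost(quantity, fixed_cost_scale=1000.0,
                          fixed_cost_exponent=0.6, marginal_cost=2.0):
    """Average cost per unit when the fixed cost of capacity is
    fixed_cost_scale * quantity ** fixed_cost_exponent (exponent < 1)."""
    fixed_cost = fixed_cost_scale * quantity ** fixed_cost_exponent
    return (fixed_cost + marginal_cost * quantity) / quantity

for q in [1_000, 10_000, 100_000, 1_000_000]:
    print(f"planned quantity {q:>9,}: average cost ${long_run_average_cost(q):.2f}")
```

If the exponent were 1 (fixed costs scaling linearly with capacity), the average cost would come out flat rather than downward-sloping.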
#2: Introducing technology, the arrow of time, and experience curves
The typical explanation for why some industries are decreasing cost industries is the fixed costs of investment in infrastructure, which scale sublinearly with the amount produced. For instance, running ten flights from New York to Chicago costs less than ten times as much as running one flight. This could be because the ten flights can share common resources such as airport facilities or even airplanes, and because they can offer backups for one another in case of flight cancellations and overbooking. The fixed costs of setting up a factory that can produce a million hard drives a year are less than 1000 times the fixed costs of setting up a factory that can produce a thousand hard drives a year. A mass transit system for a city of a million people costs less than 100 times as much as a mass transit system for a city of the same area with 10,000 people.

These explanations for decreasing cost have only a moderate level of time-directionality. When I talk of time-directionality, I am thinking of questions like: "What happens if demand is high in one year, and then falls? Will prices go back up?" It is true that some forms of investment in infrastructure are durable, and therefore, once the infrastructure has already been built in anticipation of high demand, costs will continue to stay low even if demand falls back. However, much of the long-term infrastructure can be repurposed, causing prices to go back up. If demand for New York-Chicago flights reverts to low levels, the planes can be diverted to other routes. If demand for hard drives falls, the factory producing them can (at some refurbishing cost) produce flash memory or chips or something totally different. As for intra-city mass transit systems, some are easier to repurpose than others: buses can be sold, and physical train cars can be sold, but the rail lines are harder to repurpose. In all cases, there is some time-directionality, but not a lot.
Technology, particularly the knowledge component thereof, is probably an exception of sorts. Knowledge, once created, is very cheap to store and very hard to un-create. Consider a decreasing cost industry where a large part of the efficiency of scale comes from larger demand volumes justifying bigger investments in research and development that lower production costs permanently (regardless of actual future demand volumes). Once the "genie is out of the bottle" with respect to the new technologies, the lower costs will remain, even in the face of flagging demand. However, flagging demand might stall further technological progress.
This sort of time-directionality is closely related to (though not the same as) the idea of experience curve effects: instead of looking at the quantity demanded or supplied per unit time in a given time period, it's more important to consider the cumulative quantity produced and sold, and the economies of scale arise with respect to this cumulative quantity. Thus, people who have been in the business for ten years enjoy a better price-performance tradeoff than people who have been in the business for only three years, even if they've been producing the same amount per year.
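The standard way to formalize this is an experience curve of the Wright's-law form, where each doubling of cumulative output cuts unit costs by a fixed percentage. The numbers below are illustrative, not estimates for any particular industry.

```python
import math

def unit_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.20):
    """Wright's-law experience curve: each doubling of cumulative output cuts
    the unit cost by learning_rate (20% here), however long the doubling takes."""
    progress_exponent = -math.log2(1.0 - learning_rate)
    return first_unit_cost * cumulative_units ** (-progress_exponent)

# Two producers each make 1,000 units/year, but differ in cumulative experience:
print(f"ten years in the business: ${unit_cost(10 * 1_000):.2f} per unit")
print(f"three years in the business: ${unit_cost(3 * 1_000):.2f} per unit")
```

Note that only the cumulative totals enter the formula; the per-year production rate matters only through how fast it accumulates experience.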
The concept of price skimming is also potentially relevant.
#3: The genie out of the bottle, and gaining from bubbles
The "genie out of the bottle" character of technological progress leads to some interesting possibilities. If suppliers think that future demand will be high, then they'll invest in research and development that lowers the long-run cost of production, and those lower costs will stick permanently, even if future demand turns out to be not too high. This depends on the technology not getting lost if the suppliers go out of business — but that's probably likely, given that suppliers are unlikely to want to destroy cost-lowering technologies. Even if they go out of business, they'll probably sell the technology to somebody who is still in business (after all, selling their technology for a profit might be their main way of recouping some of the costs of their investment). Assuming you like the resulting price reductions, this could be interpreted as an argument in favor of bubbles, at least if you ignore the long-term damage that these might impose on people's confidence to invest. In particular, the tech bubble of 1998-2001 spurred significant investments in Internet infrastructure (based on false premises) as well as in the semiconductor industry, permanently lowering the prices of these, and facilitating the next generation of technological development. However, the argument also ignores the fact that the resources spent on the technological development could instead have gone to other even more valuable technological developments. That's a big omission, and probably destroys the case entirely, except for rare situations where some technologies have huge long-term spillovers despite insufficient short-term demand for a rational for-profit investor to justify investment in the technology.
#4: The importance of market diversity and the importance of intermediate milestones being valuable
The crucial ingredient needed for technological progress is that demand from a segment with just the right level of purchasing power should be sufficiently high. A small population that's willing to pay exorbitant amounts won't spur investments in cost-cutting: for instance, if production costs are $10 per piece and 30 people are willing to pay $100 per piece, then pushing production down from $10 to $5 per piece yields a net gain of only $150 — a pittance compared to the existing profit of $2700. On the other hand, if there are 300 people willing to pay $10 per piece, existing profit is zero whereas the profit arising from reducing the cost to $5 per piece is $1500. On the third hand, people willing to pay only $1 per piece are useless in terms of spurring investment to reduce the price to $5, since they won't buy it anyway.
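The arithmetic above, written out as a small Python sketch (same hypothetical numbers as in the paragraph, with the simplifying assumption that each buyer pays exactly their willingness to pay):

```python
def profit(num_buyers, willingness_to_pay, unit_cost):
    """Profit if each buyer pays exactly their willingness to pay, and no sale
    happens when that willingness is below the unit cost."""
    return num_buyers * max(willingness_to_pay - unit_cost, 0)

def gain_from_cost_cut(num_buyers, willingness_to_pay, old_cost, new_cost):
    """Extra profit unlocked by cutting the unit cost from old_cost to new_cost."""
    return (profit(num_buyers, willingness_to_pay, new_cost)
            - profit(num_buyers, willingness_to_pay, old_cost))

# 30 buyers at $100: already hugely profitable, so the cost cut adds little.
print(profit(30, 100, 10), gain_from_cost_cut(30, 100, 10, 5))   # 2700 150
# 300 buyers at $10: zero profit now, $1500 unlocked by the cost cut.
print(profit(300, 10, 10), gain_from_cost_cut(300, 10, 10, 5))   # 0 1500
# Buyers at $1 don't buy at either cost level, so they add no incentive.
print(profit(300, 1, 10), gain_from_cost_cut(300, 1, 10, 5))     # 0 0
```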
Building on the preceding point, the market segment that plays the most critical role in pushing the frontier of technology can change as the technology improves. Initially, when prices are high, the segment that pushes the technology further is the small high-paying elite (the early adopters). As prices fall, the market segment that plays the most critical role becomes less elite and less willing to pay. In a sense, the market segments willing to pay more are "freeriding" off the others: they don't care enough to strike a tough bargain, but they benefit from the lower prices resulting from the others who do. Market segments for whom the technology is still too expensive also benefit, in terms of future expectations. Poor people who couldn't afford mobile phones in 1994 benefited from the rich people who generated demand for the phones in 1994, and from the middle-income people who generated demand for the phones in 2004, so that now, in 2014, the phones are cost-effective for many of the poor people.
It becomes clear from the above that the continued operation of technological progress depends on the continued expansion of the market into segments that are progressively larger and willing to pay less. Note that the new populations don't have to be different from the old ones — it could happen that the earlier population has a sea change in expectations and demands more from the same suppliers. But it seems like the effect would be greater if the population size expanded and the willingness to pay declined in a genuine sense (see the PS). Note, however, that if the willingness to pay for the new population was dramatically lower than that for the earlier one, there would be too large a gap to bridge (as in the example above, going from customers willing to pay $100 to customers willing to pay $1 would require too much investment in research and development and may not be supported by the market). You need people at each intermediate stage to spur successive stages of investment.
A closely related point is that even though improving a technology by a huge factor (such as 1000X) could yield huge gains that would, on paper, justify the cost of investment, the costs in question may be too large and the uncertainty may be too high to justify the investment. What would make it worthwhile is if intermediate milestones were profitable. This is related to the point about gradual expansion of the market from a small number of buyers with high willingness to pay to a large number of buyers with low willingness to pay.
In particular, the vision of the Singularity is very impressive, but simply having that kind of end in mind 30 years down the line isn't sufficient for commercial investment in the technological progress that would be necessary. The intermediate goals must be enticing enough.
#5: Different market segments may face different technological challenges
There are two ends at which technological improvement may occur: the frontier end (the highest capacity or performance that's available commercially) and the low-cost end (the lowest cost at which something useful is available). To some extent, progress at either end helps with the other, but the relationship isn't perfect. The low-cost end caters to a larger mass of low-paying customers, and the high-cost end caters to a smaller number of higher-paying customers. If progress at either end complements the other, that creates a larger demand for technological progress on the whole, with each market segment freeriding off the other. If, on the other hand, progress at the two ends requires distinct sets of technological innovations, then overall progress is likely to be slower.
In some cases, we can identify more than two market segments based on cost, and the technological challenge for each market segment differs.
Consider the case of USB flash drives. We can broadly classify the market into three segments:
- At the high end, there are 1 TB USB 3.0 flash drives priced at around $3000. These may appeal to power users who like to transfer or back up movies and videos using USB drives regularly.
- In the middle (which is the range most customers in the First World, and their equivalents elsewhere in the world, would consider) are flash drives in the 16-128 GB range, with prices ranging from $10-100. These are typically used to transfer documents and install software, with the occasional transfer of a movie.
- At the "low" end are flash drives with 4 GB or less of storage space. These are sometimes ordered in bulk for organizations and distributed to individual members. They may be used by people who are highly cash-constrained (so that even a $10 cost is too much) and don't anticipate needing to transfer huge files over a USB flash drive.
The cost challenges in the three market segments differ:
- At the high end, the challenges of miniaturization of the design dominate.
- In the middle, the cost of NAND flash memory is a critical determinant of overall cost.
- At the low end, the critical factor determining cost is the fixed cost of production, including the cost of packaging. Reducing this would presumably require cheaper, more automated, more efficient packaging.
Progress in the three areas is related, but only loosely. In particular, the middle is the part that has seen the most progress over the last decade or so, perhaps because demand in this segment is the most robust and price-sensitive, or because the challenges there are the easiest to tackle. Note also that the definitions of the low, middle, and high end are themselves subject to change. Ten years ago, there wasn't really a low or high end (more on this in the historical anecdote below). More recently, some storage capacities have moved from the high end to the middle, and others have moved from the middle to the low end.
#6: How does the desire for more technological progress relate to the current level of a technology? Is it proportional, as per the exponential growth story?
Most of the discussion of laws such as Moore's law and Kryder's law focuses on the question of technological feasibility. But demand-side considerations matter, because that's what motivates investments in these technologies. In particular, we might ask: to what extent do people value continued improvements in processing speed, memory, and hard disk space, directly or indirectly?
The answer most consistent with exponential growth is that whatever level you are currently at, you pine for having more in a fixed proportion to what you currently have. For instance, for hard disk space, one theory could be that if you can buy x GB of hard disk space for $1, you'd be really satisfied only with 3x GB of hard disk space for $1, and that this relationship will continue to hold whatever the value of x. This model relates to exponential growth because it means that the incentives for proportional improvement remain constant with time. It doesn't imply exponential growth (we still have to consider technological hurdles) but it does take care of the demand side. On the other hand, if the model were false, it wouldn't falsify exponential growth, but it should make us more skeptical of claims that exponential growth will continue to be robustly supported by market incentives.
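One crude way to see what's at stake (the functional forms and dollar amounts below are my own illustrative assumptions, not estimates of anyone's actual willingness to pay): under the proportional desire model, each further doubling of capacity is worth the same to a buyer no matter how much they already have, whereas under a satiating model, the value of the next doubling shrinks once capacity is "good enough".

```python
import math

def proportional_desire_value(capacity_gb):
    """Value grows with log(capacity): every doubling is worth the same."""
    return 20.0 * math.log2(capacity_gb)

def satiating_desire_value(capacity_gb, enough_gb=500.0):
    """Value levels off once you have roughly 'enough' capacity."""
    return 100.0 * (1.0 - math.exp(-capacity_gb / enough_gb))

for capacity in [500, 1000, 2000, 4000]:
    prop_gain = proportional_desire_value(2 * capacity) - proportional_desire_value(capacity)
    sat_gain = satiating_desire_value(2 * capacity) - satiating_desire_value(capacity)
    print(f"{capacity:>4} -> {2 * capacity:>4} GB: proportional model values the "
          f"doubling at ${prop_gain:.2f}, satiating model at ${sat_gain:.2f}")
```

In the first model the incentive to fund the next doubling never decays; in the second it collapses, which is the pattern the hard disk and flash drive examples below suggest.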
How close is the proportional desire model to the reality? I think it's a bad description. I will take a couple of examples to illustrate.
- Hard disk space: When I started using computers in the 1990s, I worked on a computer with a hard disk size of 270 MB (that included space for the operating system). The hard disk really did get full just with ordinary documents and spreadsheets and a few games played on monochrome screens — no MP3s, no photos, no videos, no books stored as PDFs, and minimal Internet browsing support. When I bought a computer in 2007, it had 120 GB (105 GB accessible), and when I bought a computer last year, it had 500 GB (450 GB accessible). I can say quite categorically that the experiences are qualitatively different. I no longer have to think about disk space considerations when downloading PDFs, books, or music — but keeping hard disk copies of movies and videos might still give me pause in the aggregate. I actually downloaded a roughly 10 GB offline version of Wikipedia, something that gave me only a small amount of pause with regard to disk space requirements. Do I clamor for an even larger hard disk? Given that I like to store videos and movies and offline Wikipedia, I'd be happy if the next computer I buy (maybe 7-10 years down the line?) had a few terabytes of storage. But the issue lacks anything like the urgency that running out of disk space had back in the day. I probably wouldn't be willing to pay much for improvements in disk space at the margin. And I'm probably at the "use more disk space" extreme of the spectrum — many of my friends have machines with 120 GB hard drives and are nowhere near running out of space. Basically, the strong demand imperative that existed in the past for improving hard drive capacity no longer exists (here's a Facebook discussion I initiated on the subject).
- USB flash drives: In 2005, I bought a 128 MB USB flash drive for about $50 USD. At the time, things like Dropbox didn't exist, and the Internet wasn't too reliable, so USB flash drives were the best way of both backing up and transferring stuff. I would often come close to running out of space on my flash drive just transferring essential items. In 2012, I bought two 32 GB USB flash drives for a total cost of $32 USD. I used one of them to back up all my documents plus a number of my favorite movies, and still had a few GB to spare. The flash drives do prove inadequate for transferring large numbers of videos and movies, but those are niche needs that most people don't have. It's not clear to me that people would be willing to pay more for a 1 TB USB flash drive (a few friends I polled on Facebook listed reservation prices for a 1 TB USB flash drive ranging from $45 to $85; currently, $85 is the approximate price of a 128 GB USB flash drive; here's the Facebook discussion). At the same time, it's not clear that lowering the cost of production for the 32 GB USB flash drive would significantly increase the number of people who would buy one. On either end, therefore, the incentives for innovation seem low.
#7: Complementary innovation and high conjunctivity of the progress scenario
The discussion of the hard disk and USB flash drive examples suggests one way to rescue the proportional desire and exponential growth views. Namely, the problem isn't with people's desires not growing fast enough; it's with complementary innovations not happening fast enough. In this view, maybe if processor speed improved dramatically, new applications enabled by that would revive the demand for extra hard disk space and NAND flash memory. Possibilities in this direction include highly redundant backup systems (including peer-to-peer backup), extensive internal logging of activity (so that any accidental changes can be easily located and undone), extensive offline caching of websites (so that temporary lack of connectivity has minimal impact on browsing experience), and applications that rely on large hard disk caches to complement memory for better performance.
This rescues continued exponential growth, but at a high price: we now need to make sure that a number of different technologies are progressing simultaneously. Any one of these technologies slowing down can cause demand for the others to flag. The growth scenario becomes highly conjunctive (you need a lot of particular things to happen simultaneously), and it's highly unlikely to remain reliably exponential over the long run.
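A toy way to see how quickly conjunctivity bites (the probabilities are made up, and independence across technologies and periods is assumed purely for illustration): if staying on the exponential track in each period requires every one of several complementary technologies to hit its milestone, the chance of staying on track for many periods falls off very fast.

```python
def on_track_probability(p_single=0.9, num_technologies=5, num_periods=10):
    """Chance that every complementary technology hits its milestone in every
    period, assuming independence across technologies and across periods."""
    return (p_single ** num_technologies) ** num_periods

print(on_track_probability())                     # ~0.005 even with 90% per-technology odds
print(on_track_probability(num_technologies=1))   # ~0.35 when only one technology must deliver
```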
I personally think there's some truth to the complementary innovation story, but I think the flagging of demand in absolute terms is also an important component of the story. In other words, even if home processors did get a lot faster, it's not clear that the creative applications this would enable would have enough of a demand to spur innovation in other sectors. And even if that's true at the current margin, I'm not sure how long it will remain true.
This blog post was written in connection with contract work I am doing for the Machine Intelligence Research Institute, but it represents my own views and has not been vetted by MIRI. I'd like to thank Luke Muehlhauser (MIRI director) for spurring my interest in the subject, Jonah Sinick and Sebastian Nickel for helpful discussions on related matters, and my Facebook friends who commented on the posts I've linked to above.
Comments and suggestions are greatly appreciated.
PS: In the discussion of different market segments, I argued that the presence of larger populations with lower willingness to pay might be crucial in creating market incentives to further improve a technology. It's worth emphasizing here that the absolute size of the incentive depends on the population more than on the willingness to pay. The gain from reducing the product cost from $10 to $5, given a population of 300 people willing to pay at least $10, is $1500, regardless of the precise amount they are willing to pay. But as an empirical matter, accessing larger populations requires going to lower levels of willingness to pay (that's what it means to say that demand curves slope downward). Moreover, the nature of the current distribution of disposable wealth (as well as willingness to experiment with technology) around the world is such that the increase in population size is huge as we go down the rungs of willingness to pay. Finally, the proportional gain from reducing production costs is higher for populations with lower willingness to pay, and proportional gains might often be better proxies for the incentives to invest than absolute gains.
I made some minor edits to the TL;DR, replacing "downward-sloping demand curves" with "downward-sloping supply curves" and replacing "technological progress" with "exponential technological progress". Apologies for not having proofread the TL;DR carefully before.
One story for exponential growth that I don't see you address (though I didn't read the whole post, so forgive me if I'm wrong) is the possibility of multiplicative costs. For example, perhaps genetic sequencing would be a good case study? There seem to be a lot of multiplicative factors there: amount of coverage, time to get one round of coverage, amount of DNA you need to get one round of coverage, ease of extracting/preparing DNA, error probability... With enough such multiplicative factors, you'll get exponential growth in megabases per dollar by applying the same amount of improvement to each factor sequentially (whereas if the factors were additive you'd get linear improvement).
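A minimal sketch of the multiplicative-versus-additive contrast (the factor values and improvement sizes are made up): improving one factor at a time by the same proportional amount drives a multiplicative cost down exponentially, while the same pattern of fixed absolute improvements drives an additive cost down only linearly.

```python
import math

# Cost per megabase modeled as either a product or a sum of five hypothetical factors.
# Each round, one factor (taken in turn) receives the same size of improvement.
multiplicative_factors = [2.0] * 5   # cost = product of the factors
additive_factors = [2.0] * 5         # cost = sum of the factors

for round_number in range(1, 16):
    i = (round_number - 1) % 5
    multiplicative_factors[i] /= 1.5                           # fixed proportional improvement
    additive_factors[i] = max(additive_factors[i] - 0.3, 0.0)  # fixed absolute improvement
    print(f"round {round_number:>2}: multiplicative cost "
          f"{math.prod(multiplicative_factors):8.3f}, additive cost {sum(additive_factors):6.2f}")
```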
I'm actually writing another (long) post on exponential growth and the different phenomena that could lead to it. Multiplicative costs are on the list of plausible explanations. I've discussed these multiplicative stories with Jonah and Luke before.
I think that multiplicative costs is a major part of the story for the exponential-ish improvements in linear programming algorithms, as far as I could make out based on a reading of this paper: http://web.njit.edu/~bxd1947/OR_za_Anu/linprog_history.pdf
More in my upcoming post :).
UPDATE: Here's the post: http://...