There's a long article in this week's The Economist:

The onrushing wave

discussing the effect of changing technology upon the amount of employment available in different sectors of the economy.

Sample paragraph from it:

The case for a highly disruptive period of economic growth is made by Erik Brynjolfsson and Andrew McAfee, professors at MIT, in “The Second Machine Age”, a book to be published later this month. Like the first great era of industrialisation, they argue, it should deliver enormous benefits—but not without a period of disorienting and uncomfortable change. Their argument rests on an underappreciated aspect of the exponential growth in chip processing speed, memory capacity and other computer metrics: that the amount of progress computers will make in the next few years is always equal to the progress they have made since the very beginning. Mr Brynjolfsson and Mr McAfee reckon that the main bottleneck on innovation is the time it takes society to sort through the many combinations and permutations of new technologies and business models.

(There's a summary online of their previous book: Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy)

 

What do people think are society's practical options for coping with this change?


exponential growth in chip processing speed

In terms of raw speed, Moore's Law broke down at least six or eight years ago. Chips have continued to advance in terms of transistors per area and other metrics, but their clock speed now is roughly what it was in 2005; and while parallelisation is nice, it is much more difficult to take advantage of than plain speed advances. Take an algorithm written in 1984 and run it on the hardware of 2004, and you get an enormous speedup with zero work; but to get a further speedup from the hardware of 2014, you have to think about how to parallelise it, and that's hard work even when it's possible - and not every algorithm can be usefully parallelised.
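
As a rough sketch of why extra cores are a weaker substitute for clock speed (my own illustration, with made-up parallel fractions), Amdahl's law caps the speedup by the share of a program that can actually be parallelised:

    # Amdahl's law: speedup from running the parallelisable fraction p
    # of a program on n cores, while the remaining (1 - p) stays serial.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # A faster clock speeds up everything; extra cores only help the parallel part.
    for p in (0.50, 0.90, 0.99):
        best = 1.0 / (1.0 - p)  # limit as the number of cores goes to infinity
        print(f"parallel fraction {p:.0%}: 8 cores -> {amdahl_speedup(p, 8):.1f}x, "
              f"unlimited cores -> at most {best:.0f}x")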

Hmm, you have a point. I still use my 2004 convertible Toshiba tablet with its 1.7 GHz Pentium M and 1.5GB RAM (which superficially matches today's tablets' specs), but I can no longer use a year-2000 desktop for anything.

Taking advantage of new hardware has always required changing programs to make better use of the hardware. A Pentium 4 wasn't just a faster Pentium Pro. It had a different architecture, new instructions, different latencies and throughputs for various instructions, and vector processing extensions. To make full use of the P4's power, people definitely had to modify their code, all the way down to the assembly level. In fact early in the release cycle there were reports of many programs actually running slower on P4s than P3s under certain conditions. Software developers and compiler designers had to force themselves to use the new and largely unfamiliar MMX/SSE instruction sets to get the most out of those new chips.

But all of this is just part of the broader trend of hardware and software evolving together. Univac programs wouldn't be very good fits for the power of the 386, for instance. Our programming practices have evolved greatly during the last several decades. One example is that x86 programmers had to learn how to write efficient programs using a relatively small number of registers and a small number of memory accesses. This was something of a handicap, as programmers were used to memory access being roughly the same speed as register access (maybe 2-4 times slower), rather than 10 or 20 times slower (or more!) as they were on later x86 architectures. This forced the development of cache-aware algorithms and so on.
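
To make the cache point concrete, here is a small sketch of my own (numpy assumed; the exact ratio varies by machine): traversing a row-major array along its rows touches contiguous memory, while walking down its columns strides across memory and defeats the cache.

    import time
    import numpy as np

    a = np.zeros((5000, 5000))            # stored row-major (C order)

    t0 = time.time()
    for i in range(a.shape[0]):           # row by row: contiguous, cache-friendly
        a[i, :] += 1
    row_time = time.time() - t0

    t0 = time.time()
    for j in range(a.shape[1]):           # column by column: strided, cache-hostile
        a[:, j] += 1
    col_time = time.time() - t0

    print(f"row-wise: {row_time:.2f}s  column-wise: {col_time:.2f}s")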

And where fast performance is really needed (HPC and servers), software developers have always had to modify their code, usually for every single chip iteration. It is very uncommon, for instance, for code written on one supercomputer to run perfectly well on a new supercomputer, without updating the code and compilers.

Anyway, it's not just misleading to say Moore's law has broken down for the past 6-8 years, it's flat-out wrong. Moore's law is about the number of transistors that can fit on a 'single chip', and it has indeed kept going strong:

http://en.wikipedia.org/wiki/File:Transistor_Count_and_Moore%27s_Law_-_2011.svg
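
As a back-of-the-envelope check of what "going strong" means in numbers (my own sketch; the 4004 count and the two-year doubling period are the usual textbook values):

    # Transistor counts under a two-year doubling period, starting from the
    # Intel 4004 (roughly 2,300 transistors in 1971).
    start_year, start_count = 1971, 2300
    for year in (1971, 1991, 2011):
        doublings = (year - start_year) / 2
        print(year, f"~{start_count * 2 ** doublings:,.0f} transistors")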

When you shrink transistors, the distance between them gets smaller, so you can run circuits at higher speed. This is still true today. The problem is this: in the past, when you shrank transistors, you could also lower their operating voltage without sacrificing error rate. This is no longer the case, and it seems to be a limitation of silicon CMOS technology, not of photolithography techniques. So today we have chips that are in principle capable of operating at 10 GHz or more, but they would dissipate impractical levels of power while doing so, and are therefore run at speeds far below what they could theoretically reach.

The cure for this problem is to do the same amount of work with fewer transistors, even if it means slightly slower speeds. The payoff of using 2x fewer transistors for a task more than outweighs the disadvantage of having it run 2x slower, because a 2x reduction in chip frequency and voltage yields more than a 2x reduction in power usage (see the sketch below). This is in many ways the opposite of the trend of the late 1990s and early 2000s. Thus we now have hybrid architectures that use a huge number of very simple, low-transistor-count cores (like nVidia's CUDA cores or Intel's MIC architecture) running at modest speeds but with very high parallelism. These architectures have made computing MUCH faster. The price of this speed increase is that mainstream computer hardware has become parallel, so non-HPC programmers now have to deal with issues that were traditionally reserved for HPC programmers. Hence the tension and anxiety we now see in mainstream programming.
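
The "more than 2x" claim above falls out of the usual dynamic-power relation P ≈ C·V²·f. A toy calculation of my own, with arbitrary baseline units and the simplifying assumption that voltage can be scaled down along with frequency:

    # Dynamic power of CMOS logic scales roughly as capacitance * voltage^2 * frequency.
    def dynamic_power(c, v, f):
        return c * v ** 2 * f

    baseline = dynamic_power(c=1.0, v=1.0, f=1.0)
    halved = dynamic_power(c=1.0, v=0.5, f=0.5)   # halve voltage and frequency together
    print(f"power falls to {halved / baseline:.1%} of baseline")   # 12.5%, an 8x reduction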

Essentially, as computing has been getting better, the average CPU in your laptop has come to look more and more like a supercomputer of 20 years ago. As a result, it has inherited the same programming difficulties.

In terms of raw speed, Moore's Law broke down at least six or eight years ago. Chips have continued to advance in terms of transistors per area and other metrics, but their clock speed now is roughly what it was in 2005

Moore's Law is precisely about transistors per area, not about clock speed. So it hasn't broken down.

Moore's original formulation referred to transistors per area per dollar, yes. However, the same exponential growth has been seen in, for example, memory per dollar, storage per dollar, CPU cycles per second per dollar, and several others; and the phrase "Moore's Law" has come to encompass these other doublings as well.

If it's about all of these things, it doesn't seem very useful to say it's broken down if it only stops working in one of these areas and continues in the others.

[This comment is no longer endorsed by its author]

The benefits of parallelization are highly dependent on the task, but there are quite a lot of tasks that are very amenable to it. It's difficult to rewrite a system from the ground up to take advantage of parallelization, but if systems are designed with it in mind from the beginning, they can simply be scaled up as a larger number of processors becomes economically feasible. For quite a few algorithms, setting up parallelization is quite easy. Creating a new bitcoin block, for instance, is already a highly parallel task (see the sketch at the end of this comment). As for society-changing applications, there's a wide variety of tasks that are very susceptible to parallelization. Certainly, human-level intelligence does not appear to require huge serial power; human neurons have a firing rate of at most a few hundred hertz. Self-driving cars, wearable computers, drones, database integration ... I don't see a need for super-fast processors for any of these.
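
Going back to the bitcoin example: to show how little restructuring such a task needs, here is a toy proof-of-work search of my own (not real Bitcoin mining; the block data, difficulty prefix, and chunk sizes are made up). Each worker scans its own nonce range with no coordination at all:

    import hashlib
    from multiprocessing import Pool

    def search(args):
        # Scan one nonce range independently; return any nonce whose hash
        # starts with the required prefix.
        start, stop, prefix = args
        for nonce in range(start, stop):
            digest = hashlib.sha256(f"block-data-{nonce}".encode()).hexdigest()
            if digest.startswith(prefix):
                return nonce, digest
        return None

    if __name__ == "__main__":
        chunks = [(i * 500_000, (i + 1) * 500_000, "0000") for i in range(8)]
        with Pool(8) as pool:   # more processors -> more chunks, same code
            hits = [r for r in pool.map(search, chunks) if r is not None]
        print(hits[0] if hits else "no hit in these ranges")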

I am increasingly skeptical of people who claim that technology is going to "deliver enormous benefits". I feel that this model of socioeconomics creates a large number of unexplainable (under the model) anomalies, such as the fact that half of Europe and many American cities seem to be in absolute decline, in spite of having about as much technology as the rest of the world, or the fact that places like Hong Kong and Singapore are astoundingly rich and successful, in spite of not having any special technological proclivity.

My model now is that socioeconomic well-being is a function of several factors. Technology is a factor with a relatively small weight. Institutional quality is a far more important factor. My view of the US is that we've experienced a rapid uptake in technology alongside serious decline in institutional quality, and these two effects have more or less cancelled each other out, so that the well-being of people in the US is about unchanged over the last 1-2 decades.

To elaborate on this point, consider that the aggregate market cap of Apple, Microsoft, Google, Oracle, Intel, Qualcomm, Cisco, and Facebook is about 1.5e12 dollars. Conservative estimates of the cost of the US war on terror since 2001 come to about the same. It's not an exact comparison, but it's an order-of-magnitude sanity check on my claim that the wealth increase due to technological advance is on the same order of magnitude as the wealth decrease due to institutional failure.

Market cap only represents the value those tech companies have managed to capture, though. I rarely click on Google's ads and therefore contribute little to their market cap, but access to search engines like Google has had a tremendous impact on my life.


Although the US didn't capture all of the losses from the War on Terror, either!

What is your estimate of how much of Hong Kong's current wealth and success it would preserve if it were unable to make use of technology developed after, say, 1990?

Two answers, under different assumptions about what you're asking. If HK had no post-1990 tech and neither did the rest of the world, then it would maintain about 95% of its wealth. If HK were stuck with 1990 tech while the rest of the world wasn't, then it could maintain about 85% of its wealth - it would still be richer than most of Europe. If this seems high to you, consider that a basic idea of economics suggests that countries will use trade to make up for their relative deficiencies and maximize their comparative advantage, and HK is a global center of trade.

I asked mostly because I wanted a concrete reference point for the claim you were making; it's easier to avoid talking past each other that way.

So, the last 20-25 years of technological development accounts for 5% of Hong Kong's current wealth? Sure, that seems plausible enough.

What sorts of numbers do you think the people who talk about the enormous benefits technology delivers have in mind for that question?

Well, the snippet from the article compares current technological advances to the first era of industrialisation, so they're probably thinking 100-200% range.

Gotcha. Yeah, the idea that we've doubled or tripled our real wealth in the last 25 years seems implausible.

I upvoted the whole chain of comments leading here because it shows how a rational discussion should go: Establish reference points. Elaborate. Agree!

I did the same. I wonder if it would be a good idea to document such cases as Examples of Best Practices.

They would be unable to do business in any real way with the rest of the world. They'd be communicating by landline phones, couriers, and carrier pigeons. They'd just manage to get an 80486DX at a whopping 20MHz.

Some internet, but no HTTP, and likely no modern standards. Just how do you expect them to be a global center of trade in a world economy where their competition has modern computers, communications, and logistics?

Throw out their financial services industry and import/export businesses.

The advantage they're left with is the unique relation they have with China, where they have access to the market but don't have to play by the same rules. There's always value in legal privileges. Much of the world economy is driven by such privileges. But that's not an argument against the value and power of technology to create wealth.

They'd have to buy that tech including training and support. And as a trading center they could.

My view of the US is that we've experienced a rapid uptake in technology alongside serious decline in institutional quality

How would you measure or at least estimate institutional quality?


My view of the US is that we've experienced a rapid uptake in technology alongside serious decline in institutional quality, and these two effects have more or less cancelled each other out, so that the well-being of people in the US is about unchanged over the last 1-2 decades.

I would say that we have had modest technological progress and modest institutional decline. Progress has been overwhelmingly localized in IT/telecom/computers. We've seen small improvements in other areas.

It might seem like this would produce a wash in standard of living, but since we've also seen huge increases in inequality, standard of living for the bottom 75-90% of people is falling.

I feel that this model of socioeconomics creates a large number of unexplainable (under the model) anomalies

It seems to me that those things are pretty quickly explained if you throw race and IQ into your model; those aren't likely to change in the near future, while tech is still the thing likely to change significantly in the near future.

Immigration can make countries much less homogenous in terms of race, to the point where you can't predict much about a country's or area's population. (I'm talking on a global scale, not about immigration to the US.)

See also Hanson's less than enthusiastic review.

I think the key point here is

that the main bottleneck on innovation is the time it takes society to sort through the many combinations and permutations of new technologies and business models.

If this is true, it would basically also apply to FAI. Less so, because an FAI may have better ways to ask; more so, because the changes are even more fundamental.

No, an FAI would have many advantages. For one thing, it wouldn't have the same level of coordination problems that humans do. The technological problems of making DVDs were solved years before they replaced VHS. Their sales were delayed by competing standards and the worry that all but one of the standards would be "the next Betamax". The current state of technological development is an absolute mess. We have competing companies with competing standards, and even within a company there are different generations. You have an iPhone 3 that you want to upgrade to the newest generation? You're going to have to replace your charger and other peripherals. Software companies keep releasing new versions of their programs, which means that users have to learn new user interfaces, and people who are using different versions now have compatibility issues. We have technologies involving dozens of patents owned by different companies that are stuck in development hell because the companies can't work out a profit distribution agreement.

You have an iPhone 3 that you want to upgrade to the newest generation? You're going to have to replace your charger and other peripherals.

From the perspective of the company, this is a feature and not a bug.

The current state of technological development is an absolute mess.

Yep. The idea that the space of possible innovations opened up by any technological advance is explored even semi-efficiently, let alone completely, is starkly at odds with reality.

I think there is much to what Yudkowsky is saying on the topic in this post:

http://lesswrong.com/lw/hh4/the_robots_ai_and_unemployment_antifaq/

He is arguing that the high levels of unemployment we see today are not due to technological progress but rather to the financial crisis.

If it takes 1 year to re-train a person to the level of employability in a new profession, and every year 2% of jobs are automated out of existence, then you'll get a minimum of 2% unemployment.

If it takes 4 years to re-train a person to the level of employability in a new profession, and every year 2% of jobs are automated out of existence, then you'll get a minimum of 8% unemployment.

If it takes 4 years to re-train a person to the level of employability in a new profession, and every year 5% of jobs are automated out of existence, then you'll get a minimum of 20% unemployment.
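
A minimal sketch of that arithmetic (my own; it assumes a steady state and ignores people who give up searching or whose new profession is itself automated away):

    def steady_state_unemployment(retrain_years, automation_rate):
        # Everyone displaced within the last `retrain_years` is still retraining,
        # so at any moment that whole backlog is unemployed.
        return retrain_years * automation_rate

    for years, rate in [(1, 0.02), (4, 0.02), (4, 0.05)]:
        u = steady_state_unemployment(years, rate)
        print(f"{years} yr retraining, {rate:.0%}/yr automated -> at least {u:.0%} unemployed")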

It isn't so much the progress, as the rate of progress.

Yudkowsky mentions that there is a near-unlimited demand for low-skill personal service jobs, such as cleaning floors, and that the 'problem' of unemployment could be seen as people being unwilling to work such jobs at the wages that supply and demand rate them as being worth. But I think that's wrong. If a person can't earn enough money to survive upon by working, at a particular job, all the hours of a week that they're awake, then effectively that job doesn't exist. There may be a near-unlimited number of families willing to pay $0.50 an hour for someone to clean floors in their home, but there are only a limited number who're willing to offer a living wage for doing so.
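
To put numbers on that (my arithmetic, using the $0.50 figure and roughly sixteen waking hours a day):

    wage = 0.50                      # dollars per hour, the figure used above
    waking_hours_per_week = 16 * 7
    weekly = wage * waking_hours_per_week
    print(f"${weekly:.0f} per week, about ${weekly * 52:,.0f} per year")   # $56/week, ~$2,912/year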

If a person can't earn enough money to survive upon

In the Western world you don't need to earn any money to physically survive.

Your life may not be particularly pleasant but you will not starve to death in a ditch.