The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?
Clearly computers are exactly the same, and ought to be expected to have the same effects, as steam engines. Just look at horses; they're doing fine.
Now there's been a recession and the jobs aren't coming back (in the US and EU), even though NGDP has risen back to its previous level (at least in the US). If the problem is automation, and we didn't experience any sudden leap in automation in 2008, then why can't people get back at least the jobs they used to have, as they did in previous recessions? Something has gone wrong with the engine of reemployment... But this must mean something new and awful is happening to the processes of employment - it's not because the kind of automation that's happening today is different from automation in the 1990s, 1980s, 1920s, or 1870s; there were skilled jobs lost then, too. ...even I can see all sorts of changed circumstances which are much more plausible sources of novel employment dysfunction than the relatively steady progress of automation.
And ...
(Upvoted.) I've been reading Tyler and I read McAfee. So far, your comment here is the most impressive argument for this position I've seen anywhere, and so I don't feel bad about not addressing it earlier. I'm not sure you really address the central point either; why can't the disemployed people find new jobs like in the last four centuries, and why did unemployment drop in Germany once they fixed their labor market, and why hasn't employment dropped in Australia, etcetera? (And note that anything along the lines of 'regional boom' contradicts ZMP, 'completely outcompeted humans', and other explanations which postulate unemployability, rather than 'unemployable unless regional boom'.) Why is the IQ 70 kid not able to do laundry as so many others once did earlier, if the economy is so productive - shouldn't someone be able to hire him in his area of Ricardian comparative advantage? Maybe eventually AI will disemploy that kid but right now humans are still doing laundry! Again, the economy of 1920 seemed to do quite well handling disemployment pressures like this with reemployment, so what changed?
Quick question: To what extent are you playing Devil's Advocate above and to what extent do you actually think that the robotic disemployment thesis is correct, a primary cause of current unemployment, not solvable with NGDP level targeting, and unfixable due to some humans being too-much-outcompeted, rather than due to other environmental changes like the regulatory environment etcetera?
I've been reading Tyler and I read McAfee.
Cowen says some interesting things but I don't think he makes the best case for technological unemployment; not sure what you mean by McAfee - Brynjolfsson is the lead author on Race Against the Machine, not McAfee.
I'm not sure you really address the central point either; why can't the disemployed people find new jobs like in the last four centuries,
As my initial comment implies, I think the last century of automation is qualitatively different from what came before: earlier, machines handled brute-force work, replacing things which offered only brute force and not intelligence, like horses or watermills. But now they are slowly absorbing intelligence, and this seems to be the final province of humans. In Hanson's terms, I think machines switched from being complements to being substitutes in some sectors a while ago.
and why did unemployment drop in Germany once they fixed their labor market, and why hasn't employment dropped in Australia, etcetera?
I don't know nearly enough about Germany to say. They seem to be in a weird position in Europe, which might explain it. I'd guess that Australia owes its success to avoiding a...
As my initial comment implies, I think the last century of automation is qualitatively different from what came before: earlier, machines handled brute-force work, replacing things which offered only brute force and not intelligence, like horses or watermills. But now they are slowly absorbing intelligence, and this seems to be the final province of humans. In Hanson's terms, I think machines switched from being complements to being substitutes in some sectors a while ago.
The key Hansonian concept is that replacing humans at tasks is still complementation because different tasks are complementary to each other, a la hot dogs and buns; I should perhaps edit OP to make this clearer. It is not obvious to me that craftspeople disemployed by looms would have considered their work to be unskilled, but as that particular industry was automated, people moved to other jobs in other industries and complementarity continued to dominate. Again the question is, what's different now? Is it that no human on the planet does any labor any more which could be called unskilled, that nobody cooks or launders or drives? Obviously not. But there are many plausible changes in regulation, taxes, p...
I'd pay $5/hour for someone to drive me almost anywhere if availability was coordinated by Uber, but not taxi prices... This looks to me like a barrier-to-entry, regulatory-and-tax scenario, not "Darn it we're too rich and running out of things for labor to do!"
The federal minimum wage has been falling relative to productivity for decades. Also, Australia has a much higher minimum wage than the US but a lower unemployment rate. They also don't have at-will employment, implying that the risks of hiring are larger. So I'm not sure the regulations are actually the problem here (that said, I oppose many of them anyway on various grounds).
Somewhat irrelevant, but:
$150 can't pay someone to trim your trees, at least not well
I think you need to find an enterprising teenager? I currently pay a local kid $100 a month to do the overwhelming majority of my (very elderly) parent's yardwork. He mows the lawn, does the edging, weeds the flower bed and trims back the bushes. He butchered things a few times at the start, but he has gotten quite competent and I fear the day he realizes he is worth more than ~$10 an hour + a Christmas bonus + free lunch served by my mother when he is working.
Of course if you have trees > 20-30 feet tall you'll probably need a more expensive professional service.
I mean that when somebody in the bottom quintile gives me a car ride to Berkeley for $5, nothing else happens to them. They don't pay Social Security on the $5. They don't have their health benefits phased out. They don't have to fill out a form. They just have an additional $5.
I know this is a completely radical concept.
The problem isn't just all those other taxes but phasing-out of benefits - this is what leads to the calculations and observations by which somebody making $25,000/year isn't much better off than someone getting $8,000/year.
ADDED: Also, any paperwork can easily be an extreme barrier to that IQ 70 kid that Gwern was talking about.
The safety net should be a slope, not a cliff. Earning your first dollar shouldn't mean you get $1 less in benefits - there's actually a good argument for subsidizing the first $X of income - which is what the EITC is: basically a negative income tax.
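A toy sketch of the cliff-versus-slope point (all numbers here are invented and don't reflect any real program's schedule):

```python
# Toy comparison of a benefit "cliff" vs. a phased-out "slope".
# All numbers are invented; this is not any real program's schedule.
BENEFIT = 8000         # benefit received at zero earnings
CLIFF_CUTOFF = 20000   # cliff: earn one dollar past this and the whole benefit vanishes
PHASE_OUT_RATE = 0.4   # slope: lose 40 cents of benefit per dollar earned

def net_income_cliff(earnings):
    return earnings + (BENEFIT if earnings <= CLIFF_CUTOFF else 0)

def net_income_slope(earnings):
    return earnings + max(0, BENEFIT - PHASE_OUT_RATE * earnings)

for e in (0, 19000, 20000, 21000, 25000):
    print(e, net_income_cliff(e), net_income_slope(e))
# Under the cliff, going from $20,000 to $21,000 earned *lowers* net income
# from $28,000 to $21,000; under the slope, every extra dollar earned still
# leaves you better off.
```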
Regarding the drop of unemployment in Germany, I've heard it claimed that it is mainly due to changing the way the unemployment statistics are done, e.g. people who are in temporary, 1€/h jobs and still receiving benefits are counted as employed. If this point is still important, I can look for more details and translate.
EDIT: Some details are here:
It is possible to earn income from a job and receive Arbeitslosengeld II benefits at the same time. [...] There are criticisms that this defies competition and leads to a downward spiral in wages and the loss of full-time jobs. [...]
The Hartz IV reforms continue to attract criticism in Germany, despite a considerable reduction in short and long term unemployment. This reduction has led to some claims of success for the Hartz reforms. Others say the actual unemployment figures are not comparable because many people work part-time or are not included in the statistics for other reasons, such as the number of children that live in Hartz IV households, which has risen to record numbers.
To what extent are you playing Devil's Advocate above and to what extent do you actually think that the robotic disemployment thesis is correct, a primary cause of current unemployment, not solvable with NGDP level targeting, and unfixable due to some humans being too-much-outcompeted, rather than due to other environmental changes like the regulatory environment etcetera?
Gwern on neoluddism: http://www.gwern.net/Mistakes#neo-luddism
Why is the IQ 70 kid not able to do laundry as so many others once did earlier, if the economy is so productive - shouldn't someone be able to hire him in his area of Ricardian comparative advantage?
In addition to gwern's objections, what if his RCA price-point turns out to be, say, 50c an hour? The utility curve is not smooth. Past a point, a starvation wage is still a starvation wage. Even in a hypothetical world where there were zero welfare and no opportunities for crime, he'd be better off spending the time looking for low-probability alternatives than settling for 40 hours a week of work for certain starvation.
Yes, but location isn't fungible, and not all jobs are telecommutable. A 50c/hour wage in the Bay Area is a death sentence without some supplemental source, even if someone in the Congo might live like a king on it.
This reminds me of the place premium, an interesting concept: someone doing the same job can earn more in one country than in another. Though we are talking about some kid who can't even get a job in the first place, the concept still applies.
For example, suppose a homogeneous region such as a country, city, or even suburb has automated to such a degree that menial jobs are few, has attracted the best people, and the best people to serve the best people. Such a region has 'place premium': the top creative jobs (programming, finance, design work, etc.) pay extremely well to entice the best. These people demand, via their wealth, the best service, and so entice those who are skilled, good-looking, or whatever attributes are required for service, continuously filtering people.
I'll also argue that the US is a special case in that US dollar holders get a subsidy to living via the petrodollar/global reserve currency. Paid for by any foreigners wanting to buy [relative to them] foreign products. This only increases the place premium of living in the US, and thus earning a wage in USD.
For the IQ 70 kids, perhaps there ARE no jobs for them in the region they live in. They have been filtered out by better (in ...
Why is the IQ 70 kid not able to do laundry as so many others once did earlier, if the economy is so productive - shouldn't someone be able to hire him in his area of Ricardian comparative advantage?
The left tail on the distribution for inventive, creative, bright people seems highly likely to be fatter than the right tail. You need to be genetically gifted enough, and have had the right encouragement, and lived in the right intellectual environment, to go on to create the neat inventions and research and so on that automation supposedly frees people up for. If the left tail is indeed fatter, then rather than freeing people up for better jobs, automation frees them up to compete for a finite number of worse jobs.
Or, in other words, it seems to me like there's a non-trivial possibility that the people who were doing admin tasks are being displaced into doing laundry tasks instead. That what would once have been done by the 70 IQ kid is now being done by a 100 IQ adult.
The trucking industry alone employs ~3% of the entire American population. That's not trivial by any means.
I just thought I'd mention that driverless cars can be expected to have a lot of ripple effects. Parking lot attendants; traffic court clerks; insurance claim adjusters; auto body repairmen; the guy whose job it is to calibrate breathalyzers; meter maids; etc. All of these people could face a good deal of unemployment if driverless cars come in.
As far as your larger point goes, I think it's a good one. By looking at AI in a narrow way, Eliezer is giving short shrift to a lot of technological improvements which have the potential to cause unemployment. For example, if a business starts scanning documents and keeping them electronically, it will probably need fewer file clerks and mailroom guys. Does this count as AI? Perhaps and perhaps not, but when people assert that unemployment is due to advances in computers, they certainly are referring to these types of changes.
As far as unemployment itself goes, I also agree with you that even if the theoretical model is correct, there is still surely a lag in reemployment which has the potential to cause disruption. How quickly did the need for blacksmiths drop down to nearly zero? Probably pretty slowly and gently compared to what might be happening now. Perhaps a 50 year old blacksmith would have urged his son to find a different line of work but would have had enough business to see him through.
If cars were invented nowadays, the horse-and-saddle industry would surely try to arrange for them to be regulated out of existence, or sued out of existence, or limited to the same speed as horses to ensure existing buggies remained safe.
That's not a new thing; that sort of regulation actually happened!
Many labor market regulations transfer wealth or job security to the already-employed at the expense of the unemployed, and these have been increasing over time.
One example: raising the minimum wage makes lower-productivity workers permanently unemployable, because their work is not worth the price, so no one can afford to hire them any more.
When the government raises minimum wage, it effectively funds the development of automation, as businesses seek replacements for low-end labor. (Like Amazon buying that robotics company to build warehouse management robots.)
Heck, you could almost say that AI doesn't cause unemployment; the need for unemployment causes AI. When labor cost increases without a productivity gain, there has to be a productivity gain to make up for it, and the pain of the increase motivates businesses to actually look for alternatives to their current ways of doing something.
So every time the minimum wage goes up, companies will replace more and more of their former minimum wage workers with automation. Somehow, the politicians never catch on to this, or they know and don't care. It makes me want to scream every time I get a promotional email from some organization talking about how evil low wages are and how the minimum wage needs to be raised. Don't they know they are going to make jobs go away, basically forever?
I have strong doubts about its generality.
It matches up with my experience, with the caveat that it is much more true for publicly held firms than privately held firms. I remember a project I was working on for a warehouse management software company; the advising professor commented something along the lines of "well, if they can show they make the money back in five years, then it's a win to invest," and we responded with "actually, the decision horizon for most of their clients is about a year or two." He was visibly shocked by the implied difference in time horizons.
The argument for this mostly comes down to the implicit discounting of promises. If the salesman claims it has an effect size that large, then very possibly it will actually pay off once you account for the total cost of installation and ownership. The cynical observation is that it has more to do with the quarterly cycle of businesses: investments need to pay for themselves rather quickly, or it may be your successor who reaps the benefits of your investments. Privately held firms have noticeably longer time horizons, make more of these long-term investments, and that appears to be a major reason they often perform better in the long run than publicly held businesses.
Eliezer, what was your motivation for thinking about this topic and writing this post? Is there a strategic relevance to MIRI or the typical LW reader? Are journalists or other people frequently asking you to comment on AI and unemployment (in which case, why is it titled "Anti-FAQ")? Is it just intellectual curiosity from your interests in AI and economics?
Partially dispel the view of MIRI wherein we're allegedly supposed to pontificate on something called 'AI risk' and look gravely concerned about it. Lure young economists into looking at the intelligence explosion microeconomics problem.
Both Q and A seem to be treating unemployment as intrinsically bad, which is a case of lost purposes, a confusion between terminal and instrumental goals.
It's not a confusion by the technological unemployment people, at least: most of them seem to come to conclusions like 'this is irreversible and reversing it is undesirable anyway, so what we need to do is de-link employment from being able to survive, using something like a Basic Income'.
People who think that automation is currently increasing unemployment don't generally just talk about jobs lost during the Great Recession. They see an overall trend of reduction in employment and wages since at least 2000.
You're absolutely right that the recession was caused by a financial shock. The thing is, a normal effect of recessions is for productivity to increase; businesses lay off workers and then try to figure out how to run their operations more efficiently with fewer workers, and that happens in every recession. The difference might be that this time, it is easier than ever before for employers to figure out how to do more with fewer workers (because of the internet, and automation, and computers, etc.), and so even when demand starts to come back up as GDP grows again, they apparently still don't need to hire many workers.
The economists making the automation argument aren't saying that automation caused the great recession or the loss of jobs that happened then; they tend to think that it's a long ongoing trend that's been going for quite a while, that it was partly hidden for a few years by the housing bubble, but that the great recession has accelerat...
They see an overall trend of reduction in employment and wages since at least 2000.
And also wage stagnation in contrast to continuing productivity gains since the 1970s.
The explanation that people like Erik Brynjolfsson give for why the gap between productivity and wages is growing larger is that, as it becomes easier to automate more and more parts of production, the relative importance of capital (money to invest in automation) grows, while the relative importance of labor declines. So as automation advances, more of the profit goes to those with the capital to invest in automation while less of it goes to the worker.
Paul Krugman wrote an article about 6 months ago discussing in economic terms how it can be possible for certain types of technological advance to benefit capital at the expense of workers; it was pretty interesting. Let me find it.
http://krugman.blogs.nytimes.com/2012/12/10/technology-and-wages-the-analytics-wonkish/
A possible explanation: You can't have an IQ-70 person doing the work that needs IQ 130, but you can have it the other way round.
So maybe in the past many people were too smart for their jobs (because most things that needed to be done were stupid), and when those jobs were automated away, the smart people moved to do smarter things. This continued for some time... until all the smart people left the stupid jobs. Now when yet more stupid jobs are automated away, the remaining stupid people have nowhere to go.
In a story format:
There was a farmer with three sons -- one was smart, one was average, one was stupid. At the beginning all three sons were needed to work on the farm, otherwise there would not be enough food for them to survive.
Then the farmer bought a machine, so only two sons were needed at the farm. The smartest son left the farm and became a scientist.
Then the farmer bought another machine, so only one son was needed at the farm. The average son left the farm and became a clerk.
Then the farmer bought a third machine, so no sons were needed at the farm. The stupid son left the farm and became... unemployed, because he was too stupid to do anything other than farm work.
(Why did this happen now, and not during the previous years when the former sons were leaving?)
I don't think it's the number of jobs being automated away that matters, but the rate; unemployment becomes a problem when automation outpaces reemployment. Better or worse economic policies can move the rate of reemployment up or down, but as the rate of automation increases, the quality of governance required to make par rises with it.
The financial system is staring much more at the inside of its eyelids now than in the 1980s.
What exactly does this mean?
Most money in the financial system is invested in bets on what happens in the financial system.
Notional amount is not a good measure for the OTC market. It has two main problems: it double- (or more) counts multiple-step transactions, and it doesn't net offsetting transactions.
For the first problem, the power grid is a good analogy. Imagine you wanted to assess the total amount of power in the US power grid, so you add the amount of all power leaving plants, plus the amount of everything that passes through high voltage lines, plus the amount of everything going through substations, plus the amount of everything on transformers, plus the amount of everything going through local grids, plus the amount of all power used by homes or companies.
Since power goes through all of those steps, if you count each step separately and sum, your total will be massively overstated.
Netting: if a friend and I get lunch twice and I buy the first lunch and the friend buys the second, we call it even and that's that. If two large corporations do the same thing, they leave both contracts in place. This is because wiring money back and forth is cheap and canceling or amending contracts is cumbersome and incurs legal costs (which are expensive). So even though the economic exposure is zero, the notional exposure is twice the cost of lunch.
For long dated contracts that are around for years, repeated nettings can build up large meaningless notionals that bloat the figures.
Both these issues with notionals are well known, so you should probably slightly update your wariness for whatever source was quoting notionals without the requisite disclaimer.
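A toy illustration of the netting point (the numbers are invented; the two "trades" are just the offsetting lunches from the example above):

```python
# Toy illustration of gross notional vs. net exposure.
# Two offsetting lunch "contracts" are left in place instead of being cancelled.
trades = [+12.50, -12.50]   # I owe you one lunch, you owe me one lunch

gross_notional = sum(abs(t) for t in trades)  # 25.0 -- what headline notional figures add up
net_exposure = sum(trades)                    # 0.0  -- the actual economic exposure

print(gross_notional, net_exposure)
```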
Another hypothesis for the mix, conveyed to me by a business major:
The biggest recent change has been the abrupt entry into the information age, with internet companies being in the innovation spotlight. Software companies and other information-centered businesses are far more scalable than most, which means that when a product gets popular, a big profit can result. The idea here is that this provides an exceptionally high opportunity for inequality: information industries create a small population that gets the high payoff, with a large number who pay for the new products.
Some simplistic macroeconomic simulations have suggested that there are two equilibriums which an economy can fall into; one where people have roughly the same amount of money, and another where money concentrates into a small number of hands. This makes the tech-inequality idea scary. Surely reality is more complex than the simple simulation; but, innovations with high inequality risk could push us into a different equilibrium...
The traditional story is that when innovators provide new products for everyone to buy, everyone benefits; the innovators may get rich, but the others who buy the product are also better off. Looking at graphs, the standard of living goes way up for the rich, but also rises more slowly for the poor... until the 90s. Then the poor actually get worse off again. (I checked this some time ago, and don't have a convenient link, sorry! In general there are a lot of things in this comment that could use fact-checking.)
Your * explanations in general involve some systemic changes, which doesn't jibe with the abrupt and dramatic shock seen in the 2007-2009 data. Any explanation of what is currently happening that doesn't tie into the obvious business cycle seems to lack the necessary explanatory power.
I don't doubt that some or all of those systemic issues are driving long term trends (for instance I know dozens of PhDs who WANT to be working on next-gen power generation but are instead in banking or finance because no one would hire them to do anything else. This obviously has an effect on the mix of employment sectors but that shouldn't necessarily mean lower employment), but there is an abrupt and sudden shock in the data. The fact that it's happening in multiple countries at once makes it harder to blame regulatory environments.
Maybe we've finally reached the point where there's no work left to be done
If so, this is superb! This is the end goal. A world in which there is no work left to be done, so we can all enjoy our lives, free from the requirement to work.
The thought that work is desirable has been hammered into our heads so hard that it sounds like a really, really dubious proposition, but a world where nobody has to work actually is the ultimate goal. Not one in which everyone works. That world sucks. That's the world in which 85% of us live today.
Vernor Vinge once said something to the effect of, “When a robot can autonomously clean a bachelor’s bathroom, then we will be very, very close to a singularity.” So he's in agreement with you on this scenario:
My God, what have we wrought! There’s going to be a massive [janitorial] unemployment cri—FOOM
Edit, sourced quote: http://mindstalk.net/vinge/firstMoversTalk.html
Vernor: My classic example of that is that I figured that a robot that could clean a bachelor's unprepped bathroom
[laughter]
would be something that would be very close to satisfying the singularity.
Well, if it has the ability to clean a bathroom, similar systems could cook, clean, drive, construct, do pretty much any routine task - that sounds like a lot of jobs to me. Now, could a lizard-level intelligence clean a randomly chosen bathroom? Said robot would have to have a lot of common-sense notions of how to treat objects, very good visual perception, proprioception, and object classification, even the ability to use tools. That sounds closer to higher-mammal intelligence to me. As I haven’t spent my life studying AI, I’m perfectly willing to replace my opinion on this with your own, but I’m having trouble seeing how cleaning a randomly-chosen bathroom is a lizard-level task.
Hm. My previous sentence is on reflection incorrect; considering the number of jobs that could potentially be replaced by 'clean a bachelor pad' level intelligence, we would be looking at a potential disemployment shock that would be considered large in the US. Not a complete disemployment shock, but it would probably qualify as 'mass unemployment' if reemployment failed.
Now, could a lizard-level intelligence clean a randomly chosen bathroom?
If a generally lizard-level intelligence were hooked to a petabyte database of special cases scraped by slightly smarter algorythms from security footage of previous bathroom cleanings, it could do it. This isn't how an AI theorist would attempt the problem, but it is more or less how Google translate works, and quite possibly how the first bachelor-bathroom-cleaning robot will work. Such an AI would be nowhere near capable of self-improvement.
I realize this doesn’t exactly contradict you, but even if true (and it probably is/was) I think those “most” would not in fact think of difficulty but rather of how well you need to solve the problem. That is, a bathroom-cleaning robot that misplaces the shampoo five percent of the time might be considered a “solved problem”, but a self-driving car that “misplaces” the car even one percent of the time would sound very scary. I think it’s the difference in “acceptance criteria” that makes people misrank tasks rather than relative difficulty.
Really? I think of roads and highways as simple prepared environments, on which even the unexpected can be handled with relatively few actions - swerve, stop. A bathroom can be messy in a ridiculous variety of ways.
The main question is why automation is associated with unemployment today when it wasn't in the past. To answer, you have to consider the kinds of jobs created by and lost to automation and the determinants of workers' incomes in those jobs.
Most of the industrial revolution is associated with an increasing number of workers in manufacturing and fewer in farming. The industrial work force grew primarily at the expense of the peasants or farmers. Today, automation is causing manufacturing jobs to be replaced by service jobs. Farming jobs were the first to go because...
Why are we talking about jobs rather than man-hours worked? Automation reduced man-hours worked. We went from much longer work weeks to 40 hour work weeks as well as raising standards of living.
AI will reduce work time further. If someone can use AI to produce as much in 30 hours as they did in 40, they could choose to work anywhere from 30-40 hours and be better off. Many people would choose to work less as they compare the marginal values of free time and extra pay.
Why are we seeing long term unemployment instead of shorter work weeks now? Is this inevitable or is there some structural or institutional problem causing it?
This is a good FAQ, but one thing's bugging me. This bit from footnote #2:
This would also require some amount of decreased taxes on the next quintile in order to avoid high marginal tax rates, i.e., if you suddenly start paying $2000/year in taxes as soon as your income goes from $19,000/year to $20,000/year then that was a 200% tax rate on that particular extra $1000 earned.
Is warning about an error that almost no one makes, and thus ends up sounding kinda clueless in turn. Current tax codes are already written in terms of marginal rates, so ther...
"There's a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying."
Tyler Cowen is again relevant here with his http://www.amazon.com/The-Great-Stagnation-Low-Hanging-ebook/dp/B004H0M8QS , though I think he considers it less cultural than Thiel does.
"We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there...
300 IQ is 10 standard deviations above the mean. So picture a trillion planets each with a trillion humans on them and take the smartest person out of all of this and transport him to our reality and make it very easy for him to quickly clone himself. Do you really think it would take this guy five full years to dominate scientific output?
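A quick sanity check of that framing (this assumes IQ is normally distributed and reads "IQ 300" as +10 standard deviations, which is the comment's convention; on the usual SD-15 scale, 300 would be more than +13 SD):

```python
# Back-of-the-envelope check of the "trillion planets" framing.
from scipy.stats import norm

population = 1e12 * 1e12           # a trillion planets, a trillion humans each
p_above_10_sd = norm.sf(10)        # ~7.6e-24, the normal tail beyond +10 SD
print(population * p_above_10_sd)  # ~7.6 people expected above +10 SD, so the
                                   # single smartest of them sits right around that level
```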
So picture a trillion planets each with a trillion humans on them
There is almost no way this hypothetical provokes accurate intuitions about a 300 IQ. It's hard to ask someone to picture something they are literally incapable of picturing and I suspect people hearing this will just default to "someone a little smarter than the smartest person I know of".
Ah, right. The first response that came to mind was "well, I might already have everything that I want, but what about those poor or unemployed folks we're worried about" - but of course, if there are such people with unsatisfied desires, then obviously that means that there's still an unmet demand that the increased production can help meet, and the extra money is so that the poor people can actually buy the fruits of that additional production? Thanks, that makes sense.
That is very plausibly a world in which unemployment is massively higher than today, if sentiment is the only remaining reason to employ humans at anything; and a world in which a few capital-holders are the only ones who can afford to employ all these premium human hairdressers etcetera. If this is how things end up, then I would call my thesis falsified, and admit that the view I criticized was correct.
Have you spent much time working in labs? It's been my experience that most of the work is data collection, where the process you are collecting data on is the limiting factor. Honestly, I can't think of any lab I've been a part of where data collection was not the rate-limiting step.
Here are the first examples that popped into my head:
Consider Lenski's work on E. coli. It took from 1988 to 2010 to get to 50k generations (and it is still going). The experimental design phase and data analysis here are minimal in length compared to the time it takes E. coli to grow and breed.
It took 3 years to go from the first potential top quark events on record (1992) to actual discovery (1995). This time was just waiting for enough events to build up. (I'm ignoring the 20 years between prediction and first events because maybe a super-intelligence could have somehow narrowed down the mass range to explore; I'm also ignoring the time required to actually build an accelerator. That's 3 years of just letting the machine run.)
Depending on what you are looking for, timescales in NMR collection are weeks to months. If your signal is small, you might need dozens of these runs.
Also, anyone who has ever worked with a low temperature system can tell you that keeping the damn thing working is a huge time sink. So you could add 'necessary machine maintenance' to these sorts of tasks. It's not obvious to me that leak-checking your cryogenic setup to troubleshoot can be sped up much by higher IQ.
And, we all pay upwards of 98% of all of our wealth to the hidden tax of inflation
This is nonsense.
Is the userbase comfortable labelling the seymour_results account a troll? If people still take the account seriously then there is obviously false information being flung about that requires correction. But from my perspective this reply triggered my 'do not feed' instincts so I suspect it may be time to revert to "downvote and ignore" as a harm minimisation tactic.
One limit of the theory is that while it does state the new equilibrium will be "15 hot dogs in 15 buns" (i.e., enhanced production and no more unemployment), and that has been verified in the past, it doesn't state at which rate the new equilibrium will be reached, nor what will happen in the transition period.
One possible hypothesis is that if the rate of change is too fast, no equilibrium can be reached; the economy can't adjust fast enough to new technology.
I don't think it's the case right now - technology isn't going significantly faster than it was for most of the 20th century. But I think it's worth an entry in the FAQ.
Steve Keen's Debunking Economics blames debt, not automation.
Essentially, many people currently feel that they are deep in debt, and work to get out of debt. Keen has an ODE model of the macroeconomy that shows various behaviors, including debt-driven crashes.
Felix Martin's Money goes further and argues that strong anti-inflation stances by central bank regulators strengthen the hold of creditors over debtors, which has made these recent crashes bigger and more painful.
Regulations and the minimum wage mean hiring the worst workers in America is more expensive than getting the same production from workers in 3rd world countries, but the most marginal workers are not getting produced at any lower a rate in America than they were in the past.
The early parts of this seem to fall apart when you switch from first-order qualitative reasoning to thinking about derivatives. Our basic observation is that the rate at which new technologies are automating away jobs now exceeds the rate at which new jobs are being created. Yes, this indicates a deficiency in the engine of reemployment, but putting all the focus on one side of the inequality seems disingenuous; every factor which changes the values on either side matters, cumulatively. Yeah, reemployment isn't working; but we're also pushing harder on it...
I predict that labor market turnover is higher now than it was in past decades, for as many decades as we have reliable data.
Goes and checks.
BLS data on total separations as a percentage of total employment. It only goes back to Dec 2000, but that is enough to surprise me: the separation side of the turnover fell from 4.0 to 3.2. So my hypothesis, that the rate of automation has increased by enough to significantly impact the labor market, is falsified.
Edit: Actually, after a bit more research I'm not so sure - in particular, I found this which claims that there are 2.7M temporary workers (+50% over the last four years). Converting temporary-worker count into turnover rate is tricky, but this is a symptom you'd expect if turnover has increased, and I don't think it's included in the BLS data.
...The US educational system is either getting worse at training people to handle new jobs, or getting so much more expensive that people can't afford retraining, for various other reasons. (Plus, we are really stunningly stupid about matching educational supply to labor demand. How completely ridiculous is it to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so? Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three cho
It seems to me that a good model of the great recession should include as its predictions that male employment would be particularly hard-hit even among recessions (see https://docs.google.com/spreadsheet/ccc?key=0AofUzoVzQEE5dFo3dlo4Ui1zbU5kZ2ZENGo4UGRKbFE#gid=0). I think this probably favors ZMP (see http://marginalrevolution.com/marginalrevolution/2013/06/survey-evidence-for-zmp-workers.html). Edit: after normalizing the data with historical context, I'm not so sure.
secularly increasing long-term unemployment
What does "secularly" mean here...? I don't think I'm familiar with this usage.
I'm not sure I completely follow your reply to the hot dog and bun example. As the questioner pointed out, we may simply be reaching a saturation of the amount of hot dogs and buns we need. Maybe I'm being unfair but I feel you hand-waved that concern away. You say that:
We do not literally have nothing better for unemployed workers to do. Our civilization is not that advanced.
Which is true, but doesn't address the question, because you don't have to have robots replace 100% of humans for some people to find themselves without a job.
...It's plausible we
I think the view that automation is now destroying jobs, the view that the economy always re-allocates the workforce appropriately, and the views defended in this anti-FAQ all rest on a faulty generalisation. The industrial revolution and the early phases of computerisation produced jobs for specific reasons. Factories required workers and computers required data entry. It wasn't a consequence of a general law of economics; it was a fortuitous consequence of the technology. We are now seeing the end of those specific reasons, but not because of a general tr...
The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we've lost that natural rate; but so far as I know this is not conventional macroeconomics.
I wouldn't be surprised if this was the case, and I'd be very surprised if the end of cheap (at least, much cheaper) petroleum has nothing to do with that.
It's plausible we'll never see a city with a high-speed all-robotic all-electric car fleet because the government, after lobbying from various industries, will require human attendants on every car - for safety reasons, of course!
I believe I have already pointed out that automatic trains already exist. Putting a human superintendent onto a train with nothing to do except watch it drive itself would be quite ineffective, because the job is so boring they are unlikely to concentrate. I believe existing driverless trains are monitored by CCTV, which is more effective since the monitors actually have something to do in flicking between channels, and could be applied to driverless cars.
I admit, I stopped reading the linked paper when I saw the page count, but I don't see why you're rejecting decades of 60<IQ<100 AIs as implausible (uninteresting is another matter, but some people are interested). An IQ70 AI is little more able to self-improve than an IQ70 human is able to improve an AI. Even an IQ120 human would have trouble with that. The task of bringing AIs from IQ60 to IQ140, where they can start meaningfully contributing to AI research, falls to IQ180 humans, and will probably take a long time.
Not that talking about the IQ of ...
An IQ70 AI is little more able to self-improve than an IQ70 human is able to improve an AI
This is not obviously true. We're a lot less well optimized for improving code than some conceivable AIs can be: a seed AI with relatively modest general intelligence but very good self-modification heuristics might still end up knocking our socks off.
That said, there's a much larger design space where this isn't the case.
My [unverified] intuition on AI properties is that the delta between current status and 'IQ60AI' is multiple orders of magnitude larger than the delta between 'IQ60AI' and 'IQ180AI'. In essence, there is not that much "mental horsepower" difference between the stereotypical Einstein and a below-average person; it doesn't require a much larger brain or completely different neuronal wiring or a million years of evolutionary tuning.
We don't know how to get to IQ60AI; but getting from IQ60AI to IQ180AI could (IMHO) be done with currently known methods in many labs around the world by the current (non-IQ180) researchers rapidly (ballpark of 6 months maybe?). We know from history that a 0 IQ process can optimize from monkey-level intelligence to an Einstein by brute-forcing; so in essence, if you've got IQ70 minds that can be rapidly run and simulated, then just apply more hardware (for more time-compression) and optimization, as that gap seems to require exactly 0 significant breakthroughs to get to IQ180.
While I agree with almost all of the antifaq (the general point is apparently more plausible to economists than non-economists), this is pretty misleading:
"The future cannot be a cause of the past."
True, but human expectations about the future can be very important in the present. If you expect that 5 years from now FAI will take over, you won't bother to make many long-term investments like building factories and training new workers.
So I decided to see how much negative karma I could amass before a "singularity type event."
Y'know, that was a fun game when sites like Slashdot first started to implement a karma system -- it was all new and shiny and of course people wanted to see how you break one of those.
Hint: that was a long time ago. By now negative karmawhoring is strictly in the domain of a certain class of creatures which are not known for their smarts or good hair styling.
Only marginally related to the topic -- not sure if this belongs here or to the Open Thread:
What do people think the effect of raising or lowering the retirement age would be on unemployment? Intuitively, I'd guess that lowering the retirement age means that more old people will retire, and more young people will be needed to take up their jobs, lowering the unemployment rate (and effectively transferring wealth from old to young generations). But I can remember very few people (almost exclusively in meatspace) ever suggesting lowering the retirement age t...
The effect of automation is to lessen the need for human engagement in producing something. But automation isn't free. There is sometimes a decision to invest in a new machine or another employee. Machines are a form of capital. As you own more capital in the form of machines, you have an edge over someone who doesn't. In the capitalist model, advantages must be exploited as much as possible to remain competitive.
In our society, the difficulty seems to be how to address the problem that less human effort is required. According to supply and demand, this me...
...Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?
A. Conventional economic theory says this shouldn't happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot
Does anybody have any data or reasoning that tracks the history of the relative magnitude of ideal value of unskilled labor versus ideal minimum cost of living? Presumably this ratio has been tracking favorably, even if in current practical economies the median available minimum wage job is in a city with a dangerously tight actual cost of living.
What I'd like to understand is, outside of minimum wage enforcement and solvable inefficiencies that affect the cost of basic goods, how much more economic output does an unskilled worker have over the cost what ...
We don't live in a free market simulation where innovation magically appears just because it is better and possible. Professions lobby for their regulation to protect against replacement. Ophthalmologists lobby against general surgeons taking over their tasks, despite them having the longest hospital waiting times. We could be training lay people to carry out specific highly specialised tasks, getting rid of unemployment and easing the education burden on doctors. We don't do these things. Only 82% of emergency, 71% of urgent department presentations are...
There are some great points in this article, but I can't imagine why a super intelligent being would want to perform any human job (let alone the low level ones). Wouldn't it rather be doing whatever complex stuff it is interested in? Another point - it all really depends on what the goals of the AI are - that will be the tricky task. Ideally it would be there to help humans but there is no guarantee it would stick to that in future iterations of itself.
a super intelligent being would want to perform any human job (let alone the low level ones). Wouldn't it rather be doing whatever complex stuff it is interested in?
A banal human job might be the thing a superintelligence most enjoys. The AI will only get bored or thirst for novel intellectual stimulation if we program it to do so. Nothing is intrinsically interesting or boring.
Ideally it would be there to help humans but there is no guarantee it would stick to that in future iterations of itself.
There might be a guarantee, but we haven't found out what it would look like yet.
Meta: Talking about macroeconomics on the Internet often triggers a lot of mind-killy tendencies in my experience - I would be extremely worried if I saw LessWrong focusing on similar topics with any regularity.
...At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do.
But presumably, productivity would rise as well, increasing the real value of wages of a certain face value.
most humans are not paid to drive cars most of the time
Well, if you count people who rent cheap apartments in the suburbs rather than expensive ones downtown, and then drive to work (etc.) and back every day...
They are not paid to drive cars. From their perspective robotic cars are a pure gain of time, not a loss of money.
Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?
A. Conventional economic theory says this shouldn't happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns. On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
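(A minimal sketch of that arithmetic, just to make the numbers explicit; it assumes labor is the only input and that hot dogs and buns are always consumed one-to-one:)

```python
# Equilibrium arithmetic for the hot-dog-and-bun example above.
TOTAL_LABOR = 30

def equilibrium_quantity(labor_per_hotdog, labor_per_bun):
    # With hot dogs == buns == q, labor used is (labor_per_hotdog + labor_per_bun) * q.
    return TOTAL_LABOR / (labor_per_hotdog + labor_per_bun)

print(equilibrium_quantity(2, 1))  # before automation: 10 hot dogs in 10 buns
print(equilibrium_quantity(1, 1))  # after automation:  15 hot dogs in 15 buns
```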
Q. Sounds like a lovely theory. As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact. Experiment trumps theory and in reality, unemployment is rising.
A. Sure. Except that the happy equilibrium with 15 hot dogs in buns, is exactly what happened over the last four centuries where we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries). We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away. The first thought of automation removing a job, and thus the economy having one fewer job, has not been the way the world has worked since the Industrial Revolution. The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries. Automation followed by re-employment went on for literally centuries in exactly the way that the standard lovely economic model said it should. The idea that there's a limited amount of work which is destroyed by automation is known in economics as the "lump of labour fallacy".
Q. But now people aren't being reemployed. The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.
A. Yes. And that's a new problem. We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence. The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?
Q. Maybe we've finally reached the point where there's no work left to be done, or where all the jobs that people can easily be retrained into can be even more easily automated.
A. You talked about jobs going away in the Great Recession and then not coming back. Well, the Great Recession wasn't produced by a sudden increase in productivity, it was produced by... I don't want to use fancy terms like "aggregate demand shock" so let's just call it problems in the financial system. The point is, in previous recessions the jobs came back strongly once NGDP rose again. (Nominal Gross Domestic Product - roughly the total amount of money being spent in face-value dollars.) Now there's been a recession and the jobs aren't coming back (in the US and EU), even though NGDP has risen back to its previous level (at least in the US). If the problem is automation, and we didn't experience any sudden leap in automation in 2008, then why can't people get back at least the jobs they used to have, as they did in previous recessions? Something has gone wrong with the engine of reemployment.
Q. And you don't think that what's gone wrong with the engine of reemployment is that it's easier to automate the lost jobs than to hire someone new?
A. No. That's something you could say just as easily about the 'lost' jobs from hand-weaving when mechanical looms came along. Some new obstacle is preventing jobs lost in the 2008 recession from coming back. Which may indeed mean that jobs eliminated by automation are also not coming back. And new high school and college graduates entering the labor market, likewise usually a good thing for an economy, will just end up being sad and unemployed. But this must mean something new and awful is happening to the processes of employment - it's not because the kind of automation that's happening today is different from automation in the 1990s, 1980s, 1920s, or 1870s; there were skilled jobs lost then, too. It should also be noted that automation has been a comparatively small force this decade next to shifts in global trade - which have also been going on for centuries and have also previously been a hugely positive economic force. But if something is generally wrong with reemployment, then it might be possible for increased trade with China to result in permanently lost jobs within the US, in direct contrast to the way it's worked over all previous economic history. But just like new college graduates ending up unemployed, something else must be going very wrong - that wasn't going wrong in 1960 - for anything so unusual to happen!
Q. What if what's changed is that we're out of new jobs to create? What if we've already got enough hot dog buns, for every kind of hot dog bun there is in the labor market, and now AI is automating away the last jobs and the last of the demand for labor?
A. This does not square with our being unable to recover the jobs that existed before the Great Recession. Or with lots of the world living in poverty. If we imagine the situation being much more extreme than it actually is, there was a time when professionals usually had personal cooks and maids - as Agatha Christie said, "When I was young I never expected to be so poor that I could not afford a servant, or so rich that I could afford a motor car." Many people would hire personal cooks or maids if we could afford them, which is the sort of new service that ought to come into existence if other jobs were eliminated - the reason maids became less common is that they were offered better jobs, not because demand for that form of human labor stopped existing. Or to be less extreme, there are lots of businesses who'd take nearly-free employees at various occupations, if those employees could be hired literally at minimum wage and legal liability wasn't an issue. Right now we haven't run out of want or use for human labor, so how could "The End of Demand" be producing unemployment right now? The fundamental fact that's driven employment over the course of previous human history is that it is a very strange state of affairs for somebody sitting around doing nothing, to have nothing better to do. We do not literally have nothing better for unemployed workers to do. Our civilization is not that advanced. So we must be doing something wrong (which we weren't doing wrong in 1950).
Q. So what is wrong with "reemployment", then?
A. I know less about macroeconomics than I know about AI, but even I can see all sorts of changed circumstances which are much more plausible sources of novel employment dysfunction than the relatively steady progress of automation. In terms of developed countries that seem to be doing okay on reemployment, Australia hasn't had any drops in employment and their monetary policy has kept nominal GDP growth on a much steadier keel - using their central bank to regularize the number of face-value Australian dollars being spent - which an increasing number of influential econbloggers think the US and even more so the EU have been getting catastrophically wrong. Though that's a long story.[1] Germany saw unemployment drop from 11% to 5% from 2006-2012 after implementing a series of labor market reforms, though there were other things going on during that time. (Germany has twice the number of robots per capita as the US, which probably isn't significant to their larger macroeconomic trends, but would be a strange fact if robots were the leading cause of unemployment.) Labor markets and monetary policy are both major, obvious, widely-discussed candidates for what could've changed between now and the 1950s that might make reemployment harder. And though I'm not a leading econblogger, some other obvious-seeming thoughts that occur to me are:
Q. Some of those ideas sounded more plausible than others, I have to say.
A. Well, it's not like they could all be true simultaneously. There's only a fixed effect size of unemployment to be explained, so the more likely it is that any one of these factors played a big role, the less we need to suppose that all the other factors were important; and perhaps what's Really Going On is something else entirely. Furthermore, the 'real cause' isn't always the factor you want to fix. If the European Union's unemployment problems were 'originally caused' by labor market regulation, there's no rule saying that those problems couldn't be mostly fixed by instituting an NGDP level targeting regime. This might or might not work, but the point is that there's no law saying that to fix a problem you have to fix its original historical cause.
Q. Regardless, if the engine of re-employment is broken for whatever reason, then AI really is killing jobs - a marginal job automated away by advances in AI algorithms won't come back.
A. Then it's odd to see so many news articles talking about AI killing jobs, when plain old non-AI computer programming and the Internet have affected many more jobs than AI has. The buyer ordering books over the Internet, the spreadsheet replacing the accountant - these processes are not strongly reliant on the sort of algorithms that we would usually call 'AI' or 'machine learning' or 'robotics'. The main role I can think of for actual AI algorithms is in computer vision enabling more automation. And many manufacturing jobs were already automated by robotic arms even before robotic vision came along. Most computer programming is not AI programming, and most automation is not AI-driven. And on near-term scales, like changes over the last five years, trade shifts, financial shocks, and new labor market entrants are more powerful economic forces than the slow continuing march of computer programming. (Automation is a weak economic force in any given year, but cumulative and directional over decades. Trade shifts and financial shocks are stronger forces in any single year, but might go in the opposite direction the next decade. Thus, even generalized automation via computer programming is still an unlikely culprit for a sudden drop in employment like the one that occurred in the Great Recession.)
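To make that parenthetical concrete, here's a minimal toy comparison in Python. Every number in it is invented for illustration - the 0.3%-per-year automation drag, the shock sizes, the three hand-written "decades" - none of them are estimates of anything real.

```python
# Toy comparison (all numbers invented for illustration, not estimates):
# automation displaces a steady 0.3% of jobs each year, while trade and
# financial shocks hit at up to +/-2% in a single year but keep switching sign.
automation_rate = 0.003
shocks = [0.012, -0.018, 0.007, 0.015, -0.009,
          -0.020, 0.011, 0.004, -0.006, 0.010] * 3   # three invented "decades"
years = len(shocks)

print(f"worst single year: automation {automation_rate:.1%}, "
      f"shock {max(abs(s) for s in shocks):.1%}")
print(f"over {years} years:  automation {automation_rate * years:.1%}, "
      f"net shocks {sum(shocks):+.1%}")
```

In this toy, the shock term dominates in any single year, while over thirty years the cumulative automation term dominates the net shock effect - which is why steady automation is a poor suspect for a sudden employment drop, but a real force over a generation.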
Q. Okay, you've persuaded me that it's ridiculous to point to AI while talking about modern-day unemployment. What about future unemployment?
A. Like over the next ten years? We might or might not see robot-driven cars, which would be genuinely based on improved AI algorithms, and which would automate away another bite of human labor. Even then, the people driving cars for money make up only a small part of the total global economy; most humans are not paid to drive cars most of the time. Also, again: for AI or productivity growth or increased trade or immigration or graduating students to increase unemployment, instead of resulting in more hot dogs and buns for everyone, you must be doing something terribly wrong that you weren't doing wrong in 1950.
Q. How about timescales longer than ten years? There was one class of laborers permanently unemployed by the automobile revolution, namely horses. There are a lot fewer horses nowadays because there is literally nothing left for horses to do that machines can't do better; horses' marginal labor productivity dropped below their cost of living. Could that happen to humans too, if AI advanced far enough that it could do all the labor?
A. If we imagine that in future decades machine intelligence is slowly going past the equivalent of IQ 70, 80, 90, eating up more and more jobs along the way... then I defer to Robin Hanson's analysis in Economic Growth Given Machine Intelligence, in which, as the abstract says, "Machines complement human labor when [humans] become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do."
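To see how a complement effect can dominate at first and a substitution effect later, here's a deliberately crude toy calculation - my own illustration with made-up numbers, not the model in Hanson's paper. The assumptions: machine prices fall 20% a year; cheaper machines act as tools that make each worker somewhat more productive; but once a machine can do a worker's entire job, no employer will pay the worker more than the machine costs.

```python
# Toy illustration only (invented numbers; not the model in Hanson's paper).
# While machines can't do the whole job, cheaper machines act as tools that
# pull the wage up; once they can do the whole job, competition caps the wage
# at the machine's own (falling) price - so wages rise for a while, then fall.
initial_machine_price = 100.0   # assumed cost of one "human-equivalent" of machine work
base_wage = 1.0

for year in range(0, 60, 5):
    machine_price = initial_machine_price * 0.8 ** year   # prices fall 20%/year (assumed)
    # Complement effect: cheaper machine tools make the worker more productive.
    complement_wage = base_wage * (1 + 0.5 * (initial_machine_price / machine_price) ** 0.3)
    # Substitution effect: nobody pays a human more than a machine doing the same job.
    wage = min(complement_wage, machine_price)
    print(f"year {year:2d}   machine price {machine_price:12.5f}   wage {wage:8.4f}")
```

In this toy run the wage rises for the first fifteen years or so, then falls in lockstep with the machine price - the qualitative shape the abstract describes, produced here by nothing more than a productivity boost capped by a falling substitute price.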
Q. Could we already be in this substitution regime -
A. No, no, a dozen times no, for the dozen reasons already mentioned. That sentence in Hanson's paper has nothing to do with what is going on right now. The future cannot be a cause of the past. Future scenarios, even if they seem to associate the concept of AI with the concept of unemployment, cannot rationally increase the probability that current AI is responsible for current unemployment.
Q. But AI will inevitably become a problem later?
A. Not necessarily. We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply. That scenario isn't the only possibility.
Q. What other possibilities are there?
A. Lots, since what Hanson is talking about is a new unprecedented phenomenon extrapolated over new future circumstances which have never been seen before and there are all kinds of things which could potentially go differently within that. Hanson's paper may be the first obvious extrapolation from conventional macroeconomics and steady AI trendlines, but that's hardly a sure bet. Accurate prediction is hard, especially about the future, and I'm pretty sure Hanson would agree with that.
Q. I see. Yeah, when you put it that way, there are other possibilities. Like, Ray Kurzweil would predict that brain-computer interfaces would let humans keep up with computers, and then we wouldn't get mass unemployment.
A. The future would be more uncertain than that, even granting Kurzweil's hypotheses - it's not as simple as picking one futurist and assuming that their favorite assumptions lead to their favorite outcome. You might get mass unemployment anyway if humans with brain-computer interfaces are more expensive or less effective than pure automated systems. With today's technology we could design robotic rigs to amplify a horse's muscle power - maybe; we're still working on that sort of tech even for humans - but it took around an extra century after the Model T to get to that point, and a plain old car is much cheaper.
Q. Bah, anyone can nod wisely and say "Uncertain, the future is." Stick your neck out, Yoda, and state your opinion clearly enough that you can later be proven wrong. Do you think we will eventually get to the point where AI produces mass unemployment?
A. My own guess is a moderately strong 'No', but for reasons that would sound like a complete subject change relative to all the macroeconomic phenomena we've been discussing so far. In particular I refer you to "Intelligence Explosion Microeconomics: Returns on cognitive reinvestment", a paper recently referenced on Scott Sumner's blog as relevant to this issue.
Q. Hold on, let me read the abstract and... what the heck is this?
A. It's an argument that you don't get the Hansonian scenario or the Kurzweilian scenario, because if you look at the historical course of hominid evolution and try to assess the inputs of marginally increased cumulative evolutionary selection pressure versus the cognitive outputs of hominid brains, and infer the corresponding curve of returns, then ask about a reinvestment scenario -
Q. English.
A. Arguably, what you get is I. J. Good's scenario where once an AI goes over some threshold of sufficient intelligence, it can self-improve and increase in intelligence far past the human level. This scenario is formally termed an 'intelligence explosion', informally 'hard takeoff' or 'AI-go-FOOM'. The resulting predictions are strongly distinct from traditional economic models of accelerating technological growth (we're not talking about Moore's Law here). Since it should take advanced general AI to automate away most or all humanly possible labor, my guess is that AI will intelligence-explode to superhuman intelligence before there's time for moderately-advanced AIs to crowd humans out of the global economy. (See also section 3.10 of the aforementioned paper.) Widespread economic adoption of a technology comes with a delay factor that wouldn't slow down an AI rewriting its own source code. This means we don't see the scenario of human programmers gradually improving broad AI technology past the 90, 100, 110-IQ levels. An explosion of AI self-improvement utterly derails that scenario and sends us onto a completely different track, one that confronts us with entirely different questions.
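For a cartoon of what 'returns on cognitive reinvestment' could mean quantitatively, here is a toy recursion of my own devising - nothing this simple appears in the paper - where capability grows each step by an amount proportional to current capability raised to a power r. The only point is the threshold behavior:

```python
# Toy recursion (invented for illustration, not taken from the paper):
# each step, reinvest current capability into self-improvement. With the
# returns exponent r below 1 growth peters out, at 1 it's merely exponential,
# and above 1 it runs away within a couple dozen steps.
def reinvestment_trajectory(r, steps=20, capability=1.0, k=0.2):
    trajectory = [capability]
    for _ in range(steps):
        capability = capability + k * capability ** r
        trajectory.append(capability)
    return trajectory

for r in (0.5, 1.0, 1.5):
    print(f"r={r}: {[round(c, 1) for c in reinvestment_trajectory(r)[::5]]}")
```

Whether anything like the r > 1 regime actually holds for self-modifying AI is exactly the open question the paper tries to get a grip on via the hominid returns curve.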
Q. Okay. What effect do you think a superhumanly intelligent self-improving AI would have on unemployment, especially the bottom 25% who are already struggling now? Should we really be trying to create this technological wonder of self-improving AI, if the end result is to make the world's poor even poorer? How is someone with a high-school education supposed to compete with a machine superintelligence for jobs?
A. I think you're asking an overly narrow question there.
Q. How so?
A. You might be thinking about 'intelligence' in terms of the contrast between a human college professor and a human janitor, rather than the contrast between a human and a chimpanzee. Human intelligence more or less created the entire modern world, including our invention of money; twenty thousand years ago we were just running around with bow and arrows. And yet on a biological level, human intelligence has stayed roughly the same since the invention of agriculture. Going past human-level intelligence is change on a scale much larger than the Industrial Revolution, or even the Agricultural Revolution, which both took place at a constant level of intelligence; human nature didn't change. As Vinge observed, building something smarter than you implies a future that is fundamentally different in a way that you wouldn't get from better medicine or interplanetary travel.
Q. But what does happen to people who were already economically disadvantaged, who don't have investments in the stock market and who aren't sharing in the profits of the corporations that own these superintelligences?
A. Um... we appear to be using substantially different background assumptions. The notion of a 'superintelligence' is not that it sits around in Goldman Sachs's basement trading stocks for its corporate masters. The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then, rather than bothering with the digital counters that humans call money, the superintelligence solves the protein structure prediction problem, emails some DNA sequences to online peptide synthesis labs, and gets back a batch of proteins which it can mix together to create an acoustically controlled equivalent of an artificial ribosome which it can use to make second-stage nanotechnology which manufactures third-stage nanotechnology which manufactures diamondoid molecular nanotechnology and then... well, it doesn't really matter from our perspective what comes after that, because from a human perspective any technology more advanced than molecular nanotech is just overkill. A superintelligence with molecular nanotech does not wait for you to buy things from it in order for it to acquire money. It just moves atoms around into whatever molecular structures or large-scale structures it wants.
Q. How would it get the energy to move those atoms, if not by buying electricity from existing power plants? Solar power?
A. Indeed, one popular speculation is that optimal use of a star system's resources is to disassemble local gas giants (Jupiter in our case) for the raw materials to build a Dyson Sphere, an enclosure that captures all of a star's energy output. This does not involve buying solar panels from human manufacturers, rather it involves self-replicating machinery which builds copies of itself on a rapid exponential curve -
Q. Yeah, I think I'm starting to get a picture of your background assumptions. So let me expand the question. If we grant that scenario rather than the Hansonian scenario or the Kurzweilian scenario, what sort of effect does that have on humans?
A. That depends on the exact initial design of the first AI which undergoes an intelligence explosion. Imagine a vast space containing all possible mind designs. Now imagine that humans, who all have a brain with a cerebellum, thalamus, a cerebral cortex organized into roughly the same areas, neurons firing at a top speed of 200 spikes per second, and so on, are one tiny little dot within this space of all possible minds. Different kinds of AIs can be vastly more different from each other than you are different from a chimpanzee. What happens after AI, depends on what kind of AI you build - the exact selected point in mind design space. If you can solve the technical problems and wisdom problems associated with building an AI that is nice to humans, or nice to sentient beings in general, then we all live happily ever afterward. If you build the AI incorrectly... well, the AI is unlikely to end up with a specific hate for humans. But such an AI won't attach a positive value to us either. "The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else." The human species would end up disassembled for spare atoms, after which human unemployment would be zero. In neither alternative do we end up with poverty-stricken unemployed humans hanging around being sad because they can't get jobs as janitors now that star-striding nanotech-wielding superintelligences are taking all the janitorial jobs. And so I conclude that advanced AI causing mass human unemployment is, all things considered, unlikely.
Q. Some of the background assumptions you used to arrive at that conclusion strike me as requiring additional support beyond the arguments you listed here.
A. I recommend Intelligence Explosion: Evidence and Import for an overview of the general issues and literature, Artificial Intelligence as a Positive and Negative Factor in Global Risk for a summary of some of the issues around building AI correctly or incorrectly, and the aforementioned Intelligence Explosion Microeconomics for some ideas about analyzing the scenario of an AI investing cognitive labor in improving its own cognition. The last in particular is an important open problem in economics, if you're a smart young economist reading this - although, since the fate of the entire human species could well depend on the answer, you would be foolish to expect as many papers to be published about it as about squirrel migration patterns. Nonetheless, bright young economists who want to say something important about AI should consider analyzing the microeconomics of returns on cognitive (re)investments, rather than post-AI macroeconomics, which may not actually exist depending on the answer to the first question. Oh, and Nick Bostrom at the Oxford Future of Humanity Institute is supposed to have a forthcoming book on the intelligence explosion; that book isn't out yet so I can't link to it, but Bostrom personally and FHI generally have published some excellent academic papers already.
Q. But to sum up, you think that AI is definitely not the issue we should be talking about with respect to unemployment.
A. Right. From an economic perspective, AI is a completely odd place to focus your concern about modern-day unemployment. From an AI perspective, modern-day unemployment trends are a moderately odd reason to be worried about AI. Still, it is scarily true that increased automation, like increased global trade or new graduates or anything else that ought properly to produce a stream of employable labor to the benefit of all, might perversely operate to increase unemployment if the broken reemployment engine is not fixed.
Q. And with respect to future AI... what is it you think, exactly?
A. I think that with respect to moderately more advanced AI, we probably won't see intrinsic unavoidable mass unemployment in the economic world as we know it. If re-employment stays broken and new college graduates continue to have trouble finding jobs, then there are plausible stories where future AI advances far enough (but not too far) to be a significant part of what's freeing up new employable labor which bizarrely cannot be employed. I wouldn't consider this my main-line, average-case guess; I wouldn't expect to see it in the next 15 years or as the result of just robotic cars; and if it did happen, I wouldn't call AI the 'problem' while central banks still hadn't adopted NGDP level targeting. And then with respect to very advanced AI, the sort that might be produced by AI self-improving and going FOOM, asking about the effect of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the Moon crashing into the Earth. There would indeed be effects, but you'd be missing the point.
Q. Thanks for clearing that up.
A. No problem.
ADDED 8/30/13: Tyler Cowen's reply to this was one I hadn't listed:
See here for the rest of Tyler's reply.
Taken at face value this might suggest that if we wait 50 years everything will be all right. Kevin Drum replies that in 50 years there might be no human jobs left - which is possible, but that would be a prediction of novel things yet to come, not an effect we have already seen.
Though Tyler also says, "A second point is that now we have a much more extensive network of government benefits and also regulations which increase the fixed cost of hiring labor" and this of course was already on my list of things that could be trashing modern reemployment unlike-in-the-1840s.
'Brett' in MR's comments section also counter-claims:
[1] The core idea in market monetarism is very roughly something like this: A central bank can control the total amount of money and thereby control any single economic variable measured in money, i.e., control one nominal variable. A central bank can't directly control how many people are employed, because that's a real variable. You could, however, try to control Nominal Gross Domestic Income (NGDI), the total amount that people have available to spend (as measured in your currency). If the central bank commits to an NGDI level target, then any shortfalls are made up the next year - if your NGDI growth target is 5% and you only get 4% in one year, then you try for 6% the year after that, so as to get back onto the original level path. NGDI level targeting would mean that all the companies would know that, collectively, all the customers in the country would have 5% more money (measured in dollars) to spend in the next year than the previous year. This is usually called "NGDP level targeting" for historical reasons (NGDP is the other side of the equation, what the earned dollars are being spent on), but the most advanced modern form of the idea is probably "level-targeting a market forecast of per-capita NGDI". Why this is the best nominal variable for central banks to control is a longer story, and for that you'll have to read up on market monetarism.

I will note that if you were worried about hyperinflation back when the Federal Reserve started dropping US interest rates to almost zero and buying government bonds by printing money... well, you really should note that (a) most economists said this wouldn't happen, (b) the market spreads on inflation-protected Treasuries said that the market was anticipating very low inflation, and (c) we then actually got inflation below the Fed's 2% target. You can argue with economists. You can even argue with the market forecast, though in that case you ought to bet money on your beliefs. But when your fears of hyperinflation are disagreed with by economists, the market forecast, and observed reality, it's time to give up on the theory that generated the false prediction. In this case, market monetarists would have told you not to expect hyperinflation, because NGDP/NGDI was collapsing and this constituted (overly) tight money regardless of what interest rates or the monetary base looked like.
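Here's the level-targeting arithmetic from that footnote worked through as a minimal sketch (toy numbers; the starting level of 100 is arbitrary):

```python
# A minimal sketch of level targeting with toy numbers: a 5% growth target,
# a year that undershoots at 4%, and the catch-up target implied by aiming
# back at the original level path rather than just growing 5% from wherever
# the economy ended up.
target_growth = 0.05
level_path = 100.0      # where the target path says nominal income should be
actual_ngdi = 100.0     # stand-in series, not real data

# Year 1: the central bank undershoots and nominal income grows only 4%.
level_path *= 1 + target_growth     # path: 105.0
actual_ngdi *= 1.04                 # actual: 104.0

# Year 2: under level targeting, aim at the path itself, so the implied
# growth target is whatever closes the gap (~6%), not 5% again.
level_path *= 1 + target_growth     # path: 110.25
catch_up_growth = level_path / actual_ngdi - 1
print(f"implied year-2 growth target: {catch_up_growth:.2%}")   # ~6.01%
```

Growth-rate targeting, by contrast, would let bygones be bygones and simply aim for 5% again from the lower starting point; the level target is what makes shortfalls get made up.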
Call me a wacky utopian idealist, but I wonder if it might be genuinely politically feasible to reduce marginal taxes on the bottom 20%, if economists on both sides of the usual political divide got together behind the idea that income taxes (including payroll taxes) on the bottom 20% are (a) immoral and (b) do economic harm far out of proportion to the government revenue generated. This would also require some decrease in taxes on the next quintile in order to avoid creating high marginal tax rates at the boundary - i.e., if you suddenly start paying $2000/year in taxes as soon as your income goes from $19,000/year to $20,000/year, then that was a 200% tax rate on that particular extra $1000 earned. The lost tax revenue would have to be made up somewhere else; in the current political environment this probably means higher income taxes on higher wealth brackets rather than anything more creative. But if we allow ourselves to discuss economic dreamworlds, then income taxes, corporate income taxes, and capital-gains taxes are all very inefficient compared to consumption taxes, land taxes, and basically anything that isn't an income or corporate tax. This is true even from the perspective of equality; a rich person who earns lots of money but invests it all instead of spending it is benefiting the economy rather than themselves, and should not be taxed until they try to spend the money on a yacht, at which point you charge a consumption tax or luxury tax (even if that yacht is listed as a business expense, which should make no difference; consumption is not more moral when done by businesses instead of individuals). If I were given unlimited powers to try to fix the unemployment thing, I'd be reforming the entire tax code from scratch to present the minimum possible obstacles to exchanging one's labor for money, and, as a second priority, to minimize obstacles to the compound reinvestment of wealth. But trying to change anything on this scale is probably not politically feasible, relative to a simpler, more understandable crusade to "Stop taxing the bottom 20%; it harms our economy because they're customers of all those other companies, and it's immoral because they get a raw enough deal already."
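And the cliff arithmetic from the paragraph above, spelled out; the tax schedule here is the invented illustrative one (no tax below $20,000, then a flat $2000 owed), not any real bracket:

```python
# Toy illustration of the cliff arithmetic: a flat $2000 tax that kicks in
# at $20,000 of income acts as a 200% marginal rate on the $1000 of earnings
# that crosses the threshold.
def tax_owed(income):
    return 2000 if income >= 20000 else 0

income_before, income_after = 19000, 20000
extra_earned = income_after - income_before
extra_tax = tax_owed(income_after) - tax_owed(income_before)
print(f"effective marginal rate on that ${extra_earned}: {extra_tax / extra_earned:.0%}")  # 200%
```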
Two possible forces for significant technological change in the 21st century would be robotic cars and electric cars. Imagine a city with an all-robotic all-electric car fleet, dispatching light cars with only the battery sizes needed for the journey, traveling at much higher speeds with no crash risk and much lower fuel costs... and lowering rents by greatly extending the effective area of a city, i.e., extending the physical distance you can live from the center of the action while still getting to work on time because your average speed is 75mph. What comes to mind when you think of robotic cars? Google's prototype robotic cars. What comes to mind when you think of electric cars? Tesla. In both cases we're talking about ascended, post-exit Silicon Valley moguls trying to create industrial progress out of the goodness of their hearts, using money they earned from Internet startups. Can you sustain a whole economy based on what Elon Musk and Larry Page decide are cool?
Currently the conversation among economists is more like "Why has total factor productivity growth slowed down in developed countries?" than "Is productivity growing so fast due to automation that we'll run out of jobs?" Ask them the latter question and they will, with justice, give you very strange looks. Productivity isn't growing at high rates, and if it were, that ought to produce employment rather than unemployment. This is why the Great Stagnation in productivity is one possible explanatory factor in unemployment, albeit (as mentioned) not a very good explanation for why we can't get back the jobs lost in the Great Recession. The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we've lost that natural rate; but so far as I know this is not conventional macroeconomics.