
The Robots, AI, and Unemployment Anti-FAQ

47 Eliezer_Yudkowsky 25 July 2013 06:46PM

Q.  Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A.  Conventional economic theory says this shouldn't happen.  Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns.  If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns.  On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
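
A purely illustrative sketch of that arithmetic (the function and its names below are mine, not part of the original example), assuming all 30 units of labor go into matched pairs of one hot dog per bun:

```python
# Toy check of the hot-dogs-and-buns arithmetic: with a fixed labor supply,
# cutting the labor cost of a hot dog shifts the equilibrium to more pairs.

def equilibrium_pairs(total_labor, hot_dog_cost, bun_cost=1):
    """Return how many hot-dogs-in-buns the labor supply can produce."""
    return total_labor / (hot_dog_cost + bun_cost)

print(equilibrium_pairs(30, hot_dog_cost=2))  # before automation: 10.0 pairs
print(equilibrium_pairs(30, hot_dog_cost=1))  # after automation:  15.0 pairs
```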

Q.  Sounds like a lovely theory.  As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact.  Experiment trumps theory, and in reality unemployment is rising.

A.  Sure.  Except that the happy equilibrium of 15 hot dogs in buns is exactly what happened over the last four centuries, during which we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries).  We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away.  The naive picture in which automation removes a job, leaving the economy with one fewer job, has not been how the world has worked since the Industrial Revolution.  The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries.  Automation followed by re-employment went on for literally centuries, in exactly the way the standard lovely economic model said it should.  The idea that there is a fixed amount of work, which automation destroys, is known in economics as the "lump of labour fallacy".

Q.  But now people aren't being reemployed.  The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.

A.  Yes.  And that's a new problem.  We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence.  The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

[Image: Baxter robot]

continue reading »

New report: Intelligence Explosion Microeconomics

45 Eliezer_Yudkowsky 29 April 2013 11:14PM

Summary:  Intelligence Explosion Microeconomics (pdf) is 40,000 words taking some initial steps toward tackling the key quantitative issue in the intelligence explosion, "reinvestable returns on cognitive investments": what kind of returns can you get from an investment in cognition, can you reinvest it to make yourself even smarter, and does this process die out or blow up?  This can be thought of as the compact and hopefully more coherent successor to the AI Foom Debate of a few years back.
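
As a loose illustration of the "die out or blow up" question (a toy recurrence of my own, not a model taken from the paper): suppose a system reinvests its capability c each step and the gains scale like c**k.  Sublinear returns grow only slowly, while superlinear returns compound explosively.

```python
# Toy reinvestment recurrence: c <- c + gain * c**k.
#   k < 1: diminishing returns on reinvestment, growth stays slow.
#   k > 1: compounding returns on reinvestment, growth accelerates.

def reinvest(c0=1.0, k=1.0, gain=0.1, steps=30):
    """Iterate the reinvestment step and return the final capability level."""
    c = c0
    for _ in range(steps):
        c += gain * c ** k
    return c

print(reinvest(k=0.5))  # sublinear returns: still in the single digits
print(reinvest(k=1.5))  # superlinear returns: explodes by many orders of magnitude
```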

(Sample idea you haven't heard before:  The increase in hominid brain size over evolutionary time should be interpreted as evidence about increasing marginal fitness returns on brain size, presumably due to improved brain wiring algorithms; not as direct evidence about an intelligence scaling factor from brain size.)

I hope that the open problems posed therein inspire further work by economists or economically literate modelers, interested specifically in the intelligence explosion qua cognitive intelligence rather than non-cognitive 'technological acceleration'.  MIRI has an intended-to-be-small-and-technical mailing list for such discussion.  In case it's not clear from context, I (Yudkowsky) am the author of the paper.

Abstract:

I. J. Good's thesis of the 'intelligence explosion' is that a sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version of itself, and that this process could continue to the point of vastly exceeding human intelligence.  As Sandberg (2010) correctly notes, there have been several attempts to lay down return-on-investment formulas intended to represent sharp speedups in economic or technological growth, but very little attempt has been made to deal formally with I. J. Good's intelligence explosion thesis as such.

I identify the key issue as returns on cognitive reinvestment - the ability to invest more computing power, faster computers, or improved cognitive algorithms to yield cognitive labor which produces larger brains, faster brains, or better mind designs.  There are many phenomena in the world which have been argued to be evidentially relevant to this question, from the observed course of hominid evolution, to Moore's Law, to the competence over time of machine chess-playing systems, and many more.  I go into some depth on the sorts of debates which then arise over how to interpret such evidence.  I propose that the next step forward in analyzing positions on the intelligence explosion would be to formalize return-on-investment curves, so that each stance can state formally which possible microfoundations it holds to be falsified by historical observations already made.  More generally, I pose multiple open questions of 'returns on cognitive reinvestment' or 'intelligence explosion microeconomics'.  Although such questions have received little attention thus far, they seem highly relevant to policy choices affecting the outcomes for Earth-originating intelligent life.

The dedicated mailing list will be small and restricted to technical discussants.

continue reading »

The Evolutionary-Cognitive Boundary

22 Eliezer_Yudkowsky 12 February 2009 04:44PM

I tend to draw a very sharp line between anything that happens inside a brain and anything that happened in evolutionary history.  There are good reasons for this!  Anything originally computed in a brain can be expected to be recomputed, on the fly, in response to changing circumstances - whereas whatever was computed only by natural selection is frozen into the design and does not get recomputed when circumstances change within a single lifetime.

Consider, for example, the hypothesis that managers behave rudely toward subordinates "to signal their higher status".  This hypothesis then has two natural subdivisions:

If rudeness is an executing adaptation as such - something historically linked to the fact that it signaled high status, but not psychologically linked to status drives - then we might experiment and find that, say, the rudeness of high-status men to lower-status men depended on the number of desirable women watching, but that they weren't aware of this fact.  Or maybe that people are just as rude when posting completely anonymously on the Internet (or more rude; they can now indulge their adapted penchant for rudeness without worrying about the now-nonexistent reputational consequences).

If rudeness is a conscious or subconscious strategy to signal high status (which is itself a universal adapted desire), then we're more likely to expect the style of rudeness to be culturally variable, like clothes or jewelry; different kinds of rudeness will send different signals in different places.  People will be most likely to be rude (in the culturally indicated fashion) in front of those whom they have the greatest psychological desire to impress with their own high status.

continue reading »

Cynicism in Ev-Psych (and Econ?)

14 Eliezer_Yudkowsky 11 February 2009 03:06PM

Though I know more about the former than the latter, I begin to suspect that different styles of cynicism prevail in evolutionary psychology than in microeconomics.

Evolutionary psychologists are absolutely and uniformly cynical about the real reason why humans are universally wired with a chunk of complex purposeful functional circuitry X (e.g. an emotion) - we have X because it increased inclusive genetic fitness in the ancestral environment, full stop.

Evolutionary psychologists are mildly cynical about the environmental circumstances that activate and maintain an emotion.  For example, if you fall in love with the body, mind, and soul of some beautiful mate, an evolutionary psychologist would like to check up on you in ten years to see whether the degree to which you think your mate's mind is still beautiful correlates with independent judges' ratings of how physically attractive that mate still is.

But it wouldn't be conventional ev-psych cynicism to suppose that you don't really love your mate - that you were actually just attracted to their body all along, and merely told yourself a self-deceiving story about virtuously loving them for their mind in order to falsely signal commitment.

Robin, on the other hand, often seems to think that this general type of cynicism is the default explanation and that anything else bears a burden of proof - why suppose an explanation that invokes a genuine virtue, when a selfish desire will do?

Of course, my experience of deep discussions with economists mostly consists of talking to Robin, but I suspect that this is at least partially reflective of a difference between the ev-psych and economic notions of parsimony.

Ev-psychers are trying to be parsimonious with how complex of an adaptation they postulate, and how cleverly complicated they are supposing natural selection to have been.

Economists... well, it's not my field, but maybe they're trying to be parsimonious by having just a few simple motives that play out in complex ways via consequentialist calculations?

continue reading »