Kevin Drum has an article in Mother Jones about AI and Moore's Law:

THIS IS A STORY ABOUT THE FUTURE. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.

The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.

Although he mentions only consumer goods, Drum presumably means that scarcity will end for both services and consumer goods. If scarcity only ended for consumer goods, people would still have to work (most jobs are currently in the services economy). 

Drum explains that our linear-thinking brains don't intuitively grasp exponential processes like Moore's Law. 

Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.

By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.

At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all.

So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile?

But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.

He also includes a nice animated GIF that illustrates the principle very clearly. 
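The arithmetic is easy to check. Below is a minimal sketch of the doubling schedule; the figure for Lake Michigan's volume (roughly 1.3 quadrillion gallons, about 4,900 cubic kilometers) is an assumption for the sake of the illustration, not a number from the article.

```python
# A minimal sketch of the doubling arithmetic in Drum's example. The lake
# volume is an assumption (roughly 1.3 quadrillion gallons); the schedule is
# one fluid ounce added in 1940, with each addition 18 months later being
# twice as large as the one before.

LAKE_GALLONS = 1.3e15          # assumed volume of Lake Michigan, in gallons
OZ_PER_GALLON = 128

def gallons_added_by(year):
    """Cumulative water added by the given year, in gallons."""
    n = int((year - 1940) / 1.5) + 1   # number of additions made so far
    total_oz = 2 ** n - 1              # 1 + 2 + 4 + ... + 2**(n - 1)
    return total_oz / OZ_PER_GALLON

for year in (1950, 1960, 1970, 2000, 2010, 2020, 2025):
    added = gallons_added_by(year)
    fraction = min(added / LAKE_GALLONS, 1.0)
    print(f"{year}: {added:,.0f} gallons ({fraction:.4%} of the lake)")
```

Under those assumptions the output lands in the same ballpark as Drum's narrative: about a gallon by 1950, roughly 16,000 gallons by 1970, a still nearly empty lake in 2010, and a lake that is mostly full by the mid-2020s.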

Drum continues by talking about possible economic ramifications.

Until a decade ago, the share of total national income going to workers was pretty stable at around 70 percent, while the share going to capital—mainly corporate profits and returns on financial investments—made up the other 30 percent. More recently, though, those shares have started to change. Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.

Drum says the share of (US) national income going to workers was stable until about a decade ago. I think the graph he links to shows the workers' share has been declining since approximately the late 1960s/early 1970s. This is about the time US immigration levels started increasing (which raises returns to capital and lowers native worker wages). 

The rest of Drum's piece isn't terribly interesting, but it is good to see mainstream pundits talking about these topics.

Comments

When the robot revolution happens, we need to have many supporters of efficient charity among the ruling class, at least while the ruling class still consists of humans.

When the robot revolution happens, we need to have many supporters of efficient charity among the ruling class

The problem is much worse. All the ethical sophistication in the world (as argued by e.g. Richard Rorty) cannot in practice serve as a barrier to cruelty and domination if there's no underlying moral sentiment of empathy, of sharing a certain "human circle" with your fellow beings.

The ruling class needs to share some moral emotions that would allow preference-utilitarian/negative-utilitarian ethics towards the "worthless" poor in the first place. Otherwise they might simply decide that the "efficiency" in "efficient charity" is best achieved by enslaving the former working class or lumping it with domestic animals... for the proles' own safety/virtue/moral benefit/etc, of course. They're dirty and uncouth! they would revert to "savagery" without proper control! they have no use for the liberties that the superior man enjoys! empathizing with them as equals is simply an intellectual mistake and naive sentimentality! ...Basically, see every rationalization for slavery ever made; when slavery cannot be opposed by force or starved economically, those aspiring to become the new "aristocracy" could grow very enamored of such rationalizations again.

I can hardly even begin describing my fears of how the new technocratic/quasi-aristocratic elite might conduct its relations with the "common people" that have no economic or military leverage over it. This is why I am so fucking terrified about the emerging association between transhumanism and an anti-egalitarian/far-right ideology!

The Jacobin magazine has a very good article on this subject from a while ago:

The great danger posed by the automation of production, in the context of a world of hierarchy and scarce resources, is that it makes the great mass of people superfluous from the standpoint of the ruling elite. This is in contrast to capitalism, where the antagonism between capital and labor was characterized by both a clash of interests and a relationship of mutual dependence: the workers depend on capitalists as long as they don’t control the means of production themselves, while the capitalists need workers to run their factories and shops. It is as the lyrics of “Solidarity Forever” had it: “They have taken untold millions that they never toiled to earn/But without our brain and muscle not a single wheel can turn.” With the rise of the robots, the second line ceases to hold.

The existence of an impoverished, economically superfluous rabble poses a great danger to the ruling class, which will naturally fear imminent expropriation; confronted with this threat, several courses of action present themselves. The masses can be bought off with some degree of redistribution of resources, as the rich share out their wealth in the form of social welfare programs, at least if resource constraints aren’t too binding. But in addition to potentially reintroducing scarcity into the lives of the rich, this solution is liable to lead to an ever-rising tide of demands on the part of the masses, thus raising the specter of expropriation once again. This is essentially what happened at the high tide of the welfare state, when bosses began to fear that both profits and control over the workplace were slipping out of their hands.

If buying off the angry mob isn’t a sustainable strategy, another option is simply to run away and hide from them. This is the trajectory of what the sociologist Bryan Turner calls “enclave society”, an order in which “governments and other agencies seek to regulate spaces and, where necessary, to immobilize flows of people, goods and services” by means of “enclosure, bureaucratic barriers, legal exclusions and registrations.” Gated communities, private islands, ghettos, prisons, terrorism paranoia, biological quarantines; together, these amount to an inverted global gulag, where the rich live in tiny islands of wealth strewn around an ocean of misery. In Tropic of Chaos, Christian Parenti makes the case that we are already constructing this new order, as climate change brings about what he calls the “catastrophic convergence” of ecological disruption, economic inequality, and state failure. The legacy of colonialism and neoliberalism is that the rich countries, along with the elites of the poorer ones, have facilitated a disintegration into anarchic violence, as various tribal and political factions fight over the diminishing bounty of damaged ecosystems. Faced with this bleak reality, many of the rich – which, in global terms, includes many workers in the rich countries as well – have resigned themselves to barricading themselves into their fortresses, to be protected by unmanned drones and private military contractors. Guard labor, which we encountered in the rentist society, reappears in an even more malevolent form, as a lucky few are employed as enforcers and protectors for the rich.

But this too, is an unstable equilibrium, for the same basic reason that buying off the masses is. So long as the immiserated hordes exist, there is the danger that it may one day become impossible to hold them at bay. Once mass labor has been rendered superfluous, a final solution lurks: the genocidal war of the rich against the poor. Many have called the recent Justin Timberlake vehicle, In Time, a Marxist film, but it is more precisely a parable of the road to exterminism. In the movie, a tiny ruling class literally lives forever in their gated enclaves due to genetic technology, while everyone else is programmed to die at 25 unless they can beg, borrow or steal more time. The only thing saving the workers is that the rich still have some need for their labor; when that need expires, so presumably will the working class itself.

I wonder how this bleak picture might change if we throw cheap cognitive enhancement into the mix. Especially considering Eliezer's idea that increased intelligence should make the poor folks better at cooperating with each other.

I wonder how this bleak picture might change if we throw cheap cognitive enhancement into the mix.

Considered it too. I just pray to VALIS that there'll be a steep enough curve of diminishing returns associated with intelligence amplification - so that even if the technocratic elite desperately wants to maintain supremacy, it can't just throw exponentially more resources at "boutique" small-scale enhancement and maintain the gap with "mass-enhanced" humans.

I suspect there are unknown unknowns in the scenario. The masses are more likely to have open source research. I think the elite is more likely to screw itself up with a bad ideology, but that might be mere wishful thinking.

This is why I am so fucking terrified about the emerging association between transhumanism and an anti-egalitarian/far-right ideology!

Can you provide some examples of this association?

If you're just talking about libertarians or something, my impression is that they want a reasonably egalitarian society too... they just have different economic policies for bringing it about.

[anonymous]:

Not libertarians. Reactionaries.

authoritarian anti-egalitarians.

I just meant that I haven't come across any examples of people who are simultaneously transhumanist and authoritarian. Where do I find these writings?

[anonymous]:

These guys are LessWrongers.

I am transhumanist and authoritarian.

Nick Land I think is another big example?

Oh...so basically the whole Dark Enlightenment school of thought?

I've only started reading this strand of thought recently, and haven't yet made the connection to authoritarianism. I get that they reject modern liberalism, democracy, and the idea that everyone has equal potential, but do they also reject the idea of meritocracy and the notion that everyone ought to have equal opportunity? Do they also believe that an elite group should have large amounts of power over the majority? And do they also believe that different people have (non-minor) differences in intrinsic value as well as ability?

EDIT thoughts after reading the sources you linked:

Perhaps an anti-egalitarian can be thought of as one who does not value equality as an intrinsic moral good? Even if everyone is valued equally, the optimal solution in terms of getting the most satisfaction to everyone does not necessarily involve everyone being satisfied in roughly equal measures.

Basically, on Haidt's moral axes, the anti-egalitarians would score highly only on Harm Avoidance, and low on everything else...

...actually, come to think of it, that's almost how I scored when I took it a few years ago: 3.7 harm, 2.0 fairness, 0 on everything else.

You've given yourself the label "authoritarian". If you took Haidt's test, did you score high on authoritarianism? (just trying to pin down what exactly is meant by authoritarianism in this case)

[anonymous]:

Can't speak for others, but here's my take:

s/they/you:

but do they also reject the idea of meritocracy and the notion that everyone ought to have equal opportunity?

I think it's more important to look at absolute opportunity than relative opportunity.

That said, in my ideal world we all grow up together as one big happy family. (with exactly the right amount of drama)

Do they also believe that an elite group should have large amounts of power over the majority?

Yes, generally. Note that everything can be cast in a negative light by (in)appropriate choice of words.

The elites need not be human, or the majority need not be human.

My ideal world has a nonhuman absolute god ruling all, a human nobility, and nonhuman servants and NPCs.

And do they also believe that different people have (non-minor) differences in intrinsic value as well as ability?

Yes, people currently have major differences in moral value. This may or may not be bad, I'm not sure.

But again, I'm more concerned with people's absolute moral value, which should be higher. (And just saying "I should just value everyone more", i.e. "lol I'll multiply everyone's utility by 5", doesn't do anything.)

Basically, on Haidt's moral axes, the anti-egalitarians would score highly only on Harm Avoidance, and low on everything else...

Dunno, you'd have to test them.

My general position on such systems is that all facets of human morality are valuable, and people throw them out/emphasize them for mostly signalling/memetic-infection reasons.

All of those axes sound really important.

You've given yourself the label "authoritarian". If you took Haidt's test, did you score high on authoritarianism? (just trying to pin down what exactly is meant by authoritarianism in this case)

Haven't taken the test. Self-describing as an "authoritarian" can only really be understood in the wider social context where authority and hierarchy have been devalued.

So a more absolute description would be that I recognize the importance of strong central coordination in doing things (empirical/instrumental), and find such organization to have aesthetic value. For example, I would not want to organize my mind as a dozen squabbling "free" modules, and I think communities of people should be organized around strong traditions, priests, and leaders.

Of course I also value people having autonomy and individual adventure.

Haven't taken the test. Self-describing as an "authoritarian" can only really be understood in the wider social context where authority and hierarchy have been devalued.

I think that's really the crux of it. When someone says they are authoritarian, that doesn't necessarily have anything to do with present/past authoritarian regimes.

My general position on such systems is that all facets of human morality are valuable

Isn't that a bit recursive? Human morality defines what is valuable. Saying that a moral value is valuable implies some sort of meta-morality. If someone doesn't assign "respect for authority" intrinsic value (though it may have utility in furthering other values), isn't that ...just the way it is?

My ideal world has a nonhuman absolute god ruling all, a human nobility, and nonhuman servants and NPCs

I think everyone's ideal world is one where all our actions are directed by a being with access to the CEV of humanity (or, more accurately, each person wants humanity to be ruled by their own CEV). On LessWrong, that's not even controversial - it would be by definition the pinnacle of rational behavior.

The question is intended to be answered with realistic limitations in mind. Given our current society (or maybe given our society within 50 years, assuming none of that "FOOM" stuff happens) is there a way to bring about a safe, stable authoritarian society which is better than our own? There's no point to a political stance unless it has consequences for what actions one can take in the short term.

When someone says they are authoritarian, that doesn't necessarily have anything to do with present/past authoritarian regimes.

Sounds pretty dangerous.

[anonymous]:

If someone doesn't assign "respect for authority" intrinsic value (though it may have utility in furthering other values), isn't that ...just the way it is?

No. Generally people are confused about morality, and such statements are optimized for signalling rather than correctness with respect to their actual preferences.

For example, I could say that I am a perfectly altruistic utilitarian. This is an advantageous thing to claim in some circles, but it is also false. I claim that the same pattern applies to non-authoritarianism, having been there myself.

So when I say "all of it is valuable" I am rejecting the pattern "Some people value X, but they are confused and X is not real morality, I only value Y and Z" which is a common position to take wrt the authority and purity axes on Haidt, because that is supposedly a difference between liberals and conservatives, hence ripe for in-group signalling.

If some people value X, consider the proposition that it is actually valuable. Sometimes it isn't, and they're just weird, but that's rare, IMO.

The question is intended to be answered with realistic limitations in mind. Given our current society (or maybe given our society within 50 years, assuming none of that "FOOM" stuff happens) is there a way to bring about a safe, stable authoritarian society which is better than our own? There's no point to a political stance unless it has consequences for what actions one can take in the short term.

You are asking me to do an extremely large computational project (designing not only a good human society, but a plausible path to it), based on assumptions I don't think are realistic. I don't have time for that. Some people do though:

Moldbug has written plenty about how such a society could function and come about (the reaction)

Yvain has also recently laid out his semi-plausible authoritarian human society, Raikoth (eugenics, absolute rule by computer, omnipresent surveillance, etc.)

I expect More Right will have some interesting discussion of this as well.

You are asking me to do an extremely large computational project (designing not only a good human society, but a plausible path to it), based on assumptions I don't think are realistic. I don't have time for that. Some people do though:

Oh, I didn't mean that I want you to outline a manifesto or plan or anything.

Do they also believe that an elite group should have large amounts of power over the majority?

was my original question. What I meant was more that if you identify as "authoritarian", it implicitly means that you think that it is a goal worth working towards in the real world, rather than a platonic ideal. Obviously, if it were possible to ensure a ruler or ruling class competently served the interests of the people, dictatorship would be the best form of government - but someone who identifies as authoritarian is saying that they believe that this can actually happen and that if history had gone differently and we were under a certain brand of authoritarianism right now we'd be better off.

I could say that I am a perfectly altruistic utilitarian. This is an advantageous thing to claim in some circles, but it is also false.

Hehe... you'd better expect to save quite a few lives if you want to justify staying alive with that preference set (you have organs that could be generating so much utility for so many people!).

"Some people value X, but they are confused and X is not real morality, I only value Y and Z" which is a common position to take wrt the authority and purity axes on haidt,

If you cross out "but they are confused and X is not real morality" I guess I'm one of those people - I don't think they are confused about what they value. I just think that I don't share that value. The phrase "real morality" is senseless - I'm not a moral realist.

I suppose I could be confused about my own values, of course. But when I read Haidt's work, I became better able to understand what my conservative friends would think about various situations. It improved my ability to empathize. It wouldn't even have occurred to me to respect authority or purity intrinsically...I used to think that they just weren't thinking clearly (whereas now I think it's just a matter of different values)

[anonymous]:

was my original question. What I meant was more that if you identify as "authoritarian", it implicitly means that you think that it is a goal worth working towards in the real world, rather than a platonic ideal. Obviously, if it were possible to ensure a ruler or ruling class competently served the interests of the people, dictatorship would be the best form of government - but someone who identifies as authoritarian is saying that they believe that this can actually happen and that if history had gone differently and we were under a certain brand of authoritarianism right now we'd be better off.

This is a good point and I'm unsure of my answer. I need to think about that. It could be that authoritarianism is as unrealistic as anarchism (I used to be an anarchist, and decided the whole "somehow we will find a way to solve the military aggression problem" was too much apologetics. The "somehow we will make the dictator incorruptible" may be similar apologetics).

That said, I do reject the idea that values depend on what's convenient in reality, else I'd worship chaos. I value authority intrinsically whether or not there are realistic ways to design society to reflect that. In that sense perhaps "authoritarian" is a confusing word?

But by using reality to argue this point, I infer that you think it's an empirical issue and that order and authority are intrinsically valuable, if we could somehow get them?

I suppose I could be confused about my own values, of course. But when I read Haidt's work, I became better able to understand what my conservative friends would think about various situations. It improved my ability to empathize. It wouldn't even have occurred to me to respect authority or purity intrinsically...I used to think that they just weren't thinking clearly (whereas now I think it's just a matter of different values)

Think hard whenever your "values" differ from other people's. There is an anti-pattern of thought where you erroneously trace differences in belief to some justifiable difference because it allows you to stop thinking without being rude or losing face.

I think that the "different values" thing comes from the same source as the "agree to disagree" thing, and the difference is that there exists a convincing rationalization in the values case.

"Different values" is the polite way to say "this conversation is not worth my time, or otherwise annoying", not an actual truth. If you confuse it for an actual truth, it acts as a Semantic Stopsign and prevents you from ever realizing your error, if there is one.

Tangent: Way too much of morality is based on signalling. "my values are whatever would be socially advantageous to claim were my values in this social context".

EDIT: As further evidence for you, I used to have a negative visceral reaction to the idea of authority, and then decided after much thought that it wasn't so bad and in fact kind of nice. So keep in mind that there are layers and layers of meaningless memetics to wade through before you get to anything like a fundamental value that could differ with someone else.

I used to have a negative visceral reaction to the idea of authority, and then decided after much thought that it wasn't so bad and in fact kind of nice.

Hm... so if you change your mind about a value, does it no longer qualify as a fundamental value? I'm not sure if we are using the word "value" in the same way.

I think it was you who posted a few months ago about moral uncertainty, and I think you also posted that humans are poorly described by utility functions.

If you believe that, you should agree that we don't necessarily even have an actual set of moral axioms underlying all the uncertainty and signaling. The term "fundamental value" implicitly assumes a moral axiom in a utility function - and while it is a useful term in most contexts, I think it should be deconstructed for this conversation.

For most people, under the right conditions murder and torture can seem like a good idea. Smart people might act more as if they were under a set of axioms, but that's just because they work hard at being consistent because inconsistency causes them negative feelings.

So when I say "different values" this is what I mean:

1) John's anterior cingulate cortex doesn't light up brightly in response to conflict. He thus does not feel any dissonance when believing two contradictory statements, and is not motivated to re-evaluate his model. Thus, he does not value consistency like me - we have different values. Understanding this, I don't try to convince him of things by appealing to logical consistency, instead appealing directly to other instincts.

2) Sally's amygdala activates in response to incest, thanks to the Westermarck instinct. She thus has more motivation to condemn incest between two consenting parties, even when there is no risk of children being involved.

Mine lights up in disgust too, but to a much lesser extent. I'd probably be against incest too, but I've set up a hopefully consistent memetic complex of values to prevent my ACC from bothering the rest of my brain, and being against incest would destroy the consistency.

Our values are thus different - Sally's disgust became moral condemnation, my disgust is just a squick. If Sally could give me a reason to be against incest which didn't create inconsistency for me, she might well change my view. If she's also one of those that values consistency, I can change her view by pointing out the inconsistency. Or, I can desensitize her instinctive disgust through conditioning by showing her pictures and video of happy, healthy incestuous couples in love talking about their lives and struggles.

3) Bob has the connections from his ventromedial prefrontal cortex to his amygdala severed. He thus is not bothered by other people's pain. I watch Bob carefully: because of the fact that he does not factor in other people's pain into his calculations about his next action, I'm afraid he might hurt people I care about, which would bother me a lot. We have different values - but I can still influence Bob by appealing to his honor. He might still be motivated to genuinely respect authority, or to follow purity rules. If he's like Sally, he might condemn incest "because it is gross", but the feelings of inbred children might not weigh on his mind at all.

Basically, I see "values" as partly a set of ideas and partly an extension of "personality". You can change someone's values through argument, conditioning, etc...but between people there are often differences in the underlying motives which drive value creation, along with the layers of memetics.

(brain parts are roughly in line with the current understanding of what they do, but take it with a grain of salt - the underlying point is more important)

Enslaving, in terms of putting to work w/o pay, doesn't make much sense in the hypothetical where the marginal value of human labor is effectively zero, right? What would the poor be enslaved to do?

Perhaps a more realistic scary scenario would be this one: http://unqualified-reservations.blogspot.com/2009/11/dire-problem-and-virtual-option.html (essentially: when you're no longer of productive benefit to society, you go to virtual reality / video game heaven).

How scary that proposal sounds might be a matter of debate, though I suppose most folks around here would prefer a more egalitarian scenario where cognitive enhancement is distributed evenly enough that no human is left behind.

For my own part, I'm content to wirehead to the extent that I have confidence that machines are capable of being more productive-to-others than I am along the axes I value being productive-to-others on.

Put differently: I don't seem to care very much whether I am doing important things, as long as important things are getting done at least as effectively as they would be were I doing them.

There's a slight refinement to this in the case where the entities doing the important things are basically like me, since there's a whole question of whether I'm defecting in a Prisoner's Dilemma, but I interpret the connotations of "machine" as implying that this complication doesn't arise here.

Ah, a very interesting point of view. Framing it as a dichotomy between important work and wireheading seems a bit stark though. Are you meaning to include any sort of non-productive fun under the umbrella of wireheading? I usually think of that term as implying only simple, non-complex fun (e.g. pleasure of orgasm vs experience of love and friendship).

This gets difficult to specify, because "productive" and "important" are themselves ill-defined, but I certainly mean to include "virtual reality/video game heaven" within "wireheading," including VR environments with virtual people that pretend to love and befriend me. (This is distinguished from actual people who really do love and befriend me, whether they have flesh-and-blood bodies or not.)

That said, I have no idea, ultimately, if I would prefer the continuous-orgasm video game or the fake-love-and-friendship video game... I might well prefer the latter, or to switch back and forth, or something else.

Ah, that clears things up, thanks!

Perhaps a more realistic scary scenario would be this one: http://unqualified-reservations.blogspot.com/2009/11/dire-problem-and-virtual-option.html (essentially: when you're no longer of productive benefit to society, you go to virtual reality / video game heaven).

It certainly seems more analogous to welfare or gated communities than a hypothetical "war against the poor" does.

I published a response to Drum's article here, on MIRI's blog.

Before clicking through, I expected the response to be something like, "So you're worried about AI replacing human labor and wondering, 'What should I do about it?' Here at MIRI ..."

The actual post turns out to be an intelligent and well-thought-out guide for how to predict AI advances. Since it seems to largely agree with the time frame suggested in the article above, it is probably most useful for readers who were skeptical of those claims.

What would you suggest for readers who find themselves mostly convinced by Drum's argument, and are asking the "What should I do" question?

(Context: as a MIRI supporter, I'm not asking for information for myself so much as for resources that would be helpful to share with others who start thinking about intelligence explosion issues within the context of technology replacing human labor.)

What would you suggest for readers who find themselves mostly convinced by Drum's argument, and are asking the "What should I do" question?

See How Can I Reduce Existential Risk from AI?

I think there's another step missing between being convinced that AI will replace human labor and being convinced that AI is the most pressing x-risk (which is the starting point for the article you linked), but this gives me an idea of what the form of your answer would be:

Not only will AI replace human labor, it's also extremely dangerous (for reasons laid out elsewhere on LessWrong / MIRI / various lukeprog-created websites :)), so we really need to solve the more general problem of AI x-risk.

[anonymous]:

Good summary. I particularly like the 'Lake Michigan' comparison.

There's some good speculation (about telepathy!) and links to other articles by mainstream pundits, courtesy of The Economist.

Note that the drop in labor's share of income corresponds closely with the entry into the global market of huge new labor pools in China and the rest of Asia. Very basic economics suffices to explain it: the ratio of labor to capital suddenly went way up, so the price of labor had to go down. I have very high confidence that this situation is going to start to reverse as the Chinese start to accumulate capital.
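A minimal sketch of that intuition, assuming a Cobb-Douglas production function Y = K^a * L^(1-a) with competitive factor markets (the functional form and parameter values are assumptions for illustration only): the wage equals the marginal product of labor, w = (1-a)(K/L)^a, which falls as the labor pool grows relative to the capital stock.

```python
# Toy illustration under an assumed Cobb-Douglas production function
# Y = K**a * L**(1 - a): with competitive factor markets, the wage is the
# marginal product of labor, w = (1 - a) * (K / L)**a, so a sudden jump in
# the labor pool L relative to the capital stock K pushes the wage down.

a = 0.3              # assumed capital-share parameter
K = 100.0            # capital stock, arbitrary units

def wage(L):
    """Marginal product of labor for the production function above."""
    return (1 - a) * (K / L) ** a

for L in (100, 200, 400, 800):   # labor pool doubling against fixed capital
    print(f"L/K = {L / K:4.1f}  ->  wage = {wage(L):.3f}")
```

Note that in this particular functional form labor's share of income stays fixed at 1 - a, so the sketch illustrates only the wage effect described here, not the shift in income shares discussed in the post.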

Cyan:

I have very high confidence that this situation is going to start to reverse as the Chinese start to accumulate capital.

But is your confidence high enough to counterbalance the loss if it turns out you're wrong?

In the piece, Drum links this article by economist Noah Smith, which concludes:

...It may turn out that the "rise of the robots" ends up augmenting human labor instead of replacing it. It may be that technology never exceeds our mental capacity. It may be that the fall in labor's income share has really been due to the great Chinese Labor Dump, and not to robots after all, and that labor will make a comeback as soon as China catches up to the West.

But if not - if the age of mass human labor is about to permanently end - then we need to think fast. Extreme inequality may be "efficient" in the Econ 101 sense, but in the real world it always leads to disaster.

If scarcity only ended for consumer goods, people would still have to work (most jobs are currently in the services economy).

I'm not sure that follows.

If the price of consumer goods and basic necessities of all types fell to almost zero, but we had large-scale technological unemployment (most people don't have a job and thus have little money), I would expect most people to stop paying for most services and just do them for themselves (or for their neighbors, whatever). Which would then eliminate more service jobs, and so on.

You could easily get to a point where much or most of the mainstream economy just ceases to exist.

knb:

If the price of consumer goods and basic necessities of all types fell to almost zero, but we had large-scale technological unemployment (most people don't have a job and thus have little money),

You are assuming that most people wouldn't have a job. I think most people would have jobs if scarcity only ended for consumer goods. We would still need plumbers, teachers, lawyers, cops, firemen, soldiers, doctors, investors, scientists, etc. The scary part is when AIs can do those jobs as well.

I'm not assuming that most people wouldn't have a job, at least not at first. I think that if unemployment goes above about 15% or so and stays there, the whole system starts to become unstable; that's the point where you tend to get either political change or revolution, if people aren't able to meet their basic needs.

Most of the things you list are actually professions that are mostly hired by the government itself, and the government shouldn't have any shortage of money, since it can still tax the lights-out factories that are producing everything. Those jobs will continue to exist for a while, and in fact more services might move over to that bucket (for example, in the UK doctors are all employed by the government itself, and that model might spread). All of the private-sector jobs, though, could start to disappear.