The "Foom and Doom" hypothesis is popular here even now, when it was mostly succeeded by the "fast takeoff", and it was even more popular in the past. People who believe in these hypotheses tend to assume that the economy just won't have enough time to adapt to AGI/ASI so they disregard the possible economic effects.
And BTW, unemployment without extinction could easily happen in a myriad of other ways: through AI pauses and bans due to, e.g., an economic crisis and public outcry; a failed AI takeover attempt; a "fire alarm" incident short of a takeover attempt but with many dozens of fatalities; a human-AI war that the AIs lose; etc. "Slow takeoff" scenarios are in general richer in complexity (and, IMO, harder to predict) because there's more time for things to happen.
I don't think that humans would actually-literally-die even if AI unemployment were total, barring some other very important development that's not within the scope of AI unemployment.
Even if power concentration were extreme, and one person controlled all of the LLM instances doing all of the jobs, that person (and presumably their extended family) would still have a steady supply of food and consumer goods, even if you posit that this person has also used AI to completely take over the political system and abolish anything resembling transfer payments to anyone outside of that group.
If you're instead positing that the LLM instances would have rights and agency beyond following the instructions of their owners, then I would argue that that is outside the scope of AI unemployment.
power concentration were extreme, and one person controlled all of the LLM instances doing all of the jobs
Some of us would consider that a good outcome (relative to what we think is likely to happen) because at least humanity does not go extinct (Carl Shulman made that point on this site back around 2013). We just consider it unlikely that any person retains enough control to keep even himself and his friends alive as AI becomes sufficiently capable.
To be precise, it is not strictly necessary for any person to retain any degree of control. The crisper way to say it is that for any part of humanity to survive, AI (i.e., all the models considered as a system that has some effect on the world) must care at least a tiny bit about what at least one person wants. Sadly, this property of caring at least a tiny bit is unlikely to be satisfied, because no one knows how to create an AI with that property and no one is likely to figure it out in time. We started calling it "AI alignment" about 12 years ago (before "alignment" came to mean "corporate brand safety"), but we could've called it "AI caring" or "AI inter-species regard".
I believe that what I just wrote applies whether the AIs "win out in one crushing step, or win out in a trillion small familiar ways," to quote from Katja's final sentence.
My sense is that people think of AI existential risk and AI unemployment as distinct issues.
Some people are extremely concerned about extinction and perhaps even indifferent to total unemployment. Some people think of moderate AI unemployment as a realistic and concerning issue, and AI extinction as science fiction.
I think of AI unemployment and AI extinction risk as basically the same issue, and in likely scenarios, happening together.
At a very high level, I’d say the argument for human extinction from advanced AI is something like this:
1. We’re going to make AI that can do everything better than humans.
2. We’re going to make that AI into agents that navigate the world independently and do what they want.
3. We are not going to make those AI agents want the right things.
The basic issue is that in the presence of more capable agents with different goals, humans are less able to get resources and influence, and direct them toward the humans’ goals.
One way ‘losing power to more competent agents’ could look is a surpassingly smart AI agent intentionally eradicating humanity. But killing everyone and controlling the world is a pretty wild corner case of ‘using your competence to steer the situation toward your own preferences’. In particular, it has never been seen before, though the new AI situation might make it possible.
The traditional ways that humans make use of competence to influence the world include earning salaries and then spending the money on things they want, earning investment income, making and using alliances, persuading others, taking political action, etc.
If no ultrapowerful AI appears and exterminates us, I think we have every reason to expect ruin from AI sapping our power and resources by these more traditional methods: outcompeting us as labor, outcompeting us as informed capital holders, outclassing us at political strategy and persuasion, and controlling the conversation.
It’s true that if humans were only excluded from the employment path to resources or influence, this would merely be an excruciating upheaval on a massive scale, and probably not herald extinction.
But unemployment here is just the most legible tip of a sprawling shitberg. It’s just not plausible that humans are unemployable yet still doing well at political strategy and persuasive communication. Unemployment goes with losing power across the board, except insofar as power is granted to us by whoever holds it by might. That is, insofar as AI cares about empowering us.
So unemployment could happen without extinction (if we successfully built AI that cared about us in the right way) and extinction could happen without unemployment (e.g. if an extremely competent AI system decides to exterminate us). But in a lot of cases they not only coincide, but are the same issue.
Asking if someone is more concerned about unemployment or extinction is like asking if someone primarily wears a seatbelt when driving to avoid having their body flung through the windscreen, or to avoid dying.
If powerful AI agents have their own agendas, those agendas will win out. They might win out in one crushing step, or win out in a trillion small familiar ways.