Why do most humans in 2041 still need to work 40 hours a week? The answer is complicated, but to keep this comment simple, let's focus on a few factors that even a hypothetical reader from 2024 would understand.
In most countries, government regulation requires humans in the loop. These might seem like bullshit jobs, but that doesn't make the competition for them any less fierce. An average person cannot get a good job without good credentials (required for regulatory reasons), and good credentials are expensive; it often takes a lifetime to pay off the school debt. Whether the things taught at school are useful in any practical sense is beside the point (the few remaining human teachers mostly agree that they are not); the credentials are required by law. The official reasoning is that general education keeps us human. (Note: this is simplified to the level of a strawman, but I am trying to keep things simple for a hypothetical 2024 reader unfamiliar with the culture wars of 2041.)
With the exception of a few things such as rent, most things today are significantly cheaper than they were in 2024. On the other hand, there are new expenses, many of them related to AI. Some aspects of life got complicated, for example contracts of all kinds. To put it bluntly, you need the latest AI to safely navigate the legal minefield created by the latest AI. Trying to save money by using a cheaper version of AI that is several weeks obsolete is generally considered a very bad idea, and will probably cost you more in the long run, because you have no idea what you are signing (and you should generally assume that the form was optimized to extract as much value from you as legally possible; otherwise the company would be leaving money on the table). You either spend a large part of your income on AI services... or you risk joining the underclass at the first accident; there is not much of a middle way. If you can't afford the "business version" of the latest AI, you can get one that is supported by advertising -- the less you pay for it, the more you should expect the AI agent to optimize for the goals of the advertisers rather than your personal goals. (Oh, "advertisement" today no longer means trying to influence humans. Humans are mostly irrelevant. It means influencing the AI agents that make most of the everyday decisions. As a simple example, you can pay the AI agents to buy your products rather than your competitor's products, even if they are somewhat more expensive or worse, and to defend this choice to human users with individually optimized arguments.)
There is increasingly addictive... well, basically everything. I am afraid that a far-mode description will fail to convey how strong the effect is when experienced in near mode, but basically: the salesmen of old used only a few dozen simple techniques (such as smiling at you, looking you in the eye, repeating your name, anchoring you to a higher price and then giving you a discount, creating a false sense of urgency, etc.), which were only statistically effective and often failed or backfired. The modern ones come to you with a full AI-powered analysis of your personality (yes, there are regulations against this, but they are trivially circumvented), and they have probably already spent the previous few months trying to influence you in all known ways (bots pretending to be humans contacting you on social networks and nudging you in the desired direction, advertising in your AI agent if you use the cheaper version, subliminal advertising on the streets flashing when the screen detects you looking at it, etc.), which makes it almost impossible to resist. In many cases the humans believe that the interaction was actually their own idea, and quite often they fall in love with the salesperson.
Some people suggest that this is a problem humanity should focus on solving, but respected economists (and more importantly, their AI advisors) mostly shrug and say: "revealed preferences".
Looking back from 2041
When people in the early 21st Century imagined an AI-empowered economy, they tended to project person-like AI entities doing the work. “There will be demand for agent-like systems,” they argued, “so we’ll see AI labs making agents which can then be deployed to various problems”.
We now know that isn’t how it played out. But what led to the largely automated corporations we see today? Let’s revisit the history:
Today, there are a few instances of fully autonomous corporations, with no human control even in theory, as well as a larger number of fully autonomous AI agents, generally created by hobbyists or activists. However, while intriguing (and suggestive about how the future might unfold), to date these remain a tiny fraction of economic activity.
And although research has been one of the slower applications of AI to find a niche for properly automated groups (with many cases of AI used at the management level coordinating human researchers, who in turn make use of AI research assistants, though this varies by field), it still appears to have made a difference. On most measures, technological progress was around 1.5–2x faster in the period 2030–2035 than a decade earlier (2020–2025), and the second half of the 2030s was faster again. Moreover, in the last couple of years we have seen a growing number of successes from purely automated research groups. A controversial AI-produced paper published in Science earlier this year claimed that the rate of technological progress is now ten times faster than it was at the turn of the century. Since IJ Good first coined the idea of an intelligence explosion, 75 years ago last year, people have wondered whether we will someday see a blistering rate of progress that is hard to wrap our heads around. Perhaps we are, finally, standing on the cusp, and the automated corporations we have developed stand ready to work, integrating the fruits of that explosion back into human society.
Remarks
As is perhaps obvious: this is not a prediction of how the future will play out. Rather, it’s an exploration of one way it might play out, and of some of the challenges that might arise if it did.
Thanks to Raymond Douglas, Max Dalton, Tom Davidson, and Adam Bales for helpful comments.