Thanks for pointing out two interesting papers. A few reactions:
> probably less than a dozen economists who are taking the transformative AI seriously at all.
I'd say clearly: nope. (I'll let you work out for yourself why this statement is off by orders of magnitude; I reckon you'll get there.)
> but a good chunk of this is doubtless because Acemoglu (who was a giant in the field long before he got a Nobel) published a piece "The Simple Macroeconomics of AI" [..]
I'd say instead: there's good reason to doubt the magnitude of that single paper's impact. My disappointment with the indeed huge share of economists who dismiss e.g. labor market effects well predates Acemoglu's mid-2024 paper, and even now, many who dismiss those effects do so out of basic professional deformation. (Somewhat speculatively, I'd in fact be inclined to believe it's that same deformation that leads Acemoglu himself to the analysis he ends up with.)
I wrote a bit more about what economists get wrong about AI, and why, in "How Econ 101 makes us blinder on trade, morals, jobs with AI – and on marginal costs".
Key factors imho clearly include that we don't realize good old brain augmentation isn't the same as brain replacement. Stated somewhat differently: we're swayed by our pride at having been, for 200 years, the only ones reliably and rightly telling a largely ignorant public that 'machines yield better jobs overall rather than making workers redundant', so we now struggle to see clearly that this time IS different. One could also say our blindness as a profession is the price of having followed empirical observation and - once again - elevated it to a fundamental law, without realizing how contingent the past relationship between machines and jobs was on subtle matters of taste, consumer psychology, machine limits, and so on.
If you're aware of other preprints or publications taking TAI seriously, I would genuinely love to have citations! Makes it much easier to say these kinds of things to a policymaker when there's a stack of supporting arguments.
Related: https://www.lesswrong.com/posts/rmYj6PTBMm76voYLn/publishing-academic-papers-on-transformative-ai-is-a
---
Edit 2026/04/19: this introductory paragraph was in my draft on my laptop, but I failed to copy it over when posting the original.
When it comes to the policy world, few disciplines are listened to as much as the economists. I'll spare you my guesses about why, but suffice to say that if the economists aren't taking AI seriously, policymakers won't either. This seems like one place where I might have some comparative advantage, so I decided to get caught up on what they're thinking, and contribute where I could. I've messaged several of them over the past few months, and a number of them have kindly responded. Even though I'm offering critique here, I genuinely appreciate those who are taking the notion of transformative AI seriously.
---
Sadly, there aren't many of them: probably fewer than a dozen economists are taking transformative AI seriously at all. Partly this is because they've seen a long history of 'straight lines on graphs' and are skeptical of claims about transformative technologies, but a good chunk of it is doubtless because Acemoglu (who was a giant in the field long before he got a Nobel) published a piece, "The Simple Macroeconomics of AI", which used conservative methods and conservative estimates for the values (by the standards of the CS community, ridiculously conservative) to claim that the impact of AI over a decade would be... 0.5% of the economy; i.e. a nothingburger. This one paper probably did more to suffocate the field than anything else: if you're going to say Acemoglu is wrong, you want to be **damn sure** that you're right.
Acemoglu's longtime collaborator Restrepo (who's on Anthropic's econ panel!) took the idea of labor replacement seriously, and predicts that human wages will fall to the "compute-equivalent cost" of having an AI do the work; this is probably true. He also offers a (deeply flawed) proof that humans won't be any worse off in this regime. He clearly hasn't taken compute advances seriously enough to actually calculate the equivalent wages; my own estimate is that by 2029 they'll be below the rice-subsistence price (meaning that a full day's work won't buy a day's worth of rice).
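For intuition, here's the back-of-envelope version of the compute-equivalent wage calculation. Every number in it is an illustrative assumption of mine (tokens per workday, 2025 inference cost, cost-halving rate, rice benchmark), not Restrepo's or anyone's published estimate:

```python
# Toy "compute-equivalent wage": the daily wage at which a human stays
# cost-competitive with an AI doing the same day's cognitive work.
# ALL numbers below are illustrative assumptions, not published estimates.

def compute_equivalent_wage(tokens_per_workday: float,
                            cost_per_million_tokens: float) -> float:
    """Daily wage implied by the cost of the compute that replaces it."""
    return tokens_per_workday / 1e6 * cost_per_million_tokens

TOKENS_PER_DAY = 200_000   # assumed tokens of output in a workday
COST_2025 = 100.0          # assumed $/M tokens at frontier quality, 2025
HALVING_PER_YEAR = 0.5     # assumed yearly fall in inference cost
RICE_SUBSISTENCE = 2.0     # assumed $/day for a day's worth of rice

for year in range(2025, 2030):
    cost = COST_2025 * HALVING_PER_YEAR ** (year - 2025)
    w = compute_equivalent_wage(TOKENS_PER_DAY, cost)
    marker = " <- below rice subsistence" if w < RICE_SUBSISTENCE else ""
    print(f"{year}: ${w:.2f}/day{marker}")
```

Under these (very contestable) inputs the crossover lands in 2029; shift any one parameter by 2x and the date moves by roughly a year, which is why the inputs matter far more than the arithmetic.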
But by this point the cracks in the dam are showing, and most economists are starting to accept that the impact will be bigger than Acemoglu claimed. Say what else you like about them: they are persuaded by data.
One aspect where the economists are probably right is that even an intelligence explosion will take a while to really impact most of the economy. While autists such as ourselves live at a computer terminal, there's a huge fraction of society that depends on **physical stuff**, and until the robots take over, we'll still need lots of humans doing things for other humans. Kording & Marinescu are a computer scientist & economist duo who tackled this divergence head-on, and pointed out that "no matter how smart you are, there's only so many ways to stack a pile of books". Their model is one of the only ones I've seen that takes the impact on wages and unemployment seriously: they estimate wages will rise until ~40% of the pure-intelligence tasks have been automated, and then start falling (noteworthy is that the Anthropic Economic Index, coupled with my own measures of the intelligence sector, suggests we're very close to that point).
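To see how a hump like that can arise, here's a minimal task-based toy model (Cobb-Douglas over a continuum of tasks, in the general spirit of such models but NOT Kording & Marinescu's actual specification; the productivity parameter A is chosen purely so the peak lands near 40%):

```python
# Toy task-based wage model: output aggregates Cobb-Douglas over tasks on
# [0, 1]; tasks below the automation frontier `a` run at AI productivity A,
# and one unit of human labor spreads over the remaining 1 - a tasks.
# Labor's revenue share is then (1 - a), which gives the closed form
#     w(a) = (A * (1 - a)) ** a
# This is an illustrative sketch, not Kording & Marinescu's model.

def wage(a: float, A: float) -> float:
    """Human wage when a fraction `a` of tasks is automated."""
    return (A * (1 - a)) ** a

A = 3.25  # assumed AI productivity advantage, picked so the peak is ~40%
grid = [i / 1000 for i in range(1000)]
peak = max(grid, key=lambda a: wage(a, A))
print(f"wage rises until ~{peak:.0%} of tasks are automated, then falls")
```

The rise comes from the productivity effect (automated tasks make everything else more valuable), the fall from labor being crowded into a shrinking set of tasks; where the peak sits depends entirely on the assumed A.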
One economist who's taken things very seriously is Chad Jones at Stanford, who has engaged not just with the possibility of a technological singularity but has also written papers grappling with existential risk; he estimates we're underfunding safety research by a factor of 30x. Even though he engages with the possibility of a singularity, his latest preprint holds that (because of those physical components to the economy) the actual economic impact will be very slow at first: only 4%/year by 2030, rising to 10%/year by 2040, and a singularity by 2060 (this is actually his most rapid scenario; the baseline is more like 75 years). His model, like most macro models, doesn't really allow for unemployment effects or other social disruption. Personally I'm skeptical of approaches like this, given that the English "enclosure" laws ushered in over 50 years of massive unemployment, but this seems to be the macroeconomist's version of a "spherical cow" (which as a physicist I can hardly begrudge them).
The big divide, of course, between the CS and economics worlds is belief in recursive self-improvement, and in the degree to which the intelligence and physical economies are coupled. Only one paper that I'm aware of has tackled this head-on: a piece by Davidson, Halperin, Houlden, and Korinek (I think some of those folks are known in these parts). They explicitly model flywheel effects between software and hardware, and also find that a singularity is a definite possibility. More interestingly, much like Jones & Tonetti, they find that the effect will be very small at first, but will blow up once about 13% of the economy has been automated. An obvious question: can we get Jones & Tonetti to line up with Davidson et al., and if so, how far along this process are we? They're aware of each other's work, but I don't see anything explicit trying to synchronize the two models and establish a timeline; if no one does it soon, I'll work on it.
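As a sketch of why flywheel models behave this way, here's a toy simulation (my own illustrative dynamics and parameters, not the Davidson, Halperin, Houlden & Korinek model): capability growth is throttled by the remaining human bottleneck in R&D, and the automated share of R&D rises with capability, closing the loop.

```python
import math

# Toy software/hardware flywheel. Capability C grows at rate g0 / (1 - f),
# where f is the automated share of research: as f -> 1 the human
# bottleneck vanishes and growth blows up. f itself rises with
# log-capability (slope k). All parameters are illustrative assumptions.

def simulate(years: int = 40, g0: float = 0.04,
             f0: float = 0.10, k: float = 0.5):
    C, f, path = 1.0, f0, []
    for t in range(years):
        g = g0 / (1 - f)                      # growth, human-bottlenecked
        C *= 1 + g
        f = min(0.999, f0 + k * math.log(C))  # automation share feeds back
        path.append((t, g, f))
    return path

path = simulate()
print("year 0 growth: %.1f%%/yr" % (100 * path[0][1]))
print("final-year growth: %.0f%%/yr" % (100 * path[-1][1]))
```

In continuous time this is a finite-time singularity; the discrete version just saturates at whatever cap you place on f. The qualitative "slow, slow, then explosive" shape is robust across parameters, but the date of the knee is pure parameter choice.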
So we're in the situation where economists have *finally* gotten all the pieces together, and are on the cusp of engaging legitimately with transformative AI. This in turn will get policymakers to take it more seriously, too - including (thanks to Chad Jones!) the possibility of extinction risk, and additional impetus to take legislative action. Even failing that direct action, work like Korinek's is gaining traction, and provides non-sci-fi reasons to intervene, which also helps.