Traditional economics thinking has two strong principles, each based on abundant historical data:

  • Principle (A): No “lump of labor”: If human population goes up, there might be some wage drop in the very short term, because the demand curve for labor slopes down. But in the longer term, people will find new productive things to do, such that human labor will retain high value—in other words, the demand curve will move right. Indeed, if anything, the value of labor will ultimately go up, not down—for example, dense cities are engines of economic growth!
  • Principle (B): “Experience curves”: If the demand for some product goes up, there might be some price increase in the very short term, because the supply curve slopes up. But in the longer term, people will ramp up manufacturing of that product to catch up with the demand—in other words, the supply curve will move right. Indeed, if anything, the price per unit will ultimately go down, not up, because of economies of scale, R&D, etc.

Now consider Artificial General Intelligence (AGI), i.e. a combination of chips, algorithms, electricity, and teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—stuff like founding and running new companies, research and development, learning and applying new skills, working in collaborative teams, skillfully using teleoperated robots after only a few hours of practice, and so on.

So here’s a question: When we have AGI, what happens to the price of chips, electricity, and teleoperated robots?

(…Assuming free markets, and rule of law, and AGI not taking over and wiping out humanity, and so on. I think those are highly dubious assumptions, but let’s not get into that here!)

Principle (A) has an answer to this question. It says prices will be high. After all, if AGI can really do all the things that ambitious entrepreneurial skilled labor can do, then there will be no “lump of labor” for AGI, any more than there has been for humans. However much AGI there is, it will keep finding new productive things to do. And the prices will reflect that high value. (Incidentally, if that’s true, then it would imply that human labor will retain a well-paying niche—just as less-skilled labor today can still get jobs despite more-skilled labor also existing.)

Principle (B) has a different, contradictory answer to this question. It says prices will be low. After all, if AGI is basically a manufactured good, then manufacturing will ramp up, creating ever more AGI at a cost that decreases with scale (and with R&D). And the prices will reflect that low cost. (Incidentally, if that’s true, then it would imply that human labor, now forced to compete with a far-lower-price substitute, will become so devalued that we won’t be able to earn enough money to afford to eat.[1])

Anyway, I sometimes see unproductive debates that look like this:

One side treats Principle (A) as an unstoppable force. The other side treats Principle (B) as an immovable wall. Instead of grappling with the contradiction, they just talk past each other. As a very recent example of such arguments, check out the blog post AGI Will Not Make Labor Worthless by @Maxwell Tabarrok, and its comments section.

Who is right? Well, at any given time,

  • Either the price is high, and the supply curve is racing rightwards—since there’s a massive profit to be made by ramping up the manufacture of AGI “labor”.
  • …Or the price is low, and the demand curve is racing rightwards—since there’s a massive profit to be made by skilled entrepreneurial AGI “labor” finding new productive things to do.
  • …Or the price is in between, and both the supply curve and the demand curve are racing rightwards!

The price at any given time depends on which curve is racing rightwards faster. I have opinions, but that’s out-of-scope for this little post. If people were even trying to figure this out, that would already be a step up from much of the current discourse.
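To make the racing-curves picture concrete, here is a minimal sketch in Python: a toy linear model with invented numbers, not a forecast. Whichever curve shifts right faster per period sets the direction of the price, while quantity grows regardless.

```python
# Toy "racing curves" model: linear demand P = a - b*Q and supply P = c + d*Q,
# with both curves shifting rightward every period. All numbers are invented
# for illustration; nothing is calibrated to anything real.

def equilibrium(a, b, c, d):
    """Price and quantity where the linear demand and supply curves cross."""
    q = (a - c) / (b + d)
    p = a - b * q
    return p, q

a, c = 100.0, 10.0                     # demand / supply intercepts
b, d = 1.0, 1.0                        # slopes
demand_speed, supply_speed = 8.0, 5.0  # assumed rightward shift per period

for t in range(5):
    p, q = equilibrium(a, b, c, d)
    print(f"t={t}: price={p:6.1f}, quantity={q:6.1f}")
    a += b * demand_speed  # demand curve moves right by demand_speed units...
    c -= d * supply_speed  # ...supply moves right too, so there is no resting point
```

With these made-up speeds the price drifts upward even as quantity grows without bound; swap demand_speed and supply_speed and the price drifts downward instead. Either way, neither curve ever settles.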

But more importantly— What happens when an unstoppable force is slamming into an immovable wall? Common sense says: a big friggin’ explosion.

…So that naturally brings us to the school of thought where we expect AGI to bring >>100%/year sustained growth of the global economy—see for example a discussion by Carl Shulman on the 80,000 Hours podcast.

I think this is the correct conclusion, given the premises. Indeed, I think that, if you really try hard to hold Principle (A) and Principle (B) in your mind at the same time, and think through the consequences, then truly explosive economic growth is where you will inevitably wind up.

Of course, that collides with yet a third principle of traditional economics, also based on abundant historical data:

  • Principle (C): Wait, you said >>100%/year of sustained growth of the global economy? What are you, nuts??

But, that’s where we’re at. It’s a trilemma. All three of (A, B, C) are traditional, time-tested economic principles. But it’s basically impossible to believe all three of them at once. People still try, including professional economists, but I think they wind up tying themselves into knots of self-contradiction.

(Of course, those economists are still a step up from the economists who dismiss AGI as sci-fi nonsense!)

(Again, my actual main expectation is AGI takeover, which renders this whole discussion kinda moot. But if we’re gonna talk about it, we should get it right!)

  1. At least, probably not. We don’t know for sure how much compute and electricity it will take to run superhuman AGI, since it doesn’t exist yet. But my own guess, based on how much calculation a human brain does, is that it would probably be well under $0.10/hour at today’s prices, and lower in the future as we go down the experience curve.
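For concreteness, one way that back-of-envelope might go (every figure below is a loose assumption: published estimates of brain compute span roughly 1e13 to 1e16 FLOP/s, and the hardware number is a rough rental-price ballpark, not a quoted rate):

```python
# Hedged back-of-envelope for the footnote's cost guess. Every input is an
# assumption, not a measurement.

FLOPS_PER_DOLLAR_HOUR = 1e15 / 2.0  # assume ~1e15 FLOP/s rents for ~$2/hour

for brain_flops in (1e13, 1e14, 1e15):          # low / middle / high estimates
    cost = brain_flops / FLOPS_PER_DOLLAR_HOUR  # $/hour to match brain compute
    print(f"brain at {brain_flops:.0e} FLOP/s -> ~${cost:.2f}/hour")
```

The lower estimates land at or under the $0.10/hour figure, and the experience curve pushes every row down over time; how efficiently AGI algorithms use hardware relative to the brain is the real wildcard.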

32 comments

If we have ≫100% economic growth in this hypothetical economy, then it is possible for both Principle (A) [human labor price stays high] and Principle (B) [human labor price falls very low] to be satisfied simultaneously. This is possible because "high" and "low" are not absolutes. They are measured relative to the price of other goods. I am interpreting "AGI not taking over" to imply that the owners of capital remain human.

Given the continued rule of current law, human beings will continue to have value to other human beings through status games (entourages, buying poor-quality artisanal products, paying muscular dudes to sing you Happy Birthday), capital-owning rich perverts (prostitution, OnlyFans) and legal requirements (jury duty, notaries), if nothing else. This is because "being a human" has conceptual value the way Comedian is more valuable than any other banana taped to the wall. If the owners of capital care about Coherent Extrapolated Volition, then they can hire humans to use as ground truth for that too. The criterion "rule of law" does a lot of work here. If humans cannot be turned into slaves, then that puts a regulatory constraint on how little the human owners of capital can pay to keep other people around as retainers and entertainers. In this way, humans could provide value the way horses do today. Not because horses provide cheap physical labor, but because riding a horse is a fun status symbol, and because horses are lovable pets. Human corpses could even be used as a store of value to diversify against the volatility of other assets, since the production of human corpses would be limited by rule of law in a way that the transmutation of gold is not. Surely many rich people want a throne room built out of real human skulls, and not fake ones. In addition, many laws make it valuable to be a human being to e.g. file paperwork at a consulate in California. In today's 2025 world, chess tournaments are dominated by human players, despite computers being unbeatable at chess.

Meanwhile, the production of manufacturable assets like artificial sushi becomes extremely cheap.

The equilibrium is a situation where the price of human labor (measured in something like FLOPs, chocolate cake or procedurally-generated MrBeast video knockoffs) plummets, but the price of manufactured goods relative to that same reference point decreases even faster, due to lower regulatory barriers. Humans on Earth live lives that are luxurious by today's standards (excluding industries like housing, the price of which is driven by government regulation), but insignificant compared to the owners of capital, who are limited only by how close they can get to building Von Neumann probes.
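As a toy illustration of that numeraire point (growth factors invented for the sketch, not estimates):

```python
# Invented factors, all measured against one fixed reference point (FLOPs),
# showing that "high" and "low" depend entirely on the numeraire.

wage_in_flops = 1 / 100      # assume wages fall 100x measured in FLOPs
goods_in_flops = 1 / 10_000  # assume manufactured goods fall 10,000x

print("wage measured in goods:", wage_in_flops / goods_in_flops)  # up 100x: "high"
print("wage measured in FLOPs:", wage_in_flops)                   # down 100x: "low"
```

The same wage trajectory reads as Principle (A)'s "high" against goods and Principle (B)'s "low" against compute.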

The final solution to this tension between law and economics is to invent something like a Blade Runner replicant that looks and functions like a human slave, but is legally non-human property.

This all seems really clearly true.

Thanks! I basically agree.

I think that, if we assume that there’s a world in which (1) at least some humans own some capital in the post-AGI economy (hence rapidly exponentially growing wealth), (2) nobody is worried about expropriation or violence, and (3) humans have the knowledge and power to pass and enforce effective laws holding up human interests in regards to externalities (e.g. AGIs creating new exotic forms of lethal pollution while following the letter of the existing law, or building a Dyson swarm that blocks out the sun)…

…then that’s already pretty great! That would be far better than my baseline expectation.

I think that, if we assume (1-3), then the non-capital-owning humans have a great chance of doing OK too, via (A) charity from the fabulously-wealthy capital-owning humans, or through (B) political imposition of UBI (assuming democracy), or, like you said, (C) getting employed by the fabulously-wealthy capital-owning humans who specifically want to employ other humans (or selling ownership rights to their posthumous skulls, ofc :) ).

So here’s a question: When we have AGI, what happens to the price of chips, electricity, and teleoperated robots?

 

As measured in what units?

  • The price of one individual chip of given specs, as a fraction of the net value that can be generated by using that chip to do things that ambitious human adults do: What Principle A cares about, goes up until the marginal cost and value are equal
  • The price of one individual chip of given specs, as a fraction of the entire economy: What principle B cares about, goes down as the number of chips manufactured increases
  • The price of one individual chip of given specs, relative to some other price such as nominal US dollars, inflation-adjusted US dollars, metric tons of rice, or 2000 square foot single-family homes in Berkeley: ¯\_(ツ)_/¯, depends on lots of particulars, not sure that any high-level economic principles say anything specific here

These only contradict each other if you assume that "the value that can be generated by one ambitious adult human divided by the total size of the economy" is a roughly constant value.

I like that this post lays out the dilemma in principles A (marginal value dominates) and B (marginal cost dominates). One quibble is that the effects are on the supply and demand curves, not on the quantities supplied and demanded, i.e., it's not about the slopes of the curves but the location of the new equilibrium as the curves shift left or right. It's not about which part "equilibrates" faster (with what?) but about the relative strength of the shifts.

If AGI shifts the demand for AI labor to the right, under constant supply, we'd expect a price increase and more AI labor created and consumed. If AGI shifts the supply for AI labor to the right, under constant demand, we'd expect a price decrease and more AI labor created and consumed. Both of these things would happen, so there is a wide range of possible price changes (even no change in price) consistent with more AI labor created and consumed, but what happens to the price depends on which shift is "stronger."
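In the simplest linear case (my notation, not the commenter's), the net effect can be read off directly:

```latex
% Linear demand and supply; equilibrium from setting the two prices equal.
\text{Demand: } P = a - bQ, \qquad \text{Supply: } P = c + dQ
\;\Longrightarrow\; Q^{*} = \frac{a - c}{b + d}, \qquad
P^{*} = \frac{ad + bc}{b + d}.
```

A rightward demand shift raises a; a rightward supply shift lowers c. Both raise Q*, while the price change (d·Δa + b·Δc)/(b + d) can take either sign, depending on which shift dominates.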

Still, with the quantity of AGI labor created and consumed increasing, you might wonder about how the experience curve impacts it - that's just more right-shift in the supply curve, so maybe we don't have to wonder after all. What about the effect on substitutes like human labor? Well, if the economy has a set number of jobs, you'd expect a lot of human labor displaced, but if the economy can find other useful work for those people, they will do those other jobs, which might be lower-paying (no more coding tasks for you - enjoy 7/11), reducing the average price of human labor, or might be higher-paying (no more coding tasks for you - enjoy this support role for AGI that, because of its importance, commands high pay), increasing the average price of human labor.

Can those niches exist? Yes, the supply and demand curves are curves of heterogeneous values and production functions. And markets are imperfect. Won't those niches eventually disappear? Well, rinse and repeat. See ATMs and bank tellers; also see the effects of building luxury housing on rents throughout the housing market.

I don't think it's only talking past each other - it's a genuine ton of uncertainty.

I just added a few words to the effect that Principle (A) is a claim about how the demand curve will shift, and Principle (B) is a claim about how the supply curve will shift. Thanks.

Update: Also added a little bulleted list about supply and demand curves racing each other rightwards. Hopefully that will make it more clear what I’m saying. Thanks again.

It's not about which part "equilibrates" faster (with what?) but about the relative strength of the shifts.

If demand for socks quadruples overnight for some exogenous reason, then there will be some period of time when socks are more expensive. Days, weeks, months, I don’t know. But there’s a lead time on building new sock factories (or hiring more workers, or retooling production lines, or whatever it is). In the longer term, presumably socks would wind up at a similar price as today—there’s plenty of cotton. The supply curve can shift, but can't shift instantaneously.

So that’s what I mean by “equilibration”—The demand curve shifts, and then we’re “out of equilibrium”, in the sense that there’s a highly-profitable opportunity (to manufacture more socks), but nobody is exploiting that opportunity yet (because it takes a few weeks or months). Eventually people do build the factories, and now the supply curve has shifted to its new stable home, and the price of socks is its normal inexpensive price as usual.

Hmm, I wonder if you’re assuming that “equilibration” = “price moving to match immediate supply and demand”, whereas I’m using the term “equilibration” more broadly (“some process reaching equilibrium”). I’m open-minded to rewording things. I do want this post to use conventional economics language as much as possible, and I’m rusty.  :)

Anyway, I disagree that it’s “about the relative strength of the shifts”. It’s not a “shift”, because the system has no equilibrium. If the AGI supply curve were stuck at C1, then over time, the AGI demand curve would settle to C2. Separately, if the AGI demand curve were stuck at C2, then over time, the AGI supply curve would settle to C3, which is to the right of C1. See what I mean? The two curves just race each other outwards, forever, until the surface of Earth is plastered with solar cells and factories etc. So, to figure out the current price, it matters which curve can shift faster in response to current conditions.

(Note: have edited the post to make this part clearer.)

Well, if the economy has a set number of jobs, you'd expect a lot of human labor displaced, but if the economy can find other useful work for those people, they will do those other jobs

I think that, in this quote, you’re being the person on the left side of my silly diagram :) You’re assuming that the economy will produce new jobs faster than the factories will produce new chips and robots to fill those jobs. New jobs don’t appear instantaneously, right? …But new chips also don’t appear instantaneously. Which one is less instantaneous? You need to make an argument. That’s my point. :)

Thanks, you had mentioned the short- vs. long-run before, but after this discussion it is more foregrounded and the "racing" explanation makes sense. :) Though I appreciated the references to marginal value and marginal cost.

You’re assuming that the economy will produce new jobs faster than the factories will produce new chips and robots to fill those jobs.

Well, the assumptions are primarily that the supply and demand for AI labor will vary across markets and secondarily that labor can flow across markets. This is an important layer separate from just seeing who (S or D) wins the race. If there is only one homogeneous market, then the price trajectory for AI labor (produced through the racing dynamics) tells you all you'll need to know about the price trajectory for its human substitute. So the question is just which is faster.

But if there are heterogeneous markets, "which is faster" is informative only for that market and the price of human labor as a substitute in that market. The price trajectory for AI labor in other markets might be subject to different "which is faster" racing dynamics. Then, because of composition effects, the trajectory for the average price of AI labor that is performed may diverge from the trajectory for the average price of human labor that is performed.

This is true even if you assume the economy has no vacancies and will not produce new jobs (i.e., labor cannot flow across markets). For example, average hourly earnings spiked during COVID because the work that was being performed was high-cost/value labor, an increase seemingly entirely due to composition [BLS]. I am alleging that predicting the price trajectory remains difficult even if you take a stance on the racing dynamics, because you need to know what the alternative human jobs are. Still, in a world where jobs are simply destroyed, the total value accruing to human laborers certainly goes down. This is why I think the labor flows could be considered a secondary assumption for the left side, depending on how much you think that side would be arguing - they are not dispositive of what the price changes will be (the focus of the post was on price), but they definitely will affect whether human labor commands the same total value.

That, incidentally, implies that human labor will retain a well-paying niche—just as less-skilled labor today can still get jobs despite more-skilled labor also existing.

Less skilled labor has a well-paying niche today?

The point I’m trying to make here is a really obvious one. Like, suppose that Bob is a really great, top-percentile employee. But suppose that Bob’s roommate Alice is an obviously better employee than Bob along every possible axis. Clearly, Bob will still be able to get a well-paying job—the existence of Alice doesn’t prevent that, because the local economy can use more than one employee.

Sure. But in an economy with AIs, humans won't be like Bob. They'll be more like Carl the bottom-percentile employee who struggles to get any job at all. Even in today's economy lots of such people exist, so any theoretical argument saying it can't happen has got to be wrong.

And if the argument is quantitative - say, that the unemployment rate won't get too high - then imagine an economy with 100x more AIs than people, where unemployment is only 1% but all people are unemployed. There's no economic principle saying that can't happen.

The context was: Principle (A) makes a prediction (“…human labor will retain a well-paying niche…”), and Principle (B) makes a contradictory prediction (“…human labor…will become so devalued that we won’t be able to earn enough money to afford to eat…”).

Obviously, at least one of those predictions is wrong. That’s what I said in the post.

So, which one is wrong? I wrote: “I have opinions, but that’s out-of-scope for this little post.” But since you’re asking, I actually agree with you!! E.g. footnote here:

“But what about comparative advantage?” you say. Well, I would point to the example of a moody 7-year-old child in today’s world. Not only would nobody hire that kid into their office or high-tech factory, but they would probably pay good money to keep him out, because he would only mess stuff up. And if the 7yo could legally found his own company, we would never expect it to get beyond a lemonade stand, given competition from dramatically more capable and experienced adults. So it will be, I claim, with all humans in a world of advanced autonomous AIs, if the humans survive.

Obviously, at least one of those predictions is wrong. That’s what I said in the post.

Does one of them need to be wrong? What stops a situation like only one niche, or a few niches, being high value and the rest not providing enough to eat? This is pretty much exactly how natural selection operates, for example.

Well, the main thing is that Principle (A) says that the price of the chips + electricity + teleoperated robotics package will be sustainably high, and Principle (B) says that the price of the package will be sustainably low. Those can’t both be true.

…But then I also said that, if the price of the package is low, then human labor will have its price (wage / earnings) plummet way below subsistence via competing against a much-less-expensive substitute, and if it’s high, they won’t. This step brings in an additional assumption, namely that they’re actually substitutes. That’s the part you’re objecting to. Correct?

If so, I mean, I can start listing ways that tractors are not perfect substitutes for mules—mules do better on rough terrain, mules can heal themselves, etc. Or I can list ways that Jeff Bezos is not a perfect substitute for a moody 7yo—the 7yo is cuter, the 7yo may have a more sympathetic understanding of how to market to 7yo’s, etc.

But c’mon, a superintelligent AI CEO would not pay a higher salary to hire a moody 7yo, rather than a lower salary to “hire” another copy of itself, or to “hire” a different model of superintelligent AI. The only situation where human employment is even remotely plausible, IMO, is that the job involves appealing to human consumers. But that doesn’t “grow the pie” of human resources. If that’s the only thing humans can do, collective human wealth will just dwindle to zero as they buy AI-produced goods and services.

So then the only consistent picture here is to say that at least some humans have a sustainable source of increasing wealth besides getting jobs & founding companies. And then humans can sometimes get employed because they have special appeal to those human consumers. What’s the sustainable source of increasing human wealth? It could be capital ownership, or  welfare / UBI / charity from aligned AIs or government, whatever. But if you’re going to assume that, then honestly who cares whether the humans are employable or not? They have money regardless. They’re doing fine.  :)

I agree that the economic principles conflict; you are correct that my question was about the human labor part. I don't even require that they be substitutes; at the level of abstraction we are working in, it seems perfectly plausible that some new niches will open up. Anything would qualify, even if it is some new-fangled job title like 'adaptation engineer' or something that just preps new types of environments for teleoperation before moving onto the next environment, like some kind of meta railroad gang. In this case the value of human labor might stay sustainably high in terms of total value, but the amplitude of the value would sort of slide into the few AI-relevant niches.

I think this cashes out as Principle (A) winning out and Principle (B) winning out looking the same for most people.

I looked it up, evidently mules still have at least one tiny economic niche in the developed world. Go figure :)

But I don’t think that lesson generalizes because of an argument Eliezer makes all the time: the technologies created by evolution (e.g. animals) can do things that current human technology cannot. E.g. humans cannot currently make a self-contained “artificial cow” that can autonomously turn grass and water into more copies of itself, while also creating milk, etc. But that’s an artifact of our current immature technology situation, and we shouldn’t expect it to last into the superintelligence era, with its more advanced future technology.

Separately, I don’t think “preps new types of environments for teleoperation” is a good example of a future human job. Teleoperated robots can string ethernet cables and install wifi and whatever just like humans can. By analogy, humans have never needed intelligent extraterrestrials to come along and “prep new types of environments for human operation”. Rather, we humans have always been able to bootstrap our way into new environments. Why don’t you expect AGIs to be able to do that too?

(I understand that it’s possible to believe that there will be economic niches for humans, because of more abstract reasons, even if we can’t name even a single plausible example right now. But still, not being able to come up with any plausible examples is surely a bad sign.)

Why don’t you expect AGIs to be able to do that too?

I do, I just expect it to take a few iterations. I don't expect any kind of stable niche for humans after AGI appears.

I think he’s talking about cost disease?

https://en.m.wikipedia.org/wiki/Baumol_effect

The purely technical reason why principle A does not apply in this way is opportunity cost.

Let's say S is a highly productive worker who could generate $500,000 for the company over 1 year. Moreover S is willing to work for only $50,000! But if investing $50,000 in AI instead would generate $5,000,000, the true cost of hiring S is actually $4,550,000.
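Spelling out that arithmetic (a sketch assuming that "would generate $5,000,000" means net of the $50,000 investment):

```python
salary = 50_000      # what S is paid
s_revenue = 500_000  # what S generates for the company over the year
ai_net = 5_000_000   # assumed net return from putting the $50,000 into AI instead

s_net = s_revenue - salary  # 450,000: net gain from hiring S
true_cost = ai_net - s_net  # 4,550,000: forgone by choosing S over the AI
print(f"${true_cost:,}")    # -> $4,550,000
```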

Addendum

I mostly retract this comment. It doesn't address Steven Byrnes's question about AI cost. But it is tangentially relevant as many lines of reasoning can lead to similar conclusions.

We can imagine a hypothetical world where a witch cast a magical spell that destroyed 99.9999999% of existing chips, and made it such that it’s only possible to create one new computer chip per day. And the algorithms are completely optimized—as good as they could possibly be. In that case, the price of compute would get bid up to the maximum economic value that it can produce anywhere in the world, which would be quite high.

The company would not have an opportunity cost, because using AI would not be a cheap option.

See what I mean? You’re assuming that the price of AI will wind up low, instead of arguing for it. As it happens, I do think the price of AI will wind up low!! But if you want to convince someone who believes in Principle (A), you need to engage with the idea of this race between the demand curve speeding to the right versus the supply curve speeding to the right. It doesn’t just go without saying.

A few key points…

1) Based on analogy with the human brain (which is quite puny in terms of energy & matter) & also based on examination of current trends, merely superhuman intelligence should not be especially costly.

(It is of course possible that the powerful would channel all AI into some tasks of very high perceived value like human brain emulation, radical life extension or space colonization, leaving very little AI for everything else...)

2) Demand & supply curves are already crude. Combining AI labor & human labor into the same demand & supply curves seems like a mistake.

3) Realistically I suspect that human labor supply will shift to the left b/c of ‘UBI’.

4) Ignoring preference for humans, demand for human labor may also shift to the left as AI entrepreneurs would tend to optimize things around AI.

5) The economy will probably grow quite a bit. And preference for humans is likely substantial for certain types of jobs eg NFL player, runway model etc.

6) Combining 4 & 5 suggests a very steep demand curve for human labor.

7) Combining 3 & 6 suggests that a few people (eg 20% of adults) will have decent paying jobs & the rest will live off of savings or ‘UBI’.

I agree that I initially misread your post. I will edit my other comment.

“Humans are the horses of the future! Just accept it & go on with your lives.” - Ghora Sutra

Thank you for writing this and hopefully contributing some clarity to what has been a confused area of discussion.

So here’s a question: When we have AGI, what happens to the price of chips, electricity, and teleoperated robots?

(…Assuming free markets, and rule of law, and AGI not taking over and wiping out humanity, and so on. I think those are highly dubious assumptions, but let’s not get into that here!)

Principle (A) has an answer to this question. It says: prices equilibrate to marginal value, which will stay high, because AGI amounts to ambitious entrepreneurial skilled labor, and ambitious entrepreneurial skilled labor will always find more new high-value things to do. That, incidentally, implies that human labor will retain a well-paying niche—just as less-skilled labor today can still get jobs despite more-skilled labor also existing.

First off, I'm guessing you're familiar with the economic arguments in The Sun is Big.

Secondly -

If we're talking about prices for the same chips, [rate of] electricity, teleoperated robots, etc., of course they'll go down, as the AGI will have invented better versions.

AGI amounts to ambitious entrepreneurial skilled labor

This is really just a false thing to believe about AGI, from us humans' perspective. It amounts to a new world political order. Unless you specifically build it to prevent all other future creations of humanity from becoming politically interventionist superintelligences, while also not being politically interventionist itself.

First off, I'm guessing you're familiar with the economic arguments in The Sun is Big.

You seem to have misunderstood my text. I was stating that something is a consequence of Principle (A), but I was not endorsing it as actually being true. Indeed, the very next sentence talks about how one can make a parallel argument for the exact opposite conclusion.

I just changed the wording from “implies” to “would imply”. Hope that helps.

If we're talking about prices for the same chips, [rate of] electricity, teleoperated robots, etc., of course they'll go down, as the AGI will have invented better versions.

Well, costs will go down. You can argue that prices will equilibrate to costs, but it does need an argument. That’s my whole point. Normally, markets reach equilibrium where prices ≈ costs to producers ≈ value to consumers, with allowance for profit margin and so on. But this system has no such equilibrium! The value of producing AGI will remain much higher than the cost, all the way to Dyson spheres etc. So it’s at least not immediately obvious what the price will be at any given time.

This is really just a false thing to believe about AGI, from us humans' perspective. It amounts to a new world political order. Unless you specifically build it to prevent all other future creations of humanity from becoming politically interventionist superintelligences, while also not being politically interventionist itself.

I already included caveats in two different places that I was assuming no AGI takeover etc., and that I find this assumption highly dubious, and that I think this whole discussion is therefore kinda moot. I mean, I could add yet a third caveat, but that seems excessive :-P

You seem to have misunderstood my text. I was stating that something is a consequence of Principle (A),

My position is that if you accept certain arguments made about really smart AIs in "The Sun is Big", Principle A, by itself, ceases to make sense in this context.

costs will go down. You can argue that prices will equilibrate to costs, but it does need an argument.

Assuming constant demand for a simple input, sure, you can predict the price of that input based on cost alone. The extent to which "the price of compute will go down" is rolled into how much "the cost of compute will go down". But IIUC, you're more interested in predicting the price of less abstract assets. Innovation in chip technology is more than just making more and more of the same product at a lower cost. [ "There is no 'lump of chip'." ] A 2024 chip is not just [roughly] 2^10 2004 chips - it has logistical advantages, if nothing else. And those aren't accounted for if you insist on predicting chip price using only compute cost and value trendlines. Similar arguments hold for all other classes of material technological assets whose value increases in response to innovation.

"AI will [roughly] amount to X", for any X, including "high-skilled entrepreneurial human labor" is a positive claim, not a default background assumption of discourse, and in my reckoning, that particular one is unjustified.

I’m still pretty sure that you think I believe things that I don’t believe. I’m trying to narrow down what it is and how you got that impression. I just made a number of changes to the wording, but it’s possible that I’m still missing the mark.

My position is that if you accept certain arguments made about really smart AIs in "The Sun is Big", Principle A, by itself, ceases to make sense in this context.

When I stated Principle (A) at the top of the post, I was stating it as a traditional principle of economics. I wrote: “Traditional economics thinking has two strong principles, each based on abundant historical data”, and put in a link to a wikipedia article with more details. You see what I mean? I wasn’t endorsing it as always and forever true. Quite the contrary: The punchline of the whole article is: “here are three traditional economic principles, but at least one will need to be discarded post-AGI.”

"AI will [roughly] amount to X", for any X, including "high-skilled entrepreneurial human labor" is a positive claim, not a default background assumption of discourse, and in my reckoning, that particular one is unjustified.

I did some rewriting of this part, any chance that helps?

When I stated Principle (A) at the top of the post, I was stating it as a traditional principle of economics. I wrote: “Traditional economics thinking has two strong principles, each based on abundant historical data”,

I don't think you think Principle [A] must hold, but I do think you think it's in question. I'm saying that, rather than taking this very broad general principle of historical economic good sense, and giving very broad arguments for why it might or might not hold post-AGI, we can start reasoning about superintelligent manufacturing [including R&D] and the effects it will have, more granularly, out the gates.

Like, with respect to Principle [C] my perspective is just "well of course the historical precedent against extremely fast economic growth doesn't hold after the Singularity, that's more or less what the Singularity is".

Edit: Your rewrite of Principle [B] did make it clear to me that you're considering timelines that are at least somewhat bad for humans; thank you for the clarification. [Of course I happen to think we can also discard "AI will be like a manufactured good, in terms of its effects on future prices", out the gates, but it's way clearer to me now that the trilemma is doing work on idea-space.]

I think you’re arguing that Principle (A) has nothing to teach us about AGI, and shouldn’t even be brought up in an AGI context except to be immediately refuted. And I think you’re wrong.

Principle (A) applied to AGIs says: The universe won’t run out of productive things for AGIs to do. In this respect, AGIs are different from, say, hammers. If a trillion hammers magically appeared in my town, then we would just have to dispose of them somehow. That’s way more hammers than anyone wants. There’s nothing to be done with them. Their market value would asymptote to zero.

AGIs will not be like that. It’s a big world. No matter how many AGIs there are, they can keep finding and inventing new opportunities. If they outgrow the planet, they can start in on Dyson spheres. The idea that AGIs will simply run out of things to do after a short time and then stop self-reproducing—the way I would turn off a hammer machine after the first trillion hammers even if its operating costs were zero—is wrong.

So yes, I think this is a valid lesson that we can take from Principle (A) and apply to AGIs, in order to extract an important insight. This is an insight that not everyone gets, not even (actually, especially not) most professional economists, because most professional economists are trained to lump in AGIs with hammers, in the category of “capital”, which implicitly entails “things that the world needs only a certain amount of, with diminishing returns”.

So, kudos to Principle (A). Do you agree?

So yes, I think this is a valid lesson that we can take from Principle (A) and apply to AGIs, in order to extract an important insight. This is an insight that not everyone gets, not even (actually, especially not) most professional economists, because most professional economists are trained to lump in AGIs with hammers, in the category of “capital”, which implicitly entails “things that the world needs only a certain amount of, with diminishing returns”.

This trilemma might be a good way to force people-stuck-in-a-frame-of-traditional-economics to actually think about strong AI. I wouldn't know; I honestly haven't spent a ton of time talking to such people.

Principle [A] doesn't just say AIs won't run out of productive things to do; it makes a prediction about how this will affect prices in a market. It's true that superintelligent AI won't run out of productive things to do, but it will also change the situation such that the prices in the existing economy won't be affected by this in the normal way prices are affected by "human participants in the market won't run out of productive things to do". Maybe there will be some kind of legible market internal to the AI's thinking, or [less likely, but conceivable] a multi-ASI equilibrium with mutually legible market prices. But what reason would a strongly superintelligent AI have to continue trading with humans very long, in a market that puts human-legible prices on things? Even in Hanson's Age of Em, humans who choose to remain in their meatsuits are entirely frozen out of the real [emulation] economy, very quickly in subjective time, and to me this is an obvious consequence of every agent in the market simply thinking and working way faster than you.

(…Assuming free markets, and rule of law, and AGI not taking over and wiping out humanity, and so on. I think those are highly dubious assumptions, but let’s not get into that here!)


Assuming all is correct, isn't the answer therefore: "at least one of these assumptions must be false"?

(Personally I have long suspected that free markets are a gross oversimplification, but I'm not an economist; and even if I was, with those three options I've got reason to wish for that specific option)

Adding the time dimension solves the issue. The price of "AI from two years ago" will be cheap, and the price of "state of the art AI" will be high. You already see this happening today, and it is in many ways the economic history of technology thus far anyway. 

The situation should result in a large increase in the world GDP growth rate compared to the current rate, but it's hard to know exactly how high, as it depends on various bottlenecks that might be hard to foresee right now.

Minor quibble: It's a bit misleading to call B "experience curves", since it is also about capital accumulation and shifts in labor allocation. Without any additional experience/learning, if demand for candy doubles, we could simply build a second candy factory that does the same thing as the first one, and hire the same number of workers for it.

How exactly the economic growth will happen is a more important question. I'm not an economics nerd, but the basic principle is that if more players want to buy stocks, they go up.
Right now, as I understand it, quite a lot of stocks are held by white-collar retail investors, including indirectly through mutual funds, pension funds, et cetera. Now AGI comes and wipes out their salary.
They will be selling their stocks to keep sustaining their lives, won't they? They have mortgages, car loans, et cetera.
And even if they don't want to sell all their stocks because of the potential "singularity upside", if the market is going down because everyone is selling, they are motivated to sell even more. I'm not well enough versed in economics, but it seems to me your explosion can happen both ways, and on paper it's kinda more likely it goes down, no?
One could say the big firms // whales will buy all the stocks going down, but will it be enough to counteract the effect of a downward spiral caused by so many people going out of jobs or expecting to do so near-term?
The downside of integrating AGI is that it wipes out incomes as it is being integrated.
Might it be the missing piece that will make all these principles make sense?