Ben Thompson interviewed Sam Altman recently about building a consumer tech company, and about the history of OpenAI. Mostly it is a retelling of the story we’ve heard before, and if anything Altman is very good about pushing back on Thompson when Thompson tries to turn OpenAI’s future into the next Facebook, complete with an advertising revenue model.
It is such a strange perspective to witness. They do not feel the AGI, let alone the ASI. The downside risks of AI, let alone existential risks, are flat out not discussed; this is a world where that’s not even a problem for Future Earth.
Then we contrast this with the new Epoch model of economic growth from AI, which can produce numbers like 30% yearly economic growth. Epoch feels the AGI.
Sam Altman: The GPT-2 release, there were some people who were just very concerned about, you know, probably the model was totally safe, but we didn’t know we wanted to get — we did have this new and powerful thing, we wanted society to come along with us.
Now in retrospect, I totally regret some of the language we used and I get why people are like, “Ah man, this was like hype and fear-mongering and whatever”, it was truly not the intention. The people who made those decisions had I think great intentions at the time, but I can see now how it got misconstrued.
As I said a few weeks ago, ‘this is probably totally safe but we don’t know for sure’ was exactly the correct attitude to initially take to GPT-2, given my understanding of what they knew at the time. The messaging could have made this clearer, but it very much wasn’t hype or fearmongering.
What Even is AGI
Altman repeatedly emphasizes that what he wanted to do from the beginning, what he still most wants to do, is build AGI.
Altman’s understanding of what he means by that, and what the implications will be, continues to seem increasingly confused. Now it seems it’s… fungible? And not all that transformative?
Sam Altman: My favorite historical analog is the transistor for what AGI is going to be like. There’s going to be a lot of it, it’s going to diffuse into everything, it’s going to be cheap, it’s an emerging property of physics and it on its own will not be a differentiator.
This seems bonkers crazy to me. First off, it seems to include the idea of ‘AGI’ as a fungible commodity, as a kind of set level. Even if AI stays for substantial amounts of time at ‘roughly human’ levels, differentiation between ‘roughly which humans in which ways, exactly’ is a giant deal, as anyone who has dealt with humans knows. There isn’t some natural narrow attractor level of capability ‘AGI.’
Then there’s the obvious question of why you can ‘diffuse AGI into everything’ and expect the world to otherwise look not so different, the way it did with transistors? Altman also says this:
Ben Thompson: What’s going to be more valuable in five years? A 1-billion daily active user destination site that doesn’t have to do customer acquisition, or the state-of-the-art model?
Sam Altman: The 1-billion user site I think.
That again implies little differentiation in capability, and he expects commoditization of everything but the very largest models to happen quickly.
Charles: This seems pretty incompatible with AGI arriving in that timeframe or shortly after, unless it gets commoditised very fast and subsequently improvements plateau.
The whole thing is pedestrian; he’s talking about the Next Great Consumer Product. As in, Ben Thompson is blown away that this is the next Facebook, with a similar potential. Thompson and Altman are talking about issues of being a platform versus an aggregator, about bundling, and about how to make ad revenue. Altman says they expect to be a platform only in the style of a Google, and wisely (and also highly virtuously) hopes to avoid the advertising that I sense has Thompson very excited, as Thompson continues to assume ‘people won’t pay,’ so the way you profit from AGI (!!!) is ads. It’s so weird to see Thompson trying to sell Altman on the need to make our future an ad-based dystopia, and the need to cut off the API to maximize revenue.
Such considerations do matter, and I think that Thompson’s vision is wrong both on the business level and on the normative level of ‘at long last we have created the advertising fueled cyberpunk dystopia world from the novel…’ but that’s not important now. Eyes on the damn prize!
I don’t even know how to respond to a vision so unambitious. I cannot count that low.
I mean, I could, and I have preferences over how we do so when we do, but it’s bizarre how much this conversation about AGI does not feel the AGI.
Seeking Deeply Irresponsibly
Altman’s answers in the DeepSeek section are scary. But it’s Thompson who really, truly, profoundly, simply does not get what is coming at all, or how you deal with this type of situation, and this answer from Altman is very good (at least by 2025 standards):
Ben Thompson: What purpose is served at this point in being sort of precious about these releases?
Sam Altman: I still think there can be big risks in the future. I think it’s fair that we were too conservative in the past. I also think it’s fair to say that we were conservative, but a principle of being a little bit conservative when you don’t know is not a terrible thing.
I think it’s also fair to say that at this point, this is going to diffuse everywhere and whether it’s our model that does something bad or somebody else’s model that does something bad, who cares? But I don’t know, I’d still like us to be as responsible an actor as we can be.
Other Altman statements, hinting at getting more aggressive with releases, are scarier.
They get to regulation, where Thompson repeats the bizarre perspective that previous earnest calls for regulations that only hit OpenAI and other frontier labs were an attempt at regulatory capture. And Altman basically says (in my words!), fine, the world doesn’t want to regulate only us and Google and a handful of others at the top, so we switched from asking for regulations to protect everyone into regulations to pave the way for AI.
Thus, the latest asks from OpenAI are to prevent states from regulating frontier models, and to declare universal free fair use for all model training purposes, saying to straight up ignore copyright.
Others Don’t Feel the AGI
Some of this week’s examples, on top of Thompson and Altman.
Spor: I genuinely get the feeling that no one *actually* believes in superintelligence except for the doomers
I think they were right about this (re: common argument against e/acc on x) and i have to own up to that.
John Pressman: There’s an entire genre of Guy on here whose deal is basically “Will the singularity bring me a wife?” and the more common I learn this guy is the less I feel I have in common with others.
Also this one:
Rohit: Considering AGI is coming, all coding is about to become vibe coding, and if you don’t believe it then you don’t really believe in AGI do you
Ethan Mollick: Interestingly, if you look at almost every investment decision by venture capital, they don’t really believe in AGI either, or else can’t really imagine what AGI would mean if they do believe in it.
Epoch Feels the AGI
Epoch creates the GATE model, explaining that if AI is highly useful, it will also get highly used to do a lot of highly useful things, and that would by default escalate quickly. The model is, as all such things are, simplified in important ways, ignoring regulatory friction issues and also the chance we lose control or all die.
My worry is that by ignoring regulatory, legal and social frictions in particular, Epoch has not modeled the questions we should be most interested in, as in what to actually expect if we are not in a takeoff scenario. The paper does explicitly note this.
The default result of their model, excluding the excluded issues, is roughly 30% additional yearly economic growth.
Epoch AI: We developed GATE: a model that shows how AI scaling and automation will impact growth.
It predicts trillion‐dollar infrastructure investments, 30% annual growth, and full automation in decades.
Tweak the parameters—these transformative outcomes are surprisingly hard to avoid.
Imagine if a central bank took AI seriously. They’d build GATE—merging economics with AI scaling laws to show how innovation, automation, and investment interact.
At its core: more compute → more automation → growth → more investment in chips, fabs, etc.
Even when investors are uncertain, GATE predicts explosive economic growth within two decades. Trillions of dollars flow into compute, fabs, and related infrastructure—even before AI generates much value—because investors anticipate massive returns from widespread AI automation.
We’ve created an interactive sandbox so you can explore these dynamics yourself. Test your own assumptions, run different scenarios, and visualize how the economy might evolve as AI automation advances.
GATE has important limitations: no regulatory frictions, no innovation outside AI, and sensitivity to uncertain parameters. We see it as a first-order approximation of AI’s dynamics—try it out to learn how robust its core conclusions are!
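The core loop is simple enough to caricature in a few lines. Here is a deliberately crude toy sketch, with made-up parameters and none of GATE’s actual structure, just to show why ‘more compute → more automation → growth → more investment’ tends toward explosive numbers rather than away from them:

```python
import math

# Toy feedback loop (illustrative parameters only, not the actual GATE model):
# more compute -> higher automated task fraction -> faster output growth ->
# more reinvestment in compute.
def simulate(years=15, compute=1.0, gdp=100.0, invest_share=0.02):
    for year in range(1, years + 1):
        automated = min(1.0, 0.25 * math.log(compute))            # crude "scaling law"
        growth = 0.02 + 0.30 * automated                          # baseline + automation boost
        gdp *= 1 + growth
        invest_share = min(0.10, invest_share * (1 + automated))  # anticipated returns
        compute += invest_share * gdp                             # reinvestment buys compute
        print(f"year {year:2d}: automated {automated:4.0%}, growth {growth:4.0%}")

simulate()
```

Even with the automation boost capped, annual growth in this toy climbs from 2% toward roughly 30% within a decade. The real model has far more structure, but the qualitative pull is the same.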
Charles Foster: Epoch AI posts, for dummies
Epoch’s Ege Erdil and Matthew Barnett argue that most AI value will come from broad automation, not from R&D and scientific progress. That’s not a knock on the value of R&D, it’s saying the broad automation is even more valuable. Broadly automating labor is the name of their game.
One could say ‘wait, the reason you can do broad automation is exactly because of AI R&D, so that still makes R&D super valuable,’ so make of that dilemma what you will. A lot of this likely comes from the conflation of ‘formalized R&D’ as it is currently practiced, and ‘generally figuring things out’ which goes well beyond formalized R&D and will be turbocharged across the board. Yes, only ~20% of productivity gains look like they come from ‘R&D’ but that’s because we’re only counting formal R&D, and plausibly ignoring compounding effects.
Epoch argues against the likelihood of a full-on AI R&D singularity (or full RSI) via the standard arguments and emphasizing the need for a variety of cognitive skills. I don’t find this generally convincing, but they’re making a narrower argument than usual, that AI will first be able to fully automate a large share of the general workforce. I suppose sure, but also that’s going to happen rather quickly in such scenarios anyway?
I was pleased to see Tyler Cowen link to this without derision or claiming that it assumed its conclusions, even though it would have been highly reasonable to make similar objections.
I find very high growth scenarios highly plausible if we assume away loss of control and other existential risks along the way, and assume all of this is pointed in pro-human directions and that capabilities continue to scale.
True Objections to Widespread Rapid Growth
Alas, I worry that the model here does not address most people’s ‘true objections.’
I don’t intend for this to be my definitive version of this, but sketching quickly, the plausibly true objections fall roughly into two categories.
First, there’s the ones that are often driving objections but that aren’t actually objections in this context:
ASI likely kills everyone. Yeah, yeah. True but here we’re assuming it doesn’t.
Disbelief on priors, absurdity heuristic, ‘sounds sci-fi’ or Nothing Ever Happens.
Belief that tech is hype or always involves tons of hype, so this is also hype.
Large growth would have implications I don’t want to think about, so no.
Large growth means nothing matters so I’m going to act as if it won’t happen.
Failure to even feel the AGI.
That’s all understandable, but not especially relevant. It’s a physical question, and it’s of the form of solving for the [Y] in ‘[X] → [Y].’
Second, there’s actual arguments, in various combinations, such as:
AI progress will stall before we reach superintelligence (ASI), because of reasons.
AI won’t be able to solve robotics or act physically, because of reasons.
Partial automation, even 90% or 99%, is very different from 100% (O-ring theory).
Physical bottlenecks and delays prevent growth. Intelligence only goes so far.
Regulatory and social bottlenecks prevent growth this fast, INT only goes so far.
Decreasing marginal value means there literally aren’t goods with which to grow.
Dismissing ability of AI to cause humans to make better decisions.
Dismissing ability of AI to unlock new technologies.
And so on.
One common pattern is that relatively ‘serious people’ who do at least somewhat understand what AI is going to be will put out highly pessimistic estimates and then call those estimates wildly optimistic and bullish. Which, compared to the expectations of most economists or regular people, they are, but that’s not the right standard here.
Dean Ball: For the record: I expect AI to add something like 1.5-2.5% GDP growth per year, on average, for a period of about 20 years that will begin in the late 2020s.
That is *wildly* optimistic and bullish. But I do not believe 10% growth scenarios will come about.
Daniel Kokotajlo: Does that mean you think that even superintelligence (AI better than the best humans at everything, while also being faster and cheaper) couldn’t grow the economy at 10%+ speed? Or do you think that superintelligence by that definition won’t exist?
Dean Ball: the latter. it’s the “everything” that does it. 100% is a really big number. It’s radically bigger than 80%, 95%, or 99%. if bottlenecks persist–and I believe strongly that they will–we will have see baumol issues.
Daniel Kokotajlo: OK, thanks. Can you give some examples of things that AIs will remain worse than the best humans at 20 years from now?
Dean Ball: giving massages, running for president, knowing information about the world that isn’t on the internet, performing shakespeare, tasting food, saying sorry.
Samuel Hammond (responding to DB’s OP): That’s my expectation too, at least into the early 2030s as the last mile of resource and institutional constraints get ironed out. But once we have strong AGI and robotics production at scale, I see no theoretical reason why growth wouldn’t run much faster, a la 10-20% GWP. Not indefinitely, but rapidly to a much higher plateau.
Think of AGI as a step change increase in the Solow-Swan productivity factor A. This pushes out the production possibilities frontier, making even first world economies like a developing country. The marginal product of capital is suddenly much higher, setting off a period of rapid “catch up growth” to the post-AGI balanced growth path with the capital / labor ratio in steady state, signifying Baumol constraints.
Dean Ball: Right—by “AI” I really just meant the software side. Robotics is a totally separate thing, imo. I haven’t thought about the economics of robotics carefully but certainly 10% growth is imaginable, particularly in China where doing stuff is legal-er than in the us.
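For reference, the textbook mechanism Hammond is invoking, in a minimal Solow-Swan sketch (standard equations, nothing specific to his comment):

$$Y = A\,K^{\alpha}L^{1-\alpha}, \qquad \dot K = sY - \delta K, \qquad \frac{\partial Y}{\partial K} = \alpha A \left(\frac{L}{K}\right)^{1-\alpha}$$

A step increase in $A$ raises the marginal product of capital at the existing capital stock, so investment outruns depreciation and $K$ accumulates rapidly; measured growth runs above trend during the transition, then settles onto the new balanced growth path, which is the catch-up burst followed by a higher plateau that Hammond describes.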
Thinking about AI impacts down the line without robotics seems to me like thinking about the steam engine without railroads, or computers without spreadsheets. You can talk about that if you want, but it’s not the question we should be asking. And even then, I expect more – for example I asked Claude about automating 80% of non-physical tasks, and it estimated about 5.5% additional GDP growth per year.
Another way of thinking about Dean Ball’s growth estimate is that in 20 years of having access to this, that would roughly turn Portugal into the Netherlands, or China into Romania. Does that seem plausible?
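The compounding itself is quick to check; the country comparisons then depend on which GDP-per-capita series you use, so treat those as rough:

```python
# Cumulative effect of an extra 1.5-2.5% GDP growth per year over 20 years.
for extra in (0.015, 0.025):
    print(f"+{extra:.1%}/year for 20 years => {(1 + extra) ** 20:.2f}x level effect")
# +1.5% => ~1.35x, +2.5% => ~1.64x, on top of whatever baseline growth delivers
```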
If you make a sufficient number of the pessimistic objections on top of each other, where we stall out before ASI and have widespread diffusion bottlenecks and robotics proves mostly unsolvable without ASI, I suppose you could get to a 2%-a-year scenario. But I certainly wouldn’t call that wildly optimistic.
Distinctly, on the other objections, I will reiterate my position that various forms of ‘intelligence only goes so far’ are almost entirely a Skill Issue, certainly over a decade-long time horizon and at the margins discussed here, amounting to Intelligence Denialism. The ASI cuts through everything. And yes, physical actions take non-zero time, but that’s being taken into account, future automated processes can go remarkably quickly even in the physical realm, and a lot of claims of ‘you can only know [X] by running a physical experiment’ are very wrong, again a Skill Issue.
On the decreasing marginal value of goods, I think this is very much a ‘dreamed of in your philosophy’ issue, or perhaps it is definitional. I very much doubt that the physical limits kick in that close to where we are now, even if in important senses our basic human needs are already being met.
Tying It Back
Altman’s model of how AGI will impact the world is super weird if you take it seriously as a physical model of a future reality.
It’s kind of like there is this thing, ‘intelligence.’ It’s basically fungible, as it asymptotes quickly at close to human level, so it won’t be a differentiator.
There’s only so intelligent a thing can be, either in practice around current tech levels or in absolute terms, it’s not clear which. But it’s not sufficiently beyond us to be that dangerous, or for the resulting world to look that different. There’s risks, things that can go wrong, but they’re basically pedestrian, not that different from past risks. AGI will get released into the world, and ‘no one will care that much’ about the first ‘AGI products.’
I’m not willing to say that something like that is purely physically impossible, or has probability epsilon or zero. But it seems pretty damn unlikely to be how things go. I don’t see why we should expect this fungibility, or for capabilities to stall out exactly there even if they do stall out. And even if that did happen, I would expect things to change quite a lot more.
It’s certainly possible that the first AGI-level product will come out – maybe it’s a new form of Deep Research, let’s say – and initially most people don’t notice or care all that much. People often ignore exponentials until things are upon them, and can pretend things aren’t changing until well past points of no return. People might sense there were boom times and lots of cool toys without understanding what was happening, and perhaps AI capabilities don’t get out of control too quickly.
It still feels like an absurd amount of downplaying, from someone who knows better. And he’s far from alone.
Altman’s model of how AGI will impact the world is super weird if you take it seriously as a physical model of a future reality
My instinctive guess is that these sorts of statements from OpenAI are Blatant Lies intended to lower the AGI labs' profile and ensure there's no widespread social/political panic. There's a narrow balance to maintain, between generating enough hype targeting certain demographics to get billions of dollars in investments from them ("we are going to build and enslave digital gods and take over the world, do you want to invest in us and get a slice of the pie, or miss out and end up part of the pie getting sliced up?") and not generating so much hype of the wrong type that the governments notice and nationalize you ("it's all totally going to be business-as-usual, basically just a souped-up ChatGPT, no paradigm shifts, no redistribution of power, Everything will be Okay").
Sending contradictory messages such that each demographic hears only what they want to hear is a basic tactic for this. The tech investors buy the hype/get the FOMO and invest, the politicians and the laymen dismiss it and do nothing.
They seem to be succeeding at striking the right balance, I think. Hundreds of billions of dollars going into it from the private sector while the governments herp-derp.
certainly possible that the first AGI-level product will come out – maybe it’s a new form of Deep Research, let’s say – and initially most people don’t notice or care all that much
My current baseline expectation is that it won't look like this (unless the AGI labs/the AGI will want to artificially make it look like this). Attaining actual AGI, instead of the current shallow facsimiles, will feel qualitatively different.
For me, with LLMs, there's a palpable sense that they need to be babied and managed and carefully slotted into well-designed templates or everything will fall apart. It won't be like that with an actual AGI; an actual AGI would be exerting optimization pressure from its own end to make things function.
There'll be a palpable feeling of "lucidity" that's currently missing with LLMs. You wouldn't confuse the two if you had their chat windows open side by side, and the transformative effects will be ~instant.
Your list of "actual arguments" against explosive growth seems to be missing the one that is by far the most important/convincing IMO, namely Baumol effects.
This argument has been repeatedly brought up by growth economists in earlier rounds from the AI-explosive-growth debate. So rather than writing my own version of this argument, I'll just paste some quotes below.
As far as I can tell, the phenomenon discussed in these quotes is excluded by construction from the GATE model: while it draws a distinction between different "tasks" on the production side, its model of consumption effectively has only one "consumable good" which all these tasks produce (or equivalently, multiple goods which are all perfect substitutes for one another).
In other words, it stipulates what Vollrath (in the first quote below) calls "[the] truly unbelievable assumption that [AI] can innovate *precisely* equally across every product in existence." Of course, if you do assume this "truly unbelievable" thing, then you don't get Baumol effects – but this would be a striking difference from what has happened in every historical automation wave, and also just sort of prima facie bizarre.
Sure, maybe AI will be different in a way that turns off Baumol effects, for some reason or other. But if that is the claim, then an argument needs to be made for that specific claim, and why it will hold for AI when it hasn't for anything else before. It can't be justified as a mere "modeling simplification," because the same "simplification" would have led you to wrongly expect similar explosive growth from past agricultural automation, from Moore's Law, etc.
History suggests that people tend to view many goods and services as complements. Yes, within specific sub-groups (e.g. shoes) different versions are close substitutes, but across those groups (e.g. shoes and live concerts) people treat them as complements and would like to consume some of both.
What does that do to the predictions of explosive growth? It suggests that it may “eat itself”. AI or whatever will deliver productivity growth to some products faster than others, barring a truly unbelievable assumption that it can innovate *precisely* equally across every product in existence. When productivity grows more rapidly in product A than in product B (50% versus 10%, say), the relative price of product A falls relative to product B. Taking A and B as complements, what happens to the total expenditure on A (price times quantity)? It falls. We can get all the A we want for very cheap, and because we like both A and B, we have a limit on how much A we want. So total spending on A falls.
But growth in aggregate productivity (and in GWP, leaving aside my comments on inputs above) is a weighted average of productivity growth in all products. The weights are the expenditure shares. So in the A/B example, as A gets more and more productive relative to B, the productivity growth rate *falls* towards the 10% of product B. In general, the growth rate of productivity is going to get driven towards the *lowest* productivity growth rate across the range of products we consume.
And the faster that productivity grows in product A, the sooner the aggregate growth rate will fall to the productivity growth rate of B. So a massive question for this report is how widespread explosive growth is expected to be. Productivity growth in *all* products of 10% forever would deliver 10% growth in productivity forever (and perhaps in GWP). Great. But productivity growth of 100% in A and 0% in B will devolve into productivity growth of 0% over time.
This has nothing to do with the nature of R&D or the knife-edge conditions on growth models. This is simply about the nature of demand for products.
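To see the mechanism numerically, here is a toy version of the A/B example, assuming perfect complements and illustrative growth rates (a sketch, not something taken from Vollrath's review):

```python
# Two goods consumed in a fixed 1:1 ratio (perfect complements), produced by
# one unit of labor split between sectors. Sector A's productivity grows 50%
# per year, sector B's 10% per year.
prod_A, prod_B = 1.0, 1.0
g_A, g_B = 0.50, 0.10

for year in range(1, 31):
    prod_A *= 1 + g_A
    prod_B *= 1 + g_B
    labor_A, labor_B = 1 / prod_A, 1 / prod_B          # labor per unit of the bundle
    share_A = labor_A / (labor_A + labor_B)             # A's expenditure share
    agg_growth = share_A * g_A + (1 - share_A) * g_B    # share-weighted aggregate growth
    if year % 5 == 0:
        print(f"year {year:2d}: A's expenditure share {share_A:5.1%}, "
              f"aggregate productivity growth {agg_growth:5.1%}")
# As A's expenditure share collapses, aggregate growth falls from ~30% (equal
# shares) toward B's 10%, which is the Baumol dynamic described above.
```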
From Ben Jones' review of the same Davidson 2021 report:
[W]e have successfully automated an amazing amount of agricultural production (in advanced economies) since the 19th century. One fact I like: In 2018, a farmer using a single combine harvester in Illinois set a record by harvesting 3.5 million pounds of corn in just 12 hours. That is really amazing. But the result is that corn is far cheaper than it used to be, and the GDP implications are modest. As productivity advances and prices fall, these amazing technologies tend to become rounding errors in GDP and labor productivity overall. Indeed, agricultural output used to be about half of all GDP but now it is down to just a couple percent of GDP. The things you get good at tend to disappear as their prices plummet. Another example is Moore’s Law. The progress here is even more mind-boggling – with growth rates in calculations per unit of resource cost going up by over 30% per year. But the price of calculations has plummeted in response. Meanwhile, very many things that we want but don’t make rapid progress in – generating electricity; traveling across town; extracting resources from mines; fixing a broken window; fixing a broken limb; vacation services – see sustained high prices and come to take over the economy. In fact, despite the amazing results of Moore’s Law and all the quite general-purpose advances it enables – from the Internet, to smartphones, to machine learning – the productivity growth in the U.S. economy if anything appears to be slowing down.
There are two ways to "spend" an increase in productivity driven by new ideas. You can use it to produce more goods and services given the same amount of inputs as before, or you can use it to reduce the inputs used while producing the same goods and services as before. If we presume that AI can generate explosive growth in ideas, a very real choice people might make is to "spend" it on an explosive decline in input use rather than an explosive increase in GDP.
Let's say AI becomes capable of micro-managing agricultural land. There is already a "laser-weeder" capable of rolling over a field and using AI to identify weeds and then kill them off with a quick laser strike. Let's say AI raises agricultural productivity by a factor of 10 (even given all the negative feedback loops mentioned above). What's the response to this? Do we continue to use the same amount of agricultural land as before (and all the other associated resources) and increase food production by a factor of 10? Or do we take advantage of this to shrink the amount of land used for agriculture by a factor of 10? If you choose the latter - which is entirely reasonable given that worldwide we produce enough food to feed everyone - then there is no explosive growth in agricultural output. There isn't any growth in agricultural output. We've taken the AI-generated idea and generated exactly zero economic growth, but reduced our land use by around 90%.
Which is amazing! This kind of productivity improvement would be a massive environmental success. But ideas don't have to translate into economic growth to be amazing. More important, amazing-ness does not necessarily lead to economic growth.
In general I find the AI explosive growth debate pretty confusing and frustrating, for reasons related to what Vollrath says about "amazing-ness" in that last quote.
Often (and for instance, in this post), the debate gets treated as indirect "shadowboxing" about the plausibility of various future AI capabilities, or about the degree of "transformation" AI will bring to the future economy – if you doubt explosive growth you are probably not really "feeling the AGI," etc.
But if we really want to talk about those things, we should just talk about them directly. "Will there be explosive growth?" is a poor proxy for "will AI dramatically transform the world economy?", and things get very muddled when we talk about the former and then read into this talk to guess what someone really thinks about the latter.
Maybe AI will be so transformative that "the economy" and "economic growth" won't even exist in any sense we would now recognize. Maybe it attains capabilities that could sustain explosive growth if there were consumers around to hold up the demand side of that bargain, but it turns out that humans just can't meaningfully "consume" at 100x (or 1000x or whatever) of current levels, at some point there's only 24h in a day, and only so much your mind can attend to at once, etc. Or maybe there is explosive growth, but it involves "synthetic demand" by AIs for AI-produced goods in a parallel economy humans don't much care about, and we face the continual nuisance of filtering that stuff out of GDP so that GDP still tracks anything meaningful to us.
Or something else entirely, who knows! What we care about is the actual content of the economic transformation – the specific "amazing" things that will happen, in Vollrath's terms. We should argue over those, and only derive the answer to "will there be explosive growth?" as a secondary consequence.
The list doesn't exclude Baumol effects as these are just the implication of:
Physical bottlenecks and delays prevent growth. Intelligence only goes so far.
Regulatory and social bottlenecks prevent growth this fast, INT only goes so far.
Like Baumol effects are just some area of the economy with more limited growth bottlenecking the rest of the economy. So, we might as well just directly name the bottleneck.
Your argument seems to imply you think there might be some other bottleneck like:
There will be some cognitive labor sector of the economy which AIs can't do.
But, this is just a special case of "will there be superintelligence which exceeds human cognitive performance in all domains".
In other words, it stipulates what Vollrath (in the first quote below) calls "[the] truly unbelievable assumption that [AI] can innovate precisely equally across every product in existence." Of course, if you do assume this "truly unbelievable" thing, then you don't get Baumol effects – but this would be a striking difference from what has happened in every historical automation wave, and also just sort of prima facie bizarre.
Huh? It doesn't require equal innovation across all products, it just requires that the bottlenecking sectors have sufficiently high innovation/growth that the overall economy can grow. Sufficient innovation in all potentially bottlenecking sectors != equal innovation.
Suppose world population was 100,000x higher, but these additional people magically didn't consume anything or need office space. I think this would result in very fast economic growth due to advancing all sectors simultaneously. Imagining population growth increases seems to me to set a lower bound on the implications of highly advanced AI (and robotics).
As far as I can tell, this Baumol effect argument is equally good at predicting that 3% or 10% growth rates are impossible from the perspective of people in agricultural societies with much lower growth rates.
So, I think you have to be quantitative and argue about the exact scale of the bottleneck and why it will prevent some rate of progress. The true physical limits (doubling time on the order of days or less, dyson sphere or even consuming solar mass faster than this) are extremely high, so this can't be the bottleneck - it must be something about the rate of innovation or physical capital accumulation leading up to true limits.
Perhaps your view is: "Sure, we'll quickly have a Dyson sphere and ungodly amounts of compute, but this won't really result in explosive GDP growth as GDP will be limited by sectors that directly interface with humans like education (presumably for fun?) or services where the limits are much lower." But, this isn't a crux for the vast majority of arguments which depend on the potential for explosive growth!
I second the general point that GDP growth is a funny metric … it seems possible (as far as I know) for a society to invent every possible technology, transform the world into a wild sci-fi land beyond recognition or comprehension each month, etc., without quote-unquote “GDP growth” actually being all that high — cf. What Do GDP Growth Curves Really Mean? and follow-up Some Unorthodox Ways To Achieve High GDP Growth with (conversely) a toy example of sustained quote-unquote “GDP growth” in a static economy.
This is annoying to me, because, there’s a massive substantive worldview difference between people who expect, y’know, the thing where the world transforms into a wild sci-fi land beyond recognition or comprehension each month, or whatever, versus the people who are expecting something akin to past technologies like railroads or e-commerce. I really want to talk about that huge worldview difference, in a way that people won’t misunderstand. Saying “>100%/year GDP growth” is a nice way to do that … so it’s annoying that this might be technically incorrect (as far as I know). I don’t have an equally catchy and clear alternative.
(Hmm, I once saw someone (maybe Paul Christiano?) saying “1% of Earth’s land area will be covered with solar cells in X number of years”, or something like that. But that failed to communicate in an interesting way: the person he was talking to treated the claim as so absurd that he must have messed up by misplacing a decimal point :-P ) (Will MacAskill has been trying “century in a decade”, which I think works in some ways but gives the wrong impression in other ways.)
What I would really like to see is cost of living plummet to 0. Then cost of thriving plummet to 0. Which would also cause GDP to plummet. However, this is only a problem in practical terms if the forces of automation require money to keep running, rather than, say, a benevolent ASI taking care of humanity as a personal hobby.
One way or another, though, AGI is going to have an impact on this world of a magnitude equivalent to something like a 30% growth in GWP per year at least. This includes all life getting wiped out, of course.
Maybe we need a standard metric for the rate of unrecognizability/incomprehensibility of the world and talk about how AGI will accelerate this. Like how much a person accustomed to life in 1500 would have to adjust to fit in to the world of 2000. A standard shock level (SSL), if you will.
The shock level of 2000 relative to 1500 may end up describing the shock level of 2040 relative to 2020, assuming AGI has saturated the global economy by then. The time it takes for the world to become unrecognizable (again and again) will shrink over time as intelligence grows, whether manifested as GDP growth, GDP collapse, or paperclipping. If ordinary people understood that at least, you might get more push for investment into alignment research or for stricter regulations.
which can produce numbers like 30% yearly economic growth. Epoch feels the AGI.
Ironic. My understanding is that Epoch's model substantially weakens/downplays the effects of AI over the next decade or two. Too busy now but here's a quote from their FAQ:
The main focus of GATE is on the dynamics in the leadup towards full automation, and it is likely to make poor predictions about what happens close to and after full automation. For example, in the model the primary value of training compute is in increasing the fraction of automated tasks, so once full automation is reached the compute dedicated to training falls to zero. However, in reality there may be economically valuable tasks that go beyond those that humans are able to perform, and for which training compute may continue to be useful.
(I love Epoch, I think their work is great, I'm glad they are doing it.)
I don't really get the point in releasing a report that explicitly assumes x-risk doesn't happen. Seems to me that x-risk is the only outcome worth thinking about given the current state of the AI safety field (i.e. given how little funding goes to x-risk). Extinction is so catastrophically worse than any other outcome* that more "normal" problems aren't worth spending time on.
I don't mean this as a strong criticism of Epoch, more that I just don't understand their worldview at all.
*except S-risks but Epoch isn't doing anything related to those AFAIK
Working through a model of the future in a better-understood hypothetical refines gears applicable outside the hypothetical. Exploratory engineering for example is about designing machines that can't be currently built in practice and often never will be worthwhile to build as designed. It still gives a sense of what's possible.
(Attributing value to steps of a useful activity is not always practical. Research is like that, very useful that it's happening overall, but individual efforts are hard to judge, and so acting on attempts to judge them risks goodhart curse.)
Many who feel the AGI are not feeling Drosophila, an intermediate level of impact between AGI and superintelligence. Biomass doubling time of 1-3 days is way more than 30% yearly growth. This kind of shortcut to scaling physical infrastructure (robots, fusion plants) doesn't merely continue the trend of human industry supercharged with AI automation and speed advantage.
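A quick scale check on that comparison, using nothing but the numbers in the comment above:

```python
import math

# Doubling every 1-3 days versus 30% yearly growth.
for doubling_days in (1, 3):
    orders_of_magnitude = (365 / doubling_days) * math.log10(2)
    print(f"{doubling_days}-day doubling time: ~10^{orders_of_magnitude:.0f}x per year")
print("30%/year growth: 1.3x per year")
```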
It’s kind of like there is this thing, ‘intelligence.’ It’s basically fungible, as it asymptotes quickly at close to human level, so it won’t be a differentiator.
I don't think he ever suggests this. Though he does suggest we'll be in a pretty slow takeoff world.
I enjoy reading your posts, but I skip over the 300-word blocks of text like the following. Without new paragraphs or white space, it's too dense for me to want to read them.
Thinking about AI impacts down the line without robotics seems to me like thinking about the steam engine without railroads, or computers without spreadsheets. You can talk about that if you want, but it’s not the question we should be asking. And even then, I expect more – for example I asked Claude about automating 80% of non-physical tasks, and it estimated about 5.5% additional GDP growth per year. Another way of thinking about Dean Ball’s growth estimate is that in 20 years of having access to this, that would roughly turn Portugal into the Netherlands, or China into Russia. Does that seem plausible? If you make a sufficient number of the pessimistic objections on top of each other, where we stall out before ASI and have widespread diffusion bottlenecks and robotics proves mostly unsolvable without ASI, I suppose you could get to 2% a year scenario. But I certainly wouldn’t call that wildly optimistic. I will reiterate my position that various forms of ‘intelligence only goes so far’ are almost entirely a Skill Issue, certainly over a decade-long time horizon and at the margins discussed here, amounting to Intelligence Denialism. The ASI cuts through everything. And yes, physical actions take non-zero time, but that’s being taken into account, future automated processes can go remarkably quickly even in the physical realm, and a lot of claims of ‘you can only know [X] by running a physical experiment’ are very wrong, again a Skill Issue. On the decreasing marginal value of goods, I think this is very much a ‘dreamed of in your philosophy’ issue, or perhaps it is definitional. I very much doubt that the physical limits kick in that close to where we are now, even if in important senses our basic human needs are already being met.
The G7 (where we expect to see automation first) is only about 30% of the GWP. So 100 extra basis points of global product from AI, if concentrated in those countries, looks like 333 basis points of growth in those countries. Remember Amdahl's law of slowdowns. If we expect the growth to only be in one breakout country, for that country it looks like 400 or 500 basis points in the 1% extra scenario, and the expected 1000 or more basis points in the pessimistic 2.5 percent delta. With the existing 200 basis points of baseline growth.
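The arithmetic being used here, as a sketch (the GWP shares are the rough figures assumed above, not precise data):

```python
# Local growth boost = global boost / share of gross world product that
# actually captures it.
def local_boost_bps(global_boost_bps, share_of_gwp):
    return global_boost_bps / share_of_gwp

print(local_boost_bps(100, 0.30))   # ~333 bps if spread across the G7 (~30% of GWP)
print(local_boost_bps(100, 0.25))   # ~400 bps if concentrated in one large economy
print(local_boost_bps(250, 0.25))   # ~1000 bps in the 2.5-point-delta scenario
```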
The poorest countries don't have much access to capital, which means that changes in means of production (which always involve switching costs and thus capital) are really hard.
I think there is a transition dynamic about whether applications/productivity are measured in the fastest-growing economies or globally.
It really takes a long time for new means of production to find their way to the darker corners of the global economy. If that does not happen, we expect to see lower growth (since growth is limited by the tightest constraint, and even big companies have some small or globally disadvantaged suppliers).
One of the fundamental shifts that still seems missing in the thinking of Altman, Thompson, and many others discussing AGI is the shift from technological thinking to civilizational thinking.
They're reasoning in the paradigm of "products" — something that can diffuse, commoditize, slot into platform dynamics, maybe with some monetization tricks. Like smartphones or transistors. But AGI is not a product. It's the point after which the game itself changes.
By definition, AGI brings general-purpose cognitive ability. That makes the usual strategic questions — like "what’s more valuable, the model or the user base?" — feel almost beside the point. The higher-order question becomes: who sets the rules of the game?
This is not a shift in tools; it’s a shift in the structure of goals, norms, and meaning.
If you don’t feel the AGI — maybe it’s because you’re not yet thinking at the right level of abstraction.
A lot of this likely comes from the conflation of ‘formalized R&D’ as it is currently practiced, and ‘generally figuring things out’ which goes well beyond formalized R&D and will be turbocharged across the board. Yes, only ~20% of productivity gains look like they come from ‘R&D’ but that’s because we’re only counting formal R&D, and plausibly ignoring compounding effects.
No, the way they model R&D is meant to be quite general, just any dedication of resources toward improving software or hardware. They abstract away details by measuring that "dedication of resources" in real dollars, but you should think of that as representing researcher time, compute resources devoted to improvements, etc. And compounding is built-in both indirectly via the fact that improvements in software and hardware increase the resources available to invest and directly via the \phi_S and \phi_H parameters.
I haven't yet dug into the ~20% result---decomposition can be complicated---but yours is not an accurate explanation of it.
Nevermind. I'm the inaccurate one here. What I said is true of the GATE model, but I now see that your paragraph was about a separate piece of Epoch commentary that was not based on the GATE model. And that separate piece definitely is talking specifically about formal R&D.
It's a separate question whether the Epoch commentary is accurately representing the papers it is citing---and whether your response applies---but I haven't delved into that.