This narrative (on timing) promotes building $150bn training systems in 2026-2027. AGI is nigh, therefore it makes sense to build them. If they aren't getting built, that might be why AGI hasn't arrived yet, so build them already (or so the narrative implies).
Nobody actually knows that this last step of scaling is just enough to be relevant. But this step of scaling seems to be beyond what happens by default, so a last push to get it done might be necessary, and the step after it won't be achievable with mere narrative. While funding keeps scaling, the probability of triggering an intelligence explosion is higher; once funding stops scaling, the probability (per year) goes down (if intelligence hasn't exploded by then). In this sense the narrative has a point.
Thanks for linking these! I also want to highlight that Sam shared his AGI timeline in the Bloomberg interview: "I think AGI will probably get developed during this president’s term, and getting that right seems really important."
Worth keeping in mind that OpenAI is burning through crazy amounts of money and is constantly in need of more:
OpenAI raised $6.6 billion [in October], the largest venture capital round in history. But it plans to lose $5 billion [in 2024] alone. And by 2026, it could be losing $14 billion per year. That's head-exploding territory. If OpenAI keeps burning money at this rate, it will have to raise another round soon. Perhaps as early as 2025.
As a result, Altman has a significant financial incentive to believe/say that OpenAI is on the verge of a breakthrough and that it's worth it for their investors to continue giving them money.
I think the title greatly undersells the importance of these statements/beliefs. (I would've preferred either part of your quote or a call to action.)
I'm glad that Sam is putting in writing what many people talk about. People should read these statements and take them seriously.
More context in this Bloomberg piece.