The "if the Superintelligence were near" fallacy
People will say:

* "If the Superintelligence were near, OpenAI wouldn't be selling ads."
* "If the Superintelligence were near, OpenAI wouldn't be adding adult content to ChatGPT."
* "If the Superintelligence were near, OpenAI wouldn't be taking ecommerce referral fees."
* "If the Superintelligence were near and about to automate software development, Anthropic wouldn't have a dozen open roles for software developers."
* "If the Superintelligence were near, OpenAI wouldn't be trying to take a cut of scientific innovations created with OpenAI models."
* "If the Superintelligence were near, OpenAI employees wouldn't be selling OpenAI equity on the secondary market."
* "If the Superintelligence were near, OpenAI wouldn't be doing acquisitions such as io, Roi, Torch, Sky, and Neptune."
* "If the Superintelligence were near, OpenAI wouldn't be spending compute on Studio Ghibli images or the Sora app."
* "If the Superintelligence were near, Anthropic wouldn't be rumored to have hired lawyers for a 2026 IPO."
* "If the Superintelligence were near, Google wouldn't be selling and renting TPUs to Anthropic."
* "If the Superintelligence were near, Trump would know it, and he wouldn't allow H200 sales to China."
* "If the Superintelligence were near, Ilya wouldn't have left OpenAI to create his own underfunded AI Lab."
* "If the Superintelligence were near, Mira Murati and John Schulman wouldn't have left OpenAI to create their own underfunded AI Lab."
* "If the Superintelligence were near, Anthropic wouldn't be cheap and would let us use a Claude Max subscription inside of OpenCode."

I will keep updating the list above over time.

I believe the public has been using very bad heuristics to decide how much they should care about the field of artificial intelligence. The goal of this essay is to explain why holding a world model of imminent Superintelligence isn't in conflict with the way the Labs behave. The audience I expect to read this text are Les
Someone, somewhere, should put in the effort to answer the following question: is the cost of AI tasks increasing exponentially, such that at some point an LLM coding at a certain time horizon will be more expensive than a human?
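To make the question concrete, here is a minimal sketch of the comparison it implies, using entirely hypothetical numbers (the hourly rate, base cost, and growth factor below are assumptions I made up for illustration, not measurements): human cost scales roughly linearly with a task's time horizon, so if LLM cost per task grows faster than linearly with that horizon, there is a crossover point where the human becomes cheaper.

```python
import math

# Hypothetical parameters, purely for illustration:
HUMAN_HOURLY_RATE = 100.0   # assumed fully loaded cost of a human developer, $/hour
LLM_BASE_COST = 0.50        # assumed cost of a task with a 1-hour horizon, $
COST_PER_DOUBLING = 3.0     # assumed multiplier on LLM cost each time the horizon doubles


def llm_cost(horizon_hours: float) -> float:
    """LLM cost grows geometrically with each doubling of the task's time horizon."""
    doublings = max(math.log2(horizon_hours), 0.0)
    return LLM_BASE_COST * COST_PER_DOUBLING ** doublings


def human_cost(horizon_hours: float) -> float:
    """Human cost is modeled as linear in hours worked."""
    return HUMAN_HOURLY_RATE * horizon_hours


# Print both costs across a range of horizons to find the crossover.
for horizon in (1, 8, 40, 160, 2_000, 10_000):
    print(f"{horizon:>6} h task: LLM ${llm_cost(horizon):>12,.0f}  vs  human ${human_cost(horizon):>12,.0f}")
```

With these made-up numbers the crossover only appears at horizons of several thousand hours; with a smaller growth factor it never appears at all. The point of the sketch is only that the answer hinges on how the per-task cost actually scales with horizon, which is exactly the empirical question someone should answer.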