Anthropic is raising even more funds and the pitch deck seems scary. Some choice quotes from the article:
"These models could begin to automate large portions of the economy," the pitch deck reads. "We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles."
This frontier model could be used to build virtual assistants that can answer emails, perform research and generate art, books and more, some of which we have already gotten a taste of with the likes of GPT-4 and other large language models.
Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the biggest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation; Anthropic implies (in the deck) it relies on clusters with “tens of thousands of GPUs.”
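For a rough sense of scale, here is a back-of-envelope sketch of what 10^25 FLOPs could mean in wall-clock time. All the hardware numbers below (cluster size, per-GPU throughput, utilization) are my own illustrative assumptions; the deck only says "tens of thousands of GPUs".

```python
# Back-of-envelope: how long 10^25 FLOPs might take on a large GPU cluster.
# All hardware numbers are illustrative assumptions, not figures from the
# pitch deck (which only says "tens of thousands of GPUs").

TOTAL_FLOPS = 1e25           # training compute figure from the article
NUM_GPUS = 20_000            # assumed cluster size
PEAK_FLOPS_PER_GPU = 3e14    # ~300 TFLOP/s, roughly A100-class BF16 peak (assumption)
UTILIZATION = 0.4            # assumed fraction of peak sustained during training

effective_throughput = NUM_GPUS * PEAK_FLOPS_PER_GPU * UTILIZATION  # FLOP/s
days = TOTAL_FLOPS / effective_throughput / 86_400

print(f"~{days:.0f} days")   # ~48 days under these assumptions
```

Under those assumptions the run takes on the order of weeks to a couple of months; change any assumption by 2-3x and the answer moves accordingly.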
It is easy to understand why such news could push P(doom) even higher for people with a high P(doom) prior.
But I am curious about the following question: suppose an oracle had told us, before the announcement, that P(doom) was 25%. (Assume the oracle could not know which strategy Anthropic would choose; it was inherently unpredictable due to quantum effects or whatever.)
Would the announcement still increase P(doom)?
What if the oracle had said P(doom) was 5%?
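For what it's worth, the direction of the update here is just a Bayes-rule question: it depends on how much more likely the announcement is in doom-worlds than in non-doom-worlds, not on the prior alone. A minimal sketch with made-up likelihoods (the 0.6 and 0.3 below are pure placeholders):

```python
# Minimal Bayes-rule sketch for the oracle thought experiment.
# The likelihoods are made-up placeholders; the point is only that the
# direction of the update depends on the likelihood ratio
# P(announcement | doom) / P(announcement | no doom).

def posterior(prior: float, p_ann_given_doom: float, p_ann_given_no_doom: float) -> float:
    """P(doom | announcement) via Bayes' rule."""
    p_ann = p_ann_given_doom * prior + p_ann_given_no_doom * (1 - prior)
    return p_ann_given_doom * prior / p_ann

for prior in (0.25, 0.05):
    # assume (arbitrarily) the announcement is twice as likely in doom-worlds
    print(prior, "->", round(posterior(prior, 0.6, 0.3), 3))
# 0.25 -> 0.4
# 0.05 -> 0.095
```

With the same likelihood ratio, both the 25% and 5% priors move up, just by different absolute amounts; and if you think the announcement was equally likely in both worlds, neither prior moves at all.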
I am not trying to make any specific point; I am just interested in what people think.