On balance, I support banning/obstructing datacenter buildout. That said, I'm not actually sure whether doing so would affect the omnicide risk positively or negatively.
I don't think the LLM paradigm is AGI-complete. I'm not certain of that, but I think it's more likely than not. And in worlds in which it isn't AGI-complete, the labs' obsession with LLMs is very helpful. They're wasting macroeconomic amounts of money on the paradigm, pouring most of our generation's AI-researcher talent into it, and drawing funding away from other AGI approaches. If LLMs then fail to usher in the Singularity, if it does turn out to be a bubble that pops (e.g., in 2030-2032, when the ability to aggressively scale compute is expected to run out), this should cause another AI winter. AGI would become a decidedly unsexy thing to work on once more, in both industry and, probably, academia.
What would obstructing the LLM paradigm (via various compute limits) do? Well, it may cause the AGI megacorps to start looking into other directions now, while they still have massive amounts of unspent manpower and capital, which may lead to someone "succeeding" sooner.[1]
Perhaps one shouldn't interrupt their enemy while they're making a mistake.
Or perhaps that's not how the story goes. Perhaps: LLMs are a strong signal that AI is a massively powerful technology, a signal legible to many more people than theoretical arguments are. They're attracting macroeconomic amounts of funding to AI, funneling a large fraction of our generation's talent towards working on it, and fueling inter-company and geopolitical AI-race dynamics. And while they cause other AGI approaches to be relatively neglected, they draw so much more absolute attention to the AI industry that the amount of funding/talent going into non-LLM AGI routes is still much greater than in the no-LLMs counterfactual. On top of that, even if LLMs aren't AGI-complete in themselves, they may still be useful enough to speed up SWE/research a lot, accelerating progress towards AGI along those other routes anyway.