I wrote a post on my Substack attempting to compile all of the best arguments against AI as an existential threat.
Some of the arguments I discuss include: international game-theoretic dynamics, reference class problems, Knightian uncertainty, disagreement between superforecasters and domain experts, the issue with long-winded arguments, and more!
Please tell me why I'm wrong, and if you like the article, subscribe and share it with friends!
+1 to one of the things Charlie said in his comment, but I’d go even further:
The proposition “The current neural architecture paradigm can scale up to Artificial General Intelligence (AGI) (especially without great breakthroughs)” is not only unnecessary for the proposition “AI is an extinction threat” to be true, it’s not even clear that it’s evidence for the proposition “AI is an extinction threat”! One could make a decent case that it’s evidence against “AI is an extinction threat”! That argument would look like: “we’re gonna make AGI sooner or later, and LLMs are less dangerous than alternative AI algorithms for the following reasons …”.
As an example, Yann LeCun thinks AGI will come from a different algorithm, not LLMs, and here’s my argument that the AGI algorithm LeCun expects is actually super-dangerous. (LeCun prefers a different term to “AGI”, but he’s talking about the same thing.)
I’m trying to figure out where you were coming from when you brought up “The current neural architecture paradigm can scale up to Artificial General Intelligence (AGI) (especially without great breakthroughs)” as a necessary part of the argument.
One possibility is, you’re actually interested in the question of whether transformer-architecture self-supervised (etc.) AI is an extinction threat or not. If so, that’s a weirdly specific question, right? If it’s not an extinction threat, but a different AI algorithm is, that would sure be worth mentioning, right? But fine. If you’re interested in that narrow question, then I think your post should have been titled “against transformer-architecture self-supervised (etc.) AI as an extinction threat”, right? Related: my post here.
Another possibility is, you think that the only two options are: either (1) the current paradigm scales to AGI, or (2) AGI is impossible or centuries away. If so, I don’t know why you would think that. For example, Yann LeCun and François Chollet are both skeptical of LLMs, but they both separately think AGI (based on a non-LLM algorithm) is pretty likely in the next 20 years (source for Chollet). I’m more or less in that camp too. See also my brief comment here.