I wrote a post on my Substack attempting to compile all of the best arguments against AI as an existential threat.
Some arguments I discuss include: international game-theory dynamics, reference class problems, Knightian uncertainty, superforecaster and domain-expert disagreement, the issue with long-winded arguments, and more!
Please tell me why I'm wrong, and if you like the article, subscribe and share it with friends!
"Long-winded arguments tend to fail" is a daring section title in your 5,000 word essay :P
In general, I think the genre "collect all the arguments I can find on only one side of a controversial topic" is bound to lead to lower-quality inclusions, and that section is probably among them. I prefer the genre "collect the best models I can find of a controversial topic and try to weigh them."
Why is "The argument for AI risk has a lot of necessary pieces, therefore it's unlikely" a bad argument?
Well, if that's the game, you've already found the structure: just frame the argument for non-doom in terms of a lot of conjunctions (you have to build AI that we understand how to give inputs to, and also you have to solve various coordination and politics problems, and also you have to be confident that it won't have bugs, and also you have to solve various philosophical problems about moral progress, etc.), make lots of independence assumptions, and multiply. The non-doom conclusion then comes out looking just as unlikely. An argument schema that works equally well against both sides of a question can't be much evidence for either.
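To make that symmetry concrete, here's a toy calculation (all conjuncts and probabilities are invented for illustration, not taken from the post): carve either side into "necessary" steps, assume independence, and multiply, and both conclusions come out looking unlikely, which they can't both be if the two framings are exhaustive.

```python
from math import prod

# Toy illustration: any position looks improbable if you carve it into
# enough "necessary" steps and multiply, treating each as independent.
# All probabilities below are made up for the sake of the example.

# Hypothetical conjuncts for the doom argument.
doom_steps = {
    "AGI is built this century": 0.8,
    "it is agentic and goal-directed": 0.7,
    "alignment is not solved in time": 0.6,
    "it gains a decisive advantage": 0.5,
    "it uses that advantage catastrophically": 0.5,
}

# Hypothetical conjuncts for the non-doom argument, carved the same way.
non_doom_steps = {
    "we learn to specify goals safely": 0.6,
    "coordination and politics problems get solved": 0.5,
    "deployed systems are confidently bug-free": 0.5,
    "open philosophical problems about moral progress get resolved": 0.6,
}

p_doom = prod(doom_steps.values())          # 0.8*0.7*0.6*0.5*0.5 ~= 0.084
p_non_doom = prod(non_doom_steps.values())  # 0.6*0.5*0.5*0.6     ~= 0.090

print(f"P(doom conjunction)     = {p_doom:.3f}")
print(f"P(non-doom conjunction) = {p_non_doom:.3f}")
print(f"Sum = {p_doom + p_non_doom:.3f}")   # far below 1
```

The tell is that the two supposedly exhaustive framings sum to well under 1: the necessity and independence assumptions are doing the work, not the evidence.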