Taxonomy of AI-risk counterarguments
Partly inspired by The Crux List, the following is a non-comprehensive taxonomy of positions which imply that we should not be worried about existential risk from artificial superintelligence. Each position is intended, on its own, to be a refutation of AI X-risk concerns as a whole. The positions are mostly structured as specific points of departure from the standard AI X-risk argument, taking the other areas as given; this may result in skipping over positions with multiple complex dependencies. Some positions are given made-up labels, including each of the top-level categories: "Fizzlers", "How-skeptics", "Why-skeptics", "Solvabilists", and "Anthropociders".

(Disclaimer: I am not an expert on the topic. Apologies for any mistakes or major omissions.)

Taxonomy

1. "Fizzlers": Artificial superintelligence is not happening.
   1. AI surpassing human intelligence is fundamentally impossible (or at least practically impossible).
      1. True intelligence can only be achieved in biological systems, or at least in systems completely different from computers.
         1. Biological intelligences rely on special quantum effects, which computers cannot replicate.
         2. Dualism: The mental and the physical are fundamentally distinct, and non-mental physical constructions cannot create mental processes.
         3. Intelligence results from complex, dynamic systems of a kind which cannot be modeled mathematically by computers.
      2. Mysterianists: A particular key element of human thinking, such as creativity, common sense, consciousness, or conceptualization, is so far beyond our ability to understand that we will not be able to create an AI that achieves it. Without this element, superintelligence is impossible.
      3. Intelligence isn't a coherent or meaningful concept. Capability gains do not generalize.
      4. There is a fundamental ceiling on intelligence, and it is around where humans are.
   2. "When-skeptics": ASI is very, very far away.