Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks
Since artificial superintelligence has never existed, claims that it poses a serious risk of global catastrophe can be easy to dismiss as fearmongering. Yet many of the specific worries about such systems are not free-floating fantasies but extensions of patterns we already see. This essay examines thirteen distinct ways artificial superintelligence could go wrong and, for each, pairs the abstract failure mode with concrete precedents where a similar pattern has already caused serious harm. By assembling a broad cross-domain catalog of such precedents, I aim to show that concerns about artificial superintelligence track recurring failure modes in our world. This essay is also an experiment in writing with extensive assistance from artificial intelligence, producing work I couldn’t have written without it. That a current system can help articulate a case for the catastrophic potential of its own lineage is itself a significant fact; we have already left the realm of speculative fiction and begun to build the very agents that constitute the risk. On a personal note, this collaboration with artificial intelligence is part of my effort to rebuild the intellectual life that my stroke disrupted and hopefully push it beyond where it stood before. Section 1: Power Asymmetry and Takeover Artificial superintelligence poses a significant risk of catastrophe in part because an agent that first attains a decisive cognitive and strategic edge can render formal checks and balances practically irrelevant, allowing unilateral choices that the rest of humanity cannot meaningfully contest. When a significantly smarter and better organized agent enters a domain, it typically rebuilds the environment to suit its own ends. The new arrival locks in a system that the less capable original agents cannot undo. History often shows that the stronger party dictates the future while the weaker party effectively loses all agency. The primary risk of artificial superintelligence is that we
Agreed, the article would have been stronger if it included successful defenses.