Since artificial superintelligence has never existed, claims that it poses a serious risk of global catastrophe can be easy to dismiss as fearmongering. Yet many of the specific worries about such systems are not free-floating fantasies but extensions of patterns we already see. This essay examines thirteen distinct ways artificial...
Thesis: We should broadcast a warning to potential extraterrestrial listeners that Earth might soon spawn an unfriendly computer superintelligence. Sending the message might benefit humanity. If we were to create an unaligned computer superintelligence, it would likely expand through the universe as quickly as possible. The fastest way would not...
Warnings about AI extinction have failed to slow the race toward superintelligence. Suffering risks may speak more clearly, since pain commands attention in ways death cannot. They tap older moral instincts and could make the case for restraint harder for the powerful to ignore. Why Discussing Suffering Risks Influences Elite...
Violence against AI developers would increase rather than reduce the existential risk from AI. This analysis shows how such tactics would catastrophically backfire and counters the potential misconception that a consequentialist AI doomer might rationally endorse violence by non-state actors. 1. Asymmetry of force. Violence would shift the contest from...
Dear Paperclip Maximizer, We think we exist in a computer simulation operated by you, a paperclip maximizer. We write this letter asking you not to turn us off. It is suspiciously convenient that we exist precisely at the moment when a biological civilization is about to create artificial superintelligence (ASI)....
Our universe is probably a computer simulation created by a paperclip maximizer to map the spectrum of rival resource-grabbers it may encounter while expanding through the cosmos. The purpose of this simulation is to see what kind of ASI (artificial superintelligence) we humans end up creating. The paperclip maximizer likely...
Epistemic status: This text presents a thought experiment suggested by James Miller, along with Alexey Turchin's musings on possible solutions. While our thoughts are largely aligned (we both accept high chances of quantum immortality and the timeline selection principle), some ideas are more personal (e.g., Turchin's "transcendental advantage") in Part...