Today the Center for AI Safety released the AI Extinction Statement, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.
Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs (Sam Altman, Demis Hassabis, and Dario Amodei), along with executives from Microsoft and Google, though notably not Meta.
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
We hope this statement will bring AI x-risk further into the Overton window and open up discussion of AI's most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.
OK, let's consider two scenarios:
1. humanity goes extinct gradually and voluntarily via a last generation that simply doesn't want to reproduce and is cared for by robots to the end, so no one suffers particularly in the process;
2. humanity is locked into a future in which trillions endure inescapable torture until the heat death of the universe.
Which is better? I would say 1 is. There are things worse than extinction (and some of them are theoretically on the table with AI too). And anyway, you should consider that, given how many "low-hanging fruit" resources we've already used, there are fair odds that if we're knocked back to a population in the millions by a pandemic or nuclear war now, we may never pick ourselves back up again. Stasis is better than immediate extinction, but if you care about the long-term future it's also bad (and it implies a lot more suffering, because it's a return to the past).