Today, the AI Extinction Statement was released by the Center for AI Safety, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.
Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs (Sam Altman, Demis Hassabis, and Dario Amodei), as well as executives from Microsoft and Google (but notably not Meta).
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
We hope this statement will bring AI x-risk further into the Overton window and open up discussion of AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.
You can't really say anything is objectively wrong when it comes to morals, but I generally think that evaluating the well-being of potential entities-to-be leads to completely nonsensical moral imperatives like the Repugnant Conclusion. Since no one experiences all of the utility at the same time, I think an "expected utility probability distribution" is a much more sensible metric (as in: suppose you were born as a random sentient being in a given time and place — would you be willing to take the bet?).
That said, I do think extinction is worse than just a lot of death, but that's as a function of the people who are about to witness it and know they are the last. In addition, I think omnicide is worse than human extinction alone, because I think animals and the rest of life have moral worth too. But I wouldn't blame people for simply counting extinction as 8 billion deaths, which is still A LOT of deaths anyway. It's a small point that's not worth arguing. Our uncertainties about the probabilities of these risks are wide enough that we can't really put fixed numbers on the expected harms, only vague orders of magnitude. While we may describe them as if they were numerical formulas, these evaluations really are mostly qualitative; enough uncertainty makes numbers almost pointless. Suffice to say, if someone considers, say, a 5% chance of nuclear war a bigger worry than a 1% chance of AI catastrophe, then I don't think I can make a strong argument that they're dead wrong.
I agree this makes no sense, but it's a completely different issue. That said, I think the biggest uncertainty re: x-risk remains whether AGI is really as close as some estimate. But this aspect is IMO irrelevant when judging the wisdom of actively trying to build AGI. Either it's possible, and then it's dangerous; or it's still far off, and then it's a waste of money, precious resources, and ingenuity.