The following is a post I published on my personal blog in response to criticisms published by Seth Lazar, Jeremy Howard, and Arvind Narayanan of the Center for AI Safety's recent statement on AI existential risk. The target audience is people who are not yet convinced of AGI x-risk. If you find the following arguments convincing, please share the blog post broadly so more people can engage with arguments in favour of taking AGI x-risk seriously.


It has been a big week in the world of AI governance. Here are the two most important developments:

First, the Center for AI Safety led the signing and release of a "Statement on AI Risk", signed by dozens of incredibly knowledgeable and credible members of the AI community, including the CEOs of the three most advanced AI companies in the world (OpenAI, Google DeepMind, Anthropic) as well as some of the world's most prominent AI research leaders (Geoffrey Hinton, Yoshua Bengio, Stuart Russell). The statement reads:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Soon after, three highly respected AI experts, Seth Lazar, Jeremy Howard, and Arvind Narayanan, published a rebuttal letter to the statement.

These three individuals are clearly knowledgeable in their domain and must have good reasons for believing what they do. However, as a lifelong technologist who has spent thousands of hours investigating and forming an independent view of the arguments for and against the existential risk to humanity from AI[1], I found their arguments significantly flawed. They do not effectively rebut the case for existential risk made by the Center for AI Safety's statement and by the global community of people who have dedicated themselves to tackling the extinction risks of AI.

I want to highlight just some of the gaps in their rebuttal arguments, to show that, yes, we should care about existential risk from AI. Humanity's very survival in the coming decades may depend on how we respond to this moment in history. Let's jump right into it.

AI extinction risk is more important than other AI risks

In their rebuttal, the authors seem to indicate that human extinction risk should be considered no more important than other, non-extinction risks from AI (such as inequality). Quote:

Other AI risks are as important

And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes AI poses other serious societal-scale risks

The argument against this is simple: human extinction is a terrible, terminal outcome and is therefore more important than non-extinction risks. Yes, other risks from AI (e.g. inequality, concentration of power) are bad, but they are not terminal -- we can recover from them, making them categorically less important than extinction risk. Note that extinction risk includes the risk posed by malicious human actors, which the signatories do take seriously, contrary to what the rebuttal letter implies.

AI extinction risk is urgent

The authors also seem to think AI extinction risk is not urgent. Quote:

Other AI risks... are much more urgent

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet

While no one can predict the future, many AI experts, including signatories of the statement, hold the view that AI extinction risk is urgent.

Demis Hassabis, CEO of Google DeepMind, predicts AGI within a decade. Sam Altman, CEO of OpenAI, predicts it within the first half of this century. Geoffrey Hinton, a leading researcher, says within 20 years. Countless other AI experts have been quoted with similarly short timelines, and many raised alarms long before recent major breakthroughs such as GPT-4. I've personally investigated the question of when we'll see AGI by interviewing AI experts on The AGI Show podcast, and I've come to the same conclusion: it is likely within the next few decades.

The other important aspect in determining urgency is just how difficult the problem will be once it arrives – can we afford to simply be reactive? The people who have examined this problem most closely have called out its immense difficulty, considering it arguably the hardest problem we will have ever faced as a species.

Considering both the potential timeline (within a few decades) and difficulty (wickedly hard) of this problem, it's fair to say many AI experts believe it is urgent and should therefore be tackled proactively, not reactively.

The signatories are not signing purely for personal benefit

There is an implication made throughout the letter that the signatories are signing this statement for personal benefit -- for profit and power. Quote:

in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters.

By looking more closely at the facts, we can see that this is simply not the case.

As with any regulation, a societal response to AI extinction risk could lead to undesirable "winners and losers". However, extinction is not something we should be playing politics with, and there is plenty of evidence that the signatories of this statement are not signing it for personal or political gain.

For one, many of the most prominent signatories have nothing to gain and much to lose from signing. World-leading AI researchers such as Geoffrey Hinton, Yoshua Bengio, and Stuart Russell have dedicated their lives to advancing AI and are now calling to curtail and redo much of the work they have led for decades. Hinton even resigned from a prominent, lucrative role at Google so he could follow his conscience and speak out about these massive challenges.

Even the argument that the leading AI organisations are doing this for their own benefit is incredibly weak. Can you imagine the CEO of a company coming out and saying, "oh, and my product could wipe out humanity", as a way to benefit their company? It's a stretched argument. We can also see how weak it is from the fact that some of these company leaders were warning about AI risk well before they took on their current roles or held leading positions. Google DeepMind co-founder and statement signatory Shane Legg even wrote his PhD dissertation on the topic. OpenAI has been researching AGI safety since its founding (well before it became an industry leader), such as this 2016 paper co-authored by OpenAI co-founder Greg Brockman, published alongside Stanford and Berkeley AI safety researchers.

Why don't they just shut it down then?

The rebuttal letter ponders: if these companies are worried about AI extinction risk, then why not simply stop all work and shut it down? Quote:

Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

The argument against this is clear – no single AI organisation can simply shut down and solve this global problem. This is a classic coordination problem, analogous to the famous Prisoner's Dilemma.

The leaders of the advanced AI companies who signed the statement want to do the right thing, but if they shut down, a less safety-conscious company would quickly step in and take the lead. At that point, the world would face even greater extinction risk from AI, because the new companies leading the race would care even less about the risks.
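
To make the structure of that dilemma concrete, here is a minimal Python sketch of the coordination problem. The payoff numbers are purely illustrative assumptions I've made up for the example, not a model of the actual industry:

```python
# A toy two-player game with made-up payoffs (higher = better for that lab),
# meant only to illustrate the structure of the dilemma.
payoffs = {
    ("pause", "pause"): (3, 3),  # coordinated slowdown: safest outcome overall
    ("pause", "race"):  (0, 4),  # Lab A pauses alone: Lab B takes the lead
    ("race",  "pause"): (4, 0),  # mirror image
    ("race",  "race"):  (1, 1),  # everyone races: highest collective risk
}

def best_response_for_a(b_choice):
    """Return Lab A's payoff-maximising choice, given Lab B's choice."""
    return max(["pause", "race"], key=lambda a_choice: payoffs[(a_choice, b_choice)][0])

for b_choice in ["pause", "race"]:
    print(f"If the other lab chooses {b_choice!r}, A's best response is "
          f"{best_response_for_a(b_choice)!r}")
# Prints 'race' in both cases: pausing unilaterally is dominated, even though
# mutual pausing (3, 3) is better for both than mutual racing (1, 1).
```

Under these assumed payoffs, racing is each lab's best response regardless of what the other does, even though both racing is worse for everyone than both pausing -- which is exactly why a unilateral shutdown by a single safety-conscious lab doesn't remove the risk.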

So to solve this problem, we need a globally coordinated solution. This is the domain of governments and international organisations. A concrete example of this kind of response is the International Atomic Energy Agency, created to help manage the global catastrophic risks of nuclear technology, including nuclear war. We're in an analogous situation here.

Conclusion

There are many more arguments that could be made to demonstrate that the rebuttal letter is not an effective refutation of the case for taking AI extinction risk much more seriously than we do today.

The rebuttal letter's authors are credible, thoughtful, and have many good ideas for how to make the world a better place, including by raising awareness of the myriad AI risks we face as a society beyond existential risk. However, we should not downplay the critically important, urgent, and complex challenge of AI extinction risk. Preventing extinction is the only way we survive as a species long enough to prosper and continue our work on other societal challenges.

Acknowledgements

Thank you to Hunter Jay @HunterJay for his great feedback on the post.

 

  1. ^

    I'm currently on a full-time sabbatical investigating the existential risks of AI for humanity. I decided to do so because of the compelling arguments for AI existential risk (see here for an example). Except for my work in this area to help prevent AI-related catastrophe, I have no prior affiliation with an AI org and therefore do not personally benefit from convincing others of AI extinction risk. In fact, I and the rest of humanity would strongly benefit from this not being a risk, so if anything I am biased to think it's not a risk.
