On the bright side, Connor Leahy from Conjecture is going to be at the summit, so there will be at least one strong voice for existential risk present.
Update, 16th October:
The Q&A with the Secretary of State for Science, Michelle Donelan MP, has been moved to today; it will take place on LinkedIn.
The programme for the summit has been released. Brief summary:
Day 1
Roundtables on "understanding frontier AI risks":
1. Risks to Global Safety from Frontier AI Misuse
2. Risks from Unpredictable Advances in Frontier AI Capability
3. Risks from Loss of Control over Frontier AI
4. Risks from the Integration of Frontier AI into Society
Roundtables on "improving frontier AI safety":
1. What should Frontier AI developers do to scale responsibly?
2. What should National Policymakers do in relation to the risks and opportunities of AI?
3. What should the International Community do in relation to the risks and opportunities of AI?
4. What should the Scientific Community do in relation to the risks and opportunities of AI?
Panel discussion on "AI for good – AI for the next generation".
Day 2
"The Prime Minister will convene a small group of governments, companies and experts to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good. In parallel, UK Technology Secretary Michelle Donelan will reconvene international counterparts to agree next steps."
While preparing for an upcoming Convergence Analysis post on the UK AI Taskforce and our recommendations, I looked into the taskforce's reports, its plans for the upcoming AI Safety Summit, recommendations from other organizations, and some miscellaneous UK AI events. I doubt we'll include this in our post, but I thought the updates were worth sharing, so here is a brief summary.
The UK AI taskforce
In April 2023, the UK government committed £100 million to its new AI Foundation Models Taskforce, led by Ian Hogarth. The taskforce was created in response to a white paper, A pro-innovation approach to AI regulation, published in March, and was modeled on the 2020 Vaccine Taskforce, with similar “agility and delegated authority”.
The government announced an AI Safety Summit on the 1st and 2nd of November at Bletchley Park, and put out a call for expressions of interest, looking for:
In September, the taskforce released their first report. In summary:
The UK AI safety summit
The organizers have released some more details on the upcoming summit. In summary:
There are some official pre-summit events, details about which will apparently be published on @SciTechgovuk and other social media channels:
And two upcoming opportunities for public engagement:
Misc.
Several organizations have published recommendations for the taskforce and for the summit, such as:
Some news articles (e.g. here, here) claim that the government is rushing to finalize an agreement among world leaders before the summit in November.
There’s also the AI Fringe, a series of events on safe and responsible AI across the UK from October 30th till November 3rd, separate but complementary to the government’s summit. Their events “will feature a series of keynotes, fireside chats, panels, roundtables and workshops to expand the conversation on AI.”
Public perception of existential risk
I’ll finish with some personal thoughts on existential risk. In my opinion, it’s a shame that the taskforce, its summit, and the AI Fringe do not address existential risk. There are brief mentions of AI biosecurity risks and hints that the summit may include discussion of “losing control” of AI, but I feel the topic is still largely neglected in the taskforce’s report and summit plans (and at the AI Fringe, even though they found time for a session on moon base psychophysics with lunar-themed entertainment).
It makes sense to me that they are focused on short- and medium-term risks, but I would like to see investment in mitigating longer-term, larger-scale risks. I also suspect (though I won’t try to provide evidence here) that a good chunk of the public see existential risk from AI as a fringe idea[1], and I think that experts and the government should combat that by publicly recognizing and addressing existential risks. Indeed, despite the limited mention of existential risk, some think the UK is too focused on it: in Why the UK AI Safety Summit will fail to be meaningful, Dr Keegan McBride, research lecturer at the Oxford Internet Institute, writes that the “summit and the UK’s current AI strategy is primarily concerned with existential risks” even though “the idea that AI will bring about the end of the world in the near future is not grounded in reality”. Politico also published How Silicon Valley doomers are shaping Rishi Sunak’s AI plans, which, like Dr McBride’s article, describes how effective altruists are pushing the UK’s policy in the wrong direction:
I strongly disagree with both these articles, and I’m interested in how people think we can improve public perception of EA and advocacy for existential safety work, as well as how people feel about the UK’s AI safety efforts more broadly. Do you think the UK’s response is promising or lackluster? Should they focus on existential risk, or will that follow from short and medium-term risk reduction? In what direction should we try to steer the taskforce?
My colleague, Dr Justin Bullock, has written an article on public perception of AI risk and safety, published in the Humanist Perspectives journal. He argues that recent AIPI and AIMS survey data show that the public do recognise the existential risk from AI.