Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Subscribe here to receive future versions.
Listen to the AI Safety Newsletter for free on Spotify.
White House Executive Order on AI
While Congress has not voted on significant AI legislation this year, the White House has left its mark on AI policy. In July, it secured voluntary safety commitments from leading AI companies. Now, the White House has released a new executive order on AI. It addresses a wide range of issues and specifically targets catastrophic AI risks such as cyberattacks and biological weapons.
Companies must disclose large training runs. Under the executive order, companies that intend to train “dual-use foundation models” using significantly more computing power than GPT-4 must take several precautions. First, they must notify the White House before training begins. Then, they must report on the cybersecurity measures they have taken to prevent theft of model weights. Finally, they must share the results of any red-teaming and risk evaluations of the trained system with the White House.
This does not mean that companies will be required to adopt sufficient or effective safety practices, but it does give the White House visibility into how these AI systems are developed and how their risks are managed. To improve the science of AI risk management, NIST has been tasked with developing further guidelines.
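For a rough sense of what “significantly more computing power than GPT-4” means in practice, the order’s widely reported interim threshold is on the order of 10^26 operations of training compute. The sketch below is only an illustration, not language from the order: it uses the common approximation of roughly 6 FLOP per parameter per training token, and the model sizes are hypothetical.

```python
# Illustrative sketch only: estimate a training run's compute with the common
# ~6 FLOP per parameter per training token approximation, and compare it to the
# widely reported ~1e26-operation interim reporting threshold. The model sizes
# below are hypothetical examples, not figures from the executive order.

REPORTING_THRESHOLD_FLOP = 1e26  # assumed interim threshold

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Back-of-the-envelope training compute estimate."""
    return 6 * n_parameters * n_training_tokens

runs = [
    ("hypothetical 100B-parameter model on 2T tokens", 1e11, 2e12),
    ("hypothetical 1T-parameter model on 20T tokens", 1e12, 2e13),
]

for name, params, tokens in runs:
    flop = estimated_training_flop(params, tokens)
    print(f"{name}: ~{flop:.1e} FLOP; reporting likely required: {flop > REPORTING_THRESHOLD_FLOP}")
```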
Compute clusters must be reported, and foreign customers identified. AIs are often trained on compute clusters, networks of interconnected computer chips that can be rented by third parties. The executive order requires large computing clusters to be reported to the Department of Commerce. Further, to provide transparency on AI development by foreign actors, any foreign customer of a US-based cloud computing service will need to verify their identity to the US government. Some have argued that these know-your-customer requirements should extend to domestic customers as well.
Requiring safety precautions at biology labs. One nightmare scenario for biosecurity researchers is that someone could submit an order to a biology lab for the synthesized DNA of a dangerous pathogen. Some labs screen incoming orders and refuse to synthesize dangerous pathogens, but other labs do not.
To encourage adoption of this basic precaution, the executive order requires federally funded research to use only labs that screen orders for dangerous sequences before synthesis. This may help address the growing concern that AI could help rogue actors build biological weapons. The executive order also tasks several federal agencies with analyzing biosecurity risks from AI, including by producing a report that specifically focuses on the biorisks of open source AI systems.
Building federal AI capacity. The executive order supports many efforts to help the US government use AI safely and effectively. Several agencies have been tasked with using AI to find and fix security vulnerabilities in government software. The National Science Foundation has been directed to create a pilot version of the National AI Research Resource, which would provide computing resources to AI researchers who lack access to industry-scale compute.
The full text of the executive order addresses many other issues, including privacy, watermarking of AI-generated content, AI-related patent and copyright questions, pathways to immigration for AI experts, and protections for civil rights. Right now, the White House is still in the early stages of gathering information and developing best practices around AI. But this executive order will lead to meaningful progress on both of those fronts, and it signals a clear commitment to addressing growing AI risks.
Kicking Off The UK AI Safety Summit
Today marks the first day of the UK’s AI Safety Summit, where politicians, academics, and members of industry and civil society (including the Center for AI Safety’s Director Dan Hendrycks) will meet to discuss AI risks and how governments can help mitigate them. Before the summit began, the UK government announced several new initiatives, including the creation of an international expert panel to assess AI risks and a new research institute for AI safety.
Rishi Sunak’s speech on AI extinction risk. UK Prime Minister Rishi Sunak delivered a speech on the opportunities and catastrophic risks posed by AI. Building on recent papers from the British government, he noted that “AI could make it easier to build chemical or biological weapons.” Then he directly quoted the CAIS expert statement on AI extinction risk, and said, “there is even the risk that humanity could lose control of AI completely.”
The speech also addressed doubts about AI risks. “There is a real debate about this,” Sunak said, and “some experts think it will never happen at all. But however uncertain and unlikely these risks are, if they did manifest themselves, the consequences would be incredibly serious.” Therefore, “leaders have a responsibility to take them seriously, and to act.”
The UK will propose an international expert panel on AI. The UN Intergovernmental Panel on Climate Change (IPCC) summarizes scientific research on climate change to help inform policymaking efforts on the topic. Many have suggested that a similar body of scientific experts could help establish consensus on AI risks. Sunak announced in his speech that the UK will propose a “global expert panel nominated by the countries and organisations attending [the AI Safety Summit] to publish a State of AI Science report.”
New AI Safety Institute to evaluate AI risks. Sunak also announced “the world’s first AI Safety Institute” which will “carefully examine, evaluate, and test new types of AI so that we understand what each new model is capable of.” Few details have been provided so far, but it’s possible that this could serve as a “CERN for AI,” allowing countries to work together on AI and AI safety research, thereby mitigating coordination challenges and enabling centralized oversight of AI development.
Progress on Voluntary Evaluations of AI Risks
One common recommendation from those concerned about AI risks is that companies should commit to evaluating and mitigating risks before releasing new AI systems. This recommendation has recently received support from the United States, the United Kingdom, and the G7.
The White House’s new executive order on AI requires any company developing a dual-use foundation model to “notify the federal government when training the model, and [they] must share the results of all red-team safety tests.” To help improve AI risk management techniques, the executive order also directs NIST to develop rigorous red-teaming standards that companies could adopt.
At the request of the United Kingdom, six leading AI companies have published descriptions of their risk assessment and mitigation plans. There are important differences between the policies. For example, Meta argues that open sourcing its models will improve safety, while OpenAI, DeepMind, and others prefer to monitor use of their models to prevent misuse. But each company has provided its safety policy, and the UK has summarized them in a review of existing AI safety policies.
Finally, the G7 has released a code of conduct that AI companies can voluntarily choose to follow. The code asks companies, among other things, to evaluate catastrophic risks posed by their systems, invest in cybersecurity, and detect and prevent misuse during deployment.
These voluntary commitments are no substitute for binding legal requirements to ensure safety in AI development. Moreover, a commitment to assess and mitigate risks does not guarantee that those risks will be eliminated or reduced to a manageable level. Further work is needed to create binding commitments that prevent companies from releasing unsafe AI systems.
Finally, it is important to note that even ideal safety evaluations would not eliminate AI risks. Militaries might deliberately design AI systems to be dangerous. Economic competition could lead companies to automate large swathes of human labor with AI, leading to increased inequality and concentration of power in the hands of private companies. Eventually, AI systems could be given control of many of the world’s most important decisions, undermining human autonomy on a global scale.
Links
A proposed international treaty on AI would create a three-tiered system for AI training: the most powerful AIs would be trained by a single multilateral institution, licensed companies could train models with somewhat less compute, and unlicensed developers would be limited to less compute still.
Leading AI researchers call for government action on AI risks in a new position paper.
Legal analysis of how AI systems should be incorporated into existing legal frameworks.
The terms of service for different AI models offer insights about the legal responsibilities that companies are willing to accept for harms caused by their models.
Open Philanthropy (one of CAIS’s funders) is hiring for grantmaking and research roles in AI policy, technical AI safety research, and other areas.
For those interested in conducting technical AI safety research, the MATS Program, running from January to March 2024, offers mentorship and support.
Artists are trying to poison training data in an effort to prevent AI companies from profiting from their work.
Self-driving car startup Cruise is no longer permitted to operate in California after one of its vehicles dragged a pedestrian 20 feet following a collision.
See also: CAIS website, CAIS twitter, A technical safety research newsletter, An Overview of Catastrophic AI Risks, and our feedback form
Listen to the AI Safety Newsletter for free on Spotify.
Subscribe here to receive future versions.