Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Implications of a Trump administration for AI policy
Trump named Ohio Senator J.D. Vance—an AI regulation skeptic—as his pick for vice president. This choice sheds light on the AI policy landscape under a future Trump administration. In this story, we cover: (1) Vance’s views on AI policy, (2) the views of other key players around Trump, including his party, donors, and allies, and (3) why AI safety should remain bipartisan.
Vance has pushed for reducing AI regulations and making AI weights open. At a recent Senate hearing, Vance accused Big Tech companies of overstating risks from AI in order to justify regulations that would stifle competition. This led tech policy experts to expect that Vance would favor looser AI regulations.
However, Vance has also praised Lina Khan, Chair of the Federal Trade Commission, for her antitrust action against big AI companies. This suggests Vance is against “Big Tech” rather than for de-regulating AI generally.
Vance has also defended open-weight AI models as the best way to prevent left-wing bias in models, while dismissing their risks.
The Republican Party platform pledges to repeal Biden’s executive order on AI. The platform promises to repeal “Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology,” adding, “In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”
This suggests that a Trump administration might roll back requirements for reporting and safety testing and pause further plans for regulation.
Marc Andreessen and Ben Horowitz to fund Trump. Andreessen and Horowitz, who run the eponymous venture capital firm, announced plans to make large donations to Trump’s political action committees.
In a blog post, Horowitz described the firm as “non-partisan, one issue voters” for “an optimistic technology-enabled future.” The two also released a joint statement pledging support for political candidates who back tech startups.
The founders’ interests seem especially well-aligned with a potential Trump administration on AI. In a recent interview, the pair singled out Biden’s Executive Order on AI as a reason for donating to Trump.
Their firm is also heavily invested in cryptocurrency, which—as they told employees—they expect a Trump administration to regulate more lightly. Trump’s tax cuts are also up for renewal next year.
Trump allies push an AI race with China. Jacob Helberg is a rising power broker between the GOP and tech leaders. He is pushing for deeper integration of AI into the military through providers like Palantir—which Helberg advises—and for more aggressive efforts to stunt China’s AI capabilities.
Other Trump allies are privately drafting an AI executive order to launch a series of “Manhattan Projects” to develop military technology, review “burdensome regulations,” and secure AI systems from foreign adversaries.
Overall, the administration would likely accelerate military AI development. This would benefit hawkish allies and tech leaders who contract with the Pentagon, both of whom are close to the Trump campaign.
Still, much AI policy is—and should remain—bipartisan. The former Trump and current Biden administrations have aligned on some AI regulatory principles, such as national security, and on specific measures, such as export controls. The Trump administration imposed curbs on high-end semiconductor exports to China in 2020; the Biden administration followed in 2022 with its own export restrictions, which it tightened a year later.
Apparent partisan divisions over AI safety might be an aberration rather than the norm. According to a new poll, a majority of both Republicans and Democrats favor “taking a careful controlled approach” to AI over “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”
Moreover, AI safety has largely remained bipartisan in Congress. Many Republican members of Congress have sponsored AI legislation. The Bipartisan Senate Artificial Intelligence Working Group continues to identify areas of policy consensus.
Safety Engineering
Our new book, Introduction to AI Safety, Ethics, and Society, is available for free online and will be published by Taylor & Francis in the next year. This week, we will look at Chapter 4: Safety Engineering. This chapter outlines key insights from safety engineering, a field that specializes in identifying hazards and managing risk. We can view AI safety as a special case of safety engineering focused on reducing the risk of AI-related catastrophes. Here is the chapter’s accompanying video.
Risk can be decomposed into four factors: exposure, probability, severity, and vulnerability. Exposure is the extent to which we are exposed to a hazard. Probability is the likelihood an accident results from the hazard. Severity is the damage an accident would cause. Vulnerability is how susceptible we are to that damage. Increasing any of these factors will increase risk, and reducing any of these factors will reduce risk.
For example, consider the risk associated with a wet floor. Exposure is the number of people walking across the floor when it’s wet. Probability is the likelihood one of those people slips. Severity is the extent of damage or force a slip would cause. Vulnerability is how susceptible someone is to getting injured from a slip, perhaps due to bone density or age.
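To make the decomposition concrete, here is a minimal sketch in Python that treats risk as the product of the four factors, using made-up numbers for the wet-floor example. The multiplicative form and the figures are our illustration rather than the chapter’s exact formulation, but the qualitative point holds: shrinking any one factor shrinks the overall risk.

```python
# Toy multiplicative risk model (illustrative only; the chapter's exact
# formulation may differ). Risk rises with each factor and falls when
# any factor is reduced.

def risk(exposure, probability, severity, vulnerability):
    """Estimate risk as the product of the four factors."""
    return exposure * probability * severity * vulnerability

# Wet-floor example with invented numbers: 50 people cross the wet floor,
# each with a 2% chance of slipping, an average injury cost of $1,000,
# and a vulnerability multiplier of 0.5 (half of slips cause real harm).
baseline = risk(exposure=50, probability=0.02, severity=1_000, vulnerability=0.5)

# A "wet floor" sign halves exposure; the other factors are unchanged.
with_sign = risk(exposure=25, probability=0.02, severity=1_000, vulnerability=0.5)

print(baseline, with_sign)  # 500.0 250.0
```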
We can reduce risk by following safe design principles. “Safe design principles” are features we can build into a system from the design stage to make it safer. They can often be divided into preventative (or “control”) measures, which reduce the exposure and probability of a hazard, and protective (or “recovery”) measures, which reduce the severity of and our vulnerability to a hazard if it does occur.
While preventative measures are generally more effective than protective measures, both are integral to ensuring a system’s safety. Perhaps the most important safe design principle is defense in depth: employing multiple safe design principles rather than relying on just one, since any safety feature will have weaknesses.
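As a toy illustration of why defense in depth helps, the sketch below assumes each safety layer fails independently, so an accident requires every layer to fail at once. Real-world layers are rarely fully independent (they can share a common cause of failure), so treat this as a best-case bound rather than a guarantee; the failure rates are invented.

```python
from math import prod

# Idealized defense in depth: if each safety layer fails independently,
# an accident requires every layer to fail at once, so the overall
# failure probability is the product of the per-layer failure rates.
# Real layers are rarely fully independent, so this is a best case.

def failure_probability(layer_failure_rates):
    return prod(layer_failure_rates)

single_layer = failure_probability([0.05])              # one 5% layer -> 5%
three_layers = failure_probability([0.05, 0.10, 0.20])  # three weak layers -> ~0.1%

print(single_layer, three_layers)  # 0.05 vs roughly 0.001
```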
Systemic accident models can identify potential risks in a system by concentrating on underlying “systemic factors.” Some factors that contribute to risk cannot be easily decomposed. Systemic factors are conditions inherent to a system that diffusely affect its risk. One key systemic factor is an organization’s “safety culture,” or how seriously an organization’s personnel really take safety.
Unlike many traditional risk models, systemic accident models take into account that systems are made of complex, interacting components, and that their risks cannot be understood simply by examining a chain of causal events.
We can miss worst-case scenarios if we fail to consider tail events and black swans. Tail events, named for their location at the extremes, or “tails,” of probability distributions, are events that occur rarely but have a sizable impact when they do occur. Examples of tail events include the 2008 financial crisis and the COVID-19 pandemic.
Black swans are tail events that are also “unknown unknowns”; in other words, they are events we don’t even know are possible. While it may be tempting to ignore tail events and black swans because they are so rare, they have a significant impact on the average risk of a system. In the case of AI, failing to address them can be catastrophic.
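A small back-of-the-envelope calculation (with invented numbers) shows why tail events cannot simply be dismissed: an event that is a thousand times rarer but far more severe can still dominate the expected loss.

```python
# Why rare, extreme events still dominate average risk (invented numbers).
# Expected loss = probability * loss, summed over outcomes.

common_failures = 0.10  * 1_000       # 10% chance of a $1,000 loss  -> $100
tail_event      = 0.001 * 10_000_000  # 0.1% chance of a $10M loss   -> $10,000

total = common_failures + tail_event
print(total, tail_event / total)  # 10100.0 0.99...: the tail event drives
                                  # ~99% of expected loss despite its rarity.
```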
Links
Governance updates
The FTC wrote a blog post discussing open-weight (not “open source”) AI models.
Yoshua Bengio wrote an essay arguing that we should take AI safety seriously.
The EU AI Act was officially published in the Official Journal of the European Union. The law will come into effect on August 1st, and enforcement will roll out over the next 24 months.
The Biden Administration announced that $1.6 billion in funding from the CHIPS Act will be directed towards new technology for chip packaging.
According to its revised AI strategy, NATO will “work to protect against the adversarial use of AI, including through increased strategic foresight and analysis.”
Industry updates
OpenAI announced a prototype of SearchGPT, a feature that allows an AI system to pull information from the web when responding to prompts.
Meta released Llama 3.1, now the world's largest open-weight model and the first with frontier capabilities. Alongside the release, Mark Zuckerberg published a blog post arguing in favor of “open source” models.
Gray Swan AI, an AI safety and security start-up, launched last week. The company “specializes in building tools to help companies assess the risks of their AI systems and safeguard their AI deployments from harmful use.”
See also: CAIS website, CAIS X account, our ML Safety benchmark competition, our new course, and our feedback form. The Center for AI Safety is also hiring a project manager.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Subscribe here to receive future versions.