We’ve previously written a piece on soft nationalization, discussing the growing influence of US national security interests on the control and regulation of powerful AI models.
The upcoming shift from Biden’s administration to a Republican one is likely to necessitate some changes in strategy and framing for AI safety initiatives.
US national security interests, closely followed by US economic interests, are likely to dominate any regulation of AI systems in the next four years. A Trump administration is likely to prioritize projecting an image of American innovation, power, and speed regarding AI technologies.
Republicans have indicated support for initiatives such as pro-business deregulation, expanded military capabilities, and American chip manufacturing. Despite some evidence that Republican policymakers are beginning to raise concerns about existential risks, the incoming administration will likely be hesitant to pass legislation that meaningfully restricts industry interests in pursuit of AI safety.
National security interests that a Trump administration will likely pursue include:
Increasing controls around Chinese & foreign access to AI models and chips
A progressive increase in public / private partnerships developing AI technologies for US military and intelligence goals
International cooperation in the context of promoting American interests and avoiding multilateral agreements that restrict US AI corporations
Economic interests that a Trump administration will likely pursue include:
Policies encouraging rapid innovation and AI capabilities development
Broad deregulation for AI development – or at the minimum, a reluctance to engage in AI safety governance that restricts AI labs
Corporation-friendly policies and increasing amounts of regulatory capture
Despite this focus, we think AI safety initiatives are still very much on the table in the upcoming administration. There is still significant overlap between national security interests and AI safety initiatives. For example, strategies to increase monitoring of the semiconductor supply chain could improve both US national security outcomes (by reducing chip smuggling to China) and AI safety outcomes (by building capacity within the USG).
As a result, we think that AI safety projects should prioritize “leaning into” the common ground between US national interests and AI safety goals. We think there are numerous domains of overlap, and that projects in these domains are the most likely to succeed in the upcoming administration.
Finally, for teams working on AI safety, we believe that alignment with US security & economic interests should be a moderate priority when considering new policy proposals. At the very least, new projects should not advocate for interventions explicitly counter to these described interests.
In this article, we’ll discuss a few overarching perspectives that we’re already starting to see emerge, and provide some evidence for this worldview. We’ll mention examples of AI policy initiatives that we see as likely to succeed, as well as initiatives we think are unlikely to make progress in the new administration.
Identifying Commonalities between AI Safety and Republican Interests
Very little of the political discourse in the 2024 election was centered around AI policy. Specific policy proposals from the Republican party are difficult to identify, as language around AI remains high-level and general in both the GOP 2024 platform and Project 2025.
What we do see is that Republicans have set clear messaging principles about how they intend to discuss AI, with a well-established framing. The GOP platform supports “AI Innovation” and “AI Development rooted in Free Speech and Human Flourishing”, and such language is broadly mirrored by key Republicans.
Republicans have characterized Biden’s AI regulations as “unnecessary and burdensome”, pushed for “industry-led” agencies, and are organizing plans titled “Make America First in AI”. It’s clear that language from the GOP is currently de-emphasizing AI regulation in favor of AI capabilities development and innovation.

However, this doesn’t mean that the Republican party is strictly against AI safety initiatives. There is significant overlap between policies supporting national security interests and reducing existential risk. Examples include stronger restrictions on AI technologies related to CBRN, monitoring supply chains in order to reduce Chinese chip smuggling, and improving cybersecurity for AI labs to protect cutting-edge American IP.

There is also some evidence that despite core Republican priorities and a lack of policy specifics, some Trump advisors do have underlying concerns around potential AI futures. See RFK Jr’s comments on international AI treaties, Tulsi Gabbard discussing the risk of AI arms races, and Vivek Ramaswamy advocating for an AI liability regime.
Most concretely, Elon Musk has recently acquired significant sway with Trump, and has strong ties to AI safety and existential risk efforts. His influence could swing the Trump administration to be more amenable to AI safety efforts, despite conservative headwinds.
For more examples of Republican priorities, see these pieces on AI from Michael Kratsios, who is heading the tech policy transition for the Trump administration.
Consequently, we think that there are actually numerous ways in which AI safety initiatives are complementary with Republican priorities. It’s possible the current lack of specificity could even indicate an openness to novel ideas and proposals around AI safety.
As Republican positions transition from high-level generalities on AI to specifics over the next year, we think that most AI safety projects remain quite plausible - but they need to find and advocate for points of commonality with the evolving Republican AI platform.
For example, it may be more effective for AI safety projects to:
Focus AI safety initiatives on benefits to US consumers, rather than on restrictions on corporations
Communicate proposals in terms of preserving American values and individual liberties
Emphasize the role of industry in developing & supporting legislation
Advocate for policies that both reduce Chinese access to frontier AI technologies and improve USG capacity-building
Emphasize specific upcoming national security concerns, rather than high-level existential risks
Identifying complementary approaches that support US national interests and lay the foundation for safe AI governance may be the best strategy for passing policy. We recommend a careful reconsideration of strategy and messaging to ensure AI safety projects are aligned with these interests.
Some Trends We Expect during a Trump Administration
Potential deregulation and a reluctance to pass AI safety regulation that restricts AI corporations
When Republicans have spoken about AI policy, they have mostly emphasized deregulation and a rollback of the Biden administration’s policies. They’ve broadly expressed dissatisfaction with the breadth and scope of AI regulation passed by Democrats.

Trump has repeatedly expressed interest in repealing Biden’s Executive Order on AI, which would remove federal reporting requirements for AI models, dismantle the US AI Safety Institute, and reduce incentives for federal agencies to use or procure AI systems. JD Vance has indicated an interest in looser AI regulations and support for open-source AI. The incoming administration will likely support reduced AI-related antitrust enforcement compared to current FTC commissioner Lina Khan.
However, it’s not clear how much of this rhetoric is practical vs. political. Republican criticisms of Biden’s AI policies have typically been non-specific, often focused on issues of discrimination and bias. For instance, Trump proposed canceling the EO to “ban the use of AI to censor the speech of American citizens”, and Ted Cruz has argued that NIST’s safety standards are a “woke…plan to control speech”. Republicans have not specifically said they want to shut down the US AI Safety Institute, though that would be a major side effect of repealing the EO. Politically, it’s quite possible the GOP is motivated to reframe AI regulation as “Republican-driven” but may not have substantive ideological opposition to many existing initiatives.
Still, we expect that Republicans will tend to avoid regulation that imposes limits or requirements on AI corporations, as they repeatedly emphasize speed of innovation and deregulation as key planks of the party platform. We predict an increasing amount of regulatory capture by leading AI labs (in particular, Musk and xAI) and policies that favor industry self-regulation when it comes to AI safety.
International cooperation primarily in the context of promoting US security & economic interests
Trump has demonstrated a consistent disinterest in multilateral treaties that lack immediate and direct benefits for the US, as seen in his positions on NAFTA, the WTO, NATO, and many more. Similarly, the Republican platform has emphasized American economic prosperity above global cooperation, criticizing “blind faith in the siren song of globalism”.
Based on the evidence, international treaties focused purely on AI safety seem unlikely to succeed with a Trump administration. AI safety treaties tend to restrict or regulate AI capabilities development, which would have a disproportionate impact on US AI corporations. They don’t offer immediate and direct benefits for US consumers or industry. Many existing initiatives fall into this category, such as PauseAI, A Narrow Path, Multinational AGI Consortium (MAGIC), and various UN AI governance proposals.
Instead, it appears more likely that the US will support treaties or agreements that recognize America’s central role in AI development, establishing frameworks that prioritize US interests and partnerships. For instance, we foresee the expansion of current alliances around semiconductor development that are preventing Chinese access to high-end AI chips. It’s possible that the US may eventually lead a formal international coalition that controls access to AI chips, as suggested by Chips for Peace. We may also see alliances providing global access to cutting-edge AI technologies for defense, as the US has demonstrated with military technologies such as the F-35.
A continuing escalation of export controls on Chinese & foreign access to AI chips

As mentioned above, the US is currently leveraging its central role in developing AI chips to restrict Chinese access to cutting-edge semiconductors and the critical manufacturing technologies required to develop them. These export controls have broad bipartisan support. However, there are significant outstanding gaps in the implementation of these controls, and Republican congressmen have been pushing the Bureau of Industry and Security (BIS) to improve its mechanisms for reducing Chinese chip smuggling.
We expect the Trump administration to maintain and escalate this strategy as the implications of AI for geopolitical power become more immediate and obvious. Compute governance policies such as chip registries, rewards for whistleblowers, and KYC requirements for chip distributors seem likely. More capital-intensive strategies, such as on-chip mechanisms for location / ownership verification, seem plausible but may face an uphill battle in implementation and in aligning with Republican priorities around innovation and deregulation.
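To make the compute governance idea concrete, here is a minimal sketch of what a chip registry with a KYC-style transfer gate might look like. Everything below — the entity names, fields, and blocklist — is hypothetical and invented purely for illustration; it does not describe any existing BIS program, control list, or vendor system.

```python
from dataclasses import dataclass

# Hypothetical sketch of a chip registry with KYC-style transfer checks.
# All entities, fields, and destinations are invented for illustration;
# no real export-control program or API is being described.

@dataclass
class ChipRecord:
    serial: str     # unique identifier assigned at fabrication
    owner: str      # registered owner of record
    country: str    # declared deployment country
    verified: bool  # passed a (hypothetical) on-chip location check

class ChipRegistry:
    def __init__(self) -> None:
        self._records: dict[str, ChipRecord] = {}
        self._blocked_destinations = {"CN"}  # placeholder, not a real control list

    def register(self, record: ChipRecord) -> None:
        """Add a chip to the registry at point of sale or fabrication."""
        self._records[record.serial] = record

    def approve_transfer(self, serial: str, buyer: str, country: str) -> bool:
        """KYC-style gate a distributor might run before completing a sale."""
        record = self._records.get(serial)
        if record is None:
            return False  # unregistered chips cannot be transferred
        if country in self._blocked_destinations:
            return False  # blocked destination
        record.owner, record.country = buyer, country
        return True

registry = ChipRegistry()
registry.register(ChipRecord("ACC-0001", "ExampleCloudCo", "US", verified=True))
assert registry.approve_transfer("ACC-0001", "ExampleEUCloud", "DE")       # approved
assert not registry.approve_transfer("ACC-0001", "ExampleShellCo", "CN")   # denied
```

The point of the sketch is the policy shape rather than the code: chips become trackable objects with owners of record, and every transfer passes through an auditable allow/deny decision.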
A progressive increase in public / private partnerships & policies that support US military and intelligence goals
Upcoming AI systems will have significant implications for national security. Over the next four years, there will almost certainly be an increase in government investment and engagement with private AI labs. Some form of public / private partnership will arise to develop AI capabilities specific to military and national intelligence initiatives.
In our article on soft nationalization, we cover a variety of policy levers and strategies that the US government may pursue for such partnerships, including defense contracts, security clearances, board representation / executive appointments, or joint research initiatives.

There are already early examples of such partnerships developing. Anthropic, Palantir, and Amazon are collaborating to leverage AI technologies to improve data analytics for the US military. OpenAI recently appointed former NSA director Paul Nakasone to its board of directors. Meta has opened its Llama open-source AI models for use by the US military.
Beyond direct partnerships with leading AI labs, we also expect to eventually see general regulatory limitations around the development, use cases, and customers of sufficiently powerful AI models. As AI models increase in capabilities, the US government will inevitably determine that certain use cases (lethal autonomous weapons (LAWs), cyberwarfare, certain forms of bioweapons research) may compromise national security and should be controlled by regulation. In the long run, the USG will likely restrict the training of models on specific datasets, the usage of models for certain purposes, and the customers who have access to certain types of capabilities. This process may take longer than four years and may not occur in the upcoming administration.
Aligning AI safety initiatives with clear national security objectives may be the most effective way to secure funding and engagement from a Republican administration. In particular, the Department of Defense may eventually become a key ally to AI safety initiatives once there is a credible threat of national security risks. Republicans are less likely to advocate for funding cuts to defense than to civilian agencies, and the DoD is significantly better funded than the Department of Commerce, which is currently tasked (via BIS) with enforcing US export controls.
Caveats and Final Thoughts
Though our analysis describes some of the existing dynamics around AI safety, certain types of events could rapidly shift the Overton window. For example:
A major stepwise increase in awareness of certain capabilities, such as the demonstration of LAWs or cyberwarfare attacks
A major geopolitical crisis, such as a hostile AI arms race
A major AI safety incident, such as the release of an AI-developed pathogen
AI safety projects that aren’t currently priorities for US national interests could still become politically viable following one of these significant events.
Furthermore, we expect a Trump presidency to be high-variance, with generally lower adherence to Republican ideological priorities. Key advisors such as Musk could significantly shift Trump’s positions on topics such as AI safety.
Overall, we expect the impact of a Trump presidency on AI safety priorities to be decidedly mixed. Though Republicans have indicated an aversion to AI regulation, they have yet to solidify their stance on most AI safety topics. Discussion of systemic AI risks has yet to be heavily politicized, and bipartisan support for various policies aligned with AI safety goals remains quite feasible.
It’s entirely possible that a Trump administration, combined with a rapidly changing technological and societal landscape, could lead to positive outcomes for AI safety. For right now, we’re choosing to be optimistic 😄