Note: This is an automated crosspost from Anthropic. The bot selects content from many AI safety-relevant sources and is not affiliated with the authors, their organization, or LessWrong.



In response to the White House’s Request for Information on an AI Action Plan, Anthropic has submitted recommendations to the Office of Science and Technology Policy (OSTP). Our recommendations are designed to better prepare America to capture the economic benefits of powerful AI systems while addressing their national security implications.

As our CEO Dario Amodei writes in ‘Machines of Loving Grace’, we expect powerful AI systems to emerge in late 2026 or early 2027. Such systems will have the following properties:

  • Intellectual capabilities matching or exceeding those of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering.
  • The ability to navigate all interfaces available to a human doing digital work today, including the ability to process and generate text, audio, and video, the ability to autonomously control technology instruments like mice and keyboards, and the ability to access and browse the internet.
  • The ability to autonomously reason through complex tasks over extended periods—hours, days, or even weeks—seeking clarification and feedback when needed, much like a highly capable employee would.
  • The ability to interface with the physical world: controlling laboratory equipment, robotic systems, and manufacturing tools through digital connections.

Our own recent work adds further evidence to the idea that powerful AI will arrive soon: our recently released Claude 3.7 Sonnet and Claude Code demonstrate significant capability improvements and increased autonomy, as do systems released by other frontier labs.

We believe the United States must take decisive action to maintain technological leadership. Our submission focuses on six key areas to address the economic and security implications of powerful AI while maximizing benefits for all Americans:

  1. National Security Testing: Government agencies should develop robust capabilities to evaluate both domestic and foreign AI models for potential national security implications. This includes creating standard assessment frameworks, building secure testing infrastructure, and establishing expert teams to analyze vulnerabilities in deployed systems.
  2. Strengthening Export Controls: We advocate for tightening semiconductor export restrictions to ensure that America and its allies can capitalize on the opportunities of powerful AI systems, and to prevent adversaries from accessing the infrastructure that enables them. This includes controlling H20 chips, requiring government-to-government agreements for countries hosting large chip deployments, and reducing no-license-required thresholds.
  3. Enhancing Lab Security: As AI systems become critical strategic assets, we recommend establishing classified communication channels between AI labs and intelligence agencies, expedited security clearances for industry professionals, and the development of next-generation security standards for AI infrastructure.
  4. Scaling Energy Infrastructure: To stay at the leading edge of AI development, we recommend setting an ambitious target to build an additional 50 gigawatts of dedicated power by 2027, while streamlining permitting and approval processes.
  5. Accelerating Government AI Adoption: We propose conducting a government-wide inventory of workflows that could benefit from AI augmentation, tasking agency leaders with delivering programs where AI can provide significant public benefit.
  6. Preparing for Economic Impacts: To ensure AI benefits are broadly shared throughout society, we recommend modernizing mechanisms for economic data collection, like the Census Bureau's surveys, and preparing for potential large-scale changes to the economy.

These recommendations build on Anthropic's previous policy work, including our advocacy for responsible scaling policies and testing and evaluation. Our aim is to strike a balance—enabling innovation while mitigating serious risks posed by increasingly capable AI systems.

Our full submission, found here, offers further detail on these recommendations and provides practical implementation strategies to help the U.S. government navigate this critical technological transition.


Here's my summary of the recommendations:

  • National security testing
    • Develop robust government capabilities to evaluate AI models (foreign and domestic) for security risks
    • Once ASL-3 is reached, the government should mandate pre-deployment testing
    • Preserve the AI Safety Institute in the Department of Commerce to advance third-party testing
    • Direct NIST to develop comprehensive national security evaluations in partnership with frontier AI developers
    • Build classified and unclassified computing infrastructure for testing powerful AI systems
    • Assemble interdisciplinary teams with both technical AI and national security expertise
       
  • Export Control Enhancement
    • Tighten semiconductor export restrictions to prevent adversaries from accessing critical AI infrastructure
    • Control H20 chips
    • Require government-to-government agreements for countries hosting large chip deployments
      • As a prerequisite for hosting data centers with more than 50,000 chips from U.S. companies, the U.S. should mandate that countries at high risk of chip smuggling comply with a government-to-government agreement that 1) requires them to align their export control systems with the U.S., 2) takes security measures to address chip smuggling to China, and 3) stops their companies from working with the Chinese military. The “Diffusion Rule” already contains the possibility for such agreements, laying a foundation for further policy development.
    • Review and reduce the 1,700-H100 no-license-required threshold for Tier 2 countries
      • Currently, the Diffusion Rule allows advanced chip orders from Tier 2 countries for fewer than 1,700 H100s—an approximately $40 million order—to proceed without review. These orders do not count against the Rule’s caps, regardless of the purchaser. While these thresholds serve legitimate commercial purposes, we believe they also pose smuggling risks. We recommend that the Administration consider reducing the number of H100s that Tier 2 countries can purchase without review, to further mitigate smuggling risks. (A rough check of the order-value arithmetic appears after these export-control bullets.)
    • Increase funding for Bureau of Industry and Security (BIS) for export enforcement
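
A minimal back-of-the-envelope sketch of the “~$40 million order” arithmetic cited above. This is not from the submission; the per-chip price is a hypothetical assumption for illustration.

```python
# Rough check of the "~$40 million" figure for a 1,700-unit H100 order.
# The unit price below is an illustrative assumption, not a figure from
# the submission; actual H100 pricing varies by configuration and vendor.
ASSUMED_H100_UNIT_PRICE_USD = 25_000  # hypothetical price per chip
NO_REVIEW_THRESHOLD_CHIPS = 1_700     # Diffusion Rule threshold cited above

order_value = NO_REVIEW_THRESHOLD_CHIPS * ASSUMED_H100_UNIT_PRICE_USD
print(f"Implied order value: ${order_value / 1e6:.1f}M")  # prints ~$42.5M
```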
       
  • Lab Security Improvements
    • Establish classified and unclassified communication channels between AI labs and intelligence agencies for threat intelligence sharing, similar to Information Sharing and Analysis Centers used in critical infrastructure sectors
    • Create systematic collaboration between frontier AI companies and intelligence agencies, including Five Eyes partners
    • Elevate collection and analysis of adversarial AI development to a top intelligence priority, so as to provide strategic warning and support export controls
    • Expedite security clearances for AI industry professionals
    • Direct NIST to develop next-generation security standards for AI training/inference clusters
    • Develop confidential computing technologies that protect model weights even during processing
    • Develop meaningful incentives for implementing enhanced security measures via procurement requirements for systems supporting federal government deployments
    • Direct DOE/DNI to conduct a study on advanced security requirements that may become appropriate to ensure sufficient control over and security of highly agentic models

 

  • Energy Infrastructure Scaling
    • Set an ambitious national target: build 50 additional gigawatts of power dedicated to AI by 2027
    • Streamline permitting processes for energy projects by accelerating reviews and enforcing timelines
    • Expedite transmission line approvals to connect new energy sources to data centers
    • Work with state/local governments to reduce permitting burdens
    • Leverage federal real estate for co-locating power generation and next-gen data centers

 

  • Government AI Adoption
    • Systematically identify, across the whole of government, every instance where federal employees process text, images, audio, or video data, and augment these workflows with appropriate AI systems
    • Task OMB to address resource constraints and procurement limitations for AI adoption
    • Eliminate regulatory and procedural barriers to rapid AI deployment across agencies
    • Direct DoD and Intelligence Community to accelerate AI research, development and procurement
    • Target largest civilian programs for AI implementation (IRS tax processing, VA healthcare delivery, etc.)

 

  • Economic Impact Monitoring
    • Enhance data collection mechanisms to track AI adoption patterns and economic implications
    • Update Census Bureau surveys to gather detailed information on AI usage and impacts
      • The American Time Use Survey should incorporate specific questions about AI usage, distinguishing between personal and professional applications while gathering detailed information about task types and systems employed
    • Collect more granular data on tasks performed by workers to create a baseline for monitoring changes
    • Track the relationship between AI computation investments and economic performance
    • Examine how AI adoption might reshape the tax base and cause structural economic shifts

Well, I'm disappointed.

Everything about misuse risks and going faster to Beat China, nothing about accident/systemic risks. I guess "testing for national security capabilities" is probably in practice code for "some people will still be allowed to do AI alignment work," but that's not enough.

I really would have hoped Anthropic could be realistic and say "This might go wrong. Even if there's no evil person out there trying to misuse AI, bad things could still happen by accident, in a way that needs to be fixed by changing what AI gets built in the first place, not just testing it afterwards. If this were like making a car, we should install seatbelts and maybe institute a speed limit."
