Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Subscribe here to receive future versions.
Google DeepMind’s GPT-4 Competitor
Computational power is a key driver of AI progress, and a new report suggests that Google’s upcoming GPT-4 competitor will be trained on unprecedented amounts of compute.
The model, currently named Gemini, may be trained with 5x more computational power than GPT-4 by the end of this year. By the end of next year, the report projects that Google will have the ability to train a model with 20x more compute than GPT-4.
For reference, the compute difference between GPT-3 and GPT-4 was roughly 100x. If these projections hold, Google’s new models could represent a meaningful jump over current AI capabilities.
Google’s position as an AI leader. The recent boom in large language models has been driven by several innovations pioneered at Google. For example, the Transformer architecture used by most advanced language models was first described in a 2017 paper from Google.
OpenAI has led Google in language modeling for several years now. But after the release of ChatGPT, Google significantly increased its AI investments, merging Google Brain and DeepMind into a single research lab with expanded resources and investing in Anthropic.
Google has tremendous financial resources, with $118 billion in cash on hand. In contrast, OpenAI’s last investment round in January raised only $10 billion. Perhaps it’s no surprise that Google can quickly ramp up spending to compete with other leading AI labs.
Yet it seems that Gemini will be only one member of the next generation of frontier models, as Inflection AI CEO Mustafa Suleyman says his company will also soon surpass the compute used to train GPT-4.
Inflection AI CEO on compute growth. Mustafa Suleyman, CEO of the rapidly growing Inflection AI, estimated that his company will be training models “100x larger than the current frontier models in the next 18 months.” Notably, this estimate is based not on predictions of acquiring new compute, but on the compute Inflection already owns.
Three years from now, he predicts the industry will be “training models that are 1,000x larger than they currently are.” This positions Suleyman as one of the many industry leaders anticipating rapid growth in AI compute and capabilities over the next few years.
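To put these multipliers in perspective, here is a minimal back-of-the-envelope sketch, not taken from the report or from Suleyman. It assumes a rough external estimate of ~2e25 FLOP for GPT-4’s training run, a figure none of the sources above state, scales that baseline by the quoted multipliers, and converts the 100x-in-18-months and 1,000x-in-three-years claims into implied compute doubling times.

```python
# Back-of-the-envelope compute projections based on the multipliers quoted above.
# ASSUMPTION: GPT-4's training compute is taken as ~2e25 FLOP, a rough external
# estimate that none of the cited sources confirm; all outputs are illustrative.
import math

GPT4_FLOP = 2e25  # assumed baseline, not an official figure

projections = {
    "Gemini by end of 2023 (5x GPT-4)": 5 * GPT4_FLOP,
    "Google capacity by end of 2024 (20x GPT-4)": 20 * GPT4_FLOP,
    "Inflection in ~18 months (100x frontier)": 100 * GPT4_FLOP,
    "Industry in ~3 years (1,000x frontier)": 1_000 * GPT4_FLOP,
}
for label, flop in projections.items():
    print(f"{label}: ~{flop:.0e} FLOP")

# Convert the growth claims into implied doubling times:
# doubling_time = horizon_in_months / log2(multiplier)
for multiplier, months in [(100, 18), (1_000, 36)]:
    print(f"{multiplier}x over {months} months -> doubling every "
          f"{months / math.log2(multiplier):.1f} months")
```

Under that assumed baseline, the 100x claim implies compute doubling roughly every 2.7 months, and the three-year 1,000x projection roughly every 3.6 months.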
US Military Invests in Thousands of Autonomous Drones
The US military announced major investments in autonomous weapons this week, highlighting the national security apparatus’s growing interest in AI development.
Replicator: “Thousands of autonomous systems.” Replicator is a new initiative from the U.S. Department of Defense. Within the next two years, it aims to deploy thousands of autonomous military systems, such as uncrewed aircraft and underwater drones.
The military will collaborate on this with groups in academia and industry. It will be directly overseen by Deputy Secretary of Defense Kathleen Hicks, a sign that this will be a high priority for the Department.
We’ve discussed risks from autonomous weapons, including the increasing speed of warfare and the potential for AI accidents to rapidly escalate into war. Beyond direct concerns about autonomous weapons, there may be broader impacts on AI development driven by the military’s interest in developing powerful AI systems.
The military may want to accelerate AI development, making the government less interested in slowing down and regulating commercial AI developers. International coordination could be more difficult to the extent that AI confers direct military advantages. On the other hand, by creating partnerships with academia and industry, the government could build institutional capacity to more effectively govern AI.
Complementary investments in AI trust and evaluation. Alongside its new investments in autonomous weapons, the U.S. Department of Defense is also launching the Center for Calibrated Trust Measurement and Evaluation. The program broadly intends to “operationalize responsible AI” as well as “value alignment.”
The program has received $20 million in funding for the next year. Potential goals include creating training and certification programs for AI, as well as testing and evaluating military AI systems.
United Kingdom Prepares for Global AI Safety Summit
In June, US President Joe Biden and UK Prime Minister Rishi Sunak agreed on a partnership to help establish global AI governance. The UK is now preparing to hold an AI Safety Summit later this year.
The summit will be held in Bletchley Park, near London, on November 1st and 2nd. It will bring together key countries, companies, academics and civil society organizations to discuss AI safety.
The UK recently outlined five objectives for the summit:
A shared understanding of the risks posed by frontier AI and the need for action
A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
Appropriate measures which individual organizations should take to increase frontier AI safety
Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
Showcase how ensuring the safe development of AI will enable AI to be used for good globally
The UK government said the summit will focus on “risks created or significantly exacerbated by the most powerful AI systems, particularly those associated with the potentially dangerous capabilities of these systems.” For example, the announcement lists the possibility that AI could threaten global biosecurity.
This summit will serve as a key opportunity for governments, AI developers, and other stakeholders to work on reducing the risks of advanced AI systems.
Case Studies in AI Policy
Can governments effectively execute AI policies? To answer that question, a new study examines the implementation of three recent AI policies in the United States. The findings indicate that less than 40% of the actions required by these policies have been implemented by the relevant federal agencies, revealing shortcomings in U.S. state capacity for AI governance.
The paper studies three recent AI policies:
AI in Government Act of 2020: Aims to encourage adoption of AI in the federal government, including by providing guidance to agencies on AI usage and establishing a center for government AI adoption within the General Services Administration (GSA).
Executive Order 13,859 (AI Leadership Order): Directs federal agencies to pursue six strategic objectives, including investing in AI R&D and building a competent AI workforce.
Executive Order 13,960 (Trustworthy AI Order): Focuses on harnessing AI to improve government operations, outlining nine principles for the lawful and responsible use of AI by federal agencies.
Limited success in implementing AI policies. Less than half of the 45 legal requirements across these laws were publicly verified as implemented. For example, the Trustworthy AI Order required federal agencies to document their use of AI, but only about half have done so, and many agencies that demonstrably use AI have not submitted the required documentation. Similarly, the AI Leadership Order directed each federal agency to issue an AI strategy, but 88% of agencies have failed to do so.
Recommendations for improving state capacity on AI. The report provides several recommendations:
Clarify Mandates: Agencies need explicit guidelines on compliance, what constitutes AI applications under these laws, and how to interpret non-responses.
Resource Allocation: Adequate funding and technical expertise must be provided to agencies to improve their capacity to implement AI policies.
Strong Leadership: A centralized authority or strong senior leadership is crucial for setting strategic AI priorities and ensuring effective implementation.
More broadly, one question is whether a single authority tasked with AI governance might be more effective than the current approach of diffusing responsibility across a wide variety of agencies with differing priorities.
Links
We’d appreciate your feedback on the newsletter! Please leave your thoughts here.
Subscribe here to receive future versions.