This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback.
The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.
Compute governance is a type of AI governance that focuses on controlling access to the computing hardware needed to develop and run AI. It has been argued that regulating compute is particularly promising compared to regulating other inputs to AI progress, such as data, algorithms, or human talent.
Although compute governance is one of the more frequently proposed strategies for AI governance, as of November 2024, there are few policies in place for governing compute, and much of the research on the topic is exploratory. Currently enforced measures related to compute governance include US export controls on advanced microchips to China and reporting requirements for large training runs in the US and EU.
According to Sastry et al., compute governance can be used toward three main ends:
Visibility is the ability of policymakers to know what’s going on in AI, so they can make informed decisions. The amount of compute used for a training run serves as a rough indicator of the capabilities and risks of the resulting system. Measures to improve visibility could include:
Using public information to estimate the compute used (a rough worked example follows this list).
Requiring AI developers and cloud providers to report large training runs.
Creating an international registry for AI chips.
Designing systems that monitor the general workloads run on AI chips while preserving the privacy of sensitive information.
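To make the estimation idea concrete, here is a minimal sketch, assuming the common approximation that training a dense transformer takes roughly 6 × parameters × training tokens floating-point operations. The model figures and the 10^26 FLOP threshold (in the spirit of the reporting threshold in the 2023 US executive order on AI) are illustrative assumptions, not a description of any actual reporting regime.

```python
# Rough sketch: estimate training compute from public information and
# compare it against an illustrative reporting threshold.
# Assumes FLOP ≈ 6 * parameters * training tokens for dense transformers.

def estimate_training_flop(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training FLOP for a dense transformer."""
    return 6.0 * num_parameters * num_tokens

# Hypothetical publicly reported figures for a large model.
params = 7e10    # 70 billion parameters
tokens = 1.5e13  # 15 trillion training tokens

flop = estimate_training_flop(params, tokens)

# Illustrative threshold (in the spirit of the 1e26-operation reporting
# threshold in the 2023 US executive order on AI).
REPORTING_THRESHOLD_FLOP = 1e26

print(f"Estimated training compute: {flop:.2e} FLOP")
print("Above reporting threshold" if flop >= REPORTING_THRESHOLD_FLOP
      else "Below reporting threshold")
```

Estimates like this are coarse, but they let outside observers and regulators get an order-of-magnitude sense of a training run without access to proprietary details.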
Allocation refers to policymakers influencing the amount of compute available to different projects. Strategies in this category include:
Making compute available for research toward technologies that increase safety and defensive capabilities, or that substitute for more dangerous alternatives.
Speeding up or slowing down the general rate of AI progress.
Restricting or expanding the range of countries or groups with access to certain systems.
Creating an international megaproject aimed at developing AI technologies — such proposals are sometimes called “CERN for AI”.
Enforcement is about policymakers ensuring that the relevant actors abide by their rules. This could potentially be enabled by the right kind of software or hardware; hardware-based enforcement is likely to be harder to circumvent. Strategies here include:
Restricting networking capabilities to make chips harder to use in very large clusters.
Modifying chips to add cryptographic mechanisms that automatically verify or enforce restrictions on the types of tasks the chips can be used for.
Designing chips so that they can be controlled multilaterally, similar to “permissive action links” for nuclear weapons (a simplified sketch of such multi-party approval follows this list).
Restricting access to compute through, for instance, cloud compute providers.
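As a rough illustration of the cryptographic and multilateral-control ideas above, the sketch below models firmware that runs a workload only if enough independent authorities have approved it. The authority names, shared keys, and 2-of-3 rule are hypothetical, and real proposals would rely on digital signatures and secure hardware rather than shared HMAC secrets.

```python
# Minimal sketch of multilateral workload approval, loosely analogous to
# "permissive action links": firmware runs a workload only if enough
# independent authorities have approved it. Uses HMAC over a workload hash
# for simplicity; real schemes would use signatures and secure hardware.

import hashlib
import hmac

# Hypothetical shared secrets between the chip and each authority.
AUTHORITY_KEYS = {
    "authority_a": b"secret-key-a",
    "authority_b": b"secret-key-b",
    "authority_c": b"secret-key-c",
}
APPROVALS_REQUIRED = 2  # hypothetical 2-of-3 rule


def workload_digest(workload: bytes) -> bytes:
    """Hash of the workload description (e.g., a training job config)."""
    return hashlib.sha256(workload).digest()


def approve(authority: str, workload: bytes) -> bytes:
    """An authority's approval token: an HMAC over the workload hash."""
    return hmac.new(AUTHORITY_KEYS[authority],
                    workload_digest(workload), hashlib.sha256).digest()


def firmware_allows(workload: bytes, approvals: dict[str, bytes]) -> bool:
    """Chip-side check: run only if enough valid approvals are present."""
    digest = workload_digest(workload)
    valid = 0
    for authority, token in approvals.items():
        key = AUTHORITY_KEYS.get(authority)
        if key is None:
            continue
        expected = hmac.new(key, digest, hashlib.sha256).digest()
        if hmac.compare_digest(expected, token):
            valid += 1
    return valid >= APPROVALS_REQUIRED


job = b"train: 70B parameters, 1.5e13 tokens"
approvals = {name: approve(name, job) for name in ("authority_a", "authority_b")}
print(firmware_allows(job, approvals))  # True: 2 of 3 authorities approved
```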
Many of these mechanisms are speculative and would require further research before they could be implemented. They could end up being risky or ineffective. However, many safety researchers think compute governance would help avert major existential risks to humanity.
Further reading:
What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring