It is in the U.S. national interest to closely monitor frontier model capabilities.
You can be ambivalent about the usefulness of most forms of AI regulation and still favor oversight of the frontier labs.
As a temporary measure, using compute thresholds to pick out the AGI labs for safety-testing and disclosures is as light-touch and well-targeted as it gets.
The dogma that we should only regulate technologies based on “use” or “risk” may sound more market-friendly, but often results in a far broader regulatory scope than technology-specific approaches (see: the EU AI Act).
Training compute is an imperfect but robust proxy for model capability.
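To make the proxy concrete, here is a minimal sketch (not from the original argument) of how a compute threshold would classify a training run, using the common back-of-the-envelope estimate of training compute as roughly 6 × parameters × training tokens. The threshold constant, model size, and token count below are illustrative assumptions, not figures from any actual lab or regulation.

```python
# Illustrative sketch: estimating training compute and checking it against a
# regulatory threshold. All numbers are hypothetical.

TRAINING_FLOP_THRESHOLD = 1e26  # assumed threshold for illustration


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute via the common ~6 * N * D rule of thumb."""
    return 6.0 * n_parameters * n_training_tokens


def crosses_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """Would this (hypothetical) run trigger threshold-based oversight?"""
    return estimated_training_flops(n_parameters, n_training_tokens) >= TRAINING_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical frontier-scale run: 2 trillion parameters, 50 trillion tokens.
    flops = estimated_training_flops(2e12, 50e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")      # ~6.0e+26
    print("Crosses threshold:", crosses_threshold(2e12, 50e12))  # True
```

The point of the sketch is that the trigger depends only on quantities known before deployment, which is what makes a compute threshold a light-touch way to pick out frontier-scale runs.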
The risks from developing superintelligent AI are potentially existential, but hard to visualize. A better approach to communicating the risk is to illustrate the dangers posed by existing systems, and then to imagine how those dangers grow as AIs become steadily more capable.
An AGI could spontaneously develop harmful goals and subgoals, or a human could simply ask an obedient AGI to do things that are harmful. This second bucket requires far fewer assumptions about arcane results in decision theory. And even if those arcane results are where the real x-risk lies, it’s easier to build an intuition for the risks by working from the bottom up, as scenarios in which “AGI gets an...