It sounds like you're thinking mostly about voluntary standards. I think legislated standards are a real possibility (as the public gets more freaked out by both powerful non-agentic systems like ChatGPT, and less powerful but clearly self-directed systems). I think legislated standards are subject to this tradeoff a bit less. Legislators have much less reason to care how difficult standards are to adhere to. Therefore, how good a standard sounds to the public becomes a bigger criterion, and that has only an indirect relationship to both ease of implementation and actual usefulness.
It seems to me like government-enforced standards are just another case of this tradeoff: they are quite a bit more useful, in the sense of carrying the force of law and applying to all players on a non-voluntary basis, and harder to implement, because legislators' attention is elsewhere, a good proposal is likely to get turned into something bad during the legislative process, and spending the political capital has an opportunity cost.
Epistemic status: We think this is a simple and common idea in discussions about AI governance proposals. We don’t expect this to be controversial, but we think it might be useful to put a label on it and think about it explicitly.
Suppose an AGI lab could choose between two safety standards:
- Standard 1: a demanding standard that would meaningfully reduce x-risk but would be costly for the lab to adopt.
- Standard 2: a lighter standard that is much cheaper for the lab to adopt but does less to reduce x-risk.
All else equal, labs prefer less costly restrictions to more costly ones. Standard 2 is less demanding on labs, but it's also less helpful in reducing x-risk.
Core point: There is often a tradeoff between the feasibility of a proposal (its likelihood of being implemented) and its usefulness (its expected impact on reducing x-risk, conditional on it being implemented).
We commonly observe this tradeoff in various conversations about AI governance. We don’t think this is a new idea, but sometimes having a term for something helps us track the concept more clearly.
We refer to this as the usefulness-feasibility tradeoff.
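To make the tradeoff concrete, here is a minimal sketch with invented numbers (the feasibility and usefulness figures are purely illustrative, not estimates): if we score each proposal by feasibility times usefulness, a less useful but more feasible standard can come out ahead in expectation.

```python
# Minimal sketch of the usefulness-feasibility tradeoff.
# All numbers are invented for illustration; they are not estimates.

standards = {
    # feasibility: probability the standard actually gets implemented
    # usefulness: x-risk reduction conditional on implementation
    "Standard 1 (demanding)": {"feasibility": 0.10, "usefulness": 0.25},
    "Standard 2 (lighter)":   {"feasibility": 0.60, "usefulness": 0.05},
}

for name, s in standards.items():
    # Expected impact = P(implemented) * impact conditional on implementation
    expected_impact = s["feasibility"] * s["usefulness"]
    print(f"{name}: expected x-risk reduction = {expected_impact:.3f}")

# With these made-up numbers (0.025 vs. 0.030), the lighter standard wins
# in expectation, even though it is far less useful conditional on being
# implemented.
```

Of course, collapsing the tradeoff into a single product ignores things like precedent-setting and interactions between proposals; the point is only to show how the two axes trade off against each other.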
Additional points