This is the first essay in my series How do we govern AI well? For more on the broader vision, see the introduction.
Abstract
This essay argues that we face a fundamental problem in AI governance: governance mechanisms tend to degrade as AI capabilities advance. This degradation isn't random; it's systematic and predictable. The strongest evidence for this claim is that every governance approach we might consider (liability frameworks, auditing, industry self-regulation, etc.) contains inherent scaling limitations that cause it to fail precisely when it becomes most critical.
The core of my argument is that different governance mechanisms have different "scaling curves" (some degrade quickly with capability advances while others are more robust), and understanding these scaling curves…
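To make the "scaling curve" idea concrete, here is a minimal toy sketch in Python. Everything in it is assumed for illustration: the exponential functional form, the decay rates, and the mechanism labels are placeholders of my own, not parameters or claims from the essay.

```python
# Toy model of governance "scaling curves": effectiveness as a function of
# AI capability level. Functional form and parameters are illustrative
# assumptions, not estimates from the essay.
import math

def effectiveness(capability: float, decay_rate: float) -> float:
    """Hypothetical effectiveness of a governance mechanism, in [0, 1].

    Modeled as exponential decay in capability; decay_rate controls how
    quickly the mechanism degrades as capabilities advance.
    """
    return math.exp(-decay_rate * capability)

# Hypothetical mechanisms with assumed decay rates.
mechanisms = {
    "industry self-regulation": 0.9,   # degrades fastest in this toy model
    "auditing": 0.5,
    "liability frameworks": 0.2,       # most robust in this toy model
}

for name, rate in mechanisms.items():
    # Capability level at which effectiveness falls below 50%:
    # exp(-rate * c) = 0.5  =>  c = ln(2) / rate
    threshold_capability = math.log(2) / rate
    curve = ", ".join(
        f"{effectiveness(c, rate):.2f}" for c in (0.5, 1.0, 2.0, 4.0)
    )
    print(f"{name:>26}: eff at cap 0.5/1/2/4 = {curve}; "
          f"falls below 0.5 at capability {threshold_capability:.2f}")
```

The only point of the sketch is that mechanisms with different decay rates fall below a usefulness threshold at very different capability levels, which is the kind of crossover the abstract's comparison between fast-degrading and more robust mechanisms turns on.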
Most intriguingly, this hints at a testable hypothesis: policy markets should show greater inefficiency in domains where feedback loops are longest and most distorted. I would bet that FOIA requests on national security consistently outperform economic-policy FOIA requests in information yield, despite receiving less attention.