We have written a paper that represents various frameworks for designing safe AGI (e.g., RL with reward modeling, CIRL, debate, etc.) as Causal Influence Diagrams (CIDs), to help us compare frameworks and better understand the corresponding agent incentives.
We would love to get comments, especially on
- Are the depicted frameworks represented accurately?
- Is the CID representation helpful?
- Are there frameworks we did not include that would be useful to model this way?
The paper's abstract:
Proposals for safe AGI systems are typically made at the level of frameworks, specifying how the components of the proposed system should be trained and interact with each other. In this paper, we model and compare the most promising AGI safety frameworks using causal influence diagrams. The diagrams show the optimization objective and causal assumptions of the framework. The unified representation permits easy comparison of frameworks and their assumptions. We hope that the diagrams will serve as an accessible and visual introduction to the main AGI safety frameworks.
I really like this layout, this idea, and the diagrams. Great work.
I don't agree that counterfactual oracles fix the incentive. There are black boxes in that proposal, like "how is the automated system not vulnerable to manipulation" and "why do we think the system correctly formally measures the quantity in question?" (see more potential problems). I think relying only on this kind of engineering cleverness is generally dangerous, because it produces safety measures we don't see how to break (and probably not safety measures that don't break).
Also, on page 10 you write that during deployment, agents appear as if they are optimizing the training reward function. As evhub et al. point out, this isn't generally true: the objective recoverable by perfect IRL from a trained RL agent's behavior often differs from the objective it was trained on (behavioral objective != training objective).
Glad to hear it :)