Epistemic status: The following isn't an airtight argument, but mostly a guess about how things play out.
Consider two broad possibilities:
I. In worlds where we are doing reasonably well on alignment, the AI control agenda does not have much impact.
II. In worlds where we are failing at alignment, AI control may primarily shift probability mass away from "moderately large warning shots" and towards "ineffective warning shots" and "existential catastrophe, full takeover".
The key heuristic is that the global system already has various mechanisms and feedback loops that resist takeover by a single agent (e.g. it is not easy to overthrow the Chinese government). In most cases where AI control would stop an unaligned AI, the counterfactual is that broader civilizational resistance would have stopped it anyway, but with the important side effect of a moderately sized warning shot.
I expect moderately sized warning shots to increase the chances that humanity as a whole takes serious action and, for example, steps up efforts to align the frontier labs.
I am skeptical that incidents stopped by AI control would lead to meaningful change. Sharing details of such an event with proper framing could pose an existential risk, but to the lab involved. In practice, I anticipate vague, sanitized communications along the lines of "our safety systems performed as designed, preventing bad things". Without clear, compelling evidence of the severity of the averted threat, these incidents are unlikely to catalyze serious action. The incentives for labs to downplay and obscure such events will be strong.
There are additional factors to consider, such as AI control likely moving some resources away from alignment, but I don't think this is the dominant effect.
Note that this isn't a general argument against boxing; e.g., boxes based more on formal methods or theory have a better chance of generalizing.
Typical counter-arguments to this line of reasoning seem to be:
- We will extract useful "automated alignment" work from the unaligned AIs inside of the control scheme. I'm skeptical; I will cover this in a separate post.
- Isn't this a general counter-argument against alignment research as well? In my view, no: details matter, and different strains of alignment research have different generalization profiles.
Note: this text lived in draft form before John Wentworth posted his Case Against AI Control Research; my original intent was to extend it a bit more toward discussing AI control generalization properties. As this would be redundant now, I'm posting it as it is: there is some non-overlapping part.
Sure, or with properly implemented ~anything related to controlling the AIs' behavior. I don't really expect incidents like Sydney in the future, nor do I think Sydney was that helpful in motivating a societal response. So, this doesn't feel like a meaningfully representative example.
I agree, by "we caught", I mean "the AI company". Probably a poor choice of language.
Sure, but a large part of my point is that I don't expect public facing accidents (especially not accidents that kill people) until it's too late, so this isn't a very relevant counterfactual.
This feels like a special case of escape to me, which would probably cause a minimal response from the world, as this only results in some particular fine-tune of an already open-weights AI. So, you were probably already getting whatever warning shots you were going to get from the open-weights AI.
I don't think so. For every one of those failure modes other than escape, there is no chance of detection in the wild, so the choice is between catching the internal issue or catching nothing. I also think escape is moderately likely to go undetected (if not caught within the AI company). Part of my perspective might come from thinking takeoff is faster than you do, or from focusing more on fast-takeoff worlds.
(FWIW, I also think that in relatively desperate scenarios, preventing escape isn't that high of a priority for control, though the possibility of warning shots doesn't factor into this very much.)
Why do you assume this isn't captured by the control schemes we're targeting? It feels like a special case to me. I am in practice less worried about this than you seem to be, but I do think we should analyze questions like "could the AIs be leading people astray in costly ways", and it seems pretty doable to improve the default tradeoffs here.