The idea with my framework is that punitive damages would be available only to the extent that the most cost-effective risk mitigation measures the AI system developer/deployer could have taken to further reduce the likelihood and/or severity of the practically compensable harm would also tend to mitigate the uninsurable risk. I agree that there's a potential Goodhart problem here, in which the prospect of liability could give AI companies strong incentives to eliminate warning shots without doing very much to mitigate the underlying catastrophic risk. For this reason, I think it's really important that the punitive damages formula put heavy weight on the elasticity of the particular practically compensable harm at issue with respect to the associated uninsurable risk.
The version of your concern that I endorse is that this framework wouldn't work very well in worlds where warning shots are rare (or, more to the point, are expected to be rare by the key decision-makers). It can deter large incidents, but only those that are associated with more likely small incidents. If the threat models you're most worried about are unlikely to produce such near-misses, then the expectation of liability is unlikely to be a sufficient deterrent. It's not clear to me that there are politically viable policies that would significantly mitigate those kinds of risks, but I plan to address that question more deeply in future work.
I have covid and need to cancel this event (which was scheduled to take place in my home). Sorry!
"I think joint & several liability regimes will resolve this. In the sense that, it's not 100% the companies fault; it'll be shared by the programmers, the operator, and the company."
J&S doesn't do much good if the harm is practically non-compensable because no one is alive to sue or be sued, or because the legal system is no longer functioning. Even for harms that are merely financially uninsurable, it enlarges the maximum practically compensable harm by less than an order of magnitude.
"Yes. Here I ask: what about legal systems that use delictual law instead of tort law? The names, requirements and such are different. In other words, you'll get completely different legal treatment for international AI's. This creates a whole new can of worms that defeats legal certainty and the rule of law."
I encourage experts in other legal systems to conduct similar analyses to mine regarding how liability is likely to attach to AI harms and what doctrinal/statutory levers could be pulled to achieve more favorable rules.