Just what it says on the tin. Covered almost everywhere, but the quote in this Reuters article stood out to me the most:

Newsom said the bill "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data" and would apply "stringent standards to even the most basic functions — so long as a large system deploys it."

Emphasis mine. So the governor says he vetoed it for the exact reasons that, in my view, would have made it a good law.

1 comment:

This feels like a bigger setback than the generic case of good laws failing to pass.

What I am thinking about currently is momentum, which is surprisingly important to the legislative process. There are two dimensions that make me sad here:

  1. There might not be another try. It is extremely common for bills to disappear or get stuck in limbo after being rejected in this way. The kinds of bills that keep reappearing until they succeed are those with a dedicated and influential special interest behind them, and I don't think AI safety has one.
  2. There won't be any mimicry. If SB 1047 had passed, it would have been a model for future regulation. Now it won't be, except where that regulation is being driven by the same people and orgs behind SB 1047.

I worry that the failure of the bill may go so far as to discredit the approaches it used, leaving more space for more traditional laws that are burdensome, overly specific, and designed with winners and losers in mind.

We'll have to see how the people behind SB 1047 respond to the setback.