In 2022 and early 2023, I remember having conversations with folks about whether industry groups would support regulation. (At the time, the idea of governments becoming concerned enough about risks from advanced AI to consider regulation seemed like something that wouldn't happen for a long time, if ever.)
Perhaps I and some others were naive. But I genuinely bought the idea that if governments ended up getting more concerned about AI risks, some of the industry players would support common-sense regulations, especially the players that had acknowledged AI risks (e.g., Anthropic, OpenAI, DeepMind). I think this was also a fairly commonly held opinion within AI safety circles (and indeed was part of the case for why people concerned about AI safety should support Anthropic, OpenAI, and DeepMind).
In some ways, it's very sad to see how wrong these takes were. The "maybe industry will support regulations, or even proactively push for them" take has not aged well.
In other ways, I think the SB 1047 saga has mobilized a lot of AI governance/policy folks and caused them to realize where industry stands. There is a bit of poetic Litany of Gendlin energy here: if industry is not going to support even light-touch regulation, it is worth swallowing this difficult pill and orienting to the landscape accordingly.
(That said, I am interested in seeing how Anthropic reacts to the amendments. I think Anthropic has done a lot less for comms/policy efforts than many folks had predicted it would, and I think many are uncertain if its comms/policy efforts will differ meaningfully from its competitors.)
(I also think that as evidence of risk increases, some industry players might either change their minds or feel compelled to support regulation out of fear of looking super unreasonable.)
If you enjoy this, please consider subscribing to my Substack.
My latest reporting went up in The Nation yesterday:
It’s about the tech industry’s meltdown in response to SB 1047, a California bill that would be the country’s first significant attempt to mandate safety measures from developers of AI models more powerful and expensive than any yet known.
Rather than summarize that story, I’ve added context from some past reporting as well as new reporting on two big updates from yesterday: a congressional letter asking Newsom to veto the bill and a slate of amendments.
The real AI divide
After spending months on my January cover story in Jacobin on the AI existential risk debates, one of my strongest conclusions was that the AI ethics crowd (focused on the tech’s immediate harms) and the x-risk crowd (focused on speculative, extreme risks) should recognize their shared interests in the face of a much more powerful enemy — the tech industry:
And here’s how I ended that story:
This was true at the time I published it, but honestly, it felt like momentum was on the side of the AI safety crowd, despite its huge structural disadvantages (industry has way more money and armies of seasoned lobbyists).
Since then, it’s become increasingly clear that meaningful federal AI safety regulations aren’t happening any time soon. House Majority Leader Steve Scalise, a Republican, promised as much in June. But it turns out Democrats would also likely have blocked any national, binding AI safety legislation.
The congressional letter
Yesterday, eight Democratic California Members of Congress published a letter to Gavin Newsom, asking him to veto SB 1047 if it passes the state Assembly. There are serious problems with basically every part of this letter, which I picked apart here. (Spoiler: it's full of industry talking points repackaged under congressional letterhead.)
Many of the signers took lots of money from tech, so it shouldn’t come as too much of a surprise. I’m most disappointed to see that Silicon Valley Representative Ro Khanna is one of the signatories. Khanna had stood out to me positively in the past (like when he Skyped into The Intercept’s five-year anniversary party).
The top signatory is Zoe Lofgren, who I wrote about in The Nation story:
Later from the same story:
When he was shepherding the net neutrality bill, Wiener told me that he experienced similar industry resistance: “the telecoms and cable companies kept shouting, ‘no, no, no. This should be handled at the federal level.’” But he pointed out that these “are the same corporate actors that are making it impossible for Congress to” do so.
Overall, this congressional letter makes me very skeptical of the prospect of national AI safety regulations any time soon. Whenever a lobbyist says they prefer a federal law, keep in mind that the Representatives in their pocket are standing in the way of that.
The turning tides
In May, Politico reported:
You don’t have to be a Marxist to see that this was always the way things would shake out. The AI industry is barely subject to regulation right now, as I wrote in The Nation:
They’ve been able to write their own rules with no punishment for breaking them. They want to continue to do so because they think it will make them more money.
Some from the AI ethics crowd have argued that the industry is pushing the x-risk narrative to defer and control regulations. From my Jacobin story:
But SB 1047 has strong support from the AI safety community (and it was crafted with their input — the Center for AI Safety’s lobbying arm is a co-author). The bill has also faced fierce and nearly uniform opposition from the AI industry.
I think there are a few reasons for this:
I also don’t really see how SB 1047 hurts efforts to regulate AI’s immediate dangers. The best case I can think of is that it’s spending political capital that could be used on other efforts. But many of the bill’s opponents have favorably highlighted other pieces of legislation that target immediate harms from AI.
Sneha Revanur, founder of Encode Justice and a bill co-author, put it well in our conversation:
I reported something similar in Jacobin:
Capitalism vs. democracy
SB 1047 has flown through the state legislature nearly unopposed. The bill could pass the state Assembly with overwhelming support, but still die with a veto from Newsom.
This is clearly what industry is counting on.
But couldn’t the legislature override the veto? Technically, yes. But that hasn’t happened since 1979, and supporters don’t expect that to change here.
So industry is training its guns on the Governor, using Members of Congress from his state and party to rehash its talking points, in the hopes that Newsom vetoes a bill that has enjoyed extremely strong support in the legislature, as well as strong statewide support in three public opinion polls.
Yesterday’s amendments to SB 1047
Fearing this veto, supporters have watered down the bill in the hopes of softening industry opposition. Yesterday, the SB 1047 team published summaries of the latest round of amendments.
Overall, the changes further narrow the scope of the bill to address a lot of specific concerns. The “reasonable assurance” standard was changed to the weaker “reasonable care.” The proposed new regulatory agency, the Frontier Model Division, is gone. One change expanded the scope of when the California Attorney General (AG) can seek injunctive relief (i.e., a court order to halt an activity).
Many eyes are now on Anthropic, which is the only major AI company that may actually support the bill. From my reporting in The Nation:
No other major AI company took a “support if amended” position.
Anthropic also appears to have gotten most of the changes they requested.
Their biggest request — to remove pre-harm enforcement — seems to have been mostly met. The state AG can now only seek civil penalties if a model has caused catastrophic harm or poses an imminent risk to public safety. Previously, the AG could seek penalties if a covered developer didn’t comply with the safety measures required by the bill, even if their model didn’t harm anyone.
I am not a lawyer, but I think this change sounds like a bigger weakening of the bill than it actually is. I think this analysis from AI safety researcher Michael Cohen is right:
Cohen also thinks it’s “jaw-dropping” that Anthropic’s letter included this line:
According to Cohen, the main things they asked for and didn’t get were:
A spokesperson for Anthropic wrote to me: "We are reviewing the new bill language as it becomes available."
I don’t expect any other major AI company to support the bill, even in its amended form.
The amendments have changed at least some minds.
Samuel Hammond, a fellow at a centrist think tank, once opposed the bill. But in response to the amendments, he wrote: “All these changes are great. This has shaken out into a very reasonable bill.”
Ethereum creator Vitalik Buterin also praised the amendments and wrote that they further addressed his original top two concerns.
In his thread analyzing the amendments, Cohen wrote:
This gets to the heart of the problem with self-regulation. If it’s faster/cheaper to build powerful AI systems unsafely, that’s what racing actors will do. The incentives only get stronger as AI systems get more powerful (and profitable).
Self-regulation is already showing itself to be insufficient, as I wrote in The Nation:
According to AI’s leading industrialists, the stakes couldn’t be higher:
I’ll leave you with my favorite quote I got for the story: