Supposing SB1047 passes, what are the main ways in which you think it will contribute to (increasing or decreasing) existential risks?
Idea #1: Transparency through SSPs
It seems to me like the main way it could reduce AI risk is by making it more likely that governments and/or members of the public are able to notice (a) signs of dangerous capabilities or (b) problems in companies' safety and security plans.
One counter to this is that companies will be able to write things that sound good but don't actually give governments or members of the public the ability to meaningfully understand what's going on. For example, there are ways to write "we have an AI that is able to do novel AI R&D tasks" in ways that underplay the extent or nature of the risks.
The counter to that seems to me like "well yes, we still have to deal with the fact that companies can try to make vague/unhelpful SSPs [cf. the discourse about RSPs] and we have to deal with the fact that governments will need to have sufficient risk context to understand the SSPs and understand when they need to intervene. You can't expect one bill in 2024 to solve all of the major challenges, but it pushes in the right direction and on the margin more transparency seems good."
Idea #2: Whistleblower mechanisms
The whistleblower protections seem great– and perhaps even better than "whistleblower protections" are "whistleblower mechanisms." I am worried about the amount of friction RE whistleblowing– who do you go to, what channels do you use, what are you supposed to say, etc etc. The fact that companies would have to provide "a reasonable internal process" and employees are explicitly told they can use this channel to disclose "misleading statements related to its safety and security protocol" or "failure to disclose known risks to employees" seems excellent.
The counter is that a whistleblower could say "I'm worried about X" and the most likely outcome is that the government ignores X because it doesn't have enough time/attention/technical capacity/risk context to understand why it should care or what it should do. But of course the counter to that counter is once again "yes, this doesn't fix everything, but surely we are better off in a world where clear whistleblower mechanisms exist, and hopefully there will be other efforts to make sure the government is equipped with the personnel/resources/processes it needs to notice and prepare for AI risks."
(Meta: After the political battles around SB1047 end, I hope more people end up writing about if/how they think SB1047 will/would affect AI risks. So much of the discourse got [perhaps rightfully] hijacked by political fights about claims that seem pretty ridiculous. It's plausible that this was the right call– political battles need to be fought sometimes– but on the margin I wish there was more "Here's the affirmative/positive case for why SB1047 is good", "Here are some concerns I have about SB1047 from the perspective of reducing existential risk, not from the perspective of responding to industry talking points", and things like "SB1047 seems to help but only if governments also develop X and Y in the upcoming years, and here are some thoughts on how they could do that.")
Similarly, his ideas of things like ‘a truth seeking AI would keep us around’ seem to me like Elon grasping at straws and thinking poorly, but he’s trying.
The way I think about Elon is that he's very intelligent but essentially not open to any new ideas or capable of self-reflection about whether his ideas are wrong, except on technical matters: if he can't clearly follow the logic himself on the first try, or if there's a reason the idea would be uncomfortable or difficult to accept, he'll ignore it initially and won't believe you. But he is smart.
Essentially, he got one good idea about AI risk into his head 10+ years ago and therefore says locally good things and isn't simply lying when he says them, but it doesn't hang together in his head in a consistent way (e.g. if he thought international stability and having good AI regulation was a good idea he wouldn't be supporting the candidate that wants to rethink all US alliances and would impair the federal government's ability to do anything new and complicated, with an e/acc as his running mate). In general, I think one of the biggest mental blind spots EA/rationalist types have is overestimating the coherence of people's plans for the future.
The Economist is opposed, in a quite bad editorial calling belief in the possibility of a catastrophic harm ‘quasi-religious’ without argument, and uses that to dismiss the bill, instead calling for regulations that address mundane harms. That’s actually it.
I find this especially strange, almost to the point that I'm willing to call it knowingly bad faith. The Economist has in the past sympathetically interviewed Helen Toner, done deep-dive investigations into mechanistic interpretability research at a higher level of analysis than I've seen from any other mainstream news publication, and run articles acknowledging the soft consensus among tech workers and AI experts on the dangers, which included survey results, so it's doubly difficult to dismiss it as "too speculative" or "scifi".
To state without elaboration that the risk is "quasi-religious" or "science fictional" when their own journalists have consistently said the opposite and provided strong evidence that the AI world generally agrees makes me feel like someone higher up changed their mind for some reason regardless of what their own authors think.
The one more concrete reference they gave was to the very near-term (within the next year) prospect of AI systems being used to assist in terrorism, which has indeed been slightly exaggerated by some, but to claim that there's no idea whatsoever about where these capabilities could be in 3 years is absurd given what they themselves have said in previous articles.
Without some explanation as to why they think genuine catastrophic misuse concerns are not relatively near term and relatively serious (e.g. explaining why they think we won't see autonomous agents that could play a more active role in terrorism if freely available) it just becomes the classic "if 2025 is real why isn't it 2025 now" fallacy.
The short argument I've been using is:
If you want to oppose the bill you, as a matter of logical necessity, have to believe some combination of:
1. No significant near-term catastrophic AI risks exist that warrant this level of regulation.
2. Significant near-term risks exist, but companies shouldn't be held liable for them (i.e. you're an extremist ancap).
3. Better alternatives are available to address these risks.
4. The bill will be ineffective or counterproductive in addressing these risks.
The best we get is vague hints that (1) is true from some tech leaders, but e.g. Google and OpenAI definitely don't believe (1); or we get vague pie-in-the-sky appeals to (3), as if the federal government is working efficiently on frontier tech issues right now; or claims for (4) that either lie about the content of the bill, e.g. claiming it applies to small startups and academics, or fearmonger towards (4), like claiming every tech company in California will up sticks and leave, or that it will so impair progress that China will inevitably win, so the bill will not achieve its stated aims.
- Internet hosting platforms are responsible for ensuring indelible watermarks.
The bill requires that "Generative AI hosting platforms shall not make available a generative AI system that does not allow a GenAI provider, to the greatest extent possible and either directly providing functionality or making available the technology of a third-party vendor, to apply provenance data to content created or substantially modified by the system"
This means that sites running GenAI models need to allow the GenAI systems to implement their required watermarking, not that hosting providers (imgur, reddit, etc.) need to do so. Less obviously good, but still important: it also doesn't require the GenAI hosting provider to ensure the watermark is indelible, just that they include watermarking via either the model, or a third-party tool, when possible.
OpenAI is known to have been sitting on a 99.9% effective (by their own measure) watermarking system for a year. They chose not to deploy it.
Do you have a source for this?
Which the old version certainly would have done. The central thing the bill intends to do is to require effective watermarking for all AIs capable of fooling humans into thinking they are producing ‘real’ content, and labeling of all content everywhere.
OpenAI is known to have been sitting on a 99.9% effective (by their own measure) watermarking system for a year. They chose not to deploy it, because it would hurt their business – people want to turn in essays and write emails, and would rather the other person not know that ChatGPT wrote them.
As far as we know, no other company has similar technology. It makes sense that they would want to mandate watermarking everywhere.
Is watermarking actually really difficult? The overall concept seems straightforward, the most obvious ways to do it don't require any fiddling with model internals (so you don't need AI expertise, or expensive human work for your specific system like RLHF), and Scott Aaronson claims that a single OpenAI engineer was able to build a prototype pretty quickly.
I imagine if this becomes law some academics can probably hack together an open source solution quickly. So I'm skeptical that the regulatory capture angle could be particularly strong.
(I might be too optimistic about the engineering difficulties and amount of schlep needed, of course).
If the academics can hack together an open source solution why haven't they? Seems like it would be a highly cited, very popular paper. What's the theory on why they don't do it?
Just spitballing, but it doesn't seem theoretically interesting to academics unless they're bringing something novel (algorithmically or in design) to the table, and practically not useful unless implemented widely, since it's trivial for e.g. college students to use the least watermarked model.
Two responses.
One, even if no one used it, there would still be value in demonstrating it was possible - if academia only develops things people will adapt commercially right away then we might as well dissolve academia. This is a highly interesting and potentially important problem, people should be excited.
Two, there would presumably at minimum be demand to give students (for example) access to a watermarked LLM, so they could benefit from it without being able to cheat. That's even an academic motivation. And if the major labs won't do it, someone can build a Llama version or what not for this, no?
Yeah, I think the simplest thing for image generation is for model hosting providers to use a separate tool - and lots of work on that already exists. (see, e.g., this, or this, or this, for different flavors.) And this is explicitly allowed by the bill.
For text, it's harder to do well, and you only get weak probabilistic identification, but it's also easy to implement an Aaronson-like scheme, even if doing it really well is harder. (I say easy because I'm pretty sure I could do it myself, given, say, a month working with one of the LLM providers, and I'm wildly underqualified to do software dev like this.)
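To make that concrete, here is a minimal toy sketch of what I understand an Aaronson-style sampling watermark to look like – this is emphatically not OpenAI's actual system, and every name here (prf, SECRET_KEY, the stand-in "model") is purely illustrative. The idea is to bias sampling toward tokens with high values of a keyed pseudorandom function, in a way that still preserves the model's output distribution, and then detect by recomputing those values over the text.

```python
import hashlib
import math
import random

SECRET_KEY = b"illustrative-secret-key"  # assumption: shared by generator and detector

def prf(prev_tokens, candidate, key=SECRET_KEY):
    """Keyed pseudorandom value in (0, 1) computed from the secret key, the
    recent context, and a candidate token. The detector can recompute it."""
    data = key + repr((tuple(prev_tokens[-4:]), candidate)).encode()
    h = hashlib.sha256(data).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2**64 + 2)

def watermarked_sample(prev_tokens, probs):
    """Pick the token maximizing r ** (1 / p) (Gumbel-max trick): each token is
    still chosen with its model probability p, but the chosen token's r value
    is skewed toward 1, which is the watermark signal."""
    return max(probs, key=lambda tok: prf(prev_tokens, tok) ** (1.0 / max(probs[tok], 1e-9)))

def detection_score(tokens):
    """Average of -ln(1 - r) over the text: roughly 1.0 for ordinary text,
    noticeably higher for watermarked text of any reasonable length."""
    scores = [-math.log(1.0 - prf(tokens[:i], tok)) for i, tok in enumerate(tokens)]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Toy demo: a random stand-in for an LLM's next-token distribution.
    rng = random.Random(0)
    vocab = ["the", "a", "cat", "sat", "on", "mat"]

    def fake_model_probs():
        w = [rng.random() for _ in vocab]
        return {tok: x / sum(w) for tok, x in zip(vocab, w)}

    marked, plain = [], []
    for _ in range(300):
        probs = fake_model_probs()
        marked.append(watermarked_sample(marked, probs))
        plain.append(rng.choices(list(probs), weights=list(probs.values()))[0])

    print("watermarked score:", round(detection_score(marked), 2))   # noticeably above 1
    print("unwatermarked score:", round(detection_score(plain), 2))  # close to 1
```

The genuinely hard part, as far as I can tell, is not this core trick but making detection robust to paraphrasing, edits, translation, and short passages – which is where I'd expect the real schlep to be.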
This is the endgame. Very soon the session will end, and various bills either will or won’t head to Newsom’s desk. Some will then get signed and become law.
Time is rapidly running out to have your voice impact that decision.
Since my last weekly, we got a variety of people coming in to stand for or against the final version of SB 1047. There could still be more, but probably all the major players have spoken at this point.
So here, today, I’m going to round up all that rhetoric, all those positions, in one place. After this, I plan to be much more stingy about talking about the whole thing, and only cover important new arguments or major news.
I’m not going to get into the weeds arguing about the merits of SB 1047 – I stand by my analysis in the Guide to SB 1047, and the reasons I believe it is a good bill, sir.
I do however look at the revised AB 3211. I was planning on letting that one go, but it turns out it has a key backer, and thus seems far more worthy of our attention.
The Media
I saw two major media positions taken, one pro and one anti.
Neither worried itself about the details of the bill contents.
The Los Angeles Times Editorial Board endorses SB 1047, since the Federal Government is not going to step up, and using an outside view and big picture analysis. I doubt they thought much about the bill’s implementation details.
The Economist is opposed, in a quite bad editorial calling belief in the possibility of a catastrophic harm ‘quasi-religious’ without argument, and uses that to dismiss the bill, instead calling for regulations that address mundane harms. That’s actually it.
OpenAI Opposes SB 1047
The first half of the story is that OpenAI came out publicly against SB 1047.
They took four pages to state their only criticism in what could have and should have been a Tweet: That it is a state bill and they would prefer this be handled at the Federal level. To which, I say, okay, I agree that would have been first best and that is one of the best real criticisms. I strongly believe we should pass the bill anyway because I am a realist about Congress, do not expect them to act in similar fashion any time soon even if Harris wins and certainly not if Trump wins, and if they pass a similar bill that supersedes this one I will be happily wrong.
Except the letter is four pages long, so they can echo various industry talking points, and echo their echoes. In it, they say: Look at all the things we are doing to promote safety, and the bills before Congress, OpenAI says, as if to imply the situation is being handled. Once again, we see the argument ‘this might prevent CBRN risks, but it is a state bill, so doing so would not only not be first best, it would be bad, actually.’
They say the bill would ‘threaten competitiveness’ but provide no evidence or argument for this. They echo, once again without offering any mechanism, reason or evidence, Rep. Lofgren’s unsubstantiated claims that this risks companies leaving California. The same with ‘stifle innovation.’
In four pages, there is no mention of any specific provision that OpenAI thinks would have negative consequences. There is no suggestion of what the bill should have done differently, other than to leave the matter to the Feds. A duck, running after a person, asking for a mechanism.
My challenge to OpenAI would be to ask: If SB 1047 was a Federal law, that left all responsibilities in the bill to the USA AISI and NIST and the Department of Justice, funding a national rather than state Compute fund, and was otherwise identical, would OpenAI then support it? Would they say their position is Support if Federal?
Or, would they admit that the only concrete objection is not their True Objection?
I would also confront them with AB 3211, but hold that thought.
My challenge to certain others: Now that OpenAI has come out in opposition to the bill, would you like to take back your claims that SB 1047 would enshrine OpenAI and others in Big Tech with a permanent monopoly, or other such Obvious Nonsense?
I think this is generous. OpenAI did not explain how they think AI should be regulated, other than that it should not be by California. I couldn’t find a single thing in the bill that OpenAI was willing to name that they would not want the Federal Government to do.
Two former OpenAI employees point out some obvious things about OpenAI deciding to oppose SB 1047 after speaking of the need for regulation. To be fair, Rohit is very right that any given regulation can be bad, but again they only list one specific criticism, and do not say they would support if that criticism were fixed.
OpenAI Backs AB 3211
For SB 1047, OpenAI took four pages to say essentially one sentence: that they would prefer this be handled at the Federal level rather than by California.
So presumably that would mean they oppose all state-level regulations. They then go on to note they support three federal bills. I see those bills as a mixed bag, not unreasonable things to be supporting, but nothing in them substitutes for SB 1047.
Again, I agree that would be the first best solution to do this Federally. Sure.
For AB 3211, they… support it? Wait, what?
You’re supposed to be able to request such things. I have been trying for several days to get a copy of the support letter, getting bounced around by several officials. So far, I got them to say they got my request, but no luck on the actual letter, so we don’t get to see their reasoning, as the article does not say. Nor does it clarify if they offered this support before or after recent changes. The old version was very clearly a no good, very bad bill with a humongous blast radius, although many claim it has since been improved to be less awful.
OpenAI justifies this position as saying ‘there is a role for states to play’ in such issues, despite AB 3211 very clearly being similar to SB 1047 in the degree to which it is a Federal law in California guise. It would absolutely apply outside state lines and impose its rules on everyone. So I don’t see this line of reasoning as valid. Is this saying that preventing CBRN harms at the state level is bad (which they actually used as an argument), but deepfakes don’t harm national security so preventing them at the state level is good? I guess? I mean, I suppose that is a thing one can say.
The bill has changed dramatically from when I looked at it. I am still opposed to it, but much less worried about what might happen if it passed, and supporting it on the merits is no longer utterly insane if you have a different world model. But that world model would have to include the idea that California should be regulating frontier generative AI, at least for audio, video and images.
There are three obvious reasons why OpenAI might support this bill.
The first is that it might be trying to head off other bills. If Newsom is under pressure to sign something, and different bills are playing off against each other, perhaps they think AB 3211 passing could stop SB 1047 or one of many other bills – I’ve only covered the two, RTFB is unpleasant and slow, but there are lots more. Probably most of them are not good.
The second reason is if they believe that AB 3211 would assist them in regulatory capture, or at least be easier for them to comply with than for others and thus give them an advantage.
Which the old version certainly would have done. The central thing the bill intends to do is to require effective watermarking for all AIs capable of fooling humans into thinking they are producing ‘real’ content, and labeling of all content everywhere.
OpenAI is known to have been sitting on a 99.9% effective (by their own measure) watermarking system for a year. They chose not to deploy it, because it would hurt their business – people want to turn in essays and write emails, and would rather the other person not know that ChatGPT wrote them.
As far as we know, no other company has similar technology. It makes sense that they would want to mandate watermarking everywhere.
The third reason is they might actually think this is a good idea, in which case they think it is good for California to be regulating in this way, and they are willing to accept the blast radius, rather than actively welcoming that blast radius or trying to head off other bills. I am… skeptical that this dominates, but it is possible.
What we do now know, even if we are maximally generous, is that OpenAI has no particular issue with regulating AI at the state level.
Anthropic Says SB 1047’s Benefits Likely Exceed Costs
Anthropic sends a letter to Governor Newsom regarding SB 1047, saying its benefits likely exceed its costs. Jack Clark explains.
Jack Clark’s description seems accurate. While the letter says that benefits likely exceed costs, it expresses uncertainty on that. It is net positive on the bill, in a way that would normally imply it was a support letter, but makes clear Anthropic and Dario Amodei technically do not support or endorse SB 1047.
So first off, thank you to Dario Amodei and Anthropic for this letter. It is a helpful thing to do, and if this is Dario’s actual point of view then I support him saying so. More people should do that. And the letter’s details are far more lopsided than their introduction suggests; they would be fully compatible with a full endorsement.
Details of Anthropic’s Letter
The letter is a bit too long to quote in full but consider reading the whole thing. Here’s the topline and the section headings, basically.
They say the main advantages are:
And these are their remaining concerns:
They also offer principles on regulating frontier systems:
They see three elements as essential:
As you might expect, I have thoughts.
I would challenge Dario’s assessment that this is only ‘halfway.’ I analyzed the bill last week to compare it to Anthropic’s requests, using the public letter. On major changes, I found they got three, mostly got another two and were refused on one, the KYC issue. On minor issues, they fully got 5, they partially got 3 and they got refused on expanding the reporting time of incidents. Overall, I would say this is at least 75% of Anthropic’s requests weighted by how important they seem to me.
I would also note that they themselves call for ‘very adaptable’ regulation, and that this request is not inherently compatible with this level of paranoia about how things will adapt. SB 1047 is about as flexible as I can imagine a law being here, while simultaneously being this hard to implement in damaging fashion. I’ve discussed those details previously, my earlier analysis stands.
I continue to be baffled by the idea that in a world where AGI is near and existential risks are important, Anthropic is terrified of absolutely any form of pre-harm enforcement. They want to say that no matter how obviously irresponsible you are being, until something goes horribly wrong, we should count purely on deterrence. And indeed, they even got most of what they wanted. But they should understand why that is not a viable strategy on its own.
And I would take issue with their statement that SB 1047 drew so much opposition because it was ‘insufficiently clean,’ as opposed to the bill being the target of a systematic well-funded disinformation campaign from a16z and others, most of whom would have opposed any bill, and who so profoundly misunderstood the bill that they successfully killed a key previous provision that purely narrowed the bill, the Limited Duty Exemption, without (I have to presume?) realizing what they were doing.
To me, if you take Anthropic’s letter at face value, they clear up that many talking points opposing the bill are false, and are clearly saying to Newsom that if you are going to sign an AI regulation bill with any teeth whatsoever, SB 1047 is a good choice for that bill. Even if, given the choice, they’d prefer it with even less teeth.
Another way of putting this is that I think it is excellent that Anthropic sent this letter, that it accurately represents the bill (modulo the minor ‘halfway’ line) and I presume also how Anthropic leadership is thinking about it, and I thank them for it.
I wish we had a version of Anthropic where this letter was instead disappointing.
I am grateful we do have at least this version of Anthropic.
Elon Musk Says California Should Probably Pass SB 1047
You know who else is conflicted but ultimately decided SB 1047 should probably pass?
Notice Elon Musk noticing that this will cost him social capital, and piss people off, and doing it anyway, while also stating his nuanced opinion – a sharp contrast with his usual political statements. A good principle is that when someone says they are conflicted (which can happen in both directions, e.g. Danielle Fong here saying she opposes the bill at about the level Anthropic is in favor of it) it is a good bet they are sincere even if you disagree.
OK, I’ve got my popcorn ready, everyone it’s time to tell us who you are, let’s go.
As in, who understands that Elon Musk has for a long time cared deeply about AI existential risk, and who assumes that any such concern must purely be a mask for some nefarious commercial plot? Who does that thing where they turn on anyone who dares disagree with them, and who sees an honest disagreement?
People can support bills for reasons other than their own narrow self-interest?
Perhaps he might care about existential risk, as evidenced by him talking a ton over the years about existential risk? And that being the reason he helped found OpenAI? From the beginning I thought that move was a mistake, but that was indeed his reasoning. Similarly, his ideas of things like ‘a truth seeking AI would keep us around’ seem to me like Elon grasping at straws and thinking poorly, but he’s trying.
Here we have some fun not-entirely-unfair meta-chutzpah given Elon’s views on government and California otherwise, suddenly calling out Musk for doing xAI despite thinking AI is an existential risk (which is actually a pretty great point), and a rather bizarre theory of future debates about regulatory paths.
That is such a great encapsulation of the a16z mindset. Everything is a con, everyone has an angle, Musk must be out there trying to hurt his enemies. That must be it. Beff Jezos went with the same angle.
xAI is, of course, still in California.
This is an excellent point. Whichever side you are on, you should be very happy the issue remains non-partisan. Let’s all work to keep it that way.
Another excellent point and a consistent pattern. Watch who has clearly RTFB (read the bill) especially in its final form, and who has not.
Negative Reactions to Anthropic’s Letter, Attempts to Suppress Dissent
We also have at least one prominent reaction (>600k views) from a bill opponent calling for a boycott of Anthropic, highlighting the statement about benefits likely exceeding costs and making Obvious Nonsense accusations that the bill is some Anthropic plot (I can directly assure you this is not true, or you could, ya know, read the letter, or the bill), confirming how this is being interpreted. To his credit, even Brian Chau noticed this kind of hostile reaction made him uncomfortable, and he warns about the dangers of purity spirals.
Meanwhile Garry Tan (among others, but he’s the one Chau quoted) is doing exactly what Chau warns about, saying things like ‘your API customers will notice how decelerationist you are’ and that is absolutely a threat and an attempt to silence dissent against the consensus. The message, over and over, loud and clear, is: We tolerate no talk that there might be any risk in the room whatsoever, or any move to take safety precautions or encourage them in others. If you dare not go with the vibe they will work to ensure you lose business.
(And of course, everyone who doesn’t think you should go forward with reckless disregard, and ‘move fast and break things,’ is automatically a ‘decel,’ which should absolutely be read in-context the way you would a jingoistic slur.)
Do not underestimate the extent to which, in the VC-SV core, dissent is being suppressed, with people and companies voicing the wrong support or the wrong vibes risking being cut off from their social networks and funding sources. When there are prominent calls for even the lightest of all support for acting responsibly – such as a non-binding letter saying maybe we should pay attention to safety risks that was so harmless SoftBank signed it – there are calls to boycott everyone in question, on principle.
The thinness of skin is remarkable. They fight hard for the vibes.
I like the refreshing clarity of Aaron’s first sentence. He says we should not ‘create the template to slow things down,’ on principle. As in, we should not only not slow things down in exchange for other benefits, we should intentionally not have the ability to, in the future, take actions that might do that. The second sentence then goes on to make a concrete counterfactual claim, also a good thing to do, although I strongly claim that the second sentence is false, such a bill would have done very little.
If you’re wondering why so many in VC/YC/SV worlds think ‘everyone is against SB 1047,’ this kind of purity spiral and echo chamber is a lot of why. Well played, a16z?
Positions In Brief
Yoshua Bengio is interviewed by Shirin Ghaffary of Bloomberg about the need for regulation, and SB 1047 in particular, warning that we are running out of time. Bloomberg took no position I can see, and Bengio’s position is not new.
Dan Hendrycks offers a final op-ed in Time Magazine, pointing out that it is important for the AI industry that it prevent catastrophic harms. Otherwise, it could provoke a large negative reaction. Another externality problem.
Here is a list of industry opposition to SB 1047.
Nathan Lebenz had a full podcast, featuring both the pro (Nathan Calvin) and the con (Dean Ball) sides.
In the Atlantic, bill author Scott Wiener is interviewed about all the industry opposition, insisting this is ‘not a doomer bill’ or focused on ‘science fiction risks.’ He is respectful towards most bill opponents, but does not pretend that a16z isn’t running a profoundly dishonest campaign.
I appreciated this insightful take on VCs who oppose SB 1047.
Indeed I have. At least here they tell you they’re saying no. Now you want them to tell you why and how you can change their minds? Good luck with that.
Lawrence Chan does an RTFB, concludes it is remarkably light touch and a good bill. He makes many of the usual common sense points – this covers zero existing models, will never cover anything academics do, and (he calls it a ‘spicy take’) if you cannot take reasonable care doing something then have you considered not doing it?
Mike Knoop, previously having opposed SB 1047 because he does not think AGI is progressing and that anything slowing down AGI progress would be bad, updates to believing it is a ‘no op’ that doesn’t do anything but could reassure the worried and head off worse other actions. But if the bill actually did anything, he would oppose it. This is a remarkably common position, that there is no cost-benefit analysis to be done when building things smarter than humans. They think this is a situation where no amount of safety is worth any amount of potentially slowing down if there was a safety issue, so they refuse to talk price. The implications are obvious.
Aidan McLau of Topology AI says:
Notice how much the online debate has always been between libertarians and more extreme libertarians. Everyone involved hates regulation. The public, alas, does not.
Witold Wnuk makes the case that the bill is sufficiently weak that it will de facto be moral license for the AI companies to go ahead and deal with the consequences later, that the blame when models go haywire will thus also be on those who passed this bill, and that this does nothing to solve the problem. As I explained in my guide, I very much disagree and think this is a good bill. And I don’t think this bill gives anyone ‘moral license’ at all. But I understand the reasoning.
Stephen Casper notices that the main mechanism of SB 1047 is basic transparency, and that it does not bode well that industry is so vehemently against this and it is so controversial. I think he goes too far in how difficult he says it would be to sue under the bill, since he’s making generous (to the companies) assumptions, but the central point here seems right.
Postscript: AB 3211 RTFBC (Read the Bill Changes)
One thing California does well is show you how a bill has changed since last time. So rather than having to work from scratch, we can look at the diff.
We’ll start with a brief review of the old version (abridged a bit for length). Note that some of this was worded badly in ways that might backfire quite a lot.
All right, let’s see what got changed and hopefully fixed, excluding stuff that seems to be for clarity or to improve grammar without changing the meaning.
There is a huge obvious change up front: Synthetic content now only includes images, videos and audio. The bill no longer cares about LLMs or text at all.
A bunch of definitions changed in ways that don’t alter my baseline understanding.
Large online platform no longer includes any internet website, web application or digital application. It now has to be either a social media platform, messaging platform, advertising network or standalone search engine that displays content to viewers who are not the creator or collaborator, and the threshold is now 2 million monthly unique California users.
Generative AI providers have to make available to the public a provenance detection tool or permit users to use one provided by a third party, based on industry standards, that detects generative AI content and how that content was created. There is no minimum size threshold for the provider before they must do this.
Summaries of testing procedures must be made available upon request to academics, except when that would compromise the method.
A bunch of potentially crazy disclosure requirements got removed.
The thing about audio disclosures happening twice is gone.
Users of platforms need not label every piece of data now; the platform scans the data and reports any provenance data contained therein, or says it is unknown if none is found.
There are new disclosure rules around the artist, track and copyright information on sound recordings and music videos, requiring the information be displayed in text.
I think that’s the major changes, and they are indeed major. I am no longer worried AB 3211 is going to do anything too dramatic, since at worst it applies only to audio, video and images, the annoyance levels involved are down a lot, standards for compliance are lower, and compliance for those formats seems easier than it would be for text.
My new take on the new AB 3211 is that this is a vast improvement. If nothing else, the blast radius is vastly diminished.
Is it now a good bill?
I wouldn’t go that far. It’s still not a great implementation. I don’t think deepfakes are a big enough issue to motivate this level of annoyance, or the tail risk that this is effectively a much broader burden than it appears. But the core thing it is attempting to do is no longer a crazy thing to attempt, and the worst dangers are gone. I think the costs exceed the benefits, but you could make a case, if you felt deepfake audio and video were a big short term deal, that this bill has more benefits than costs.
What you cannot reasonably do is support this bill, then turn around and say that California should not be regulating AI and should let the Federal government do it. That does not make any sense. And I have confidence the Federal government will, if necessary, deal with deepfakes, and that we could safely react after the problem gets worse; being modestly ‘too late’ to it would not be a big deal.