See the event page here.

Hello Californians!

We need you to help us fight for SB 1047, a landmark bill to help set a benchmark for AI safety, decrease existential risk, and promote safety research. This bill has been supported by some of the world’s leading AI scientists and the Center for AI Safety, and is extremely important for us to pass. As Californians, we have a unique opportunity to inspire other states to follow suit.

SB 1047 has a hearing in the Assembly Appropriations Committee scheduled for August 15th. Unfortunately, due to misinformation and lobbying by big tech companies, the bill risks getting watered down or failing to advance. This would be a significant blow against safety and would continue the “race to the bottom” in AI capabilities without any guardrails.

We need you to do the following to save the bill. This will take no more than 5 minutes:

This document has additional information about the bill and other ways to help. [But some of the dates are wrong! This post is up-to-date.]

Please try to get this done as soon as possible, and let us know if you need any help. Your voice matters, and it is urgent that we push this before it’s too late.

Thank you so much for your support!


Unfortunately, due to misinformation and lobbying by big tech companies, SB 1047 is currently stalled in the Assembly Appropriations Committee.

This is extremely misleading. Any bill that would have non-negligible fiscal impact (the threshold is only $150,000: https://apro.assembly.ca.gov/welcome-committee-appropriations/appropriations-committee-rules) must be put in the Appropriations Committee “Suspense File” until after the budget is prepared. That is the status of SB 1047 and many, many other bills. It has nothing to do with misinformation or lobbying; it is a part of the standard process. I believe all the bills that make it out of the Suspense File will be announced at the hearing this Thursday.

More on this: https://calmatters.org/newsletter/california-bills-suspense-file/

Sad that this kind of nitpicking is so wildly upvoted compared to the main post. It's really non-central to the point.

The important information is:

  • The most important piece of legislation on AI Safety in history is currently under consideration in the California legislature.
  • Tech companies and AI labs are lobbying against it.
  • You can help it pass by contacting some people.

Whether or not this affects the Suspense File in particular is irrelevant.

[To be clear this criticism is not directed at cfoster0 - it's perfectly fine to correct mistakes - but at the upvoters who I imagine are using this to deflect responsibility with the "activism dumb" reflex, while not doing the thing that would actually reduce x-risk. Hopefully this is purely in my imagination and all those upvoters subsequently went and contacted the people - or at least had some other well considered reason to do nothing, which they chose not to share.]

OK, in case this wasn't clear: if you are a Californian and think this bill should become law, don't let my comment excuse you from heeding the above call to action. Contacting your representatives will potentially help move the needle.

My guess is that it's more out of a "dishonesty bad" reflex than an "activism dumb" reflex.

Why is "dishonesty" your choice of words here? Our mistake cut against our goal of getting people to call at an impactful time. It wasn't manipulative. It was merely mistaken. I understand holding sloppiness against us but not "dishonesty". 

I think the lack of charity is probably related to "activism dumb".

It seemed like a pretty predictable direction in which to make errors. I don't think we have great language about this kind of stuff, but I think it makes sense to call mistakes which very systematically fall along certain political lines "dishonest". 

Again, I think the language that people have here is a bit messy and confusing, but given people's potential for self-deception, and selective error-correction, I think it's important to have language for that kind of stuff, and most of what people usually call deception falls under this kind of selective error-correction and related biases.

I suggest "Conveniently misleading"

The bill is in danger of not passing Appropriations because of lobbying and misinformation. That's what calling helps address. Calling does not make SB 1047 cheaper, and therefore does not address the Suspense File aspect of its status in Appropriations.

I feel some sort of "ugh, I don't want to be the Language Police" vibe, but here's my two cents:

  • I think I would've called this "misleading" or "inaccurate" but I think "dishonest" should be reserved for stronger violations. 
    • I also like Ben’s "conveniently misleading" or maybe even something like "inaccurate in a way that serves the interests of the OP."
  • I think we should probably reserve terms like "dishonest" for more egregious forms of lying/manipulation. 
    • Outside of LW, I think "dishonest" often has a conscious/intentional/deliberate/premeditated connotation. In many circles, dishonesty is a "charged" term that implies a higher degree of wrongness than we usually associate with things like imprecision, carelessness, or self-deception.
  • Separately, I do think it's important for those involved in advocacy to hold themselves to high standards of precision/accuracy and be "extra careful" to avoid systematically deceiving oneself or others. But I also think the community could levy critiques in kinder and more productive ways.
  • I think we would like to avoid worlds where advocacy people walk away with some sense of "ugh, LW people are mean and rude and call me dishonest and manipulative whenever I make minor mistakes" while still preserving the thoughtful/conscientious/precise/truth-seeking norms.

I think "misleading" seems also marginally better for these kinds of things. It still has some of the "well, I notice a correlation in your errors" dimension, but without being as judgmental about the details.

Outside of LW, I think "dishonest" often has a conscious/intentional/deliberate/premeditated connotation.

FWIW, I don't really believe this. I've been following how people use terms like "dishonest" in public very closely since 2022, and mostly people use it when people seem to say contradictory things, and the eternal back and forth between "these errors sure seem correlated and this person is saying contradictory things to different people" and "are you saying this person sat down and with full conscious awareness decided to lie to people?" seems to be a universal component of talking about honesty. 

Other people don't really have more agreement on the definitions of "dishonesty" or "lying", and I think that reflects an underlying complexity in the territory. There are different levels of self-awareness, and in the end it's also not really clear how much it matters if someone has a homunculus in their brain that does notice how they are saying different things to different people, vs. they are just doing it on instinct. 

in the end it's also not really clear how much it matters if someone has a homunculus in their brain that does notice how they are saying different things to different people, vs. they are just doing it on instinct. 

I think from a purely "assess the consequences/predict the behavior" perspective this makes sense. I do think that many people view it as more "wrong" to do the intentional homunculus thing and would be more upset & feel more attacked if someone accused them of this. 

Put differently, I think "Alice, you were misleading there" will reliably evoke a different response from Alice compared to "Alice, you were dishonest." To get more fine-grained:

  • "Alice, I think you were misleading"– low aggro//most kind
  • "Alice, I think you [deliberately] lied to me– high aggro//least kind
  • "Alice, I think you were [deliberately? accidentally?] dishonest"– ambiguous. Could be easily interpreted as the high aggro//least kind version.

If “It's really non-central to the point,” then it should be quick and easy to have the OP correct the misleading claim and issue an apology to anyone who may have taken it at face value?

It was corrected.

Raemon:

I've previously messaged Buffy Wicks asking to make sure a few specific bits got preserved. (I unfortunately didn't save a copy before submitting it through their letter form.) I plan to do more calling/writing Wicks/Wiener and later the governor.

I'm hoping to say more things about it soon. It seems pretty important. I've talked to some people with concerns about various aspects of it, which seem reasonable (I agreed with most of the takes in RobertM's post about it). But it still seems like the right overall call to me to support the bill. I'd argue more about the specifics, but I expect to see a new version of the bill soon, and it seems better to argue about it then.

Here's the comment I sent using the contact form on my representative's website.

Dear Assemblymember Grayson:

I am writing to urge you to consider voting Yes on SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. How our civilization handles machine intelligence is of critical importance to the future of humanity (or lack thereof), and from what I've heard from sources I trust, this bill seems like a good first step: experts such as Turing Award winner Yoshua Bengio and UC Berkeley professor Stuart Russell support the bill (https://time.com/7008947/california-ai-bill-letter/), and Eric Neyman of the Alignment Research Center described it as "narrowly tailored to address the most pressing AI risks without inhibiting innovation" (https://x.com/ericneyman/status/1823749878641779006). Thank you for your consideration. I am,

Your faithful constituent,
Zack M. Davis

kave:

I've been thinking about calling to support this bill, but haven't because I'm worried that, as a resident who can't vote, they don't want to hear from me. My understanding is that if you tell a California rep you don't have the right to vote (e.g. because you're on a visa), they will ignore you. And that you can probably mislead without lying, but it will be necessary to mislead.

Anyone know better?

When I saw someone call them yesterday, they didn't ask any questions or even for a name (the call took literally 10 seconds), and I didn't get any indication they cared about citizenship or residence.

kave:

I guess if I'm worried that this is important to them, I can just proactively bring it up.

kave:

I did this. They noted down my support! Though they also didn't really give me a sign that they understood what I was saying (and I did a pretty poor job of explaining).

Rep: Hello, office of Buffy Wicks.
kave: Oh, uh ... hi. I'm calling ... about SB-1047 ... and ... I guess I wanted to check if I can register my support given that I live in Buffy Wicks' area but I can't vote ...
Rep: OK I'll note that down as support. Have a great day!

The bill has passed the Appropriations Committee and will now move on to the Assembly floor. There were some changes made to the bill. From the press release:

Removing perjury – Replace criminal penalties for perjury with civil penalties. There are now no criminal penalties in the bill. Opponents had misrepresented this provision, and a civil penalty serves well as a deterrent against lying to the government.

Eliminating the FMD – Remove the proposed new state regulatory body (formerly the Frontier Model Division, or FMD). SB 1047’s enforcement was always done through the AG’s office, and this amendment streamlines the regulatory structure without significantly impacting the ability to hold bad actors accountable. Some of the FMD’s functions have been moved to the existing Government Operations Agency.

Adjusting legal standards - The legal standard under which developers must attest they have fulfilled their commitments under the bill has changed from “reasonable assurance” standard to a standard of “reasonable care,” which is defined under centuries of common law as the care a reasonable person would have taken. We lay out a few elements of reasonable care in AI development, including whether they consulted NIST standards in establishing their safety plans, and how their safety plan compares to other companies in the industry.

New threshold to protect startups’ ability to fine-tune open sourced models – Established a threshold to determine which fine-tuned models are covered under SB 1047. Only models that were fine-tuned at a cost of at least $10 million are now covered. If a model is fine-tuned at a cost of less than $10 million dollars, the model is not covered and the developer doing the fine tuning has no obligations under the bill. The overwhelming majority of developers fine-tuning open sourced models will not be covered and therefore will have no obligations under the bill.

Narrowing, but not eliminating, pre-harm enforcement – Cutting the AG’s ability to seek civil penalties unless a harm has occurred or there is an imminent threat to public safety.

I just did my calls for today. (It seemed like today is the last day before the next Assembly vote, so a particularly worthwhile time to call?)

A thing I am wondering about: My understanding is it's not very useful to call with a nuanced statement. Basically they just keep a ticker-tally of people supporting the bill. 

But it seems to me that if like 300 people called asking for a particular change, or supporting a particular clause not getting removed, this might be helpful? If so, I think it's a failure of the x-risk community not to have reached consensus on things-important-to-keep.