I feel (mostly from observing an omission; I admit I have not yet RTFB) that the international situation is not properly accounted for here. This bit is starting to grapple with it:
Plan for preventing use, access and reverse engineering in places that lack adequate AI safety legislation.
Other than that, it seems like this bill basically thinks that America is the only place on Earth that exists and has real computers and can make new things????
And even, implicitly in that clause, the worry is "Oh no! What if those idiots out there in the wild steal our high culture and advanced cleverness!"
However, I expect other countries with less legislation to swiftly become much more "advanced" (closer to being eaten by artificial general super-intelligence) by default.
It isn't going to be super hard to make this stuff; it's just that everyone smart refuses to work on it because they don't want to die. Unfortunately, even midwits can do this. Hence (if there is real danger) we probably need legislative restrictions.
That is: the whole point of the legislation is basically to cause "fast technological advancement to reliably and generally halt." We want the FAISA to kill nearly all dramatic and effective AI innovation, similarly to how the FDA kills nearly all dramatic and effective drug innovation, and how the Nuclear Regulatory Commission killed nearly all nuclear power innovation and nuclear power plant construction for decades.
If other countries are not hampered by similar FAISAs of their own, then they could build an Eldritch Horror and it could kill everyone.
Russia didn't have an FDA, and invented its own drugs.
France didn't have the NRC, and built an impressively good system of nuclear power generation.
I feel that we should be clear that the core goal here is to destroy innovative capacity, in AI, in general, globally, because we fear that innovation has a real chance, by default, by accident, of leading to "automatic human extinction".
The smart and non-evil half of the NIH keeps trying to ban domestic Gain-of-Function research... so people can just do that in Norway and Wuhan instead. It can still kill lots of people, because it wasn't taken seriously in the State Department, and we have no global restriction on Gain-of-Function. The Biological Weapons Convention exists, but the BWC is wildly inadequate on its face.
The real and urgent threat model here is (1) "artificial general superintelligence" arises and (2) gets global survive and spread powers and then (3) thwarts all human aspirations like we would thwart the aspirations of ants in our kitchen.
You NEED global coordination to stop this EVERYWHERE, or you're just re-arranging who, in the afterlife, everyone will be pointing at to blame for the end of humanity.
The goal isn't to be blameless and dead. The goal is to LIVE. The goal is to reliably and "on purpose" survive and thrive, in humanistically delightful ways, in the coming decades, centuries, and millennia.
If extinction from non-benevolent artificial superintelligence is a real fear, then it needs international coordination. If this is not a real fear, then we probably don't need the FAISA in the US.
So where is the mention of a State Department loop? Where is the plan for diplomacy? Where are China or Russia or the EU or Brazil or Taiwan or the UAE or anyone but America mentioned?
Two obvious points:
Rather than have America hope to "set a fashion" (which, to my mind, would obviously NOT be "followed based on the logic of fashion") in countries that hate us, like North Korea and so on...
I would prefer to reliably and adequately cover EVERY base that needs to be covered, and I think this would work best if people in literally every American consulate in every country (and also at least one person for every country with no diplomatic delegation at all) were tracking the local concerns and trying to get a global FAISA deal done.
If I might rewrite this a bit:
The goal isn't FOR AMERICA to be blameless and EVERYONE to be dead. The goal is for ALL HUMANS ON EARTH to LIVE. The goal is to reliably and "on purpose" survive and thrive, on Earth, in general, even for North Koreans, in humanistically delightful ways, in the coming decades, centuries, and millennia.
The internet is everywhere. All software is intrinsically similar to a virus. "Survive and spread" capabilities in software are the default, even for software that lacks general intelligence.
If we actually believe that AGI convergently heads towards "not aligned with Benevolence, and not aligned with Natural Law, and not caring about humans, nor even caring about AI with divergent artificial provenances" but rather we expect each AGI to head toward "control of all the atoms and joules by any means necessary"... then we had better stop each and every such AGI very soon, everywhere, thoroughly.
@Zach Stein-Perlman I'm not really sure why you gave a thumbs-down. Probably you're not trying to communicate that you think there shouldn't be deontological injunctions against genocide. I think someone renouncing any deontological injunctions against such devastating and irreversible actions would be both pretty scary and reprehensible. But I failed to come up with a different hypothesis for what you are communicating with a thumbs-down on that statement (to be clear I wouldn't be surprised if you provided one).
Suppose you can take an action that decreases net P(everyone dying) but increases P(you yourself kill everyone), and leaves all else equal. I claim you should take it; everyone is better off if you take it.
I deny "deontological injunctions." I want you and everyone to take the actions that lead to the best outcomes, not that keep your own hands clean. I'm puzzled by your expectation that I'd endorse "deontological injunctions."
This situation seems identical to the trolley problem in the relevant ways. I think you should avoid letting people die, not just avoid killing people.
[Note: I roughly endorse heuristics like if you're contemplating crazy-sounding actions for strange-sounding reasons, you should suspect that you're confused about your situation or the effects of your actions, and you should be more cautious than your naive calculations suggest. But that's very different from deontology.]
I think I have a different overall take than Ben here, but the frame I think makes sense here is to be like: "Deontological injunctions are guardrails. There are hypothetical situations (and some real situations) where it's correct to override them, but the guardrail should have some weight, and for more important guardrails, you need clearer reasoning for why overriding it actually helps."
I don't know what I think about this in the case of a country passing laws. Countries aren't exactly agents. Passing novel laws is different than following existing laws. But, I observe:
You should be looking for moral reasoning that makes you simple to reason about, and that performs well in most cases. That's a lot of what deontology is for.
My thoughts:
a) Some of the penalties seemed too weak
b) Uncertain whether we want license appeals decided by judges. I would want the approval to be decided on technical grounds, but for judges to intervene to ensure that the process is fair. Or maybe a committee that is mostly technical, but which contains a non-voting legal expert to ensure compliance.
c) I would prefer a strong stand against dangerous open-weight models.
We only have people who cry wolf all the time. I love that for them, and thank them for their service, which is very helpful. Someone needs to be in that role, if no one is going to be the calibrated version. Much better than nothing. Often their critiques point to very real issues, as people are indeed constantly proposing terrible laws.
The lack of something better calibrated is still super frustrating.
This mental (or emotional) move here, where you manage to be grateful for people doing a highly imperfect job while also being super frustrated that no one is doing a genuinely good job: how are you doing that?
I see this often in rationalist spaces, and I'm confused about how people learn to do this. I would probably end up complaining about the failings of the best (highly inadequate) strategies we've got without the additional perspective of "how would things be if we didn't even have this?"
For people who remember learning how to do this, how did you practice?
My guess is that different people do it differently, and I am super weird.
For me a lot of the trick is consciously asking if I am providing good incentives, and remembering to consider what the alternative world looks like.
I think this might be a little too harsh on CAIP (discouragement risk). If shit hits the fan, they'll have a serious bill ready to go for that contingency.
Seriously writing a bill-that-actually-works shows beforehand that they're serious, and the only problem was the lack of political will (which in that contingency would be resolved).
If they put out a watered-down bill designed to maximize the odds of passage then they'd be no different from any other lobbyists.
It's better in this case to have a track record for writing perfect bills that are passable (but only given that shit hits the fan) than a track record for successfully pumping the usual garbage through the legislative process (which I don't see them doing well at; playing to your strengths is the name of the game for lobbying, and "turning out to be right" is CAIP's strength).
I don't see this response as harsh at all? I see it as engaging in detail with the substance; I note the bill is highly thoughtful overall, with a bunch of explicit encouragement, defend a bunch of their specific choices, and say I am very happy they offered this bill. It seems good and constructive to note where I think they are asking for too much? While noting that the right amount of 'any given person reacting thinks you went too far in some places' is definitely not zero.
The rulemaking authority procedures are anything but "standard issue boilerplate." They're novel and extremely unusual, like a lot of other things in the draft bill.
Section 6, for example, creates a sort of one-way ratchet for rulemaking where the agency has basically unlimited authority to make rules or promulgate definitions that make it harder to get a permit, but has to make findings to make it easier. That is not how regulation usually works.
The abbreviated notice period is also really wild.
I think the draft bill introduces a lot of interesting ideas, and that's valuable, but as actual proposed legislative language I think it's highly unrealistic and would almost certainly do more harm than good if anyone seriously tried to enact it.
For every "wow, this has never been done before in the history of federal legislation" measure in the bill--and there are at least 50 or so--there's almost certainly going to be a pretty good reason why it hasn't been done before. In my opinion, it's not wise to try and do 50 incredibly daring new things at once in a single piece of legislation, because it creates far too many failure points. It's like following a baking recipe--if you try to make one or two tweaks to the recipe that seem like good ideas, you can then observe the effect on the finished product and draw conclusions from the results you get. If you try to write your own recipe from scratch, and you've never written a recipe before, you're going to end up with a soggy mess and no real lessons will have been learned about any of the individual elements that you tried out.
A New Bill Offer Has Arrived
Center for AI Policy proposes a concrete actual model bill for us to look at.
Here was their announcement:
I discovered this via Cato’s Will Duffield, whose statement was:
To which my response was essentially:
We need people who will warn us when bills are unconstitutional, unworkable, unreasonable or simply deeply unwise, and who are well calibrated in their judgment and their speech on these questions. I want someone who will tell me ‘Bill 1001 is unconstitutional and would get laughed out of court, Bill 1002 is of questionable constitutionality in practice and unconstitutional in theory, we would throw out Bill 1003 but it will stand up these days because SCOTUS thinks the commerce clause is super broad, Bill 1004 is legal as written but the implementation won’t work,’ and so on. Bonus points for probabilities, and double bonus points if they tell you how likely each bill is to pass so you know when to care.
Unfortunately, we do not have that. We only have people who cry wolf all the time. I love that for them, and thank them for their service, which is very helpful. Someone needs to be in that role, if no one is going to be the calibrated version. Much better than nothing. Often their critiques point to very real issues, as people are indeed constantly proposing terrible laws.
The lack of something better calibrated is still super frustrating.
RTFB: Read the Bill
So what does this particular bill actually do if enacted?
There is no substitute for reading the bill.
I am going to skip over a bunch of what I presume is standard issue boilerplate you use when creating this kind of apparatus, like the rulemaking authority procedures.
There is the risk that I have, by doing this, overlooked things that are indeed non-standard or otherwise worthy of note, but I am not sufficiently versed in what is standard to know from reading. Readers can alert me to what I may have missed.
Each bullet point has a (bill section) for reference.
Basics and Key Definitions
The core idea is to create the new agency FAISA to deal with future AI systems.
There is a four-tier system of concern levels for those systems, in practice:
As described later, the permit process is a holistic judgment based on a set of rubrics, rather than a fixed set of requirements. A lot of it could do with better specification. There is a fast track option when that is appropriate to the use case.
Going point by point:
Oh the Permits You’ll Need
The core idea is to divide AI into four use cases: Hardware, Training, Model Weights and Deployment. You need a distinct permit for each one, and a distinct permit for each model or substantial model change for each one, and you must reapply each time, again with a fast track option when the situation warrants it.
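To make that structure concrete, here is a loose sketch, in my own framing rather than the bill’s actual mechanism, of what ‘a distinct permit per use case, per model version’ implies:

```python
# Loose illustration (my framing, not the bill's actual mechanism) of the
# permit structure described above: a separate application per use case,
# per model or substantial model change, with a fast-track path where it applies.
from dataclasses import dataclass
from enum import Enum

class UseCase(Enum):
    HARDWARE = "hardware"
    TRAINING = "training"
    MODEL_WEIGHTS = "model weights"
    DEPLOYMENT = "deployment"

@dataclass(frozen=True)
class PermitApplication:
    use_case: UseCase
    model_version: str      # reapply for each substantial model change
    fast_track: bool = False

# A lab shipping one model, then a substantially changed v2, needs fresh
# applications for every use case it touches at each step.
applications = [
    PermitApplication(uc, version)
    for version in ("model-v1", "model-v2")
    for uc in (UseCase.TRAINING, UseCase.MODEL_WEIGHTS, UseCase.DEPLOYMENT)
]
print(len(applications), "separate applications")  # 6
```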
Each application is to be evaluated and ‘scored,’ then a decision made, with the criteria updated at least yearly. We are given considerations for the selection process, but mostly the actual criteria are left fully unspecified even initially. The evaluation process is further described in later sections.
There are three core issues raised, which are mostly discussed in later sections.
As always, we have a dilemma of spirit of the rules versus technical rule of law.
To the extent the system works via technical rules, that is fantastic, protecting us in numerous ways. If it works. However, every time I look at a set of technical proposals, my conclusion is at best ‘this could work if they abide by the spirit of the rules here.’ Gaming any technical set of requirements would be too easy if we allowed rules lawyering (including via actual lawyering) to rule the day. Any rules that must hold up against adversarial labs determined to work around them, and against labs that seem incapable of acting wisely, while not being allowed to ask whether a given lab is being adversarial or unwise, will have to be much more restrictive overall to compensate for that and get the same upsides, and some bases will be impossible to cover in any reasonable way.
To the extent we enforce the spirit of the rules, and allow for human judgment and flexibility, or allow trusted people to adjust the rules on the fly, we can do a lot better on many fronts. But we open ourselves up to those who would not follow the spirit, and force there to be those charged with choosing who can be trusted to what extent, and we risk insider favoritism and capture. Either you can ‘pick winners and losers’ in any given sense or level of flexibility, or you can’t, and we go to regulate with the government we have, not the one we wish we had.
The conclusion of this section has some notes on these dangers, and we will return to those questions in later sections as well.
Again, going point by point:
Rubrics for Your Consideration
What are the considerations when evaluating a safety plan? There are some details here that confuse me, but it is also thought out well enough that we can talk details on that level at all.
The broader concern is the idea of this being organized into a scoring system, and how one should holistically evaluate an application. I do think the rubrics themselves are a great start.
Open Model Weights Are Unsafe And Nothing Can Fix This
What about open source models?
Well, how exactly do you propose they fit into the rubrics we need?
Extremely High Concern Systems
What about those ‘extremely’ high concern systems? What to do then? What even are they? Can the people writing these documents please actually specify at least a for-now suggested definition, even if no one is that close to hitting it yet?
This is a formal way of saying exactly that. There is a set of thresholds, to be defined later, beyond which no, you are simply not going to be allowed to create or deploy an AI system any time soon.
The problem is that this is a place one must talk price, and they put a ‘TBD’ by the price. So we need to worry the price could be either way too high, or way too low, or both in different ways.
The Judges Decide
The actual decision process is worth highlighting. It introduces random judge selection into the application process, then offers an appeal, and anticipates lawsuits after that. I worry this introduces randomness that is bad for both business and risk, and also that the iterated process is focused on the wrong type of error. You want this type of structure when you worry about the innocent getting punished, whereas here our primary concern about error type is flipped.
Several Rapid-Fire Final Sections
There is some very important stuff here. Any time anyone says ‘emergency powers’ or ‘criminal penalties’ you should snap to attention. The emergency powers issues will get discussed in more depth when I handle objections.
Overall Take: A Forceful, Flawed and Thoughtful Model Bill
I think it is very good that they took the time to write a full detailed bill, so now we can have discussions like this, and talk both price and concrete specific proposals.
What are the core ideas here?
I strongly agree with #1, #2, #3, #4, #5, #6 and #10. As far as I can tell, these are the core of any sane regulatory regime. I believe #9 is correct if we find the right price. I am less confident in #7 and #8, but do not know what a superior alternative would be.
The key, as always, is talking price, and designing the best possible mechanisms and getting the details right. Doing this badly can absolutely backfire, especially if we push too hard and set unreasonable thresholds.
I do think we should be aware of and prepared for the fact that, at some point in the future, there is a good chance that the thresholds and requirements will need to be expensive, and impose real costs, if they are to work. But that point is not now, and we need to avoid imposing any more costs than we need to; going too far too fast will only backfire.
The problem is both that the price intended here seems perhaps too high too fast, and also that it dodges much talking of price by kicking that can to the new agency. There are several points in this draft (such as the 10^24 threshold for medium-concern) where I feel that the prices here are too high, in addition to places where I believe implementation details need work.
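For a sense of scale on that 10^24 number, here is a minimal back-of-the-envelope sketch, using the standard ~6 × parameters × tokens approximation for dense transformer training compute; the model sizes and token counts below are hypothetical examples chosen for illustration, not anything taken from the bill.

```python
# Back-of-the-envelope: which training runs cross a 10^24 FLOP line?
# Uses the common ~6 * parameters * tokens approximation for dense
# transformer training compute. Model scales below are hypothetical,
# chosen only to illustrate roughly where the threshold falls.

THRESHOLD_FLOP = 1e24  # the draft's medium-concern training threshold

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

examples = {
    "175B params on 300B tokens": (175e9, 300e9),
    "10B params on 15T tokens": (10e9, 15e12),
    "70B params on 15T tokens": (70e9, 15e12),
}

for name, (params, tokens) in examples.items():
    flop = training_flop(params, tokens)
    side = "over" if flop >= THRESHOLD_FLOP else "under"
    print(f"{name}: ~{flop:.1e} FLOP ({side} the 10^24 threshold)")
```

If that approximation is roughly in the right ballpark, the line sits within the range of training runs already happening today, which is part of why getting this particular price right matters.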
There is also #9, civil liability, which I also support as a principle, where one can fully talk price now, and the price here seems set maximally high, at least within the range of sanity. I am not a legal expert here but I sense that this likely goes too far, and compromise would be wise. But also that is the way of model bills.
That leaves the hard questions, #7, #8 and #11.
On #7, I would like to offer more guidance and specification for the new agency than is offered here. I do think the agency needs broad discretion to put up temporary barriers quickly, set new thresholds periodically, and otherwise assess the current technological state of play in a timely fashion. We do still have great need for Congressional and democratic oversight, to allow for adjustments and fixing of overreach or insider capture if mistakes get made. Getting the balance right here is going to be tricky.
On #8, as I discuss under objections, what is the alternative? Concretely, if the President decides that an AI system poses an existential risk (or other dire threat to national security), and that threat is imminent, what do you want the President to do about that? What do you think or hope the President would do now? Ask for Congress to pass a law?
We absolutely need, and I would argue already have de facto, the ability to in an emergency shut down an AI system or project that is deemed sufficiently dangerous. The democratic control for that is periodic elections. I see very clear precedent and logic for this.
And yes, I hate the idea of states of emergency, and yes I have seen Lisa Simpson’s TED Talk, I am aware that if you let the government break the law in an emergency they will create an emergency in order to break the law. But I hate this more, not less, when you do it anyway and call it something else. Either the President has the ability to tell any frontier AI project to shut down for now in an actual emergency, or they don’t, and I think ‘they don’t’ is rather insane as an option. If you have a better idea how to square this circle I am all ears.
On #11, this was the one big objection made when I asked someone who knows about bills and the inner workings of government and politics to read the bill, as I note later. They think that the administrative, managerial, expertise and enforcement burdens would be better served by placing this inside an existing agency. This certainly seems plausible, although I would weigh it against the need for a new distinctive culture and the ability to move fast, and the ability to attract top talent. I definitely see this as an open question.
In response to my request on Twitter, Jules Robins was the only other person to take up reading the bill.
This was mostly measured, but otherwise the opposite of the expected responses from the usual objectors. Jules saw that this bill is making a serious attempt to accomplish its mission, but that there are still many ways it could fail to work, and did not focus on the potential places where there could be collateral damage or overreach of various kinds.
Indeed, the concerns are instead that the checks on power that are here could interfere, rather than that the checks on power are insufficient. The right proposal should raise concerns in both directions.
But yes, Jules does notice that if this exact text got implemented, there are some potential overreaches.
The spirit of the rules point is key. Any effort where the spirit of actually creating safety is not driving actions is going to have a bad time, unless you planned to route around that, and this law does not attempt to route around that.
I did notice the Google paper referenced here, and I am indeed worried that we could in time lose our ability to monitor compute in this way. If that happens, we are in even deeper trouble, and all our options get worse. However, I anticipate that the distributed solution will be highly inefficient, and difficult to scale to the level of actually dangerous models for some time. I think for now we proceed anyway, and that this is not yet our reality.
I definitely thought about the model purpose loophole. It is not clear that this would actually get you much of a requirement discount given my reading, but it is definitely something we will need to watch. The EU’s framework is much worse here.
The Usual Objectors Respond: The Severability Clause
The bill did give its critics some soft rhetorical targets, such as the severability clause, which I didn’t bother reading, assuming it was standard, until Matt Mittlesteadt pointed it out. The provision definitely didn’t look good when I first read it, either:
Here is the clause itself, in full:
Then I actually looked at the clause and thought about it, and it made a lot more sense.
The first clause is a statement of intent and an observation of fact. The usual suspects will of course treat it as scaremongering but in the world where this Act is doing good work this could be very true.
The second clause is actually weaker than a standard severability clause, in a strategic fashion. It is saying, sever, but only sever if that would help reduce major security risks. If severing would happen in a way that would make things worse than striking down more of the law, strike down more on that basis. That seems good.
The third clause is saying that if a clause is found unconstitutional, then rather than strike even that clause, they are authorized to modify that clause to align with the rest of the law as best they can, given constitutional restrictions. Isn’t that just… good? Isn’t that what all laws should say?
So, for example, there was a challenge to the ACA’s individual mandate in 2012 in NFIB v. Sebelius. The mandate was upheld on the basis that it was a tax. Suppose that SCOTUS had decided that it was not a tax, even though it was functionally identical to a tax. In terms of good governance, the right thing to do is to say ‘all right, we are going to turn it into a tax now, and write new law, because Congress has explicitly authorized us to do that in this situation in the severability provision of the ACA.’ And then, if Congress thinks that is terrible, they can change the law again. But I am a big fan of ‘intent wins’ and trying to get the best result. Our system of laws does not permit this by default, but if legal I love the idea of delegating this power to the courts, presumably SCOTUS. Maybe I am misunderstanding this?
So yeah, I am going to bite the bullet and say this is actually good law, even if its wording may need a little reworking.
The Usual Objectors Respond: Inception
Next we have what appears to me to be an attempted inception from Jeremiah Johnson, saying the bill is terrible and abjectly incompetent and will only hurt the cause of enacting regulations, in the hopes that people will believe this and make it true.
I evaluated this claim by asking someone I know who works on political causes not related to AI, with a record of quietly getting behind the scenes stuff done, to read the bill without giving my thoughts, to get a distinct opinion.
The answer came back that this was indeed a very professionally drafted, well thought out bill. Their biggest objection was that they thought it was a serious mistake to make this a new agency, rather than put it inside an existing one, due to the practical considerations of logistics, enforcement and ramping up involved. Overall, they said that this was ‘a very good v1.’
Not that this ever stops anyone.
Claiming the other side is incompetent and failing and they have been ‘destroyed’ or ‘debunked’ and everyone hates them now is often a highly effective strategy. Even I pause and worry that there has been a huge mistake, until I do what almost no one ever does, and think carefully about the exact claims involved and read the bill. And that’s despite having seen this playbook in action many times.
Notice that Democrats say this about Republicans constantly.
Notice that Republicans say this about Democrats constantly.
So I do not expect them to stop trying it, especially as people calibrate based on past reactions. I expect to hear this every time, with every bill, of any quality.
The Usual Objectors Respond: Rulemaking Authority
Then we have this, where Neil Chilson says:
You know you need better critics when they pull out ‘regulate math’ and ‘government jobs program’ at the drop of a hat. Also, this is not how the Overton Window works.
But I give him kudos for both making a comparative claim, and for highlighting the actual text of the bill that he objects to most, in a section I otherwise skipped. He links to section 6, which I had previously offloaded to Gemini.
Here is what he quotes, let’s check it in detail, that is only fair, again RTFB:
So as I understand it, normally any new rule requires a 60-day waiting period before being implemented under 5 U.S.C. §801(a)(3), to allow for review or challenge. This is saying that, if deemed necessary, rules can be changed without this waiting period, while still being subject to review and potentially being pared back.
Also my understanding is that the decision here of ‘major security risk’ is subject to judicial review. So this does not prevent legal challenges or Congressional challenges to the new rule. What it does do is it allows stopping activity by default. That seems like a reasonable thing to be able to do in context?
This is very much pushing it. I don’t like it. I think here Neil has a strong point.
I do agree that rules that appear similar can indeed not be substantially similar, and that the same rule rejected before might be very different now.
But changing a ‘penalty’ by 20% and saying you changed the rule substantially? That’s clearly shenanigans, especially when combined with (1) above.
The parties involved should not need such a principle. They should be able to decide for themselves what ‘substantially similar’ means. Alas, it sounds like this law did not specify how any of this works; there is no procedure?
So there is a complex interplay involved, and everything is case-by-case and courts sometimes intervene and sometimes won’t, which is not ideal.
I think this provision should be removed outright. If the procedure for evaluating this is so terrible it does not work, then we should update 5 U.S.C. § 802(b)(2) with a new procedure. Which it sounds like we definitely should do anyway.
If an agency proposes a ‘substantially similar’ rule to Congress, here or elsewhere, my proposed new remedy is that the new rule should have to note that it may be substantially similar to a previous proposal that was rejected. Congress can then stamp it ‘we already rejected this’ and send it back. Or, if they changed their minds for any reason, an election moved the majority or a minor tweak fixes their concerns, they can say yes the second time. The law should spell this out.
If you think we have the option to go back to Congress as the situation develops to make detailed decisions on how to deal with future general-purpose AI security threats, then either you do not think we will face such threats, or you think Congress will be able to keep up, or you are fine not derisking, or you have not met Congress.
That does not mean we should throw out rule of law or the constitution, and give the President and whoever he appoints unlimited powers to do what they want until Congress manages to pass a law to change that (which presumably will never happen). Also that is not what this provision would do, although it pushes in that direction.
Does this language rub us all the wrong way? I hope so, that is the correct response to the choices made here. It seems expressly designed to give the agency as free a hand as possible until such time as Congress steps in with a new law.
The question is whether that is appropriate.
Yes, yes, ignore.
Finally we have this:
That doesn’t sound awesome. Gemini thinks that courts would actually respect this clause, which initially surprised me. My instinct was that a judge would laugh in its face.
I do notice that this is constructed narrowly. This is specifically about changing the strictness of definitions towards being more strict. I am not loving it, but the two clauses here that still allow review seem reasonable to me, and if they go too far I would assume the court should strike whatever it is down anyway.
Conclusion
The more I look at the detailed provisions here, the more I see very thoughtful people who have thought hard about the situation, and are choosing very carefully to do a specific thing. The people objecting to the law are objecting exactly because the bill is well written, and is designed to do the job it sets out to do. Because that is a job that they do not want to see be done, and they aim to stop it from happening.
There are also legitimate concerns here. This is only a model bill; as noted earlier, there is still much work to do, places where I think this goes too far, and other places where, if such a bill did somehow pass, no doubt compromises would happen even if they aren’t optimal.
But yes, as far as I can tell this is a serious, thoughtful model bill. That does not mean it or anything close to it will pass, or that it would be wise to do so, especially without improvements and compromises where needed. I do think the chances of this type of framework happening very much went up.