The text is now available here: https://www.congress.gov/bill/117th-congress/senate-bill/4488
This bill does seem very important. It is hard to know what will help or hinder the political process, so I recommend that folks in the EA and LW communities don't attempt a public, coordinated effort to influence the content or outcome of this proposed bill - at least for now.
My understanding is that the people involved in drafting this bill are aware of the EA and LW community, so they know they can reach out when and if they think that would be helpful.
How well have these types of inter-agency committees tended to work in the past? Is this a good way to actually get things done or does it just add more bureaucracy?
Good question. I'm not sure about these types of committees in particular, but:
One reason this might not go terribly is that, unlike many issues government deals with, there probably isn't a mess of competing interests they'll have to cater to in this case. Voters don't feel strongly about obscure catastrophic risks, and I can't think of any powerful companies that would be investing in lobbying around this (they mostly care about short-term issues).
So if the senators care about this issue and have good guidance on it, they will be relatively unencumbered to follow their experts' advice. They won't have to, e.g., contort their plans to sound good to their constituents and then hollow them out to please their campaign donors.
It's interesting that the term 'abused' was used with respect to AI. It makes me wonder if the bill has misalignment risks in mind at all or only misuse risks.
I would be very surprised if they had anything like the Yudkowskian paradigm in mind when they were thinking of this.
Why? ~All the other gov stuff I'm aware of that talks about "GCR" or that talks about AI in the context of "high-consequence [catastrophic] events, regardless of the low probability" cites Bostrom, MIRI, Ord, or Stuart Russell.
(But I agree they're likely to have views closer to Superintelligence, Human Compatible, or The Precipice, rather than AGI Ruin. I just think of those views as pretty close to the Yudkowskian paradigm -- eg, Bostrom is big on paperclippers and foom.)
Bostrom and MIRI being cited is pretty cool. I would have thought they'd be outside the Overton window. EDIT: Do you know when the earliest citations occurred?
It's interesting that the term 'abused' was used with respect to AI. It makes me wonder if the authors have misalignment risks in mind at all or only misuse risks.
A separate press release says, "It is important that the federal government prepare for unlikely, yet catastrophic events like AI systems gone awry" (emphasis added), so my sense is they have misalignment risks in mind.
Two US Senators have introduced a bipartisan bill specifically focused on x-risk mitigation, including from AI. From the post on Senate.gov (bold mine):
I haven't been able to locate the text of the bill yet. If someone finds it, please share in the comments.
Cross-posted to EA Forum. Credit to Jacques Thibodeau for posting a link on Slack that made me aware of this.