🫵YOU🫵 get to help the AGI Safety Act in Congress! This is real!
At around 9 AM on June 25, at a committee hearing titled “Authoritarians and Algorithms: Why U.S. AI Must Lead” (at the 11-minute, 50-second mark in the video), Congressman Raja Krishnamoorthi, a Democrat representing Illinois’s 8th congressional district, announced to the committee room: “I'm working on a new bill [he hasn’t introduced it yet], ‘the AGI Safety Act,’ that will require AGI to be aligned with human values and require it to comply with laws that apply to humans. This is just common sense.” The hearing continued with substantive discussion of AI safety and the need for policy to prevent misaligned AI, from congress members of both parties(!).

This is a rare, high-leverage juncture: a member of Congress is actively writing a bill that could (potentially) fully stop the risk of unaligned AGI from US labs. If it succeeds, in just a few months you might not have to worry about the alignment problem as much, and we can help him with this bill.

Namely, after way too long, I (and others) finally finished a full write-up explaining the AGI Safety Act, and here's the explanation of the 8 ways folks can be a part of it:

1. Mail
2. Talking to Congress
3. Mail, but to over a thousand congress folk, and in only 5 minutes
4. Talking to Congress, part 2: how to literally meet with congress folk and talk to them literally in person
5. Coming up with ideas that might be put in the official AGI Safety Act
6. Getting AI labs to be required, by law, to test whether their AI is risky, and to tell everyone if it turns out to be risky
7. And most importantly, Parallel Projects! I was talking to a friend of mine about animal welfare, and I thought, "Oh! Ya know, I'm getting a buncha folks to send letters, make calls, meet with congress folk in person, and come up with ideas for a bill, but none of that needs to be AI-specific. I could do all of that at the same time with other issues, like animal welfare!" So if y'all have any ideas for such a bill, do all of the above stuff for it too!
Yeah, it'd be a bonus to convince/inform folks that, if this works out, other people won't be evil,
and if we don't do that, then some folks still might do bad things because they think other folks are bad.
But as long as one doesn't see a way this idea makes things actively worse, it's still a good idea!
Thanks for pointing that out, though. I'll add that ("that" being "making sure folks understand that, if this idea is implemented, other folks won't be as evil, and you can stop being as bad to them") to the idea.
Thanks!