I volunteer as the Pause AI software team lead and can confirm this is basically correct. The global Pause AI movement and Pause AI US share many members and common origins, but they have somewhat different emphases, mostly for good reasons of specialization. The US org has Washington connections and runs more protests focused on the AI labs themselves. We work closely together.
Neither has more than a few paid employees and truly full-time volunteers. As the OP says, anyone who agrees that activism and public engagement remain a badly under-leveraged way to help AI safety has a massive opportunity here for impact through time, skills, or money.
Pause AI has a lot of opportunity for growth.
The "increase public awareness" lever in particular is hugely underfunded: there is almost no paid staff or advertising budget.
Our game plan is simple but not naive, and, most importantly, it is a disjunctive, value-adding bet.
Please help us execute it well: explore, join, talk with us, and donate whatever combination of time, skills, ideas, and funds makes sense.
(Excuse the dearth of kudos; I'm not a regular LW person, just an old EA-adjacent nerd who quit Amazon to volunteer full-time for the movement.)
It's plausible that even the big companies are judgment-proof (e.g., if billions of people die or the human species goes extinct), and this might need to be addressed by other forms of regulation
...or by a further twist on liability.
Gabriel Weil explored such an idea in https://axrp.net/episode/2024/04/17/episode-28-tort-law-for-ai-risk-gabriel-weil.html
The core is punitive damages for expected harms rather than only those that actually manifested. When a non-fatal warning shot causes harm, then as well as suing for the damages that occurred, one assesses how much worse the outcome could plausibly have been, and awards punitive damages in proportion to that risked harm.
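To make the idea concrete, here is a toy sketch (my own illustration, not taken from the episode) of how one might scale punitive damages by the expected harm that was risked but did not materialize. The probabilities and dollar figures are hypothetical:

```python
# Toy model: punitive damages proportional to expected uncompensated harm.
# counterfactual_harms is a hypothetical list of (probability, harm) pairs
# for worse outcomes the same conduct risked but that did not occur.
def punitive_damages(actual_harm, counterfactual_harms):
    # Sum, over the worse outcomes, of probability times the harm
    # in excess of what compensatory damages already cover.
    return sum(p * max(h - actual_harm, 0.0)
               for p, h in counterfactual_harms)

# A warning shot caused $1M in actual damage, but the same conduct
# carried (say) a 1% chance of causing $1B in damage.
extra = punitive_damages(1e6, [(0.01, 1e9)])
# extra is roughly $9.99M: 0.01 * ($1B - $1M)
```

The point of the sketch is just that the award grows with the risk the defendant imposed, not only with the harm that happened, which is what makes liability bite even for low-probability catastrophes.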
What We’re Not Doing ... We are not investing in grass-roots advocacy, protests, demonstrations, and so on. We don’t think it plays to our strengths, and we are encouraged that others are making progress in this area.
Not speaking for the movement, but as a regular in Pause AI, this makes sense to me. Perhaps we can interact more, though; in particular, I'd imagine we might collaborate on testing the effectiveness of content in changing minds.
...Execution ... The main thing holding us back from realizing this vision is staffing. ... We hope to hire more writ
I understand it went well.
Where can we find recordings of presentations and other outputs? I'm not yet seeing anything on https://www.aisafety.camp or in the MAISU Google doc homepage.