See the event page here. Hello Californians! We need you to help us fight for SB 1047, a landmark bill that would set a benchmark for AI safety, decrease existential risk, and promote safety research. This bill has been supported by some of the world’s leading AI scientists and the...
Holly is an independent AI Pause organizer, which includes organizing protests (like this upcoming one). Rob is an AI Safety YouTuber. I (jacobjacob) brought them together for this dialogue, because I've been trying to figure out what I should think of AI safety protests, which seems like a possibly quite...
Tomorrow, PauseAI and collaborators are putting on the largest AI Safety protest to date, across 7 locations in 6 countries. All are eagerly welcomed! Your presence at this protest is a rare impact opportunity when in-person volunteering is not fungible with money or intellectual support: showing up is how we...
Meta’s frontier AI models are fundamentally unsafe. Since Meta AI has released the model weights publicly, any safety measures can be removed. Before it releases even more advanced models – which will have more dangerous capabilities – we call on Meta to take responsible release seriously and stop irreversible proliferation....
Seeking the biology-ignorant! Come and do your part! I am working on a post with Beth Barnes that will explain meiotic drive to AI safety people so they can determine if there are any useful analogies to gradient hacking. If you don't know what "meiotic drive" is, you might be...
Subtitle: Costly virtue signaling is an irreplaceable source of empirical information about character. The following is cross-posted from my blog, which is written for a more general audience, but I think the topic is most important to discuss here on LW. We all hate virtue signaling, right? Even “virtue” itself...