AI Safety x Physics Grand Challenge
Join us for the AI Safety x Physics Grand Challenge, a research hackathon designed to engage physicists in technical AI safety research. While we expect LessWrong community members with both technical AI safety and physics expertise to benefit most from this event, we encourage anyone interested in exploring this intersection to sign up.

Dates: July 25th to July 27th (this weekend)
Location: Remote, with in-person hubs in several locations
Prizes: $2,000 total prize money

Apart Research is running the hackathon, in collaboration with PIBBSS and Timaeus. Hackathon speakers include Jesse Hoogland (Timaeus), Paul Riechers (Simplex), and Dmitry Vaintrob (PIBBSS).

Participants will get research support from mentors with expertise spanning the physics and AI safety space, including Martin Biehl, Jesse Hoogland, Daniel Kunin, Andrew Mack, Eric Michaud, Garrett Merz, Paul Riechers, Adam Scherlis, Alok Singh, Logan Smith, and Dmitry Vaintrob.

Vision

In an effort to diversify the AI safety research landscape, we aim to leverage a physics perspective to explore novel approaches and identify blind spots in current work. In particular, we think this perspective could significantly narrow the currently large gap between theory and practice in AI safety.

Work in this direction is timely: there are signs that AI safety, and interpretability especially, needs strong theory to support the wealth of empirical efforts that have so far been leading the field. We think that physics, which uses math but is not math, is our best bet for meeting this need.

Our Approach

As a scientific practice with strong theoretical foundations, physics has deep ties with other mathematically founded disciplines, including computer science. These fields progress largely in parallel, and we see high value in uncovering, re-expressing, and linking established ideas across disciplines and in new contexts. By connecting physicists with AI safety and ML experts, the goal of this Hackathon