Jeffrey Ladish, Caleb Parikh, and I are running the AI Security Forum, a one-day event on Thursday, August 8th, 2024, in Las Vegas, the day before DEFCON. This is a continuation of last year's X-infosec Forum.
We thought that many of you reading this might be interested in attending. If you're interested, please apply here by July 20th. If you can’t attend but have recommendations for other people to invite, please let us know here.
We think there’s a significant chance that transformative AI will be developed within the next 10 years, and that securing those systems is on the critical path to preventing a global catastrophe. It’s not clear that we’re on track to do that in time, so we are convening researchers, engineers, and policymakers with the goal of significantly accelerating AI security. We're aiming to do that by establishing common knowledge about the state of AI security and AI progress, mobilizing talent to fill the most pressing gaps, and fostering collaborations.
We're inviting a similar group of people to last year's, which was composed (roughly) of:
~30-40% from AI labs and research orgs (e.g., Anthropic, OpenAI, Google DeepMind)
~20-25% from academic institutions (e.g., Harvard, MIT, Stanford)
~15-20% from government agencies and think tanks (e.g., CISA, RAND)
~15% from other tech companies (e.g., Google, Microsoft, Intel)
~10-15% independent researchers, consultants, and representatives from nonprofits and funding organizations
Please see our website for more information. Thanks!