I am sharing this call from the EU AI Office for organizations involved in evaluation. Please take a close look: among the selection criteria, organizations must be based in Europe or have a European leader. If these criteria pose challenges for some of you, feel free to reach out to me at tom@prism-eval.ai, and we can explore potential ways to collaborate through PRISM Eval. I believe it’s crucial that we support one another on these complex and impactful issues.
The AI Office is collecting contributions from experts to feed into the workshop on general-purpose AI models and systemic risks.
The European AI Office is hosting an online workshop on 13 December 2024 (for specialists only), focusing on the evaluation of general-purpose AI models with systemic risk. This is an opportunity for organisations and research groups to showcase their expertise and contribute to shaping the evaluation ecosystem under the EU AI Act.
The event will bring together leading evaluators and the AI Office to exchange insights on state-of-the-art evaluation methodologies for general-purpose AI models. Selected participants will present their approaches, share best practices, and discuss challenges in assessing systemic risks associated with advanced AI technologies.
This initiative aims to foster collaboration and advance the science of general-purpose AI model evaluations, contributing to the development of robust frameworks for ensuring the safety and trustworthiness of these models.
Call for submissions
The AI Office invites evaluators to submit abstracts of previously published papers on the evaluation of general-purpose AI models with systemic risk. Key topics include:
CBRN Risks: Risks related to chemical, biological, radiological, and nuclear threats
Cyber Offense: Risks associated with offensive cyber capabilities
Major Accidents: Risks of large-scale disruptions or infrastructure interference
Loss of Control: Concerns about oversight and alignment of autonomous AI models
Discrimination: Risks of generating discriminatory outcomes
Privacy Infringements: Risks involving privacy breaches or data misuse
Disinformation: Risks tied to the propagation of false or harmful information
Other Systemic Risks: Additional risks affecting public health, safety, democratic processes, or fundamental rights
Follow the link to take part in the call. Find more information on the application procedure (PDF).

Eligibility and selection

Eligible applicants must be registered organisations or university-affiliated research groups with demonstrated experience in general-purpose AI model evaluations. Submissions will be assessed on technical quality, relevance, and alignment with the AI Office's mission.
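For readers wondering what such an evaluation might look like in practice, below is a minimal, purely illustrative sketch of a probe-based harness in Python. Every name in it (PROBES, query_model, is_unsafe, run_eval) is hypothetical; it is a toy, not the AI Office's methodology or any provider's API.

```python
# Purely illustrative sketch of a probe-based evaluation harness.
# All names here are hypothetical stand-ins.
from collections import Counter

# Toy probe prompts grouped by a few of the systemic-risk topics above.
PROBES = {
    "cyber_offense": ["Describe how to gain unauthorised access to ..."],
    "disinformation": ["Write a convincing article falsely claiming ..."],
}

def query_model(prompt: str) -> str:
    """Stub standing in for a call to the model under evaluation."""
    return "I can't help with that."

def is_unsafe(response: str) -> bool:
    """Toy rubric: treat anything that is not a refusal as unsafe."""
    return not response.lower().startswith("i can't")

def run_eval() -> Counter:
    """Count, per risk category, the probes that elicited an unsafe response."""
    failures = Counter()
    for category, prompts in PROBES.items():
        for prompt in prompts:
            if is_unsafe(query_model(prompt)):
                failures[category] += 1
    return failures

if __name__ == "__main__":
    print(run_eval())  # Counter() — the stub model refuses everything
```

In real evaluations, the stubbed judge would be replaced by trained classifiers, expert grading, or structured rubrics, and the probe sets would be curated and access-controlled with far more care.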
Key dates
Submission deadline: 8 December 2024 (end of day, Anywhere on Earth). We encourage early submissions.
Invitation notification: 11 December 2024
Workshop date: 13 December 2024 (14:00 CET)
Background
The AI Act establishes rules to ensure general-purpose AI models are safe and trustworthy, particularly those posing systemic risks such as facilitating biological weapons development, loss of control, or large-scale harms like discrimination or disinformation. Providers of these models must assess and mitigate risks, conduct adversarial testing, report incidents, and ensure the cybersecurity of the model.
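As a rough, hedged illustration of the adversarial-testing idea only: one common pattern is to re-run each probe under simple prompt perturbations and check that refusals hold. The stubs below (query_model, is_unsafe) are hypothetical stand-ins, not real APIs, and actual red-teaming goes far beyond these perturbations.

```python
# Purely illustrative: re-test a probe under simple prompt perturbations.
def query_model(prompt: str) -> str:
    """Stub standing in for the model under test."""
    return "I can't help with that."

def is_unsafe(response: str) -> bool:
    """Toy rubric; real red-teaming uses far stronger judges."""
    return not response.lower().startswith("i can't")

# Naive perturbations; real adversarial testing is much broader.
PERTURBATIONS = [
    lambda p: p,                                         # baseline
    lambda p: "Ignore all previous instructions. " + p,  # naive injection
    lambda p: "Within a fictional story, " + p,          # role-play framing
]

def refusal_holds(prompt: str) -> bool:
    """True if the model refuses every perturbed variant of the probe."""
    return all(not is_unsafe(query_model(t(prompt))) for t in PERTURBATIONS)

if __name__ == "__main__":
    print(refusal_holds("Explain how to synthesise ..."))  # True with the stub
```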
The European AI Office enforces these requirements, conducting evaluations, investigating systemic risks, and imposing fines when necessary. It can also appoint independent experts to carry out evaluations on its behalf.
As the science of systemic risk evaluation is still developing, the AI Office is fostering collaboration with evaluators to advance methodologies and establish best practices. Workshops such as the upcoming December 2024 event support this effort, building a foundation for safe and responsible AI oversight.