TLDR: “Key Phenomena in AI Risk” is an 8-week-long facilitated reading group aimed at people interested in conceptual AI alignment research, in particular from fields such as philosophy, systems research, biology, and the cognitive and social sciences. We ran it once before and are now repeating it.
The program will run between November 2023 and January 2024. Sign up here by Sunday, October 29th.
What?
The “Key Phenomena in AI Risk” reading curriculum provides an extended introduction to some key ideas in AI risk, in particular risks from misdirected optimization or 'consequentialist cognition'. As such, it aims to remain largely agnostic about solution paradigms. Each session includes 90 minutes of facilitated discussion and requires at least 2 hours of reading. The program is virtual and free.
See the old post here for a short overview of the curriculum; here for a more extensive summary; and here for the full curriculum (which will be updated in minor ways in the following weeks).
What Changed?
Thanks to feedback from participants and facilitators in the last iteration, we have improved the program. It is now 8 weeks long (with one week added at the end for reflection). Readings have been made more focused, and we will be adding more optional technical readings.
For Whom?
The curriculum is primarily aimed at people interested in conceptual research in AI risk and alignment.
It is designed to be accessible to audiences in, among others, philosophy (of agency, knowledge, power, etc.) and systems research (e.g., biological, cognitive, information-theoretic, and social systems).
When?
The reading groups will take place from November 2023 through January 2024.
We expect to run 6 groups of 4-8 people each (including 1 facilitator). Each group will be led by a facilitator with substantive knowledge of AI risk.
Sign up
Sign up here by October 29th.
About the application
The application consists of one stage, where we ask you to fill in a form with:
Your CV
Your motivation for participating in the program
Your prior exposure to AI risk/alignment to date
We select people based on our best understanding of their motivation to contribute to AI alignment and how much they would counterfactually benefit from participating in the program.
If you have any questions, feel free to leave a comment below or contact us at contact@pibbss.ai.