I am preparing resources for an equivalent of the MLAB camp, organized by EffiSciences, that will be held in France in a month to train students to become prosaic alignment researchers.

About 3/4 of the participants are already concerned about existential risks from AI; the remaining 1/4 were selected for their technical skills and epistemics, but are not yet familiar with AI safety.

We have the intuition that one of the main reasons they do not engage with this issue is that they are unaware of how capable AI systems have recently become.

If you had 30 minutes to impress them with the advances in deep learning and the speed at which the field is progressing, what resources would you recommend? What should I present? How would you pitch this?


One thing I would emphasize is the pace of progress in AI: demonstrate the state of the art from ~30 years ago, ~20 years ago, ~10 years ago, and today.

Are you looking for particular examples of AI doing impressive things (like PaLM explaining jokes), or do you already have enough examples of that sort to draw from? One thing I would emphasize to your students is how easy it is to underestimate a system that approaches human ability in some domains but is not very human-like psychologically: for example, see this recent discussion about GPT-3.