This is for a person with no ML background. He is 55 years old, he liked the Sequences, and I recently managed to convince him that AI risk is serious by recommending a bunch of LessWrong posts on it, but he still thinks it's astronomically unlikely that AGI is less than 80 years away.
There are a lot of other people like this, so I think it's valuable to know what the best explainer is, beyond just my own case.
It's probably best not to skip straight to List of Lethalities. But then again, that kind of gradual approach turned out to be wrong for "Politics is the Mind-Killer," where it was best to just have the person dive right in.