This is for a person with no ML background. He is 55 years old, he liked the Sequences, and I recently managed to convince him that AI risk is serious by recommending a number of LessWrong posts on it, but he still thinks it's astronomically unlikely that AGI is <80 years away.
There are a lot of other people like this, so I think it's valuable to know what the best explainer is, more than just in my case.
Yes, I think this is the most important question. It's one thing to be unaware of progress in AI and so have no sense that AGI might come soon. General resources are fine for updating that sort of mindset.
It's another thing to be aware of current progress but think it might, or probably will, take longer. That's fine; I think it might take longer too, and I can certainly understand having reasons to believe it's less likely than not to happen this century, even if I don't hold them myself.
It's very different if someone is aware of current developments but holds extremely strong views against AGI happening any time this century. Do they actually mean the same thing by "AGI" as most other people do? Do they think it is possible at all?