Hm. I'm sure plenty of people could do a fine job, myself included. But if every such person jumped in, it would be a mess. I assume that if Stuart Russell were the right person for the job, the job would already be done. Plausibly ditto Eliezer.
Rob Miles might be the obvious person for explaining things well. I totally endorse him doing attention-getting things I wouldn't endorse for people like me.
Also probably fine would be people optimized a little more for AI work than for explaining things. Paul Christiano may be the Schelling-point tip of the iceberg of people kinda doing Paul-like things. Trading off even more toward AI, Yoshua Bengio looks like a solid choice.
A framing I've been thinking about recently is AutoGPT. Obviously it's not very good at navigating the world, but my point is actually about humans: the first things people asked AutoGPT to do were simple tasks like "fix this code" or "make a plan for an ad campaign." Soon after, its creator told it to "help humanity." A few days after that, someone else told it to "destroy humanity." I think this progression is a good way of dividing up the discussion of whether AI poses an existential threat. Taken in reverse order:
https://www.lesswrong.com/posts/QzkTfj4HGpLEdNjXX/an-artificially-structured-argument-for-expecting-agi-ruin
Andrew Ng writes:
In the attached video, he says that he greatly respects many of the people who signed the letter and will reach out to those he thinks have a thoughtful perspective. But he is also interested in further suggestions for whom to talk to.
Given that Andrew Ng is one of the top AI scientists in the world, it seems valuable for someone to find a way to connect with him.