Scott Aaronson is a computer scientist at the University of Texas at Austin whose research focuses mainly on quantum computing and complexity theory. He's at least very adjacent to the Rationalist/LessWrong community. After some comments on his blog and subsequent conversations with Jan Leike, he has decided to work for one year on AI safety at OpenAI.
To me, this is a reasonable update toward the view that people who are sympathetic to AI safety can be convinced to actually do direct work on it.
Aaronson might be one of the easier people to persuade, but I imagine there are other people who would also be worth approaching about doing direct work on AI safety.
I always assumed that "Why don't we give Terence Tao a million dollars to work on AGI alignment?" was using Tao to refer to a class of people. Your comment implies that it would be especially valuable for Tao specifically to work on it.
Why should we believe that Tao would be especially able to make progress on AGI alignment (e.g. compared to other recent Fields Medal winners like Peter Scholze)?