I think a choice between just two options is a limited action space that's unlikely to contain the best thing you could be doing. I wrote about why a limited action space can remove the vast majority of your impact here. Have you considered
It's also possible to do harm if you advance AI capabilities more than safety, so any plan to go into AI research has to have a story for how you differentially advance safety.
Thank you very much for your comment. Without going into the details, some of these routes seem unfeasible for me right now, but others don't. You've also pointed me to some useful ideas and resources I hadn't considered or read about before.
I've finished two Bachelor's degrees, in Maths and Physics, with moderately good grades but a fairly advanced thesis (and advanced study) in Mathematical Logic. I've recently learned about the ethical urgency of AI Safety research. With the prospect of entering that field in the near future (probably along the lines of a Theoretical Research Lead), I now face a career decision, and I'd be really thankful if anyone familiar with the academic field of AI research could share their thoughts. My two options are the following:
My initial idea was that the Master's might be a better choice for ending up in AI Safety, since it leaves open the possibility of later undertaking a PhD closer to AI, and I'd probably be seen as a strong candidate if I obtain excellent grades. On the other hand, obtaining a PhD in pure maths while still as young as I am might confer even more status. Furthermore, I'm not certain the best way to enter AI academia is necessarily through an AI-related PhD.