I've finished two Bachelor's degrees, in Maths and Physics, with moderately good grades but a fairly advanced thesis (and advanced study) in Mathematical Logic. I've recently learned about the ethical urgency of AI Safety research. With the prospect of getting into that field in the near future (probably along the lines of a theoretical research lead), I now face a career decision, and I'd be really thankful if anyone familiar with the academic field of AI research could share their thoughts. My two options are the following:

  1. Taking the Master of Pure and Applied Logic in Barcelona. The programme is almost exclusively pure maths, and I'm fairly confident I'd obtain excellent grades.
  2. Undertaking a 3-year PhD in Mathematical Logic in Vienna. The research would again be in specific areas of pure maths (Recursion Theory, Proof Theory, Set Theory), and I'm fairly confident (though less so than for the Master's) that I'd obtain good results.

My initial thought was that the Master's might be the better choice for ending up in AI Safety, since it leaves open the possibility of later undertaking a PhD closer to AI, and excellent grades would probably make me a valuable applicant. On the other hand, obtaining a PhD in pure maths while as young as I am might provide even more status. Furthermore, I'm not certain the best way to enter AI academia is necessarily through an AI-related PhD.

Thomas Kwa

I think a choice between just two options is a limited action space that's unlikely to contain the best thing you could be doing. I wrote about why a limited action space can remove the vast majority of your impact here. Have you considered:

  • careers outside of academia
  • programs outside of Europe
  • PhD programs in computer science
  • taking a gap year to learn the basics of some subfield of alignment, possibly doing some independent research, then doing a PhD in alignment somewhere like CHAI
  • developing aptitudes other than math ability, so that you can become the Pareto-best in the world at some combination of skills
  • gathering more information on your comparative advantage before committing to a large career decision
    • internships
  • doing something like Cambridge AGISF to see which theoretical problems fit you best
    • testing your skill at machine learning
    • distilling some papers as a cheap test of your technical writing skill

It's also possible to do harm if you advance AI capabilities more than safety, so any plan to go into AI research has to have a story for how you differentially advance safety.

Thank you very much for your comment. Without delving into the details, some of these routes seem infeasible for me right now, but others don't. You've also given me some useful ideas and resources I hadn't considered or read about yet.