I broadly agree with this take, though my guess is that tractability is quite low for most people, even for top-25 CS schools like Yale/Penn*/UMich, as opposed to the best 4-5 schools. For example, it's probably not the case that the average Berkeley or MIT CS PhD can become a professor at a top-25 school, and that's already a highly selected cohort. There are a lot of grad students (Berkeley has ~50 AI PhDs and ~100 CS PhDs in my cohort, for example!), of whom maybe half would want to be professors if they thought it was tractable. Schools just don't hire that many professors every year, even in booming fields like CS!
That being said, if the actual advice is "if you're doing an ML PhD, you should seriously consider academia," then I fully agree.
It might be relatively tractable and high-value to be a CS professor somewhere with a CS department that underperforms but has a lot of potential. An ideal university like this would be wealthy, have a lot of smart people, and have a lot of math talent, yet underperform in CS and be willing to spend a lot of money to get better at it soon.
Another large advantage of being at a top research university in the US (even one that's mediocre at CS) is that you end up with significantly more talented undergrads as research assistants (as most undergrads don't pick schools based on their major, or switch majors midway). I think the main disadvantage of going to a lower-tier CS program is that it becomes significantly harder to recruit good grad students.
=======
*That being said, Penn is great and Philadelphia is wonderful; would recommend even though the number of undergrads who want to do research is quite low! (Disclaimer: I went to Penn for undergrad).
Thanks! One thing I'll add is that there's a chance that someone at a school that doesn't normally attract a ton of great grad students might be able to get some good applicants via the AI safety community.
I'll just comment on my experience as an undergrad at Yale in case it's useful.
At Yale, the CS department, particularly when it comes to state-of-the-art ML, is not very strong. There are a few professors who do good work, but Yale is much stronger in social robotics, and there is also some ML theory. There are a couple of AI ethics people at Yale, and there soon will be a "digital ethics" person, but there aren't any AI safety people.
That said, there is a lot of latent support for AI safety at Yale. One of the global affairs professors involved in the Schmidt Program for Artificial Intelligence, Emerging Technology, and National Power is quite interested in AI safety. He invited Brian Christian and Stuart Russell to speak and guest teach his classes, for example. The semi-famous philosopher L.A. Paul is interested in AI safety, and one of the theory ML professors had a debate about AI safety in one of his classes. One of the professors most involved in hiring new professors specifically wants to hire AI safety people (though I'm not sure he really knows what AI safety is).
I wouldn't really recommend Yale to people who are interested in doing very standard ML research and want an army of highly competent ML researchers to help them. But for people whose work interacts with sociotechnical considerations like policy, or is more philosophical in nature, I think Yale would be a fantastic place to be, and in fact possibly one of the best places one could be.
This is great, thanks. It seems like wanting a large team of existing people with technical talent is a reason not to work somewhere like Yale. But what are the chances that the presence of lots of money and smart people would make this possible in the future? Is Yale working on strengthening its CS department? One of my ideas behind this post is that being the first person doing certain work in a department that has potential might have some advantages compared to being the 5th in a department that has already realized its potential. An AI safety professor at Yale might get invited to a lot of things, have little competition for advisees, be more uniquely known within Yale, and provide advocacy for AI safety in a way that counterfactually would not happen otherwise at the university.
I think this is all true, but also, since Yale CS is ranked poorly, the graduate students are not very strong for the most part. You certainly have less competition for them if you are a professor, but my impression is that few top graduate students want to go to Yale. In fact, my general impression is that the undergraduates are often stronger researchers than the graduate students (and then they go on to PhDs at higher-ranked places than Yale).
Yale is working on strengthening its CS department and it certainly has a lot of money to do that. But there are a lot of reasons that I am not that optimistic. There is essentially no tech scene in New Haven, New Haven is not that great in general, the Yale CS building is extremely dingy (I think this has an actual effect on people), and it's really hard to affect the status quo. However, I'm more optimistic that Yale will successfully forge a niche of interdisciplinary research, which is really a strength of the university.
As a nitpick: I think the USNews ranking of CS Graduate programs is better than the rankings you're currently using:
https://www.usnews.com/best-graduate-schools/top-science-schools/computer-science-rankings?_sort=rank-asc
The “canonical” rankings that CS academics care about would be csrankings.org (also not without problems but the least bad).
That list seems really off to me - I don't think UCSD and UMich should rank above both Stanford and Berkeley.
EDIT: This is probably because csrankings.org calculates rankings based on a normalized count of unweighted faculty publications, as opposed to weighting publications by impact.
I think this list is the least bad of any I've seen so far: https://drafty.cs.brown.edu/csopenrankings/
This is an interesting way to look at it. I'm not sure it makes total sense, because if some university that's (relatively) bad at CS is bad because it doesn't care as much, and accepts students who don't care much either, then I don't think you get a benefit out of going there just because they're high-ranked overall. (E.g. maybe all the teaching faculty at U of M teach premeds more than future AI researchers, and don't get support for pet projects)
In other words, you still have to evaluate cultural fit. I'm not even sure that relatively low ranking on CS is correlated with good cultural fit rather than anticorrelated.
I'm not sure it makes total sense, because if some university that's (relatively) bad at CS is bad because it doesn't care as much,
My guess is that the better universities are generally better because of network effects: better faculty want to be there, which means you get better grad students and more funding, which means you get better faculty, etc. Many of the lower-tier CS departments at rich research universities still have a lot of funding and attention. My impression is also that almost no large research university "wants" to be bad at CS; it's just pretty hard to overcome the network effects.
Also, in terms of research funding, the majority of it comes from outside grants anyway. And a good AI alignment professor should not have much difficulty securing funding from EA.
Nice. I've previously argued similarly that if going for tenure, AIS researchers might prefer places that are strong in departments other than their own, for inter-departmental collaboration. This would have similar implications to your thinking about recruiting students from other departments. But I also suggested we should favour capital cities, for policy input, and EA hubs, to enable external collaboration. And tenure may be somewhat less attractive for AIS academics than it usually is: given our abundant funding, we might have reason to favour top-5 postdocs over top-100 tenure.
My impression is that the majority of the benefit from having professors working on AI safety is in mentorship to students who are already interested in AI safety, rather than recruitment. For example, I have heard that David Krueger's lab is mostly people who went to Cambridge specifically to work on AI safety under him. If that's the case, there's less value in working at a school with generally talented students but more value in schools with a supportive environment.
In general, it's good to recognize that what matters to AI safety professors is different from what matters to many other CS professors, and that optimizing for the same things other PhD students are optimizing for is suboptimal. However, as Lawrence pointed out, it's already a rare case to have offers from multiple top schools, and rarer still not to have one offer dominate the others under both sets of values. It's a more relevant consideration for incoming PhD students, where having multiple good offers is more common.
I also like that your analysis can flow in reverse. Not all AI safety professors are in their schools' CS faculties, with Jacob Steinhardt and Victor Veitch coming to mind as examples in their schools' statistics faculties. For PhD students outside CS, the schools you identified as overachievers make excellent targets. On a personal note, that was an important factor in deciding where to do my PhD.
Stephen Casper, scasper@mit.edu
TL;DR: being a professor in AI safety seems cool. It might be tractable and high impact to be one at a university that has lots of money, smart people, and math talent but an underperforming CS department – especially if that university is actively trying to grow and improve its CS capabilities.
--
Epistemic status: I have thought about this for a bit and just want to share a perspective and loft some ideas out there. This is not meant to be a thorough or definitive take.
Right now, there are relatively few professors at research universities who put a strong emphasis on AI safety and avoiding catastrophic risks from it in their work. But being a professor working in AI safety might be impactful. It offers the chance to cultivate a potentially large number of students' interests and work via classes, advising, research, and advocacy. Imagine how different things might be if we had dozens more people like Stuart Russell, Max Tegmark, or David Krueger peppered throughout academia.
I’m currently a grad student, but I might be interested in going into academia after I graduate. And I have been thinking about how to do it. This has led me to some speculative ideas about what universities might be good to try to be a professor at. I’m using this post to loft out some ideas in case anyone finds them interesting or wants to start a discussion.
One idea for being a professor in AI safety with a lot of impact could be to work at MIT, Berkeley, or Stanford, be surrounded by top talent, and add to the AI safety-relevant work these universities put out. For extremely talented and accomplished people, this is an excellent strategy. But there might be lots of great but low-hanging fruit in other places.
It might be relatively tractable and high-value to be a CS professor somewhere with a CS department that underperforms but has a lot of potential. An ideal university like this would be wealthy, have a lot of smart people, and have a lot of math talent, yet underperform in CS and be willing to spend a lot of money to get better at it soon. This offers an aspiring professor the chance to hit a hiring wave, get established in a growing department, and hopefully garner a lot of influence in the department and at the university.
I went to topuniversities.com and found the universities that were ranked 1st-60th in the world overall, in CS, and in math. There are major caveats here because rankings are not that great in general, and CS rankings don't reflect the fact that some types of CS and math research are more AI safety relevant than others. But to give a coarse first glance, below are plotted CS ranks against the overall and math ranks. Note that for anything ranked outside the top 60, I just set the value to 61, so both plots have walls at x=61 and y=61.
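The capping step described above can be sketched roughly as follows. The university names and rank values here are made-up placeholders (the real data came from topuniversities.com); only the cap value of 61 is from the post.

```python
# Minimal sketch of the rank-capping step, with made-up example rankings.
CAP = 61  # anything ranked outside the top 60 is set to 61

# (overall_rank, math_rank, cs_rank); None = outside the top 60
EXAMPLE_RANKS = {
    "University A": (5, 10, 40),
    "University B": (12, None, None),
    "University C": (55, 30, 8),
}

def cap_rank(rank, cap=CAP):
    """Clamp a rank to the cap; treat 'unranked' (None) as the cap."""
    return cap if rank is None or rank > cap else rank

# (x, y) points for the "CS rank vs. overall rank" scatter plot
points = [
    (cap_rank(cs), cap_rank(overall))
    for overall, _math, cs in EXAMPLE_RANKS.values()
]
print(points)  # -> [(40, 5), (61, 12), (8, 55)]
```

The same `points` list could then be fed into any plotting library as scatter coordinates, with both axes capped at 61 to produce the "walls" mentioned above.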
Some universities seem to underperform in CS and might be good to look for professorships at! These include:
And on the other side of the coin, some universities clearly overachieve in CS.
Obviously, there are tons more factors to consider when thinking about where to search for a professorship, but I hope this is at least useful food for thought.
What do you think?
How might you improve on this analysis?