If anyone's interested in doing an even less formal version of this, I think it would be really useful for me to have semi-regular chats with other people in the alignment space. This could be anything from "you mentor me for an hour a week at the Lightcone office" to "we chat for 15 minutes on zoom every few weeks". I feel reasonably connected to the community, but I think I would strongly benefit from more two-way real-time interaction.
(More info about me: I'm currently doing full-time independent alignment research, but just on my own, with no structure. I majored in math & physics ten years ago, but dropped out. I'm quite comfortable generating my own ideas to pursue, but I'm still a baby in some domains (e.g. this week I'm finally learning what a Markov process formally is). I'm way more agent-foundations-flavored than ML-flavored; definitely interested in ML, but you might end up teaching me stuff the whole time.)
If it's easier for you, we can already facilitate that through M&M. Like we said, as long as both parties agree, you can do whatever makes sense for you :) But the program might make finding other people easier.
Does AI policy count, or is this more about technical AI safety? I work in a specific niche of AI policy, so my knowledge is somewhat lopsided. I have the credentials to prepare people for AI policy, but I'd definitely like to meet people and get industry advice.
Really excited about this!
I applied before reading this post (I was linked to it via a Twitter DM). I indicated a 3-6 hour time commitment (I'm a full-time theoretical computer science master's student with a precarious personal finance situation).
I could probably double the time commitment if I were given financial support. I wasn't aware that financial support was an option when applying. Is there a way to edit existing applications?
Just send in a new application. We have a couple of new mentors, but they are quite busy at the moment, so I can't promise that we'll find a match soon :( Sorry about that.
Executive summary
Brief overview
Motivation
There are many great programs to get people deeper into AI safety, e.g. AGISF fundamentals, SERI MATS, AI safety scholars, or the AI safety camp. There are also multiple options for getting individual grants (scroll down here for an overview), and there are multiple guides to career exploration in AI safety, e.g. how to pursue a career in technical alignment, leveling up in AI safety research engineering, or the AI safety starter pack. These resources are great, and we don't intend to replace them. However, we think there are some niches that can still be filled.
To evaluate whether our reasoning makes sense, we started a pilot phase for the program at the beginning of June 2022 with a grant from the Long-term Future Fund and organizational support from AI Safety Support (see the evaluation below).
Details of the program
The program is very flexible: as long as the mentor and mentee agree on the terms, they can pretty much do whatever makes the most sense for them. However, we provide a rough frame as the default plan.
Participation in the program is voluntary, but in some cases we offer to pay mentees for the time they spend on it. For example, some previous mentees would not have been able to participate at all if we hadn't provided funding, because they had to work a second job. The exact funding amount is based on the needs of the participant, but our default value is $30/h, which is roughly 2x what a teaching assistant would make in Germany. By default, the program runs 3-5 months, but it can be shorter or longer if people want that. Mentors can also get paid if they want to! So far, most mentors have done it for free because they have a stable income.
One key design component of the M&M program is that the overhead is very small for both mentors and mentees. The goal is that the logistical overhead for the entire program is less than 10 minutes per mentee. So far, that overhead consists of providing your bank details in case you want to be paid for your time and filling out the final survey at the end; everything else will be taken care of. The overhead for mentors is even smaller: they basically just have to fill in a row in a spreadsheet whenever they take on a new mentee. Everything else is handled by AI Safety Support or Marius. If we expand the program, we might try to get help with some of the organizational overhead.
Long-term role of M&M
Currently, we are uncertain about what role M&M should take in the long run. We expect it to be something like “filling gaps in other existing programs” by having lots of flexibility.
We don't expect to replace other programs. In fact, most of the mentees' schedules so far were created by picking and choosing parts of the AGISF schedule (the 101 and 201 curricula) and other AIS posts. We also expect to send many applicants directly to other programs if that makes the most sense for them.
We are currently unsure how much we should scale the program. This is mostly because we are uncertain how much value it provides relative to its costs (e.g. mentor time). We will try to monitor this over time and expand or reduce the size of the program accordingly.
An additional benefit of M&M is that the mentors can provide a reference for future programs or jobs, since they get a more detailed picture of the mentee than mentors in most other programs.
The bottleneck is mentorship
Ultimately, the bottleneck is, and will probably remain, mentorship. Many people would benefit from 1-on-1 mentorship, but this comes at the cost of the mentor's time. Investing 30-60 minutes every two weeks might not sound like much, but it can quickly grow with more mentees and take up headspace in other ways (e.g. answering messages between meetings).
There are some self-interested reasons to be a mentor, e.g. building a network and gaining mentoring experience, but we ultimately expect most mentors to do it for altruistic reasons.
We continuously evaluate the program and check in with our mentors to see whether they think their time investment is justified by the value they provide. So far, we are cautiously optimistic: mentorship seems to be more valuable, and to require less time, than we initially expected.
If you think you might make a good mentor, please consider reaching out. We are especially looking for members of underrepresented communities, e.g. women and people from developing countries. We think the ideal candidates for mentorship are early-stage professionals working in AI safety or an adjacent field (e.g. Ph.D. students, industry researchers/engineers, or independent contributors).
Evaluation of the pilot phase
We received funding for a pilot phase at the beginning of June 2022 from the Long-term Future Fund. There are currently 5 mentors and 10-15 mentees in the program.
How do we know whether we succeeded or failed?
Ultimately, we want to get more full-time AI safety researchers/engineers. Thus, we think the two main metrics are the increase in the probability that a mentee ends up doing full-time AI safety work (p(full-time work)) and the reduction in the time it takes them to get there.
Both of these are obviously hard to evaluate because we have to estimate counterfactuals. Currently, we estimate these quantities mostly by asking the mentees themselves and relying on the subjective impressions of the mentors. However, we would like to find more accurate approximations of these quantities and welcome suggestions.
However, we want to point out that rather small changes in these metrics could already justify the program. For example, if we invested 5-15 mentor-hours per mentee, a 10 percentage point increase in p(full-time work) or cutting the time to get there by 3 months would already be a good outcome.
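To make this back-of-the-envelope reasoning concrete, here is a minimal sketch of the trade-off. It assumes roughly 2,000 working hours per full-time researcher-year and counts only a single year of counterfactual full-time work; both numbers are illustrative assumptions on our part, not measurements from the program.

```python
# Back-of-the-envelope sketch of the mentor-time trade-off described above.
# All numbers are illustrative assumptions, not measured results from the program.

mentor_hours_per_mentee = 15       # upper end of the 5-15 mentor-hours quoted above
p_fulltime_increase = 0.10         # the 10 percentage point increase in p(full-time work)
hours_per_researcher_year = 2000   # assumed full-time working hours per year (not from the post)

# Expected counterfactual researcher-hours per mentee for one year of full-time work
expected_researcher_hours = p_fulltime_increase * hours_per_researcher_year

print(f"Mentor-hours invested per mentee: {mentor_hours_per_mentee}")
print(f"Expected researcher-hours gained per mentee-year: {expected_researcher_hours:.0f}")
print(f"Ratio (gained / invested): {expected_researcher_hours / mentor_hours_per_mentee:.1f}x")
```

Under these illustrative assumptions, the expected researcher-hours gained exceed the mentor-hours invested by roughly an order of magnitude, though mentor-hours and junior-researcher-hours are of course not directly interchangeable.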
Further evidence for the success or failure of the program includes
Evidence from the pilot phase
We evaluate the 3 mentees who have finished or nearly finished their run through the program (most mentees did not start in June).
These three mentees:
The five mentors:
Conclusions
We think the idea of matching mentors with mentees is pretty simple, probably useful, and has seen lots of success before, e.g. in SERI MATS. We're not sure whether the particular way we do this in M&M is optimal and intend to evaluate it regularly to make sure we're not wasting otherwise valuable time. For now, we are opening the program to applications from mentors and mentees, but we intend to stay relatively small until we have more information about the program's impact. Feedback is very welcome. If you want to help with or take over some of the design and operational aspects of the program, please reach out to Marius.