Assuming this is serious, have you reached out to them?
The salary offer is high enough that any academic would at least take the call. If they're not interested themselves, you might be able to fund an endowment to get their lab working on your problems, or at a bare minimum, get them to refer one or more of their current or former students.
The salary is not that high. If Costis or Nina earns less than $150,000/year, I will eat my hat; $200k is more likely. Also, their jobs come with tenure (and access to the world's top graduate students), and you're unlikely to get them to quit.
(It is true that they might refer some of the open problems to their graduate students, though.)
Academics not willing to leave their jobs might still be interested in working on a problem part-time. One could imagine that the right researcher working part-time might be more effective than the wrong researcher working full-time.
Please feel free to repost this elsewhere, and/or tell people about it.
And if anyone is interested in this type of job but is currently still in school, or is for other reasons unable to work full-time at present, we encourage them to apply and note the circumstances, as we may be able to find other ways to support their work, or at least collaborate and provide mentorship.
But even for someone still in school, would the background of having proved non-trivial original theorems be required? All of this sounds exactly like the research agenda I'm interested in. I have a BS in math and am working on an MS in computer science. I have a good math background, but not at that level yet. Should I consider applying or not?
For this position, we are looking for people already able to contribute at a very high level. If you're interested in working on the agenda to see whether you'd be able to do this in the future, I'd be interested in chatting separately, looking at whether some form of financial support or upskilling would be useful, and looking at where to apply for funding.
I have a BS in mathematics and an MS in data science, but no publications. I am very interested in working on the agenda, and it would be great if you could help me find funding! I sent you a private message.
How does this relate to the job offer? Is this a second job, or the same job with the requirements clarified? Should I give up on this job now if I don't have publications?
It is a completely different job, with different requirements, different responsibilities, and even different employers (the other job is at MIRI; this job is at ALTER).
When do applications close?
When are applicants expected to begin work?
How long would such employment last?
When do applications close?
There is no particular deadline; it will be my judgment call based on the quality of applications and their distribution over time. I expect the position to remain open for no less than 2 weeks and no more than 6 months, but it's hard to say anything more specific at the moment.
When are applicants expected to begin work?
We are flexible about this: if an applicant needs several months to complete other commitments, that is perfectly acceptable.
How long would such employment last?
Until we either solve AI alignment or the AI apocalypse comes :)
(Or the employment is terminated because one of the parties is unsatisfied, or we run out of funding; hopefully neither will happen.)
If someone wanted to work out whether they might be able to develop the skills to work on this sort of thing in the future, is there anything you would point them to?
If you're interested, I'd start here: https://www.alignmentforum.org/posts/YAa4qcMyoucRS2Ykr/basic-inframeasure-theory and go through the sequence. (If you're not comfortable enough with the math involved, start here first: https://www.lesswrong.com/posts/AttkaMkEGeMiaQnYJ/discuss-how-to-learn-math )
And if you've gone through the sequence and understand it, I'd suggest helping to develop the problem sets mentioned in one of the posts, or reaching out to me.
UPDATE: The position is now closed. My thanks to everyone who applied, and also to those who spread the word.
The Association for Long Term Existence and Resilience (ALTER) is a new Israel-based charity promoting longtermist[1] causes. The director is David Manheim, and I am a member of the board. Thanks to a generous grant from the FTX Future Fund Regranting Program, we are recruiting a researcher to join me in working on the learning-theoretic research agenda[2]. The position is remote and suitable for candidates in most locations around the world.
Apply here.
Requirements
Job Description
The researcher is expected to make progress on open problems in the learning-theoretic agenda. They will have the freedom to choose any of those problems to work on, or to come up with their own research direction, as long as I deem the latter sufficiently important in terms of the agenda's overarching goals. They are expected to achieve results with minimal or no guidance. They are also expected to write up their results for publication in academic venues (and/or informal venues such as the Alignment Forum), prepare technical presentations, et cetera. (That said, we rate researchers according to the estimated impact of their output on reducing AI risk, not according to standard academic publication metrics.)
Here are some open problems from the agenda, described very briefly:
Terms
The position is full-time, and the candidate must be available to start working in 2022. The salary is between 60,000 and 180,000 USD/year, depending on the candidate's prior track record. The work can be done from any location. Further details depend on the candidate's country of residence.
Personally, I don't think the long-term future should override every other concern. And I don't consider existential risk from AI especially "long term", since it can plausibly materialize in my own lifetime. Hence, "longtermist" is better understood as "important even if you only care about the long-term future" rather than "important only if you care about the long-term future". ↩︎
The linked article is not very up-to-date in terms of the open problems, but is still a good description of the overall philosophy and toolset. ↩︎