The Fund for Alignment Research is a new organization that helps AI safety researchers, primarily in academia, pursue high-impact research by hiring contractors. They're a group of researchers affiliated with the Center for Human-Compatible AI at UC Berkeley and with other labs, such as Jacob Steinhardt's at UC Berkeley and David Krueger's at Cambridge. They are hiring for:
If you have any questions about the role, please contact them at hello@alignmentfund.org.
Appreciate the recommendation. Around April 1st I decided that the "work remotely for an alignment org" thing probably wouldn't work out the way I wanted it to, and switched to investigating "on-site" options - I'll write up a full post on that when I've either succeeded or failed on that score.
On a mostly unrelated note, every time I see an EA job posting that pays at best something like 40-50% of what qualified candidates would get in industry, it feels like it collides with the "we are not funding constrained" messaging. I understand that ther...
We (the Center for Effective Altruism) are hiring Full-Stack Engineers. We are a remote-first team and work on tools that (we hope) better enable others to work on AI alignment, including collaborating with the LessWrong team on the platform you used to ask this question :)
Anthropic will want you to be in their office in California for at least 25% or so of the time (based on one discussion with them; please correct me if you learn otherwise).
Have you considered CEA? Not a perfect fit, but they're remote-first, and I personally think they help with alignment research indirectly by building the EA community and by improving lesswrong.com as well (they use the same code). It's really important, I think, for these places to (1) be inviting, (2) promote good, complicated (non-toxic) discussions, and (3) connect people to relevant orgs/people, including AI Safety orgs.
Again, not sure this is what you're looking for. It resonates with me personally.
I'm curious why you think Ought doesn't count as "an organization that works either directly on AI alignment, or a 'meta' org that e.g. better enables others to work on AI alignment". More on Ought
It might be worth a shot to quickly apply to speak with 80,000 Hours and see if they have any suggestions.
Fathom Radiant, an ML hardware supplier, is also hiring remotely. Their plan is apparently to offer differential pricing for ML hardware based on safety practices, in order to incentivize safer practices and support safety research. I'm not totally sold, but my 80,000 Hours adviser seemed like a fan. You can speak with Fathom Radiant to learn more about their theory of change.
I'm not particularly sold on how Ought's current focus (Elicit) translates to AI alignment. I'm especially pessimistic about the governance angle, but I also don't see how an automated research assistant moves the needle on AI alignment research (as opposed to research in other domains, where I can much more easily imagine it being helpful).
This is possibly a failure of my understanding of their goals, or just of my ability to imagine helpful ways to use an automated research assistant (which won't be as usable for research that advances ...
tl;dr: qualified software engineer considering what their next job might be; now thinking about direct work as a serious option.
Previous plan was something like:
For a variety of reasons, I'm not a huge fan of this plan anymore.
New plan:
I didn't find anything looking at the job pages of the AI alignment orgs that I'm familiar with, and 80,000 Hours didn't bring up anything that fit the bill either, so here we are.
Me:
You:
Does anyone know of any orgs that I might have missed?
Most of the other orgs I'm familiar with seem to be doing differently-targeted work (e.g. Ought), or work that seems to boil down to "capabilities advancement", but I'm open to arguments here if I've misjudged one or more of them.