If you are interested in working on AI alignment, and might do full- or part-time work given funding, consider submitting a short application to funding@ai-alignment.com.
Submitting an application is intended to be very cheap. In order to keep the evaluations cheap as well, my process is not going to be particularly fair and will focus on stuff that I can understand easily. I may have a follow-up discussion before making a decision, and I'll try not to favor applications that took more effort.
As long as you won't be offended by a cursory rejection, I encourage you to apply.
If there are features of this funding that make it unattractive, but there are other funding structures that could potentially cause you to work on AI alignment, I'm curious about that as well. Feel free to leave a comment or send an email to funding@ai-alignment.com (I probably won't respond, but it may influence my decisions in the future).
I had been thinking about metrics for measuring progress towards shared, agreed-upon outcomes as a method of co-ordination between potentially competitive powers, to avoid arms races.
I passed the draft around to a couple of the usual suspects in the AI metrics/risk mitigation space in hopes of finding collaborators. But no joy. I learnt that Jack Clark of OpenAI is looking at that kind of thing as well and is a lot better positioned to act on it, so I have hopes around that.
Moving on from that, I'm thinking that we might need a broad base of support from people (depending upon the scenario), so being able to explain how people could still have meaningful lives post-AI is important for building that support. That is what I've been thinking about lately.