If you are interested in working on AI alignment, and might do full- or part-time work given funding, consider submitting a short application to funding@ai-alignment.com.
Submitting an application is intended to be very cheap. To keep the evaluations cheap as well, my process will not be particularly fair and will focus on what I can understand easily. I may have a follow-up discussion before making a decision, and I'll try not to favor applications that took more effort to prepare.
As long as you won't be offended by a cursory rejection, I encourage you to apply.
If there are features of this funding that make it unattractive, but some other funding structure could plausibly get you to work on AI alignment, I'm curious about that as well. Feel free to leave a comment or send an email to funding@ai-alignment.com (I probably won't respond, but it may influence my future decisions).
Note that this is (by far) the least incentive-skewing of all the (publicly advertised) funding channels I know of.
Apply especially if all of 1), 2), and 3) hold:
1) you want to solve AI alignment
2) you think your cognition is pwned by Moloch
3) but you wish it weren't
There are probably no fire alarms for "nice AI designs" either, just like there are no fire alarms for AI in general.
Why should we expect people to share "nice AI designs"?