In general, for donating to alignment work, I think the best approach is to focus on local grants, because those are the ones that won't be picked up by bigger funders, who have a lot of money right now. By "local" I mean things like: if you meet someone who seems promising, fund their flights to visit an alignment hub, buy them textbooks, pay for ML tutoring, etc.
Couldn't these people just apply to the Long-Term Future Fund for those kinds of things? And couldn't the LTFF be better at recognising who is promising among the people who apply?
My guess is that the Long-Term Future Fund is the best you can do. (I'm a fund manager on a different EA fund.)
Larks' 2021 AI Alignment Literature Review and Charity Comparison is a good summary of the organizations and their funding situations.
I'm actually about to announce an AI Safety microgrant initiative for people who are looking to commit at least US$1,000 for every year they choose to be involved. The post will be out in the next few days; let me know if you want me to link you when it's ready.
Could you please link me when grant applications are open instead?
I'm considering AI Safety as a career and would go for it if I could have some time where I'm not worried about rent.
There may not be an open round, as we may find connections through our networks. However, if there is, we will post it on the forum.
I don't understand this field well; I'm hoping you can help me out.