IMO "major donors won't fund this kind of thing" is a pretty compelling reason to look into it, since great opportunities which are illegible or structurally-hard-to-fund definitely exist (as do illegible-or-etc terrible options; do your diligence). On the other hand I'm pretty nervous about the community dynamics that emerge when you're granting money and also socially engaged with and working in the field. Caveat donor!
Agreed; I think people should apply a pretty strong penalty when evaluating a potential donation that has or worsens these dynamics. There are some donation opportunities that still have the "major donors won't [fully] fund it" and "I'm advantaged to evaluate it as an AIS professional" features without the "I'm personal friends with the recipient" weirdness, though -- e.g. alignment approaches or policy research/advocacy directions you find promising that Open Phil isn't currently funding, and that would be executed thousands of miles away.
I work on Open Philanthropy’s AI Governance and Policy team, but I’m writing this in my personal capacity – several senior employees at Open Phil have argued with me about this!
This is a brief-ish post addressed to people who are interested in making high-impact donations and are already concerned about potential risks from advanced AI. Ideally such a post would include a case that reducing those risks is an especially important (and sufficiently tractable and neglected) cause area, but I’m skipping that part for time and will just point you to this 80,000 Hours problem profile for now.
Edited to add a couple more concrete ideas for where to donate:
First, a meta point: I think people sometimes accept the above considerations “on vibes.” But for people who agree that reducing AI risks is the most pressing cause (as in, the most important, neglected, and tractable one) and who accept my earlier argument that there are good giving opportunities in AI risk reduction at current margins, especially for people who work in that field, their views imply that their donation is a decision with nontrivial stakes. They might actually be giving up a lot of prima facie impact in exchange for more worldview diversification, signaling, and morale. I know this does not address the above considerations, and it could still be a good trade; I’m basically just saying that those considerations have to turn out to be valid and pretty significant in order to outweigh the consequentialist advantages of AI risk donations.
Second, I think it’s coherent for individuals to be uncertain that AI risk is the best thing to focus on (on both empirical and normative levels) while still thinking it’s better to specialize, including in one’s donations. That’s because worldview diversification seems to me to make more sense at larger scales, like the EA movement or Open Philanthropy’s budget, and less sense at the scale of individuals and small donors. Consider the limits in either direction: it seems unlikely that individuals should work multiple part-time jobs in different cause areas instead of picking one in which to develop expertise and networks, and it seems like a terrible idea for all of society to dedicate its resources to a single problem. There’s some point in between where the costs of scaling an effort, and the diminishing returns of more resources thrown at the problem, start to outweigh the benefits of specialization. I think individuals are probably on the “focus on one thing” side of that point.