There is the sacred, there is the mundane, and there is rent. In a civilization with decent epistemology, mundane problems with obvious solutions will likely already have been addressed, save for those a rent-extractor has a moat around. And even those can fall when things get shaken up. But what is sacred is not so easily thought about. So a good source of underfunded interventions will likely be those that impinge on the sacred.
Consider a civilization where hands are sacred and washing them is considered a sin. Wiping hands on a dry towel, though shameful, is allowed in private. But anything more is an insult to God, and gloves are considered a barrier between man and the world God created for him. Standard sanitation becomes difficult, and surgery an invitation to sepsis.
Here we have a world with a lot of cheap utils up for grabs. And Earth's effective altruists would have obvious interventions to fund. But let's imagine EA culture (at least what I see in the modern EA Forum) is a child of this world and of this dry-handed culture. This is approximately how I would expect them to react to an intervention that impinges on this sacred topic.
There was a satirical post I wrote for the EA Forum when it first started, which I never bothered publishing, as it was slightly mean-spirited. I had been reading Mormon history at the time and was impressed by the power of starting a cult. It struck me that if EAs continued tithing, ritualized it somewhat, and enforced fecundity norms, the expected impact would be enormous. The fact that the idea was surprisingly strong, yet seemed maximally disgusting to the type of person interested in EA, amused me.
Despite never posting it, I was confident that, had I done a good job and written it well, it would not have been massively downvoted despite being such a disgusting idea. To use a cringe term, there was much "high decoupler" nature in the EA Forum back then, and I would have expected counterarguments, not downvotes, provided my post was intelligent. This is now mostly dead.
habryka summarizes the areas Open Phil will blacklist an organization for funding:
And again, this is a blacklist, not just a funding preference. It casts a pall over any organization that funds multiple projects and wants Open Phil funding for at least one of them.
With the exception of avoiding rationalists (and can we really blame Moskovitz for that?), the Open Phil blacklist is a list of things that impinge on the sacred:
Digital minds impinge on the sacred notion of an immaterial soul, which remains powerful even in secular Western culture.
Wild animal welfare impinges on the sacred notion of a benevolent Mother Nature.
Human genetic engineering impinges on the sacred notion of human equality.
And "anything that is politically right-leaning" impinges on the sacred notion that Ezra Klein is correct about everything.
Disincentivizing research into the welfare of digital minds alone could undo the good of the rest of the portfolio many times over. There are consequences to lobotomizing one of the few cultures with good enough epistemology to think critically about sacred issues. It's plausible to me that enough good can be undone that one would have been better off buying yachts.
But regardless of Moskovitz's desire to keep his hands dirty, we are still left with the question of how one funds taboo-but-effective interventions, given the obvious reputational risks.
I think there may be a sort of geographic reputational arbitrage that is under-explored. Starting with a less controversial example: East Asian countries seem to have less parochial notions of human and machine consciousness, and the topic plausibly carries less political valence there. Raising or deploying funds in Japan, Korea, and perhaps even China (if possible) might be worth investigating.
In the case of engineering humans for increased IQ, Indians show broad support for such technology in surveys (even in the form of rather extreme intelligence enhancement), so one might focus on doing research there and/or lobbying the Indian public and government to fund such research. High-impact Indian citizens interested in this topic seem like very good candidates for funding, especially those with the potential to snowball internal funding sources insulated from Western media bullying. I've also heard that AI X-risk is much more viral in India than EA in general (in comparative terms, relative to the West).
As for wild-animal welfare, I don't have any ideas about similar arbitrages, but it may be worth some smarter minds' time to think the question over for five minutes.
And in terms of "anything right-leaning", a parallel EA culture, preferably with a different name and able to cultivate right-wing funding sources, might be effective. One might also fund propaganda campaigns to turn right-coded-but-good ideas into left-coded ideas instead. There is an obvious redistributional case for genetic engineering (what is a high IQ if not unearned genetic privilege?) that could, for example, be framed in a left-wing manner. Progress studies may be another avenue: not that they are necessarily right-leaning themselves, but if you integrate support for [progress-in-general and doing a science of it] over the intervals of the political spectrum, you might find that center-right-and-righter supports it more than center-left-and-lefter (though low confidence, and it might flip if you ignore the degrowth crowd).