There is the sacred, there is the mundane, and there is rent. In a civilization with decent epistemology, mundane problems with obvious solutions will likely already have been addressed, save for those a rent-extractor has built a moat around. And even those can fall when things get shaken up. But what is sacred is not so easily thought about. So a good source of underfunded interventions will likely be those that impinge on the sacred.

Consider a civilization where hands are sacred and washing them is considered a sin. Wiping hands on a dry towel, though shameful, is allowed in private. But anything more is an insult to God, and gloves are considered a barrier between man and the world God created for him. Standard sanitation becomes difficult, and surgery an invitation to sepsis.

Here we have a world with a lot of cheap utils up for grabs, and Earth's effective altruists would have obvious interventions to fund. But let's imagine EA culture (at least what I see in the modern EA Forum) is a child of this world and of this dry-handed culture. This is approximately how I would expect them to react to an intervention that impinges on this sacred topic.


There was a satirical post I wrote for the EA Forum when it first started, which I never bothered publishing as it was slightly mean-spirited. I had been reading Mormon history at the time and was impressed by the power of starting a cult. It struck me that if EAs continued tithing, ritualized somewhat, and enforced fecundity norms, the expected impact would likely be enormous. The fact that this idea was actually surprisingly strong, and seemed maximally disgusting to the type of person interested in EA, was amusing to me.

Despite not posting it, I never doubted that if I did a good job and wrote it well, I would not be massively downvoted for such a disgusting idea. To use a cringe term, there was much "high decoupler" nature in the EA Forum back then, and I would have expected counterarguments rather than downvotes, provided my post was intelligent. That culture is now mostly dead.

habryka summarizes the areas for which Open Phil will blacklist an organization from funding:

You can see some of the EA Forum discussion here: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures?commentId=RQX56MAk6RmvRqGQt 

The current list of areas that I know about are: 

  • Anything to do with the rationality community ("Rationality community building")
  • Anything to do with moral relevance of digital minds
  • Anything to do with wild animal welfare and invertebrate welfare
  • Anything to do with human genetic engineering and reproductive technology
  • Anything that is politically right-leaning

There are a bunch of other domains where OP hasn't had an active grantmaking program but where my guess is most grants aren't possible: 

  • Most forms of broad public communication about AI (where you would need to align very closely with OP goals to get any funding)
  • Almost any form of macrostrategy work of the kind that FHI used to work on (i.e. Eternity in Six Hours and stuff like that)
  • Anything about acausal trade or cooperation in large worlds (and more broadly anything that is kind of weird game theory)

And again, this is a blacklist, not just a funding preference. It casts a pall on any organization that runs multiple projects and wants Open Phil funding for at least one of them.

If you "withdraw from a cause area", you would expect that an organization doing good work in multiple cause areas could still be funded for its work in the areas funding wasn't withdrawn from. However, what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations: if you are associated with a certain set of ideas, identities, or causes, then no matter how cost-effective your other work is, you cannot get funding from OP.

With the exception of avoiding rationalists (and can we really blame Moskovitz for that?), the Open Phil blacklist is a list of things that impinge on the sacred:

Digital minds impinge on our intuition of having an immaterial soul, an intuition that is still powerful even in secular Western culture.

Wild animal welfare impinges on the sacred notion of a benevolent mother nature.

Human genetic engineering impinges on the sacred notion of human equality.

And "anything that is politically right-leaning" impinges on the sacred notion that Ezra Klein is correct about everything.

Disincentivizing research into the welfare of digital minds alone can undo the good done elsewhere many times over. There are consequences to lobotomizing one of the few cultures with good enough epistemology to think critically about sacred issues, and much good can be undone. It's plausible to me that enough good can be undone that one would have been better off buying yachts.

But regardless of Moskovitz's desire to keep his hands dirty, we are still left with the question of how one funds taboo-but-effective interventions given the obvious reputational risks. 

I think there may be a sort of geographical reputational arbitrage that is under-explored. Starting with a less controversial example, East Asian countries seem to have less parochial notions of human and machine consciousness, and the topic plausibly has less political valence there. Raising or deploying funds in Japan and Korea, and perhaps even China if possible, might be worth investigating.

In the case of engineering humans for increased IQ, Indians show broad support for such technology in surveys (even in the form of rather extreme intelligence enhancement), so one might focus on doing research there and/or lobbying its people and government to fund such research. High-impact Indian citizens interested in this topic seem like very good candidates for funding, especially those with the potential to snowball internal funding sources that will be insulated from Western media bullying.

As for wild animal welfare, I don't have any ideas about similar arbitrages, but I think it may be worth some smarter minds' time to think over this question for five minutes.

And in terms of "Anything right-leaning", a parallel EA culture, preferably with a different name, able to cultivate right-wing funding sources might be effective. One might also focus on propaganda campaigns to try to turn right-coded-but-good ideas into left-coded ones instead. There is an obvious redistributional case for genetic engineering (what is a high IQ if not unearned genetic privilege?), which could perhaps be framed in a left-wing manner, for example.

Comments

In the case of engineering humans for increased IQ, Indians show broad support for such technology in surveys (even in the form of rather extreme intelligence enhancement), so one might focus on doing research there and/or lobbying its people and government to fund such research. High-impact Indian citizens interested in this topic seem like very good candidates for funding, especially those with the potential to snowball internal funding sources that will be insulated from Western media bullying.

I've also heard that AI X-risk is much more viral in India than EA in general (in comparative terms, relative to the West).

And in terms of "Anything right-leaning", a parallel EA culture, preferably with a different name, able to cultivate right-wing funding sources might be effective.

Progress studies? Not that they are necessarily right-leaning themselves, but if you integrate support for [progress-in-general and doing a science of it] over the intervals of the political spectrum, you might find that center-right-and-righter supports it more than center-left-and-lefter (though low confidence, and it might flip if you ignore the degrowth crowd).

With the exception of avoiding rationalists (and can we really blame Moskovitz for that?)

care to elaborate?

I was joking.
