Right now in Effective Altruism (or Rationality), we have a few donor funds with particular focus areas. In this post I propose a new type of fund that is instead focused on maximizing the combined utility functions of its particular donors. The fund's goal would be something like, “Maximize the combined utility of our donors, adjusted for donation amount, in any way possible, abiding by legal and moral standards.” I think this sort of fund structure is still mostly theoretical, but it could meet some particular wants that aren’t currently being met.

For this document, I call these funds "Contribution-Adjusted Utility Maximization Funds", or CAUMFs. This name is intentionally long; this idea is early, and I don't want to pollute the collective namespace.
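To make the objective a bit more concrete, here is a minimal sketch (in Python) of one way “combined utility of our donors, adjusted for donation amount” could be operationalized: weight each donor's utility for a candidate grant by their share of total contributions. The weighting rule and all names below are my own illustrative assumptions, not part of the proposal itself.

```python
# Illustrative sketch only: score a candidate grant under a
# "contribution-adjusted" objective by weighting each donor's utility
# by their share of total donations. All names are hypothetical.

def caumf_score(donations: dict[str, float],
                utilities: dict[str, float]) -> float:
    """Donation-weighted average of donor utilities for one candidate grant.

    donations: donor -> amount contributed to the fund
    utilities: donor -> that donor's utility for the candidate grant
    """
    total = sum(donations.values())
    if total == 0:
        return 0.0
    return sum(donations[d] / total * utilities.get(d, 0.0) for d in donations)


# Example: pick the candidate grant with the highest contribution-adjusted score.
donations = {"alice": 5_000.0, "bob": 1_000.0}
candidates = {
    "covid_risk_research": {"alice": 0.9, "bob": 0.4},
    "community_coworking": {"alice": 0.2, "bob": 0.8},
}
best = max(candidates, key=lambda g: caumf_score(donations, candidates[g]))
print(best, caumf_score(donations, candidates[best]))
```

A real CAUMF would of course need a much richer way of eliciting and aggregating donor preferences; this is only meant to show the donation-weighted flavor of the objective.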

This fund type has two purposes.

  1. It’s often useful for individuals to coordinate on non-charitable activities; for example, research into the best COVID risk measures for a particular community to use.
  2. These funds should make it very clear that marginal donations will be valuable for the donor’s own preferences, so donating to them should be safe on the margin. Hopefully this would result in more total donations.

You can picture these funds as somewhere between bespoke nonprofit advising institutions, cooperatives, and small governments. If AI and decision automation can cut down on labor costs, related organizations might eventually become much more exciting.