Effective Altruism presents itself as an objective arbiter of where marginal resources can do the most good. Its framework emphasizes cause-neutrality and cost-effectiveness, aiming to direct support to whatever opportunities are most impactful at the current margin. Yet EA is also a movement with its own institutions, career paths, and organizational interests. This dual identity—as both impartial judge and institutional player—creates a fundamental tension that grows more acute as the movement expands.

This is best observed by zooming in on neglectedness. When EA is small, thinking on the margin poses no problem: individual EAs can freely direct their resources to whatever cause seems most compelling at the current margin. But as EA grows, any cause it champions becomes significantly less neglected. In theory, this is not a problem but a sign of success: solving problems quickly and effectively frees up bandwidth to look for the next problem to solve. In practice, though, as the movement grows larger, it will likely begin to stand in the way of its own stated objectives.

Consider the fact that EA jobs are currently significantly oversubscribed. Operations roles at EA organizations that pay well under $100,000 receive several hundred, if not thousands, of applications. EA orgs can afford long, drawn-out evaluation processes spanning three to six months. Some in EA defend this oversubscription with arguments about power laws, claiming these roles are so much higher impact that the marginal value of an additional applicant doesn't diminish even with thousands of applicants.

But this seems implausible for most roles with bounded autonomy, even at exceptionally impactful organizations. The exceptions are high-leverage roles like leading organizations or specialized technical positions. For a marketing manager or operations coordinator, it's hard to argue that, in a pool of 2,000 qualified applicants, the delta between the best and second-best candidate is large enough to justify this insistence on working for an EA organization.
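Whether the power-law defense holds is ultimately an empirical question about the tail of the talent distribution, and the shape of the argument is easy to check with a toy simulation. The sketch below is my own illustration, not anything from the post, and every distribution and parameter in it is an arbitrary assumption: it compares the gap between the best and second-best candidate in pools of 200 versus 2,000 applicants, under a thinner-tailed (lognormal) and a genuinely heavy-tailed (Pareto) impact distribution.

```python
# Toy simulation (illustrative only; the distributions and parameters are
# arbitrary assumptions): does the gap between the best and second-best
# candidate stay large as the applicant pool grows?
import numpy as np

rng = np.random.default_rng(0)
TRIALS = 10_000  # simulated applicant pools per setting

def top_two_summary(draw, pool_size):
    """Median impact of the best and second-best candidate across pools."""
    samples = draw((TRIALS, pool_size))
    top_two = np.sort(samples, axis=1)[:, -2:]  # two largest per pool
    best = np.median(top_two[:, 1])
    second = np.median(top_two[:, 0])
    return best, second

distributions = {
    "lognormal(0, 1)": lambda size: rng.lognormal(0.0, 1.0, size),
    "pareto(alpha=1.5)": lambda size: rng.pareto(1.5, size) + 1.0,
}

for name, draw in distributions.items():
    for pool in (200, 2000):
        best, second = top_two_summary(draw, pool)
        print(f"{name:>17}  pool={pool:>4}  best={best:7.1f}  "
              f"second={second:7.1f}  gap={best - second:6.1f}")
```

With the heavy-tailed Pareto, the gap between the top two candidates remains a large fraction of the best candidate's value no matter how big the pool gets; with the lognormal, it is a much smaller fraction that slowly shrinks as the pool grows. Which regime a given operations role actually sits in is exactly what the power-law defense assumes without demonstrating.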

More importantly, this rationalization illustrates precisely how institutional forces can warp even well-intentioned movements. Even if we grant the power law argument, we should be wary of how conveniently it justifies the existing institutional structure. It's a perfect example of how EA organizations, despite their commitment to cause neutrality, can develop self-perpetuating logics that resist external scrutiny.

Viewed through the lens of public choice theory, this points to a deeper challenge for EA. EA is not just a handful of grantmakers trying to allocate resources; it is also the social and intellectual capital of the people who make up the movement. Consider how a substantial portion of EA's intellectual capital is now building careers in AI safety. These individuals, whether motivated by career advancement or genuine belief in the cause's importance, aren't immune to institutional incentives.

When you build a career in a specific domain, you naturally become slower to update downward on its relative importance. You see more arguments for its significance, develop a deeper understanding of its complexities, and become better positioned to articulate why it matters. Even with purely altruistic motives, expertise breeds advocacy: "I understand this deeply now, so I need to make sure others appreciate its importance."

This creates a form of intellectual and institutional lock-in. When EA identifies a cause area and invests in it, it's not just allocating money - it's creating careers, expertise, and institutional infrastructure. Any movement sufficiently large and invested in specific causes will face pressure to maintain these structures, potentially at the expense of pure cause neutrality.

One potential solution is to transform EA into a movement that primarily focuses on raising and allocating capital, rather than providing subsidized labor to "important causes." Under this model, EA would leverage market mechanisms and incentives to achieve its goals, with movement-building efforts centered on earning to give.

While some might object that ambitious EA projects require high-trust, value-aligned teams since impact can't be tracked purely through metrics, this argument deserves more scrutiny. Yes, corporations at the highest level have a clearer optimization target in profits, but at each lower level of the hierarchy they face the same challenges of incentive alignment and Goodharting that EA organizations do. Despite this, good companies manage to build effective hierarchies and get important things done. EA could similarly harness incentives and competitive dynamics to its advantage.

The challenge facing EA isn't just theoretical—it's structural. As long as EA tries to be both an objective arbiter of impact and a builder of permanent institutions, it will face increasingly difficult tensions between these roles. The movement's future effectiveness may depend on choosing a clearer path: either embracing its role as an institution-builder with all the path dependencies that entails, or transforming into a lean capital allocator that can remain truly neutral about where resources should flow.

Comments

Good post. Did you also cross-post to the forum? Also, do you have any thoughts on what to do differently in order to enable more exploration and less lock-in?

I just did. 

I'm not sure I have one that folks within EA would find palatable. The solution, in my mind, is for Effective Altruism to become a movement that mostly focuses on raising and allocating capital, one that uses markets to get things done downstream of that. I think EA should get out of the business of providing subsidized labor to the "most important causes". Instead, allocate capital and use incentives and markets to get what you want. This would mean all movement-building efforts focus on earning to give. If you want someone smart to found a charity, pay to incentivize that.

One response I anticipate from EAs is that ambitious projects often require teams that have high trust (or in EA parlance, are value-aligned), since impact often can't be tracked purely through metrics and incentives. I'm not sure I buy this. It's true that corporations, at the highest level, have something far more legible that the leadership team can optimize for. But at each lower level of the hierarchy, corporations also face the same problems of Goodharting and incentive alignment. They don't always make the best decisions, but good companies do manage to do this well enough at most levels to get important things done. What makes me even more suspicious is that people don't even want to try this.

I guess the solution you're more generally pointing at here is something like ensuring a split between the incentives of the people within specific fields and those of EA itself as a movement. Almost a bit like making that part of EA only be global priorities research and something like market allocation?

I have this feeling that there might be other ways to go about doing this, like programs or incentives for making people more open to taking any type of impactful job? Something like having recurring reflection periods or other types of workshops/programs?

I don't think it's great to tell most people to keep switching fields based on updated impact calculations. There are advantages to building focused careers: increasing returns to effort within the same domain. The exception would be founder types and some generalist-type talent. I'm not sure why we start with the premise that EA has to channel people into specific career paths based on impact calculations. It has a distortionary effect on the price of labor. Just as I'd prefer tax dollars being channeled into direct cash payments as welfare, I'd prefer if EAs made as much money as possible and donated it, so they can pay for whoever is best qualified to do what needs to be done.
