Effective altruism exists at the intersection of other social and intellectual movements and communities. Some, but not all, of the organizations in these communities that work on EA focus areas, such as existential risk reduction, identify with effective altruism as a movement. Such organizations are typically labeled "EA-aligned organizations."

Ben Pace

I can't answer your question properly, in part because I am not BERI. I'll just share some of my thoughts that seem relevant to this question:

  • I expect everything BERI supports and funds to always be justified in terms of x-risk. It will try to support all the parts of EA that are focused on x-risk, and not the rest. For example, their grant to EA Sweden is described as "Effective Altruism Sweden will support Markus Stoor’s project to coordinate two follow-up lunch-to-lunch meetings in Sweden for x-risk-focused individuals."
  • I think it would be correct to classify it entirely as an x-risk org and not as an EA org. I don't think it does any EA-style analysis of what it should work on that is not captured under x-risk analysis, and I think that people working to do things like, say, fight factory farming, should never expect support from BERI (via the direct work BERI does).
  • I think it will have indirect effects on other EA work. For example, BERI supports FHI, and this gives FHI a lot more freedom to take actions in the world; FHI in turn supports other areas of EA (e.g. Owen Cotton-Barratt advises the CEA, and that probably trades off against his management time on the RSP programme). I expect BERI does not count this in their calculations on whether to help out with some work, but I'm not confident.

I would call it an x-risk org and not an EA-aligned org in its work, though I expect its staff all care about EA more broadly.

I think it's worth noting that an org can be an EA org even if it focuses exclusively on one cause area, such as x-risk reduction. What seems to matter is (1) that such a focus was chosen because interventions in that area are believed to be the most impactful, and (2) that this belief was reached from (a) welfarist premises and (b) rigorous reasoning of the sort one generally associates with EA.

This seems like a thin concept of EA. I know there are organizations that choose to pursue interventions in an area they believe to be (among) the most impactful, based on welfarist premises and rigorous reasoning. Yet they don't identify as EA organizations. That would be because they disagree with the consensus in EA about what constitutes 'the most impactful,' 'the greatest welfare,' and/or 'rigorous reasoning.' So the consensus position(s) in EA on how to interpret all those notions could be thought of as the thick concept of EA.

Also, this seems to be a prescriptive definition of "EA organizations," as opposed to a descriptive one. That is, all the features you mentioned seem necessary to define EA-aligned organizations as they exist, but I'm not convinced they're sufficient to capture all the characteristics of...

Pablo
I said that the belief must be reached from welfarist premises and rigorous reasoning, not from what the organization believes are welfarist premises and rigorous reasoning. I'm not sure what you mean by this. And it seems clear to me that lots of nonprofit orgs would not be classified as EA orgs given my proposed criterion (note the clarification above).
Ben Pace
Fair.

Ben Pace

Why do you care? Can't this be cashed out into what you can actually expect of the org?

  • If I come to them with an x-risk project to support via their university help, will they seriously consider supporting it? Probably yes.
  • If I come to them with a global poverty project to support via their university help, will they seriously consider supporting it? Probably no.
  • Do their hires primarily come from people who have followed places like the EA Forum and attended EA Global conferences in past years and worked on other non-profit projects with people who've done the same? I think so.
  • When they used to run their grants programme, did they fund non-x-risk things? Largely not. Of the less obvious ones, they funded LessWrong, CFAR, 80k, REACH, and Leverage, which are each indirect to varying degrees, but I expect they funded them all for the effects they thought those orgs would have on x-risk.

Technical aside: Upvoted for being a thoughtful, albeit challenging, response that impelled me to clarify why I'm asking this as part of the framework for a broader analysis project I'm currently pursuing.

Ben Pace
Pardon for being so challenging; you know I'm always happy to talk with you and answer your questions, Evan :) I am just a bit irritated, and let that out here. I do think that "identity" and "brand" mustn't become decoupled from what actually gets done: if you want to talk meaningfully about 'EA' and what's true about it, it shouldn't all be level 3/4 simulacra. Identity without substance or action is meaningless, and sort of not something you get to decide for yourself. If you decide to identify as 'an EA' and this causes no changes in your career or your donations, has the average EA donation suddenly gone down? Has EA actually grown? It's good to be clear on the object level and on whether the proxy actually measures anything, and I'm not sure I should call such a person 'an EA' despite their speech acts to the contrary. (Will go and read your longer comment now.)

Summary:

I'm working on a global comparative analysis of funding/granting orgs not only in EA, but also in those movements/communities that overlap with EA, including x-risk.

Many in EA may evaluate/assess the relative effectiveness of the orgs in question according to the standard normative framework(s) of EA, as opposed to the lens(es)/framework(s) through which such orgs evaluate/assess themselves, or would prefer to be evaluated/assessed by other principals and agencies.

I expect that the EA community will want to know to what extent various orgs...

Ben Pace
*nods* I think I understand your motivation better. I'll leave a different top-level answer.