1. Two diversity problems

Here are two concerns sometimes raised in Effective Altruist circles:

  1. Effective Altruists are not very diverse—they are disproportionately male, white, technically minded, technically employed, located in a small number of rich places, young, smart, educated, idealistic, and inexperienced. Furthermore, because of this lack of diversity, the community as a whole will fail to know about many problems in the world: for instance, problems that are mostly salient if you have spent many years in the world outside of college, if you live in India, if you work in manufacturing, or if you have normal human attitudes and norms.
  2. When new people join the Effective Altruism community and want to dedicate a lot of their efforts to effectively doing good, there is not a streamlined process to help them move from any of a wide range of previous activities to an especially effectively altruistic project, even if they really want to. And it gets harder the further away the person begins from the common EA backgrounds: a young San Franciscan math PhD working in tech and playing on the internet can move into research or advocacy or a new startup more easily than an experienced high school principal with a family in Germany can, plus it’s not even clear that the usual activities are a good use of that person’s skills. So less good is done, and furthermore less respect is accorded to such people than they arguably deserve, and so more forbearance is required on their part to stick around (possibly leading to problem 1).

2. Two divergent evaluations of diversity

These concerns are kind of opposite. Which suggests that if they ran into each other they might explode and disappear, at least a bit.

The first concern is based on a picture where people doing different things from the rest of the EA community are an extremely valuable asset, worth actively seeking out (with effort that could otherwise go to figuring out how to stop AI, or paying for malaria nets, or recruiting collaborators from easier-to-reach demographics).

The second concern is based on a picture where people who are doing different things from the rest of the EA community are valuable mostly as labor that might be redirected to doing something similar to what the rest of us do. Which is to say that on this picture, the fact that they are doing different things from the rest of us is an active downside.

There are more nuanced pictures that have both issues at once. For instance, maybe it is important to get people with different backgrounds, but not important that they remain doing different things. Maybe that is because different backgrounds afford different useful knowledge, but the transfer happens fairly quickly, so nearly everything you would learn from spending ten years in the military you have learned after the first year.

I’m not sure if that is what anyone has in mind. I’m also not sure how often the same people hold both of the above concerns. But I do think it doesn’t matter. If some people were worried that EA didn’t have enough apples for some of its projects and some had the concern that it was too hard to turn apples into something useful like oranges, I feel like there should be some way for the former group to end up at least using the apples that the latter group is having trouble making good use of. And similarly for people with a wide range of backgrounds.

On this story, if you come to EA from a faraway walk of life, before trying to change what you are doing to be more similar to what other EAs are doing, you might do well to help with whatever concern (1) is asking for. (And if you aren’t moved by that concern yourself, you might still cooperate with those who do expect value there but sadly find themselves to be yet more garden-variety utilitarian-leaning Soylent-eating twenty-something programmers, who can perhaps give some more of their earnings on your behalf.)

3. A practical suggestion

But what can one do as a (locally) unusual person to help with concern (1) effectively?

I don’t know about effectively, but here’s at least one cheap suggestion to begin with (competing proposals welcome in the comments):


Choose a part of the world that you are especially familiar with relative to other EAs, and tell the rest of us about the ways it might be interesting.
It can be a literal place, an industry, a community, a social scene, a type of endeavor, a kind of problem you have faced, etc.

Here are a bunch of prompts about what I think might be interesting:

  1. What major concerns do people in that place tend to have that EAs might not be familiar with?
    What would they say if you asked them what was bad, what it was stupid that nobody had solved yet, what was a gross injustice, or what they would do if they had a million dollars?
  2. What major inefficiencies and wrongs do you see in that place?
    What would you do differently there if you were in charge, or designing it from scratch? What is annoying or ridiculous?
  3. Pick a problem that seems bad and tractable. Roughly how bad do you think it is?
    Maybe do a back-of-the-envelope calculation, especially if you are new to all this and want practice at EA-style estimation (a minimal sketch follows this list).
  4. Are there maybe-good things to do that aren’t being done? How hard do you think they would be?
    If you can think of something for the problem(s) in Q3, perhaps estimate how efficiently the problem could be solved, on your rough account.
  5. What might be surprising about that part of the world, to those who haven’t spent time there?
  6. How does the official story relate to reality?
    The ‘official story’ might be what you could write in a children’s book or describe in a polite speech. Is it about right, or do things diverge from it? How is reality different? Are the things driving decisions things people can openly talk about?
  7. Are there important concepts, insights, ways of looking at the world, that are common in this place, that you think many EAs don’t know about?
    What is the most useful jargon? 
  8. What unused opportunities exist in the place?
    Who would really benefit from finding this place? What value is being wasted?
  9. What, roughly, is going on in the place?
    What are people trying to do? What are the obstacles? Who are the people? What would someone immediately notice?
  10. What are the big differences between there and where most EAs are?
    What do you notice most when moving between them?
  11. What do the people in that place do if they want to change things?
    I hear some people do things other than writing blog posts about them, but I’m not super confident; skip this one if it is confusing.
  12. If the EA movement had been founded in that place, what would it be doing differently?
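
For the back-of-the-envelope calculation suggested in prompts 3 and 4, something very crude is enough. Here is a minimal sketch in Python with entirely made-up placeholder numbers for a hypothetical problem; the figures, and the idea of measuring harm in healthy-year-equivalents, are just illustrative assumptions, not part of the prompts above.

```python
# Back-of-the-envelope estimate for a hypothetical problem.
# Every number below is a placeholder guess; swap in your own rough figures.

people_affected = 50_000        # people the problem touches per year (guess)
harm_per_person = 0.02          # fraction of a healthy year each loses (guess)
total_harm = people_affected * harm_per_person  # ~1,000 healthy-year-equivalents lost per year

intervention_cost = 2_000_000   # rough cost in dollars of the fix you have in mind (guess)
fraction_averted = 0.5          # share of the harm you guess the fix would prevent

cost_per_healthy_year = intervention_cost / (total_harm * fraction_averted)
print(f"Roughly ${cost_per_healthy_year:,.0f} per healthy-year-equivalent averted")
```

Even a crude number like this makes it easier for others to compare your problem to ones they already know about, and to see which of your guesses matter most.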

(If you write something like this, and aren’t sure where to put it or don’t like to publicly post things, ask me.)


The only evidence I have that this would be good is my own intuition and the considerations mentioned above. I expect it to be quick though, and I for one would be interested to read the answers. I hope to post something like this myself later.

4. Appendix: Are diverse backgrounds actually pragmatically useful in this particular way? (Some miscellaneous thoughts, no great answers)

Diversity is good for many reasons. One might wonder whether it is popular within EA for the same range of reasons as it is elsewhere, and is just often justified on pragmatic EA grounds because those kinds of grounds are more memetically fertile here.

One way to investigate this is to ask whether diversity has so far brought in good ideas for new things to do. Another is to ask whether the things we currently do seem constrained to be close to home. I don’t actually see either of these being a big deal (though I could very easily be missing things), but I still think there is probably value to be had in this vicinity.

4.a. Does EA disproportionately think about causes relevant to EA demographics?

I don’t have a lot to say on the first question, but I’ll address the second a bit. The causes EAs think the most about seem to be trivially preventable diseases affecting poor people on the other side of the world, the far future and weird alien minds that might turn it into an unimaginable hellscape, and what it is like to be various different species.

These things seem ‘close to home’ for those people who find themselves mentally inclined to think about things very far from home, and to apply consistent reasoning to them. I feel like calling this a bias is like saying ‘you just found this apparently great investment opportunity because you are the kind of person who is willing to consider lots of different investment opportunities’. This seems like just a mark in our favor.

So while we may still be missing things that would seem even bigger but we just do not know about for demographic reasons, I think it’s not as bad as it might first seem.

My own guess is that EAs miss a lot of small opportunities for efficient value that are only available to people who intimately know a variety of areas, but actually getting to know those areas would be too expensive for the resulting opportunities to remain cheap.

4.b. Is EA influenced in other ways by arbitrary background things?

On the other hand, I think the ways we try to improve the world probably are influenced a lot by our backgrounds. We treat charity as the default way to improve matters, for instance. In some sense, giving money to the person best situated to do the thing you want is pretty natural. But in practice charities are often a particular kind of institution, and giving money to people to do things has serious inefficiencies, especially when the whole point is that you have unusual values that you want to fulfill, or you think that other people are failing to be efficient in a whole area of life where you want things to happen. And if charity seems as natural to you as it does to me, that is probably because you are familiar with econ 101 or something; if you had grown up in a very politically minded climate, the most natural thing to do would be to try to influence politics.

I also think our beliefs and assumptions are probably influenced by our backgrounds. Many such influences seem straightforwardly for the better: for instance, being well-educated, having thought about ethics unusually much, being taught that it is epistemically reasonable to think about stuff on one’s own, and having read a lot of LessWrong just seem good. There are other things that seem more random. For instance, until I came to The Bay, everyone around me seemed to think we were headed for environmental catastrophe (unless technology saved us, or destroyed us first), and now everyone around me seems to think technology is going to save us or destroy us (unless environmental catastrophe destroys us first). While these views are nominally the same, one group is spending all of its time trying to spread the word about the impending environmental catastrophe, while the other adds ‘assuming no catastrophes’ to the end of its questions about superintelligences. And while I think there are probably good cases to be made about which of these things to worry about, I am not sure that I have seen them made, and my guess is that many people in both groups are trusting those around them, and would equally have trusted those around them if they had stumbled across the other group and got on well with them socially.

4.c. To what extent could EA be fruitfully influenced by more things?

So far I have talked about behaviors that are more common among other groups of people: causes to forward, interventions to default to, beliefs and assumptions to hold. I could also have talked about customs and attitudes and institutions. This is all stuff we could copy from other people, if we were familiar enough with other people to know what they do and what is worth copying. But there is also value in knowing what is up with other people without copying them. Like, how things fail and how organizations lose direction and what sources of information can be trusted in what ways and which things tend to end up nobody’s job, and which patterns appear across all endeavors for all time. And arguably an inside perspective is more informative than an outside one. For instance, various observations about medicine seem like good clues for one’s overall worldview, though perhaps most of the value there is from looking in detail rather than from the inside. (Whether having a correct worldview effectively contributes to doing good shall remain a question for another time).

