I was going to post something about this in the open thread, but this post just popped up.
I've been putting together a club for Effective Altruism on my campus (Cavaliers for Effective Altruism), and I'm stuck. I can run fundraisers and donate the money to a charity Givewell supports. My college has a system for donating to charities and fundraising, so that isn't a problem.
The difficulty is getting other people interested in the club and teaching my club-members rationality, so the club continues existing after I graduate. I originally thought teaching people rationality wouldn't be necessary, but the couple of friends I mentioned this to had no idea what I was talking about when I explained how effective altruism works. They don't have the same intuitions that I do, so it sounds odd to them. It was around then that I realized I need my club-members to know some rationality. Are there any resources/guides out there for that kind of thing?
I know LessWrong is one of those resources, but I doubt many people will listen to me if I say "This week's club-homework is to read x post from this blog." I have a couple of vague ideas for slipping this information into casual conversation, but they're only vague ideas. And it's hard to impart enough information through casual conversation, anyway. I think I could try doing both (have people read specific articles/books and bring it up in casual conversation), but that brings me back to the original problems: I have no idea how to teach rationality, and people don't respect me enough to listen to me if I tell them they need to know something.
I know some people here have experience in teaching rationality, so I'm fishing for any advice. My two major concerns are: how to bridge the inferential gap between myself and my club members (where do I even start?), and whether there are any other ways to teach rationality beyond the two I mentioned.
At the summit, I gave a talk on community building. One of my main theses was that I think it's actually better to do a rationality/self-improvement club that is also an Effective Altruism club than an EA club that's also a rationality club. You'll get people who don't just self-identify as world savers (and who can, over time, be influenced by the world savers).
The self-imp/rationality group I run begins sessions by talking about our successes from the previous week, and ends with setting goals for the coming week. This means the thing that gets positively reinforced via social pressure is actually doing things, whereas with EA it's easy to simply reward signaling.
That's a good idea. I could try to advertise it that way, since I'm having major issues finding a single person at my college interested in effective altruism. I might be wrong, but do you think it would be harder to get people interested in rationality, or to get them interested in effective altruism? My priors tell me that charity > rationalism in many people's minds, but I'm not sure.
EDIT: I decided to go with the rationality club idea. There's no real advantage in my original plan compared to opening a THINK club, which is basically the same idea except I can do more fun things with it. Thanks for the advice!
Try a giving game. http://www.givingwhatwecan.org/blog/2013-06-02/how-giving-games-can-spread-the-word-about-smarter-charity-choices-0
Avoid "teaching" and instead set up conversations and activities that introduce these ideas. Many people resist it when their peers try to "educate" them. Look for movies, comics, webshorts, etc. that can start off the conversation in the right direction.
Remember that it will take people time to become comfortable with these ideas. Look to make progress over the course of months and years, not hours.
Good luck!
I've noticed similar situations as well. The sequences did a pretty good job conveying information to me, but I'm a math guy who grew up reading sci-fi and watching anime, so I'm about as close to the target demographic as it's possible to be. I've often wished for a less flavorful, more generic/corporate version of the content in the sequences that I could point people outside the target demographic towards.
Thanks, Luke, great overview! Just one thought:
Many EAs value future people roughly as much as currently-living people, and therefore think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future
This suggests that alternative views are necessarily based on ethical time preference (and time preference seems irrational indeed). But that's incorrect. It's possible to care about the well-being of everyone equally (no matter their spatio-temporal coordinates) without wanting to fill the universe with happy people. I think there is something true and important about the slogan "Make people happy, not happy people", although explaining that something is non-trivial.
It's not really clear to me that negative utilitarians and people with person-affecting views need to disagree with the quoted passage as stated. These views focus primarily on the suffering aspect of well-being, and nearly all of the possible suffering is found in the astronomical numbers of people who could populate the far future.
To elaborate, in my dissertation, I assume--like most people would--that a future where humans have great influence would be a good thing. But I don't argue for that and some people might disagree. If that's the only thing you disagree with me about, it seems you actually still end up accepting my conclusion that what matters most is making humanity's long-term future development go as well as possible. It's just that you end up focusing on different aspects of making the long-term future development go as well as possible.
Hi Nick, thanks! I do indeed fully agree with your general conclusion that what matters most is making our long-term development go as well as possible. (I had something more specific in mind when speaking of "Bostrom's and Beckstead's conclusions" here, sorry about the confusion.) In fact, I consider your general conclusion very obvious. :) (What's difficult is the empirical question of how to best affect the far future.) The obviousness of your conclusion doesn't imply that your dissertation wasn't super-important, of course - most people seem to disagree with the conclusion. Unfortunately and sadly, though, the utility of talking about (affecting) the far future is a tricky issue too, given fundamental disagreements in population ethics.
I don't know that the "like most people would" parenthesis is true. (A "good thing" maybe, but a morally urgent thing to bring about, if the counterfactual isn't existence with less well-being, but non-existence?) I'd like to see some solid empirical data here. I think some people are in the process of collecting it.
Do you not argue for that at all? I thought you were going in the direction of establishing an axiological and deontic parallelism between the "wretched child" and the "happy child".
The quoted passage ("all potential value is found in [the existence of] the well-being of the astronomical numbers of people who could populate the far future") strongly suggests a classical total population ethics, which is rejected by negative utilitarianism and person-affecting views. And the "therefore" suggests that the crucial issue here is time preference, which is a popular and incorrect perception.
Do you not argue for that at all? I thought you were going in the direction of establishing an axiological and deontic parallelism between the "wretched child" and the "happy child".
I do some of that in chapter 4. I don't engage with speculative arguments that the future will be bad (e.g. the dystopian scenarios that negative utilitarians like to discuss) or make my case by appealing to positive trends of the sort discussed by Pinker in Better Angels. Carl Shulman and I are putting together some thoughts on some of these issues at the moment.
The quoted passage ("all potential value is found in [the existence of] the well-being of the astronomical numbers of people who could populate the far future") strongly suggests a classical total population ethics, which is rejected by negative utilitarianism and person-affecting views. And the "therefore" suggests that the crucial issue here is time preference, which is a popular and incorrect perception.
Maybe so. I think the key is how you interpret the word "value." If you interpret as "only positive value" then negative utilitarians disagree but only because they think there isn't any possible positive value. If you interpret it as "positive or negative value" I think they should agree for pretty straightforward reasons.
OK, but why would the sentence start with "Many EAs value future people roughly as much as currently-living people" if there wasn't an implied inferential connection to nearly all value being found in astronomical far future populations? The "therefore" is still implicitly present.
It's not entirely without justification, though. It's true that the rejection of a (very heavy) presentist time preference/bias is necessary for Bostrom's and Beckstead's conclusions. So there's weak justification for your "therefore": The rejection of presentist time preference makes the conclusions more likely.
But it's by no means sufficient for them. Bostrom and Beckstead need the further claim that bringing new people into existence is morally important and urgent.
This seems to be the crucial point. So I'd rather go for something like: "Many (Some?) EAs value/think it morally urgent to bring new people (with lives worth living) into existence, and therefore..."
The moral urgency of preventing miserable lives (or life-moments) is less controversial. People like Brian Tomasik place much more (or exclusive) importance on the prevention of lives not worth living, i.e. on ensuring the well-being of everyone that will exist rather than on making as many people exist as possible. The issue is not whether (far) future lives count as much as lives closer to the present. One can agree that future lives count equally, and also agree that far future considerations dominate the moral calculation (empirical claims enter the picture here). But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion). So it's possible to agree that future lives count equally and that far future considerations dominate but to still disagree on the importance of x-risk reduction or more particular things such as space colonization.
But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion).
A superintelligent Singleton (e.g., FAI) can guarantee a minimum standard of living for everyone who will ever be born or created, so I don't understand why you think astronomical population expansion inevitably produces miserable lives.
Also, I note that space colonization can produce an astronomical number of QALYs even assuming no population growth, by letting currently existing people continue to live after all the negentropy in our solar system has been exhausted.
Yes, it can. But a Singleton is not guaranteed; and conditional on the future existence of a Singleton, friendliness is not guaranteed. What I meant was that astronomical population expansion clearly produces an astronomical number of utterly miserable, tortured lives in expectation.
Lots of dystopian future scenarios are possible. Here are some of them.
How many happy people for one miserable existence? - I take the zero option very seriously because I don't think that (anticipated) non-existence poses any moral problem or generates any moral urgency to act, while (anticipated) miserable existence clearly does. I don't think it would have been any intrinsic problem whatsoever had I never been born; but it clearly would have been a problem had I been born into miserable circumstances.
But even if you do believe that non-existence poses a moral problem and creates an urgency to act, it's not clear yet that the value of the future is net positive. If the number of happy people you require for one miserable existence is sufficiently great and/or if dystopian scenarios are sufficiently likely, the future will be negative in expectation. Beware optimism bias, illusion of control, etc.
Lots of dystopian future scenarios are possible. Here are some of them. But even if you do believe that non-existence poses a moral problem and creates an urgency to act, it's not clear yet that the value of the future is net positive.
Even Brian Tomasik, the author of that page, says that if one trades off pain and pleasure at ordinary rates the expected happiness of the future exceeds the expected suffering, by a factor of between 2 and 50.
The 2-50 bounds seem reasonable for EV(happiness)/EV(suffering) using normal people's pleasure-pain exchange ratios, which I think are insane. :) Something like accepting 1 minute of Medieval torture for a few days/weeks of good life.
Using my own pleasure-pain exchange ratio, the future is almost guaranteed to be negative in expectation, although I maintain ~30% chance that it would still be better for Earth-based life to colonize space to prevent counterfactual suffering elsewhere.
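The role of the exchange ratio here can be made concrete with a toy calculation. All the numbers below are hypothetical illustrations for the sketch, not estimates from anyone in this thread:

```python
# Toy model: how the pleasure-pain exchange ratio flips the sign of the
# future's expected value. All quantities are illustrative only.

def expected_value(happiness_units: float, suffering_units: float,
                   exchange_ratio: float) -> float:
    """Net value when 1 unit of suffering counts as `exchange_ratio`
    units of negative happiness."""
    return happiness_units - exchange_ratio * suffering_units

# Suppose the future contains 10x as much raw happiness as suffering
# (a figure inside the 2-50 range mentioned above for
# EV(happiness)/EV(suffering) under ordinary exchange ratios).
happiness, suffering = 10.0, 1.0

print(expected_value(happiness, suffering, 1))    # ordinary ratio: positive
print(expected_value(happiness, suffering, 100))  # strongly negative-leaning: negative
```

The point of the sketch is just that the same empirical forecast (10:1 happiness to suffering) comes out net positive or net negative depending entirely on the normative exchange ratio one plugs in.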
It is worth pointing out that by 'insane', Brian just means 'an exchange rate that is very different from the one I happen to endorse.' :-) He admits that there is no reason to favor his own exchange rate over other people's. (By contrast, some of these other people would argue that there are reasons to favor their exchange rates over Brian's.)
Also, still others (such as David Pearce) would argue that there are reasons to favor Brian's exchange rate. :)
That's incorrect. David Pearce claims that pains below a certain intensity can't be outweighed by any amount of pleasure. Both Brian and Dave agree that Dave is a (threshold) negative utilitarian whereas Brian is a negative-leaning utilitarian.
Not so sure. Dave believes that pains have an "ought-not-to-be-in-the-world-ness" property that pleasures lack. And in the discussions I have seen, he indeed was not prepared to accept that small pains can be outweighed by huge quantities of pleasure. Brian was oscillating between NLU and NU. He recently told me he found the claim convincing that such states as flow, orgasm, meditative tranquility, perfectly subjectively fine muzak, and the absence of consciousness were all equally good.
Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibration is done safely, intelligently and conservatively - a big "if", for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs but mandated(?) by CU.
Is this too rosy a scenario?
Regarding "people's ordinary exchange rates", I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours. In cases they (IMO confusedly) think of as "egoistic", the rates may be closer to yours. - This provides an argument that people should end up with Brian upon knocking out confusion.
I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours.
Which cases did you have in mind?
People generally don't altruistically favor euthanasia for pets with temporarily painful but easily treatable injuries (where recovery could be followed with extended healthy life). People are not eager to campaign to stop the pain of childbirth at the expense of birth. They don't consider a single instance of torture worse than many deaths depriving people of happy lives. They favor bringing into being the lives of children that will contain some pain.
Thanks for the explanation. I was thrown by your usage of the word "inevitable" earlier, but I think I understand your position now. (EDIT: Deleted the rest of this comment, which made a point that you were already discussing with Nick Beckstead.)
Right, the first clause is there as a necessary but not sufficient part of the standard reason for focusing on the far future, and the sentence works now that I've removed the "therefore."
The reason I'd rather not phrase things as "morally urgent to bring new people into existence" is because that phrasing suggests presentist assumptions. I'd rather use a sentence with non-presentist assumptions, since presentism is probably rejected by a majority of physicists by now, and also rejected by me. (It's also rejected by the majority of EAs with whom I've discussed the issue, but that's not actually noteworthy because it's such a biased sample of EAs.)
Is it the "bringing into existence" and the "new" that suggests presentism to you? (Which I also reject, btw. But I don't think it's of much relevance to the issue at hand.) Even without the "therefore", it seems to me that the sentence suggests that the rejection of time preference is what does the crucial work on the way to Bostrom's and Beckstead's conclusions, when it's rather the claim that it's "morally urgent/required to cause the existence of people (with lives worth living) that wouldn't otherwise have existed", which is what my alternative sentence was meant to mean.
I confess I'm not that motivated to tweak the sentence even further, since it seems like a small semantic point, I don't understand the advantages to your phrasing, and I've provided links to more thorough discussions of these issues, for example Beckstead's dissertation. Maybe it would help if you explained what kind of reasoning you are using to identify which claims are "doing the crucial work"? Or we could just let it be.
Yeah, I've read Nick's thesis, and I think the moral urgency of filling the universe with people is a more important basis for his conclusion than the rejection of time preference. The sentence suggests that the rejection of time preference is most important.
If I get him right, Nick agrees that the time issue is much less important than you suggested in your recent interview.
Sorry to insist! :) But when you disagree with Bostrom's and Beckstead's conclusions, people immediately assume that you must be valuing present people more than future ones. And I'm constantly like: "No! The crucial issue is whether the non-existence of people (where there could be some) poses a moral problem, i.e. whether it's morally urgent to fill the universe with people. I doubt it."
Okay, so we're talking about two points: (1) whether current people have more value than future people, and (2) whether it would be super-good to create gazillions of super-good lives.
My sentence mentions both of those, in sequence: "Many EAs value future people roughly as much as currently-living people [1], and think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future [2]..."
And you are suggesting... what? That I switch the order in which they appear, so that [2] appears before [1], and is thus emphasized? Or that I use your phrase "morally urgent to" instead of "nearly all potential value is found in..."? Or something else?
Sorry for the delay!
I forgot to clarify the rough argument for why (1) "value future people equally" is much less important or crucial than (2) "fill the universe with people" here.
If you accept (2), you're almost guaranteed to be on board with where Bostrom and Beckstead are roughly going (even if you valued present people more!). It's hardly possible to then block their argument on normative grounds, and criticism would have to be empirical, e.g. based on the claim that dystopian futures may be likelier than commonly assumed, which would decrease the value of x-risk reduction.
By contrast, if you accept (1), it's still very much an open question whether you'll be on board.
Also, intrinsic time preference is really not an issue among EAs. The idea that spatial and temporal distance are irrelevant when it comes to helping others is a pretty core element of the EA concept. What is an issue, though, is the question of what helping others actually means (or should mean). Who are the relevant others? Persons? Person-moments? Preferences? And how are they relevant? Should we ensure the non-existence of suffering? Or promote ecstasy too? Prevent the existence of unfulfilled preferences? Or create fulfilled ones too? Can you help someone by bringing them into existence? Or only by preventing their miserable existence/unfulfilled preferences? These issues are more controversial than the question of time preference. Unfortunately, they're of astronomical significance.
I don't really know if I'm suggesting any further specific change to the wording - sorry about that. It's tricky... If you're speaking to non-EAs, it's important to emphasize the rejection of time preference. But there shouldn't be a "therefore", which (in my perception) is still implicitly there. And if you're speaking to people who already reject time preference, it's even more important to make it clear that this rejection doesn't imply "fill the universe with people". One solution could be to simply drop the reference to the (IMO non-decisive) rejection of time preference and go for something like: "Many EAs consider the creation of (happy) people valuable and morally urgent, and therefore think that nearly all potential value..."
Beckstead might object that the rejection of heavy time preference is important to his general conclusion (the overwhelming importance of shaping the far future). But if we're talking that level of generality, then the reference to x-risk reduction should probably go or be qualified. For sufficiently negative-leaning EAs (such as Brian Tomasik) believe that x-risk reduction is net negative.
Perhaps the best solution would be to expand the section and start by mentioning how the (EA-uncontroversial) rejection of time preference is relevant to the overwhelming importance of shaping the far future. Once we've established that the far future likely dominates, the question arises how we should morally affect the far future. Depending on this question, very different conclusions can result e.g. with regard to the importance and even the sign of x-risk reduction.
I don't want to expand the section, because that makes it stand out more than is compatible with my aims for the post. And since the post is aimed at non-EAs and new EAs, I don't want to drop the point about time preference, as "intrinsic" time-discounting is a common view outside EA, especially for those with a background in economics rather than philosophy. So my preferred solution is to link to a fuller discussion of the issues, which I did (in particular, Beckstead's thesis). Anyway, I appreciate your comments.
"Thinking quantitatively" is poor shorthand for good rational practice. Of course a rationalist shouldn't neglect quantitative thought; that leads to fuzz. But purely quantitative evaluation is just as bad; it leads to No-Child-Left-Behind-style teaching-to-the-test and, worse, testing-to-the-test (choosing metrics based on reliability over applicability).
I think that there are signs of that in the choice of four areas. It's not just that "effective environmental activism" didn't make the cut; what about politics itself? Rational improvements in political systems are incredibly easy to imagine; approval voting, for instance, is a tiny, simple change compared to plurality voting, yet would eliminate a number of senseless biases in politics. And politics is important; as any evil overlord knows, the goal is to take over the world. But it's very hard to quantify political progress objectively, and easy to get into mind-killing arguments, so it seems the whole issue just gets covered by an ick field for rationalists.
So, the question becomes: do you want to talk about what aspiring effective altruists do do, or what they should do? If it's the former, fine. If it's the latter, I think you have to start from more basic principles.
In the future, poverty reduction EAs might also focus on economic, political, or research-infrastructure changes that might achieve poverty reduction, global health, and educational improvements more indirectly, as when Chinese economic reforms lifted hundreds of millions out of poverty.
I'd like to see more discussion of economic growth and effective altruism. Something that can lift hundreds of millions of people out of poverty is something that should definitely be investigated. (See also Lant Pritchett's distinction between linear and transformative philanthropy.)
this was an unhelpful comment, removed and replaced by this comment
Belief updating (bayes) underwrites a steady stream of novelty, and thus joy, and therefore, assuming belief updating is real – joy in the merely real is plausible
Instantiate the environment, including self, identify changes plausible for each component, select the object for that change which maximises utility. If that happens to be the AI itself, then it will do that. done, recursively improving AI solved
I think I may be experiencing a psychotic episode :( Sorry for any unusual post...forget it I'll fix this up when I'm back.
I understand why effective poverty reduction is a focus area, but why effective health improvement more generally?
Because health interventions tend to do a lot for poverty as well. Healthy people can work much better than sick people. Having children in which society invested resources die of malaria is bad for the economy. It also leads to women having more children to make sure that some survive.
For instance, expanding immunisation coverage for children is GiveWell's number 1 priority among proven health interventions. However, none of the recommended charities are immunisation programs. Why is that?
Likely because GiveWell thinks that existing institutions already spend enough money on that task, or because GiveWell isn't aware of charities in that area with room for more funding that it could recommend. GiveWell only recommends charities that are transparent enough to have open data about their effectiveness.
On Facebook, Eliezer suggested an alternate name for practitioners of effective altruism: "Ravenclaw Gryffindors."
Elizabeth Synclair replied: "What? Clearly any effective altruist worth their salt is a Ravenpuff."
(To explain: Hogwarts Houses.)
Gryffindors are brave, which is useful for fighting oppression...but, you know, terrorism and stuff also requires bravery.
Ravenclaws are curious, which can be used to help people, but it can be used for other things as well.
Slytherins are ambitious, and the story has done enough to illustrate the dual nature of that trait.
Hufflepuffs ... they aren't just one thing. Conscientiousness, Loyalty, and Agreeableness all seem to play a role, but are those things really so strongly correlated? Does this "wholesome" nature finally add up to altruism? Does this altruism extend outside the in-group? I think Rowling was going for some sort of hearty, homespun, down-to-earth archetype there... not sure if it would ever be measurable in a single psychometric variable. If I were writing her story, I'd probably settle on "Loyalty", as in valuing friends and loved ones, with the other two traits just being common behavioral side effects of having this value. In which case, it takes a far-sighted Hufflepuff to extend those feelings of friendship to all intelligent beings... while a near-sighted one might end up at nationalism.
Robby Bensinger cleverly expanded upon this, describing the various motivations for effective altruism as "slytherfuzzies", "ravenfuzzies", "gryffinfuzzies", and "hufflefuzzies".
You have to be a Hufflepuff to want to help. You have to be a Ravenclaw to be smart enough to do so. You have to be Slytherin to realize what you can do. And I guess you have to be Gryffindor, to do something nobody else does.
While we're at it, what elements would you need?
Generosity, obviously.
I don't think the Elements of Harmony can help much beyond that.
Honesty in one's dealings is always important. As a member of ROTLCON staff (brony convention in Colorado), I am often asked difficult questions about helping people through our charity auction. Lying is not an option, if one expects to donate, or to accept donations. Kindness? Given how the show seems to show it off in Fluttershy, I would guess that kindness includes one's understanding and acceptance of other people. Saving a people by destroying something else means knowing exactly what you destroy, and seeing its value--perhaps, the destruction can be avoided. Only one example of kindness as shown in the show, of course. Loyalty--uncertain. Laughter--as a convention, the thing I'm working on is about fun. But, it is also an attempt to throw money at the problem in the best way possible (something we're just figuring out, by the way, so we will be applying the above article and related advice to altruism). So, also uncertain, but there is a connection for me and my fellow con staff.
Being effective at almost anything can benefit from the virtues of all 4 houses. I.e., Slytherclaw Gryffinpuffs.
Elizabeth Synclair replied: "What? Clearly any effective altruist worth their salt is a Ravenpuff."
We could use a few more Slytherpuffs too.
I think maybe I'd prefer to maximize my personal satisfaction in my charitable efforts. The knowledge that I may do more good some other way won't substitute for the charitable action that will leave me feeling most satisfied based on my normal human emotions, irrational though they may be.
It was a pleasure to see all major strands of the effective altruism movement gathered in one place at last week's Effective Altruism Summit.
Representatives from GiveWell, The Life You Can Save, 80,000 Hours, Giving What We Can, Effective Animal Activism, Leverage Research, the Center for Applied Rationality, and the Machine Intelligence Research Institute either attended or gave presentations. My thanks to Leverage Research for organizing and hosting the event!
What do all these groups have in common? As Peter Singer said in his TED talk, effective altruism "combines both the heart and the head." The heart motivates us to be empathic and altruistic toward others, while the head can "make sure that what [we] do is effective and well-directed," so that altruists can do not just some good but as much good as possible.
Effective altruists (EAs) tend to:
Despite these similarities, EAs are a diverse bunch, and they focus their efforts on a variety of causes.
Below are four popular focus areas of effective altruism, ordered roughly by how large and visible they appear to be at the moment. Many EAs work on several of these focus areas at once, due to uncertainty about both facts and values.
Though labels and categories have their dangers, they can also enable chunking, which has benefits for memory, learning, and communication. There are many other ways we might categorize the efforts of today's EAs; this is only one categorization.
Focus Area 1: Poverty Reduction
Here, "poverty reduction" is meant in a broad sense that includes (e.g.) economic benefit, better health, and better education.
Major organizations in this focus area include:
In addition, some well-endowed foundations seem to have "one foot" in effective poverty reduction. For example, the Bill & Melinda Gates Foundation has funded many of the most cost-effective causes in the developing world (e.g. vaccinations), although it also funds less cost-effective-seeming interventions in the developed world.
In the future, poverty reduction EAs might also focus on economic, political, or research-infrastructure changes that could achieve poverty reduction, global health, and educational improvements more indirectly, as when Chinese economic reforms lifted hundreds of millions out of poverty. Though it is generally easier to evaluate the cost-effectiveness of direct efforts than that of indirect efforts, some groups (e.g. GiveWell Labs and The Vannevar Group) are beginning to evaluate the likely cost-effectiveness of these causes.
Focus Area 2: Meta Effective Altruism
Meta effective altruists focus less on specific causes and more on "meta" activities such as raising awareness of the importance of evidence-based altruism, helping EAs reach their potential, and doing research to help EAs decide which focus areas they should contribute to.
Organizations in this focus area include:
Other people and organizations contribute to meta effective altruism, too. Paul Christiano examines effective altruism from a high level at Rational Altruist. GiveWell and others often write about the ethics and epistemology of effective altruism in addition to focusing on their chosen causes. And, of course, most EA organizations spend some resources growing the EA movement.
Focus Area 3: The Long-Term Future
Many EAs value future people roughly as much as currently-living people, and think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the long-term future (Bostrom 2003; Beckstead 2013). Future-focused EAs aim to somewhat-directly capture these "astronomical benefits" of the long-term future, e.g. via explicit efforts to reduce existential risk.
Organizations in this focus area include:
Other groups study particular existential risks (among other things), though perhaps not explicitly from the view of effective altruism. For example, NASA has spent time identifying nearby asteroids that could be an existential threat, and many organizations (e.g. GCRI) study worst-case scenarios for climate change or nuclear warfare that might result in human extinction but are more likely to result in "merely catastrophic" damage.
Some EAs (e.g. Holden Karnofsky, Paul Christiano) have argued that even if nearly all value lies in the long-term future, focusing on nearer-term goals (e.g. effective poverty reduction or meta effective altruism) may be more likely to realize that value than more direct efforts.
Focus Area 4: Animal Suffering
Effective animal altruists focus on reducing animal suffering in cost-effective ways. After all, animals vastly outnumber humans, and a growing number of scientists believe that many animals consciously experience pleasure and suffering.
The only organization of this type so far (that I know of) is Effective Animal Activism, which currently recommends supporting The Humane League and Vegan Outreach.
Edit: There is now also Animal Ethics, Inc.
Major inspirations for those in this focus area include Peter Singer, David Pearce, and Brian Tomasik.
Other focus areas
I could perhaps have listed "effective environmental altruism" as focus area 5. The environmental movement in general is large and well-known, but I'm not aware of many effective altruists who take environmentalism to be the most important cause for them to work on, after closely investigating the above focus areas. In contrast, the groups and people named above tend to have influenced each other, and have considered all these focus areas explicitly. For this reason, I've left "effective environmental altruism" off the list, though perhaps a popular focus on effective environmental altruism could arise in the future.
Other focus areas could later come to prominence, too.
Working together
I was pleased to see EAs from different strands of the movement cooperating and learning from each other at the Effective Altruism Summit. Cooperation is crucial for growing the EA movement, so I hope that even when it’s not easy, EAs will "go out of their way" to cooperate and work together, no matter which focus areas they’re sympathetic to.