Disclaimer: I like and support the EA movement.
I agree with Vaniver that it would be good to give more time to arguments that the EA movement is going to do large net harm. You touch on this a bit with the discussion of Communism and moral disagreement within the movement, but one could go further. Some speculative ways in which the EA movement could have bad consequences:
Hmm. I didn't interpret a hypothetical apostasy as the fiercest critique, but rather the best critique--i.e. weight the arguments not by "badness if true" but by something like badness times plausibility.
But you may be right that I unconsciously biased myself towards arguments that were easier to solve by tweaking the EA movement's direction. For instance, I should probably have included a section about measurability bias, which does seem plausibly quite bad.
I don't have time to explain it now, so I will state the following with the hope that merely stating it will be useful as a data point. I think Carl's critique was more compelling, more relevant if true (which you agree with), and also not that much less likely to be true than yours. Considering how destructive his points would be if true, and that they are almost as likely to be true as yours, I think Carl's is the best critique.
In fact, the 2nd main reason I don’t direct most of my efforts to what most of the EA movement is doing is that I do think some weaker versions of Carl’s points are true. (The 1st is simply that I’m much better at finding out whether his points are true, and at other more abstract things, than at doing EA stuff.)
I had the same sense of "this is the kind of criticism where you say 'we need two Stalins'" as one of the commenters. That doesn't mean it's correct, and I, like some others, particularly liked the phrase "pretending to actually try". It also seems to me self-evident that this is a huge improvement over merely pretending to try. Much of what is said here is correct, but none of it is the kind of criticism which would kill EA if it were correct. For that you would have to cross over into alleging things which are false.
From my perspective, by far the most obvious criticism of EA is to take the focus on global poverty at face value and then remark that, from the perspective of 100,000,000 years later, it is unlikely that the most critical point in this part of history will have been the distribution of enough malaria nets. Since our descendants will reliably think this was not the most utility-impactful intervention, we should go ahead and update now, etc. And indeed I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.
Excuse me, but this sounds to me like a terrible argument. If the far future goes right, our descendants will despise us as completely ignorant barbarians and won't give a crap what we did or didn't do. If it goes wrong (i.e., rocks fall, everyone dies), then all those purported descendants aren't a minus on our humane-ness ledger, they're a zero: potential people don't count (since they're infinite in number and don't exist, after all).
Besides, I damn well do care how people lived 5000 years ago, and I would certainly hope that my great-to-the-Nth-grandchildren will care how I live today. This should especially matter to someone whose idea of the right future involves being around to meet those descendants, in which case the preservation of lives ought to matter quite a lot.
God knows you have an x-risk fetish, but other than FAI (which carries actual benefits aside from averting highly improbable extinction events) you've never actually justified it. There has always been some small risk that we could all be wiped out by a random disaster. The world has been overdue for certain natural disasters for millennia now, and we just don't really have a way to prevent any of them. Space colon...
start to find the things that are actually the best thing for the far future
I have strong doubts about your (not personal but generic) ability to evaluate the far-future consequences of most anything.
Cross-posted from http://www.benkuhn.net/ea-critique since I want outside perspectives, and also LW's comments are nicer than mine.
They are! I wish I had realized you cross-posted this here before I commented there. So also cross-posting my comment:
First, good on you for attempting a serious critique of your views. I hope you don’t mind if I’m a little unkind in responding to your critique, as that makes it easier and more direct.
Second, the cynical bit: to steal Yvain’s great phrase, this post strikes me as the “we need two Stalins!” sort of apostasy that lands you a cushy professorship. (The pretending to try vs. actually trying distinction seems relevant here.) The conclusion, “we need to be sufficiently introspective”, looks self-serving from the outside. Would being introspective happen to be something you consider a comparative advantage? Is the usefulness of the Facebook group how intellectually stimulating and rigorous you find the conversations, or how many dollars are donated as a result of its existence?
Third, the helpful bit: instead of saying “this is what I think would make EA slightly less bad,” consider an alternative prompt: ten years from now, you look bac...
Arguably trying for apostasy, failing due to motivated cognition, and producing only nudging is a good strategy that should be applied more broadly.
I don't think EA has to worry about incentive structure in the same way that communism does, because EA doesn't want to take over countries (well, if it does, that's a different issue)
GiveWell is moving into politics and advocacy, 80,000 Hours has people in politics, and GWWC principals like Toby Ord do a lot of advocacy with government and international organizations, and have looked at aid advocacy groups.
EA doesn't want to take over countries
"Take over countries" is such an ugly phrase. I prefer "country optimisation".
Unless you're arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it's a separate problem from incentives.
I think that the EA system will be both more robust and more effective if it is designed with the assumption that the people in it do not share the system's utility function, but that win-win trades are possible between the system and the people inside it.
Good work! Though this is much weaker than my model of a hypothetical apostasy, which is informed by my actual deconversion from Christianity: that involved writing a thoroughly withering critique of theism and Christianity, not a "here's how Christianity could be tweaked and improved."
If I were to write a hypothetical apostasy for EA, I might take the communism part further and try to argue that enacting global policies on the basis of unpopular philosophical views was likely to be disastrous. Or maybe that real-world utilitarianism is so far from intuitive human values (which have lots of emotional deontological principles and so on) that using it in the real world would cause the humans to develop all kinds of pathologies. Or something more damning than what you've written. But if you published such a thing then you'd have lots more people misunderstand it and be angry at you, too. :)
Edit: I see that Carl has said this better than I did.
I'd like to see more critical discussion of effective altruism of the type in this post. I particularly enjoyed the idea of "pretending to actually try." People doing sloppy thinking and then making up EA-sounding justifications for their actions is a big issue.
As Will MacAskill said in a Facebook comment, I do think that a lot of smart people in the EA movement are aware of the issues you're bringing up and have chosen to focus on other things. Big picture, I find claims like "your thing has problem X so you need to spend more resources on fixing X" more compelling when you point to things we've been spending time on and say that we should have done less of those things and more of the thing you think we should have been doing. E.g., I currently spend a lot of my time on research, advocacy, and trying to help improve 80,000 Hours, and I'd be pretty hesitant to switch to writing blog posts criticizing mistakes that people in the EA community commonly make, though I've considered doing so and agree it would help address some of the issues you've identified. But I would welcome more of that kind of thing.
I disagree with your perspective that the effective altrui...
This is MUCH better than I expected from the title. I strongly agree with essentially the entire post, and many of my qualms about EA are the result of my bringing these points up with, e.g. Nick Beckstead and not seeing them addressed or even acknowledged.
I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.
Edited: When I first read this, I thought you were saying you hadn't brought these problems up with me, but re-reading it, it sounds like you tried to raise these criticisms with me. This post has a Vassar-y feel to it, but it is mostly criticism I wouldn't say I'd heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.
I spent many hours explaining a subset of these criticisms to you in Dolores Park soon after we first met, but it strongly seemed to me that that time was wasted. I appreciate that you want to be lawful in your approach to reason, and thus to engage with disagreement, but my impression was that you do not actually engage with disagreement; you merely want to engage with disagreement. Basically, I felt that you believe in your belief in rational inquiry, but that you don't actually believe in rational inquiry.
I may, of course, be wrong, and I'm not sure how people should respond in such a situation. It strongly seems to me that a) leftist movements tend to collapse in schism, and b) rightist movements tend to converge on generic xenophobic authoritarianism regardless of their associated theory. I'd rather we avoid both of those situations, but the first seems like an inevitable result of not accommodating belief in belief, while the second seems like an inevitable result of accommodating it. My instinct is that the best option is to not accommodate belief in belief and to keep a movement small enough that schism can be avoided. The worst thing for an epistemic standard is not th...
As a practicing socialist, I found the comparison to Communism illuminating and somewhat disturbing.
You've already listed some of the major, obvious aspects in which the Effective Altruism movement resembles Communism. Let me add another: failure to take account of local information and preferences.
Information: Communism (or as the socialists say: state capitalism, or as the dictionaries say: state socialism -- centrally planned economies!) failed horrifically at the Economic Calculation Problem because no central planning system composed of humans can take account of all the localized, personal information inherent in real lives. Markets, on the other hand, can take advantage of this information, even if they're not always good at it (see for a chuckle: "Markets are Efficient iff P=NP"). Effective altruism, being centrally planned, suffers this problem.
Preferences: the other major failure of Communist central planning was its foolish claim that the entirety of society had a single, uniform set of valuations over economic inputs and outputs which was determined by the planning committee in the Politburo. The result was, of course, that the system produced vast amounts...
Tangential: has there been discussion on LW of the EA implications of having kids? Personally, I would expect that having kids would at least be positive expected utility, since they would inherit a good number of your genes/memes and be more likely than a person randomly chosen from the population to become effective altruists. But the opportunity costs seem really high.
I'm also curious how people feel about increasing fertility among reasonably smart people in general.
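For concreteness, here is a toy sketch of the expected-value framing in the first paragraph. Every number below is a made-up placeholder rather than an estimate; the only point is that the sign of the answer turns entirely on the assumed inputs.

```python
# Toy expected-value framing of the "EA implications of having kids" question.
# All numbers are hypothetical placeholders, not estimates.

p_child_becomes_ea = 0.3           # assumed uplift over a randomly chosen person
child_lifetime_impact = 1_000_000  # assumed value ($) of an EA child's giving/career
parenting_cost = 500_000           # assumed direct costs plus forgone earnings/time

expected_gain = p_child_becomes_ea * child_lifetime_impact - parenting_cost
print(f"Expected net impact: ${expected_gain:,.0f}")
# Negative with these placeholders; modest changes to the assumptions flip
# the sign, which is exactly the opportunity-cost uncertainty noted above.
```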
I think a lot of these criticisms are very valid. Many of them are things I had been thinking about, but your post does a really good job of explaining them better than I ever could.
I guess I have a somewhat unique take on the whole EA thing, since I'm one of the few (probably) non-white people here. I'd be happy to elaborate if you wish.
It has nothing to do with white people - it has to do with cross cultural misunderstandings in general. People just use the word "white" frequently because of certain implicit assumptions about the racial / cultural background of the audience.
Anyway, let me give you an example of when this sort of thing actually happens: In India, there used to be religious figures called Devadasis. They are analogous to nuns in one sense - they get "married" to divinity, and never take a human spouse. Unlike nuns, they are trained in music and dancing. In medieval India, music, dancing, and sexual services were all lumped under the same general category...as in, there was a large overlap between dancers, musicians, and sex workers, and this was widely recognized. (This is not really true today, but if you watch really old Indian movies you can see remnants of this association). We can presume that many of the Devadasis engaged in sex work. It should be noted that they also had a high social status, which allows us to further infer that the sex work probably didn't involve intense coercion and probably wasn't driven by extremely harsh economic pressures.
You can guess where this ...
Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.
I do think this is worrying, or at least worth looking into. This is part of why I've been looking into the history of earning to giv...
Thanks, Ben. I agree with about half of the points and disagree with the other half. I think some of the claims, e.g., that other people haven't raised these issues, are untrue.
Effective altruists often express surprise that the idea of effective altruism only came about so recently. [...] But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.
Honestly, I think this idea is one of EA's bigger oversights -- not that people haven't noticed that EA is recent, but that people don't realize that ...
I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them.
This is generally known as playing the devil's advocate, and it's an idea that long predates Nick Bostrom.
(Edit, later: this is related to the top-level replies by CarlShulman and V-V, but I think it's a more general issue, or at least a more general way of putting the same issues.)
I'm wondering about a different effect: over-quantification and false precision leading to bad choices in optimization as more effort goes into the most efficient utility maximization charities.
If we have metrics, and we optimize for them, anything that our metrics distort or exclude will have an exaggerated exclusion from our conversation. For instance, if we agree that maximizing...
To my mind, the worst thing about the EA movement are its delusions of grandeur. Both individually and collectively, the EA people I have met display a staggering and quite sickening sense of their own self-importance. They think they are going to change the world, and yet they have almost nothing to show for their efforts except self-congratulatory rhetoric. It would be funny if it wasn't so revolting.
A few thoughts (disclaimer: I do NOT endorse effective altruism):
The main reason most people donate to charities may be to signal status to others, or to "purchase warm fuzzies" (a form of status signalling to one's own ego).
Effective altruists claim to really care about doing good with their donations, but theirs could be just a form of status signalling targeted at communities where memes such as consequentialism, utilitarianism, and "rationality" are well received, and/or similarly a way to "purchase warm fuzzies" for som
GiveWell spends a lot of time making it easier to estimate their performance (nearly everything possible is transparent, a "mistakes" tab is prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.
I think this is as good an incentive structure as we're going to get
I think it would be better with more competitors in the same space keeping each other honest.
Philosophical difficulties
The main insight of the EA movement is to pick some criteria and go with them (rather than the "warm fuzzies" heuristic that most people use).
What criteria you use is up to you and your preferences.
Poor cause choices
or marketable cause choices. Uncontroversial cause choices. The act of giving a recommendation is also outcome focused...you have to think about what percentage of your audience will actually be moved to act as a result of your announcement. Effective Altruism for a meta-charity also means Effective Ad...
The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them.
This may be tautological depending on how you define your terms (if people don't think of an idea quickly after it's possible to do so, it wasn't obvious.)
If defined in such a way that it could possibly be false, of course, it very much calls for further evidence.
One of the plausible ways EA could be worsening rather than improving resource allocation is if, in fact, resources are better left with the very rich countries instead of the very poor ones. I do not believe this question has been properly assessed by the EA movement. More often than not, it is simply assumed as evident that resources are better in the hands of those who don't have them, which would make sense under an egalitarian ethics, but not under utilitarianism. I do know there are texts, articles, etc. on this question, but I do not think they are nearly enough given how ...
I have written a lengthy response that deals with only one of the points in the critique above, the suggestion that, as a whole, the Effective Altruist movement is pretending to really try, here: http://lesswrong.com/r/discussion/lw/j8v/in_praise_of_tribes_that_pretend_to_try/
My main argument is that pretending to try is quite likely *a good thing*, in the grand scheme of EA.
Disclaimer: I support the EA movement.
Concerning historical analogues: from what I understand about their behaviour, it seems like the Rotary Club pattern-matches some of the ideas of Effective Altruism, specifically the earning-to-give and community-building aspects. They have a million members who give on average over $100/yr to charities picked out by Rotary International or local groups. This means that in the past decade, their movement has collected one billion dollars towards the elimination of polio. Some noticeable differences include:
I wish to write a one-year retrospective and/or report on this post. I'll contact Ben Kuhn to run this idea by him, to see if he would be interested. If he's too busy, as I expect he might be, I at least hope to seek his blessing to extend the mission of critiquing effective altruism.
There are also other critiques, like this one. Additionally, there have been counters and defenses in response to this post in the previous year. Further, specific to effective altruism, e.g., on the effective altruism forum, there has been more discussion of these ideas. I w...
"for a community that purports to put stock in rationality and self-improvement, effective altruists have shown surprisingly little interest in self-modification to have more altruistic intentions. This seems obviously worthy of further work." I would love to see more work done on this. However, I understand "wanting to have more altruistic intentions" as part of a broader class of "wanting to act according to my ultimate/rational/long-term desires rather than my immediate desires", and this doesn't seem niche enough for membe...
Hi Ben,
Thanks for the post. I think this is an important discussion. Though I'm also sympathetic to Nick's comment that a significant amount of extra self-reflection is not the most important thing for EA's success.
I just wanted to flag that I think there are attempts to deal with some of these issues, and explain why I think some of these issues are not a problem.
Philosophical difficulties
Effective altruism was founded by philosophers, so I think there's enough effort going into this, including population ethics. (See Nick's comment)
Poor cause choices
There...
Another possible critique is that the philosophical arguments for ethical egoism are (I think) at least fairly plausible. The extent to which this is a critique of EA is debatable (since people within the movement state that it's compatible with non-utilitarian ethical theories and that it appeals to people who want to donate for self-interested reasons) but it's something which merits consideration.
I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them. As you might have figured out, I believe strongly in effective altruism—the idea of applying evidence and reason to finding the best ways to improve the world. As such, I thought it would be productive to write a hypothetical apostasy on the effective altruism movement.
(EDIT: As per the comments of Vaniver, Carl Shulman, and others, this didn't quite come out as a hypothetical apostasy. I originally wrote it with that in mind, but decided that a focus on more plausible, more moderate criticisms would be more productive.)
How to read this post
(EDIT: the following two paragraphs were written before I softened the tone of the piece. They're less relevant to the more moderate version that I actually published.)
Hopefully this is clear, but as a disclaimer: this piece is written in a fairly critical tone. This was part of an attempt to get “in character”. This tone does not indicate my current mental state with regard to the effective altruism movement. I agree, to varying extents, with some of the critiques I present here, but I’m not about to give up on effective altruism or stop cooperating with the EA movement. The apostasy is purely hypothetical.
Also, because of the nature of a hypothetical apostasy, I’d guess that for effective altruist readers, the critical tone of this piece may be especially likely to trigger defensive rationalization. Please read through with this in mind. (A good way to counteract this effect might be, for instance, to imagine that you’re not an effective altruist, but your friend is, and it’s them reading through it: how should they update their beliefs?)
(End less relevant paragraphs.)
Finally, if you’ve never heard of effective altruism before, I don’t recommend making this piece your first impression of it! You’re going to get a very skewed view because I don’t bother to mention all the things that are awesome about the EA movement.
Abstract
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
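To make the satisficing/maximizing contrast concrete, here is a minimal sketch of the two selection procedures; the option names and utility numbers are invented for illustration, not estimates:

```python
# Toy contrast between satisficing and maximizing over candidate actions.
# Option names and utility estimates are purely illustrative.

options = {
    "donate to a familiar local charity": 1.0,
    "donate to a GiveWell top charity": 50.0,
    "earn to give in a high-paying career": 120.0,
    "direct work on a neglected cause": 200.0,
}

ACCEPTABLE = 10.0  # threshold for "credibly acceptable under core EA principles"

def satisfice(options, threshold):
    """Return the first option that clears the bar, then stop searching."""
    for action, utility in options.items():
        if utility >= threshold:
            return action
    return None

def maximize(options):
    """Compare every option and return the one with the highest estimated utility."""
    return max(options, key=options.get)

print(satisfice(options, ACCEPTABLE))  # an acceptable option, picked by search order
print(maximize(options))               # the actual argmax
```

The satisficer's output depends on which acceptable option happens to come up first, i.e. on the "essentially random factors" described above, while the maximizer's does not.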
Below I introduce various ways in which effective altruists have failed to go beyond the social-satisficing algorithm of establishing some credibly acceptable alternatives and then picking among them based on essentially random preferences. I exhibit other areas where the norms of effective altruism fail to guard against motivated cognition. Both of these phenomena add what I call “epistemic inertia” to the effective-altruist consensus: effective altruists become more subject to pressures on their beliefs other than those from a truth-seeking process, meaning that the EA consensus becomes less able to update on new evidence or arguments and preventing the movement from moving forward. I argue that this stems from effective altruists’ reluctance to think through issues of the form “being a successful social movement” rather than “correctly applying utilitarianism individually”. This could potentially be solved by introducing an additional principle of effective altruism—e.g. “group self-awareness”—but it may be too late to add new things to effective altruism’s DNA.
Philosophical difficulties
There is currently wide disagreement among effective altruists on the correct framework for population ethics. This is crucially important for determining the best way to improve the world: different population ethics can lead to drastically different choices (or at least so we would expect a priori), and if the EA movement can’t converge on at least their instrumental goals, it will quickly fragment and lose its power. Yet there has been little progress towards discovering the correct population ethics (or, from a moral anti-realist standpoint, constructing arguments that will lead to convergence on a particular population ethics), or even determining which ethics lead to which interventions being better.
Poor cause choices
Many effective altruists donate to GiveWell’s top charities. All three of these charities work in global health. Is that because GiveWell knows that global health is the highest-leverage cause? No. It’s because global health was the only cause with enough data to say anything very useful about it. There’s little reason to suppose that this correlates with being particularly high-leverage—on the contrary, heuristic but less rigorous arguments for causes like existential risk prevention, vegetarian advocacy and open borders suggest that these could be even more efficient.
Furthermore, our current “best known intervention” is likely to change (in a more cost-effective direction) in the future. There are two competing effects here: we might discover better interventions to donate to than the ones we currently think are best, but we also might run out of opportunities for the current best known intervention and have to switch to the second-best. So far we seem to be in a regime where the first effect dominates, and there’s no evidence that we’ll reach a tipping point very soon, especially given how new the field of effective charity research is.
Given these considerations, it’s quite surprising that effective altruists are donating to global health causes now. Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides. And anyway, donating when you believe it’s not (except for example-setting) the best possible course of action, in order to make a point about figuring out the best possible course of action and then doing that thing, seems perverse.
Non-obviousness
Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.
The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.
Efficient markets for giving
It’s often claimed that “nonprofits are not a market for doing good; they’re a market for warm fuzzies”. This is used as justification for why it’s possible to do immense amounts of good by donating. However, while it’s certainly true that most donors aren’t explicitly trying to purchase utility, there’s still a lot of money that is.
The Gates Foundation is an example of such an organization. They’re effectiveness-minded, with $60 billion behind them. 80,000 Hours has already noted that they’ve probably saved over 6 million lives with their vaccine programs alone—given that they’ve spent a relatively small part of their endowment, they must be getting a much better exchange rate than our current best guesses.
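As a back-of-the-envelope check on that claim (a sketch only: the vaccine spend below is a hypothetical placeholder, since no figure is given here; only the 6 million lives and the $60 billion endowment come from the paragraph above):

```python
# Rough cost-per-life arithmetic behind the "better exchange rate" claim.
# Only lives_saved and endowment come from the text; the spend is assumed.

endowment = 60e9                # $60 billion, from the paragraph above
lives_saved = 6_000_000         # 80,000 Hours' estimate, cited above
assumed_vaccine_spend = 10e9    # hypothetical "relatively small part" of the endowment

cost_per_life = assumed_vaccine_spend / lives_saved
print(f"${cost_per_life:,.0f} per life saved")  # about $1,667 under this assumption
```

If the true spend is anywhere near this range, the implied cost per life is in the low thousands of dollars, which is what makes the "why not just donate to the Gates Foundation?" question bite.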
So why not just donate to the Gates Foundation? Effective altruists need a better account of the “market inefficiencies” that they’re exploiting that Gates isn’t. Why didn’t the Gates Foundation fund the Against Malaria Foundation, GiveWell’s top charity, when it’s in one of their main research areas? It seems implausible that the answer is simple incompetence or the like.
A general rule of markets is that if you don’t know what your edge is, you’re the sucker. Many effective altruists, when asked what their edge is, give some answer along the lines of “actually being strategic/thinking about utility/caring about results”, and stop thinking there. This isn’t a compelling case: as mentioned before, it’s not clear why no one else is doing these things.
Inconsistent attitude towards rigor
Effective altruists insist on extraordinary rigor in their charity recommendations (see, for instance, GiveWell’s work). Yet for many ancillary problems—donating now vs. later, choosing a career, and deciding how “meta” to go (between direct work, earning to give, doing advocacy, and donating to advocacy), to name a few—they seem happy to choose between the not-obviously-wrong alternatives based on intuition and gut feelings.
Poor psychological understanding
John Sturm suggests, and I agree, that many of these issues are psychological in nature:
In general, most effective altruists respond to deep conflicts between effective altruism and other goals in one of the following ways:
The third is debatably defensible—though, for a community that purports to put stock in rationality and self-improvement, effective altruists have shown surprisingly little interest in self-modification to have more altruistic intentions. This seems obviously worthy of further work.
Furthermore, EA norms do not proscribe even the first two, leading to a group norm that doesn’t cause people to notice when they’re engaging in a certain amount of motivated cognition. This is quite toxic to the movement’s ability to converge on the truth. (As before, effective altruists are still better than the general population at this; the core EA principles are strong enough to make people notice the most obvious motivated cognition that obviously runs afoul of them. But that’s not nearly good enough.)
Historical analogues
With the partial exception of GiveWell’s history of philanthropy project, there’s been no research into good historical outside views. Although there are no direct precursors of effective altruism (worrying in its own right; see above), there is one notably similar movement: communism, where the idea of “from each according to his ability, to each according to his needs” originated. Communism is also notable for its various abject failures. Effective altruists need to be more worried about how they will avoid failures of a similar class—and in general they need to be more aware of the pitfalls, as well as the benefits, of being an increasingly large social movement.
Aaron Tucker elaborates better than I could:
Monoculture
Effective altruists are not very diverse. The vast majority are white, “upper-middle-class”, intellectually and philosophically inclined, from a developed country, etc. (and I think it skews significantly male as well, though I’m less sure of this). And as much as the multiple-perspectives argument for diversity is hackneyed by this point, it seems quite germane, especially when considering e.g. global health interventions, whose beneficiaries are culturally very foreign to us.
Effective altruists are not very humanistically aware either. EA came out of analytic philosophy and spread from there to math and computer science. As such, they are too hasty to dismiss many arguments as moral-relativist postmodernist fluff, e.g. that effective altruists are promoting cultural imperialism by forcing a Westernized conception of “the good” onto people they’re trying to help. Even if EAs are quite confident that the utilitarian/reductionist/rationalist worldview is correct, the outside view is that really engaging with a greater diversity of opinions is very helpful.
Community problems
The discourse around effective altruism in e.g. the Facebook group used to be of fairly high quality. But as the movement grows, the traditional venues of discussion are getting inundated with new people who haven’t absorbed the norms of discussion or standards of proof yet. If this is not rectified quickly, the EA community will cease to be useful at all: there will be no venue in which a group truth-seeking process can operate. Yet nobody seems to be aware of the magnitude of this problem. There have been some half-hearted attempts to fix it, but nothing much has come of them.
Movement building issues
The whole point of having an effective altruism “movement” is that it’ll be bigger than the sum of its parts. Being organized as a movement should turn effective altruism into the kind of large, semi-monolithic actor that can actually get big stuff done, not just make marginal contributions.
But in practice, large movements and truth-seeking hardly ever go together. As movements grow, they get more “epistemic inertia”: it becomes much harder for them to update on evidence. This is because they have to rely on social methods to propagate their memes rather than truth-seeking behavior. But people who have been drawn to EA by social pressure rather than truth-seeking take much longer to change their beliefs, so once the movement reaches a critical mass of them, it will become difficult for it to update on new evidence. As described above, this is already happening to effective altruism with the ever-less-useful Facebook group.
Conclusion
I’ve presented several areas in which the effective altruism movement fails to converge on truth through a combination of the following effects:
These problems are worrying on their own, but the lack of awareness of them is the real problem. The monoculture is worrying, but the lackadaisical attitude towards it is worse. The lack of rigor is unfortunate, but the fact that people haven’t noticed it is the real problem.
Either effective altruists don’t yet realize that they’re subject to the failure modes of any large movement, or they don’t feel motivated to do the boring legwork of e.g. engaging with viewpoints that their inside view says are annoying but that the outside view says are useful in expectation. Either way, this bespeaks worrying things about the movement’s staying power.
More importantly, it also indicates an epistemic failure on the part of effective altruists. The fact that no one else within EA has done a substantial critique yet is a huge red flag. If effective altruists aren’t aware of strong critiques of the EA movement, why aren’t they looking for them? This suggests that, contrary to the emphasis on rationality within the movement, many effective altruists’ beliefs are based on social, rather than truth-seeking, behavior.
If it doesn’t solve these problems, effective-altruism-the-movement won’t help me achieve any more good than I could individually. All it will do is add epistemic inertia, as it takes more effort to shift the EA consensus than to update my individual beliefs.
Are these problems solvable?
It seems to me that the third issue above (lack of self-awareness as a social movement) subsumes the other two: if effective altruism as a movement were sufficiently introspective, it could probably notice and solve the other two problems, as well as future ones that will undoubtedly crop up.
Hence, I propose an additional principle of effective altruism. In addition to being altruistic, maximizing, egalitarian, and consequentialist, we should be self-aware: we should think carefully about the issues associated with being a successful movement, in order to make sure that we can move beyond the obvious applications of EA principles and come up with non-trivially better ways to improve the world.
Acknowledgments
Thanks to Nick Bostrom for coining the idea of a hypothetical apostasy, and to Will Eden for mentioning it recently.
Thanks to Michael Vassar, Aaron Tucker and Andrew Rettek for inspiring various of these points.
Thanks to Aaron Tucker and John Sturm for reading an advance draft of this post and giving valuable feedback.
Cross-posted from http://www.benkuhn.net/ea-critique since I want outside perspectives, and also LW's comments are nicer than mine.