Summary: The term 'effective altruist' invites confusion between 'the right thing to do' and 'the thing that most efficiently promotes welfare.' I think this creeping utilitarianism is a bad thing, and should at least be made explicit. This is not to accuse anyone of deliberate deception.
Over the last year or so, the term 'Effective Altruist' has come into use. I self-identified as one on the LW survey, so I speak as a friend. However, I think there is a very big danger with the terminology.
The term 'Effective Altruist' was born out of the need for a label for those people who were willing to dedicate their lives to making the world a better place in rational ways, even if that meant doing counter-intuitive things, like working as an Alaskan truck driver. The previous term, 'really super awesome hardcore people', was indeed a little inelegant.
However, 'Effective Altruist' has a major problem: it refers to altruism, not ethics. Altruism may be a part of ethics (though the etymology of the term gives some concern), but it is not all there is to ethics. Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.
A charity that very efficiently promoted beauty and justice, but only inefficiently produced happiness, would probably not be considered an EA organization. A while ago I suggested to [one of the leaders of the Center for Effective Altruism] the creation of a charity to promote promise-keeping. I didn't claim such a charity would be an optimal way of promoting happiness, and to them this was sufficient to show 1) that it was not EA, and hence 2) that it was inferior to EA causes.
Such thinking involves either an equivocation or a concealed premise. If 'EA' is interpreted literally, as 'the primary/driving goal is to help others', then something not being EA is insufficient to show that it is not the best thing you could do - there is more to ethics and the good than altruism and promoting welfare. Failure to promote one dimension of the good doesn't mean something isn't the optimal way of promoting their sum. On the other hand, if 'EA' is interpreted broadly, as being concerned with 'happiness, health, justice, fairness and/or other values', then merely failing to promote welfare/happiness does not mean a cause is not EA. Much EA discussion, like that on the popular Facebook group, equivocates between these two meanings.*
...Unless one thought that helping people was all there was to ethics, in which case this is not an equivocation. As virtually all of CEA's leaders are utilitarians, it is plausible that this was the concealed premise in their argument. In that case there is no equivocation, but a different logical fallacy, that of an omitted premise, has been committed. And we should be just as wary of it as we are of equivocation.
Unfortunately, utilitarianism is false, or at least not obviously true. Something can be the morally best thing to do while not being EA. Just because some utilitarians have popularized a term which cleverly equivocates between "promotes welfare" and "is the best thing" does not mean we should be taken in. Every fashionable ideology likes to blur the lines between its goals and its methods (is socialism about helping the working man or about state ownership of industry? is libertarianism about freedom or about low taxes?) in order to make people who agree with the goals forget that there might be other means of achieving them.
There are two options: recognize 'EA' as referring to only a subset of morality, or recognize as 'EA' actions and organizations that are ethical in ways other than producing welfare/happiness.
* Yes, one might say that promoting X's honor thereby helped X, and thus there was no distinction. However, I think people who make this argument in theory are unlikely to observe it in practice - I doubt that there will ever be an EA organisation dedicated to pure retribution, even if retribution were both extremely cheap to promote and a part of ethics.
Hi,
Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.
So here are a few points:
1. EA does not equal utilitarianism.
Utilitarianism makes many claims that EA does not make:
EA does not take a position on whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.
EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utilitarianism claims that it's always obligatory to act for the greater good.
EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.
EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.
Also, note that some eminent EAs are not even consequentialist leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven't asked).
2. Rather, EA is something that almost every plausible moral theory is in favour of.
Almost every plausible moral theory thinks that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly pro promoting welfare; it's not anti other things, and it doesn't claim that we're obligated to be altruistic, merely that it's a good thing to do.
3. Is EA explicitly welfarist?
The term 'altruism' suggests that it is. And I think that's fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it's not effective altruism - it's "effective justice", "effective environmental preservation", or something. Note, though, that you may well think that there are non-welfarist values - indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone - but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.
So, to answer your dilemma:
EA is not trying to be the whole of morality.
It might be the whole of morality, if being EA is the only thing that is required of one. But it's not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality - an aspect that is very important for those living in affluent countries, and who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.