Utilitarianism is fundamentally incompatible with value complexity.
Could you explain why, exactly? It seems to me that if you value multiple things, let's call them A, B, C, you could construct a function such as F = min(A, B, C), which, when maximized, supports all of these values.
In such a situation, imagine that currently, e.g., A = 10, B = 1000, C = 1500. This could mean, for example, that we have a lot of good music and many good movies, but thousands of people are literally starving to death. In that situation, trying to increase the function F means focusing entirely on increasing A and ignoring the values B and C (until A catches up with them). In the short term, this may look like not having complex values. But that's just the local situation.
In short: even if you have complex values, you may find that in the current situation the best way to improve the total outcome is to focus on just one of them.
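The dynamics described above can be sketched in a few lines of code (the names, the unit-step effort model, and the starting numbers are illustrative, not anything from the original discussion): a greedy maximizer of F = min(A, B, C) puts every unit of effort into whichever value is currently lowest.

```python
def maximin_allocate(values, effort, step=1):
    """Greedily maximize F = min over values by always raising the lowest one.

    values: dict mapping value names to current levels (illustrative model)
    effort: number of unit-steps of effort to allocate
    """
    values = dict(values)  # don't mutate the caller's dict
    for _ in range(effort):
        lowest = min(values, key=values.get)  # the currently lowest value
        values[lowest] += step                # one unit of effort raises it
    return values

# Starting from the example A = 10, B = 1000, C = 1500:
# all early effort goes to A, exactly as described above.
result = maximin_allocate({"A": 10, "B": 1000, "C": 1500}, effort=50)
# result == {"A": 60, "B": 1000, "C": 1500}
```

Under this toy model, B and C receive no effort at all until A has climbed to their level, even though the agent genuinely values all three.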
Near mode: Imagine that you live in a village of 1000 citizens, where half of them are starving to death and the other half are watching movies. One person proposes a new food program. Another person proposes making another movie (of which you already have a few dozen). As the mayor, you choose to spend the tax money on the former. The latter person accuses you of not understanding the complexity of values. Do you think the accusation is fair?
Utilitarianism is fundamentally incompatible with value complexity.
To me it seems that if you value multiple things, let's call them A, B, C, you could construct a function
It sounds like you might be confusing utilitarianism with utility functions (a common mistake on LW). While utilitarianism always involves a utility function, not all utility functions are utilitarian.
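The distinction can be made concrete with two toy examples (function names and value keys are my own illustrations, not from the discussion): both of the following are utility functions, but only the first is utilitarian in the aggregate-welfare sense.

```python
def utilitarian_utility(welfares):
    # Classical total utilitarianism: the utility of an outcome is the
    # sum of every individual's welfare.
    return sum(welfares)

def complex_value_utility(outcome):
    # A non-utilitarian utility function over several distinct values
    # (keys are illustrative). Maximizing this is coherent expected-utility
    # maximization, but it is not welfare-maximization.
    return min(outcome["food"], outcome["art"], outcome["justice"])
```

An agent maximizing either function is a utility maximizer; only the first is thereby a utilitarian.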
Even if you have complex value, you may find that in current situation the best way to increase total outcome is to focus on one of these values.
Yes, that's always theoretically possible.
Summary: The term 'effective altruist' invites confusion between 'the right thing to do' and 'the thing that most efficiently promotes welfare.' I think this creeping utilitarianism is a bad thing, and should at least be made explicit. This is not to accuse anyone of deliberate deception.
Over the last year or so, the term 'Effective Altruist' has come into use. I self-identified as one on the LW survey, so I speak as a friend. However, I think there is a very big danger with the terminology.
The term 'Effective Altruist' was born out of the need for a label for those people who were willing to dedicate their lives to making the world a better place in rational ways, even if that meant doing counter-intuitive things, like working as an Alaskan truck driver. The previous term, 'really super awesome hardcore people', was indeed a little inelegant.
However, 'Effective Altruist' has a major problem: it refers to altruism, not ethics. Altruism may be a part of ethics (though the etymology of the term gives some concern), but it is not all there is to ethics. Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.
A charity that very efficiently promoted beauty and justice, but only inefficiently produced happiness, would probably not be considered an EA organization. A while ago I suggested to [one of the leaders of the Center for Effective Altruism] the creation of a charity to promote promise-keeping. I didn't claim such a charity would be an optimal way of promoting happiness, and to them, this was sufficient to show 1) that it was not EA - and hence 2) inferior to EA things.
Such thinking involves either an equivocation or a concealed premise. If 'EA' is interpreted literally, as 'the primary/driving goal is to help others', then something's not being EA is insufficient to show it is not the best thing you could do - there is more to ethics and the good than altruism and promoting welfare. Failing to promote one dimension of the good doesn't mean something isn't the optimal way of promoting their sum. On the other hand, if 'EA' is interpreted broadly, as concern with 'happiness, health, justice, fairness and/or other values', then merely failing to promote welfare/happiness does not mean a cause is not EA. Much EA discussion, like that on the popular Facebook group, equivocates between these two meanings.*
...Unless one thought that helping people was all there was to ethics, in which case this is not an equivocation. As virtually all of CEA's leaders are utilitarians, it is plausible that this was the concealed premise in their argument. In that case there is no equivocation, but a different logical fallacy has been committed: that of an omitted premise. And we should be just as wary of this as of equivocation.
Unfortunately, utilitarianism is false, or at least not obviously true. Something can be the morally best thing to do while not being EA. Just because some utilitarians have popularized a term which cleverly equivocates between "promotes welfare" and "is the best thing" does not mean we should be taken in. Every fashionable ideology likes to blur the lines between its goals and its methods (is socialism about helping the working man or about state ownership of industry? is libertarianism about freedom or low taxes?) in order to make people who agree with the goals forget that there might be other means of achieving them.
There are two options: recognize 'EA' as referring to only a subset of morality, or recognize as 'EA' actions and organizations that are ethical in ways other than producing welfare/happiness.
* Yes, one might say that promoting X's honor thereby helped X, and thus there was no distinction. However, I think people who make this argument in theory are unlikely to observe it in practice - I doubt there will ever be an EA organisation dedicated to pure retribution, even if it were both extremely cheap to promote and a part of ethics.