Sorry if this is obviously covered somewhere but every time I think I answer it in either direction I immediately have doubts.
Does EA come packaged with "we SHOULD maximize our altruism" or does it just assert that IF we are giving, well, anything worth doing is worth doing right?
For example, I have no interest in giving materially more than I already do, but getting more bang for my buck in my existing donations sounds awesome. Do I count? I currently think not but I've changed my mind enough to just ask.
...
Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value" upon reflection. Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension nor a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's also not sufficient.
Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people employ in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions based on these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension.
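As a toy sketch of what "maximizing expected value (in the technical sense)" means here, consider comparing two hypothetical giving options; the numbers below are invented for illustration and are not taken from Circular Altruism:

```python
# Toy sketch: comparing two hypothetical altruistic options by expected value.
# The numbers are invented for illustration, not drawn from Circular Altruism.

def expected_value(probability: float, payoff: float) -> float:
    """Expected payoff of an outcome that succeeds with the given probability."""
    return probability * payoff

option_a = expected_value(1.00, 5)    # certainly helps 5 people  -> EV = 5.0
option_b = expected_value(0.02, 400)  # 2% chance of helping 400  -> EV = 8.0

# An expected-value maximizer picks B (8.0 > 5.0), even though common biases
# (certainty effect, scope insensitivity) pull intuition toward the sure thing A.
print(f"EV(A) = {option_a}, EV(B) = {option_b}")
```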
Being more rational makes rationalization harder. When one is confronted with thought experiments such as Peter Singer's drowning child example, it becomes harder to come up with reasons for not changing one's actions while still maintaining a self-image of being caring. While non-rationalists often object to EA by bringing up bad arguments (e.g. by not understanding expected utility theory or decision-making under uncertainty), rationalists are more likely to draw more radical conclusions. This means they might either accept the extreme conclusion that they wan...
Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.
My impression is that a lack of compartmentalization is a risk factor for both LW and EA group membership.
My impression is also that it is a risk factor for religious mania.
Lack of compartmentalization, also called taking ideas seriously, when applied to religious ideas, gives you religious mania. Applied to various types of collective utilitarianism, it can produce anything from EA to antinatalism, from tithing to giving away all that you have. Applied to what it actually takes to find out how the world works, it gives you Science.
Whether it's a good thing or a bad thing depends on what's in the compartments.
My interest in rationality was more driven by my interest in effective altruism than the other way around.
This comment actually makes sense of aspects of your writings here that did not make sense to me before.
Your post, overall, seems to rest on the underlying assumption that effective altruism is rational, and obviously so. I am not convinced this is the case (at the very least, not the "and obviously so" part).
To the extent that effective altruism is anything like a "movement", a "philosophy", a "community"...
I've read Yvain's article, and reread it just now. It has the same underlying problem, which is: to the extent that it's obviously true, it's trivial[1]; to the extent that it's nontrivial, it's not obviously true.
Yvain talks about how we should be effective in the charity we choose to engage in (no big revelation here), then seems almost imperceptibly to slide into an assumed worldview where we're all utilitarians, where saving children is, of course, what we care about most, where the best charity is the one that saves the most children, etc.
To what extent are all of these things part of what "effective altruism" is? For instance (and this is just one possible example), let's say I really care about paintings more than dead children, and think that £550,000 paid to keep one mediocre painting in a UK museum is money quite well spent, even when the matter of sanitation in African villages is put to me as bluntly as you like; but I aspire to rationality, and want to purchase my artwork-retention-by-local-museums as cost-effectively as I can. Am I an effective altruist?
To put this another way: if "effective altruism" is really just "we should be effective in ...
In my understanding of things, rationality does not involve values, and altruism is all about values. They are orthogonal.
LW as a community (for various historical reasons) has a mix of rationalists and effective altruists. That's a characteristic of this particular community, not a feature of either rationalism or EA.
A couple of points:
(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism or some other variant. For example, you say
[Yvain] argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value"
Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest numb...
Cognitive biases were developed for survival and evolutionary fitness, and these things correlate more strongly with personal well-being than with the well-being of others.
I think this needs to be differentiated further or partly corrected:
Cognitive biases which improve individual fitness by needing fewer resources, i.e. heuristics which arrive at the same or an almost equally good result but with fewer resources. Reducing time and energy use thus benefits the individual. Example:
Cognitive biases which improve individual fitness by avoiding dangerous parts
I think it is rationally optimal for me to not give any money away since I need all of it to pursue rationally-considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI, because given the very small number of people now working on the problem, and given the small number of people capable of working on the problem, that would be irrational of him.) There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, th...
Another effect: people on LW are massively more likely to describe themselves as effective altruists. My moral ideals were largely formed before I came into contact with LW, but not until I started reading was I introduced to the term "effective altruism".
The question appears to assume that LW participation is identically equal to improved rationality. Involvement in LW and involvement in EA are pretty obviously going to be correlated, given that they're closely related subcultures.
If this is not the case: Do you have a measure to hand of "improved rationality" that doesn't involve links to LW?
...The principle of indifference — the idea that from an altruistic point of view, we should care about people who are unrelated to us as much as we do about people who are related to us. For example, in The Life You Can Save: How to Do Your Part to End World Poverty, Peter Singer makes the case that we should show a similar degree of moral concern for people in the developing world who are suffering from poverty as we do to people in our neighborhoods. I'd venture the guess that its popularity among rationalists is an artifact of culture or a selection effect r
It's been claimed that increasing rationality increases effective altruism. I think that this is true, but the effect size is unclear to me, so it seems worth exploring how strong the evidence for it is. I've offered some general considerations below, followed by a description of my own experience. I'd very much welcome thoughts on the effect that rationality has had on your own altruistic activities (and any other relevant thoughts).
The 2013 LW Survey found that 28.6% of respondents identified as effective altruists. This rate is much higher than the rate in the general population (even after controlling for intelligence), and because LW is distinguished by virtue of being a community focused on rationality, one might be led to the conclusion that increasing rationality increases effective altruism. But there are a number of possible confounding factors:
So it's helpful to look beyond the observed correlation and think about the hypothetical causal pathways between increased rationality and increased effective altruism.
The above claim can be broken into several subclaims (any or all of which may be intended):
Claim 1: When people are more rational, they're more likely to pick the altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value.
Claim 2: When people are more rational, they're more likely to succeed in their altruistic endeavors.
Claim 3: Being more rational strengthens people's altruistic motivation.
Claim 1: "When people are more rational, they're more likely to pick their altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value."
Some elements of effective altruism thinking are:
Claim 2: "When people are more rational, they're more likely to succeed in their altruistic endeavors."
If "rationality" is taken to be "instrumental rationality" then this is tautologically true, so the relevant sense of "rationality" here is "epistemic."
Claim 3: "Being more rational strengthens people's altruistic motivation."
Putting it all together
The considerations above point in the direction of increased rationality of a population only slightly (if at all?) increasing the effective altruism at the 50th percentile of the population, but increasing the effective altruism at higher percentiles more, with the skewing becoming more and more extreme the further up one goes. This parallels, e.g., the effect of height on income.
My own experience
In A personal history of involvement with effective altruism I give some relevant autobiographical information. Summarizing and elaborating a bit:
How about you?