It's been claimed that increasing rationality increases effective altruism. I think that this is true, but the effect size is unclear to me, so it seems worth exploring how strong the evidence for it is. I've offered some general considerations below, followed by a description of my own experience. I'd very much welcome thoughts on the effect that rationality has had on your own altruistic activities (and any other relevant thoughts).
The 2013 LW Survey found that 28.6% of respondents identified as effective altruists. This rate is much higher than the rate in the general population (even after controlling for intelligence), and because LW is distinguished by virtue of being a community focused on rationality, one might be led to the conclusion that increasing rationality increases effective altruism. But there are a number of possible confounding factors:
- It's ambiguous what the respondents meant when they said that they're "effective altruists." (They could have used the term the way Wikipedia does, or they could have meant it in a more colloquial sense.)
- Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.
- Effective altruists may be more likely than members of the general population to seek to improve their epistemic rationality.
- The rationalist community and the effective altruist community may have become intertwined by historical accident, by virtue of having some early members in common.
So it's helpful to look beyond the observed correlation and think about the hypothetical causal pathways between increased rationality and increased effective altruism.
The above claim can be broken into several subclaims (any or all of which may be intended):
Claim 1: When people are more rational, they're more likely to pick the altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value.
Claim 2: When people are more rational, they're more likely to succeed in their altruistic endeavors.
Claim 3: Being more rational strengthens people's altruistic motivation.
Claim 1: "When people are more rational, they're more likely to pick their altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value."
Some elements of effective altruism thinking are:
- Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows, upon reflection, from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value". Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension, nor into a choice to resolve it in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it isn't sufficient on its own.
- Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that affect people's decisions in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions driven by these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. However, in this case, if one does seek to resolve the tension, the choice of expected value maximization over other alternatives is canonical, so rationality seems to take one further toward expected value maximization than toward consequentialism (a concrete sketch follows this list).
- The principle of indifference: the idea that, from an altruistic point of view, we should care about people who are unrelated to us as much as we do about people who are related to us. For example, in The Life You Can Save: How to Do Your Part to End World Poverty, Peter Singer makes the case that we should show a similar degree of moral concern for people in the developing world who are suffering from poverty as we do for people in our own neighborhoods. I'd venture the guess that this principle's popularity among rationalists is an artifact of culture or a selection effect rather than a consequence of rationality. Note that concern about global poverty is far more prevalent than interest in rationality (while still being low enough that global poverty is far from alleviated).
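To make the notion of expected value maximization concrete, here is a minimal sketch in Python; the numbers are hypothetical, chosen in the spirit of the probabilistic scenarios Eliezer discusses, and the function name is just for illustration:

```python
# A rough sketch, with hypothetical numbers, of the comparison that expected
# value maximization calls for: a certain outcome vs. a gamble with a higher
# expected payoff.

def expected_lives_saved(outcomes):
    """Expected value: sum of probability * lives saved over all outcomes."""
    return sum(p * lives for p, lives in outcomes)

options = {
    # Each option is a list of (probability, lives saved) pairs.
    "certain": [(1.0, 400)],            # save 400 lives for sure
    "gamble": [(0.9, 500), (0.1, 0)],   # 90% chance of saving 500, else none
}

for name, outcomes in options.items():
    print(f"{name}: expected lives saved = {expected_lives_saved(outcomes)}")

# The gamble has the higher expected value (450 vs. 400), even though many
# people's intuitions favor the certain option.
```

The point of the exercise is that an expected value maximizer picks the option with the larger expected payoff, whereas untutored intuition often favors certainty.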
Claim 2: "When people are more rational, they're more likely to succeed in their altruistic endeavors."
If "rationality" is taken to be "instrumental rationality" then this is tautologically true, so the relevant sense of "rationality" here is "epistemic."
- The question of how useful epistemic rationality is in general has been debated (e.g., here, here, here, here, and here).
- I think that epistemic rationality matters more for altruistic endeavors than it does in other contexts. Cognitive biases evolved in the service of survival and reproductive fitness, and these things correlate more strongly with personal well-being than with the well-being of others. I think that epistemic rationality matters still more for those who aspire to maximize utilitarian expected value: cognitive biases are more closely attuned to the well-being of others within one's social circles than to the well-being of those outside of one's social circles.
- In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer describes some cognitive biases that can lead one to underestimate the likelihood of risks of human extinction. To the extent that reducing these risks is the most promising philanthropic cause (as Eliezer has suggested), reducing cognitive biases improves people's prospects of maximizing utilitarian expected value.
Claim 3: "Being more rational strengthens people's altruistic motivation."
- I think that there may be some effect in this direction mediated through improved well-being: when people's emotional well-being increases, their empathy also increases.
- It's possible to come to the conclusion that one should care as much about others as one does about oneself through philosophical reflection, and I know people who have had this experience. I don't know whether or not this is accurately described as an effect attributable to improved accuracy of beliefs, though.
Putting it all together
The considerations above suggest that increasing the rationality of a population would increase effective altruism only slightly (if at all) at the 50th percentile of the population, but would increase it more at higher percentiles, with the effect becoming more and more extreme the further up one goes. This parallels, e.g., the effect of height on income.
My own experience
In A personal history of involvement with effective altruism I give some relevant autobiographical information. Summarizing and elaborating a bit:
- I was fully on board with consequentialism and with ascribing similar value to strangers as to familiar people as an early teenager, before I had any knowledge of cognitive biases as such, and at a time when my predictive model of the world was in many ways weaker than those of most adults.
- It was only when I read Eliezer's posts that the justification for expected value maximization in altruistic contexts clicked. Understanding it didn't require background knowledge — it seems independent of most aspects of rationality.
- I started reading Less Wrong because a friend pointed me to Yvain's posts on utilitarianism. My interest in rationality was more driven by my interest in effective altruism than the other way around. This is evidence that the high fraction of Less Wrongers who identify as effective altruists is partially a function of Less Wrong being an attractor for effective altruists.
- So far, increased rationality hasn't increased my productivity to a statistically significant degree. There are changes in my thinking that would greatly increase my productivity in the most favorable possible future scenarios, relative to a counterfactual in which these changes hadn't occurred. This is in consonance with my remark under the "putting it all together" heading above.
How about you?
Ok, I've watched Singer's TED talk now; thank you for linking it. It does work as a statement of purpose, certainly. On the other hand, it fails as an attempt to justify or argue for the movement's core values; at the same time, it makes it quite clear that effective altruism is not just about "let's be altruists effectively". It's got some specific values attached, more specific than can justifiably be called simply "altruism".
I want to see, at least, some acknowledgment of that fact, and preferably, some attempt to defend those values. Singer doesn't do this; he merely handwaves in the general direction of "empathy" and "a rational understanding of our situation" (note that he doesn't explain what makes this particular set of values — valuing all lives equally — "rational").
Edit: My apologies! I just looked over your post again, and noticed a line which my brain somehow ignored at first. That line (in fact, that whole paragraph) does go far toward addressing my concerns. Consider the objections in this comment at least partially withdrawn!
Apology accepted :-). (Don't worry, I know that my post was long and that catching everything can require a lot of energy.)