It's been claimed that increasing rationality increases effective altruism. I think that this is true, but the effect size is unclear to me, so it seems worth exploring how strong the evidence for it is. I've offered some general considerations below, followed by a description of my own experience. I'd very much welcome thoughts on the effect that rationality has had on your own altruistic activities (and any other relevant thoughts).
The 2013 LW Survey found that 28.6% of respondents identified as effective altruists. This rate is much higher than the rate in the general population (even after controlling for intelligence), and because LW is distinguished by virtue of being a community focused on rationality, one might be led to the conclusion that increasing rationality increases effective altruism. But there are a number of possible confounding factors:
- It's ambiguous what the respondents meant when they said that they're "effective altruists." (They could have used the term the way Wikipedia does, or they could have meant it in a more colloquial sense.)
- Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.
- Effective altruists may be more likely than members of the general population to seek to improve their epistemic rationality.
- The rationalist community and the effective altruist community may have become intertwined by historical accident, by virtue of having some early members in common.
So it's helpful to look beyond the observed correlation and think about the hypothetical causal pathways between increased rationality and increased effective altruism.
The above claim can be broken into several subclaims (any or all of which may be intended):
Claim 1: When people are more rational, they're more likely to choose which altruistic endeavors to engage in with a view toward maximizing utilitarian expected value.
Claim 2: When people are more rational, they're more likely to succeed in their altruistic endeavors.
Claim 3: Being more rational strengthens people's altruistic motivation.
Claim 1: "When people are more rational, they're more likely to choose which altruistic endeavors to engage in with a view toward maximizing utilitarian expected value."
Some elements of effective altruist thinking are:
- Consequentialism. In his Consequentialism FAQ, Yvain argues that, upon reflection, consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value." Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this recognition doesn't necessarily translate into a desire to resolve the tension, nor into a choice to resolve it in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's not sufficient.
- Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people exhibit in scenarios with a probabilistic element, and how reflection can lead one to the conclusion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions driven by these biases; a worked example follows this list. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the conflict. However, in this case, if one does seek to resolve it, the choice of expected value maximization over the alternatives is canonical, so rationality seems to take one further toward expected value maximization than it does toward consequentialism.
- The principle of indifference: the idea that, from an altruistic point of view, we should care about people who are unrelated to us as much as we care about people who are related to us. For example, in The Life You Can Save: How to Do Your Part to End World Poverty, Peter Singer makes the case that we should show a similar degree of moral concern for people in the developing world who are suffering from poverty as we do for people in our own neighborhoods. I'd venture the guess that the principle's popularity among rationalists is an artifact of culture or a selection effect rather than a consequence of rationality. Note that concern about global poverty is far more prevalent than interest in rationality (while still being low enough that global poverty is far from alleviated).
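To make the expected value point concrete, here is a minimal worked version of the choice Eliezer discusses in Circular Altruism (the numbers are the illustrative ones from that post, not data about any real intervention): option A saves 400 lives with certainty; option B saves 500 lives with probability 0.9 and no lives otherwise.

```latex
\[
\mathbb{E}[\text{lives saved} \mid A] = 400, \qquad
\mathbb{E}[\text{lives saved} \mid B] = 0.9 \times 500 + 0.1 \times 0 = 450
\]
% B maximizes expected lives saved, even though the certainty of A is
% intuitively appealing; the pull toward A is the sort of bias
% (risk aversion / the certainty effect) that the bullet above refers to.
```

An expected value maximizer picks B; someone following the certainty intuition picks A, forgoing 50 statistical lives.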
Claim 2: "When people are more rational, they're more likely to succeed in their altruistic endeavors."
If "rationality" is taken to mean "instrumental rationality," then this is tautologically true, so the relevant sense of "rationality" here is "epistemic rationality."
- The question of how useful epistemic rationality is in general has been debated (e.g., here, here, here, here, and here).
- I think that epistemic rationality matters more for altruistic endeavors than it does in other contexts. Cognitive biases evolved in the service of survival and reproductive fitness, which correlate more strongly with personal well-being than with the well-being of others. I think that epistemic rationality matters still more for those who aspire to maximize utilitarian expected value: the intuitions shaped by our biases track the well-being of people within one's social circles more closely than they track the well-being of those outside of one's social circles.
- In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer describes some cognitive biases that can lead one to underestimate the likelihood of risks of human extinction. To the extent that reducing these risks is the most promising philanthropic cause (as Eliezer has suggested), reducing cognitive biases improves people's prospects of maximizing utilitarian expected value.
Claim 3: "Being more rational strengthens people's altruistic motivation."
- I think that there may be some effect in this direction mediated through improved well-being: when people's emotional well-being increases, their empathy also increases.
- It's possible to come to the conclusion, through philosophical reflection, that one should care as much about others as one does about oneself, and I know people who have had this experience. I don't know whether this is accurately described as an effect attributable to improved accuracy of beliefs, though.
Putting it all together
The considerations above suggest that increasing the rationality of a population would only slightly (if at all) increase effective altruism at the 50th percentile of the population, while increasing effective altruism at higher percentiles more, with the skew becoming more extreme the further up one goes. This parallels, e.g., the effect of height on income.
My own experience
In A personal history of involvement with effective altruism I give some relevant autobiographical information. Summarizing and elaborating a bit:
- I was fully on board with consequentialism and with ascribing similar value to strangers as to familiar people as an early teenager, before I had any knowledge of cognitive biases as such, and at a time when my predictive model of the world was in many ways weaker than those of most adults.
- It was only when I read Eliezer's posts that the justification for expected value maximization in altruistic contexts clicked. Understanding it didn't require background knowledge — it seems independent of most aspects of rationality.
- I started reading Less Wrong because a friend pointed me to Yvain's posts on utilitarianism. My interest in rationality was driven more by my interest in effective altruism than the other way around. This is evidence that the high fraction of Less Wrongers who identify as effective altruists is partially a function of Less Wrong being an attractor for effective altruists.
- So far, increased rationality hasn't increased my productivity to a degree that I can clearly discern. There are changes that have occurred in my thinking that would greatly increase my productivity in the most favorable possible future scenarios, relative to a counterfactual in which these changes hadn't occurred. This is in consonance with my remark under the "Putting it all together" heading above.
How about you?
I've read Yvain's article, and reread it just now. It has the same underlying problem: to the extent that it's obviously true, it's trivial[1]; to the extent that it's nontrivial, it's not obviously true.
Yvain talks about how we should be effective in the charity we choose to engage in (no big revelation here), then seems almost imperceptibly to slide into an assumed worldview where we're all utilitarians, where saving children is, of course, what we care about most, where the best charity is the one that saves the most children, etc.
To what extent are all of these things part of what "effective altruism" is? For instance (and this is just one possible example), let's say I really care about paintings more than dead children, and think that £550,000 paid to keep one mediocre painting in a UK museum is money quite well spent, even when the matter of sanitation in African villages is put to me as bluntly as you like; but I aspire to rationality, and want to purchase my artwork-retention-by-local-museums as cost-effectively as I can. Am I an effective altruist?
To put this another way: if "effective altruism" is really just "we should be effective in our altruistic actions", then it seems frankly ridiculous that less than one-third of Less Wrong readers should identify as EA-ers. What do the other 71.4% think? That we should be ineffective altruists?? That altruism in general is just a bad idea? Do those two views really account for over seventy percent of the LW readership, do you think? Surely, in this case, the effective altruist movement just really needs to get better at explaining itself, and its obvious and uncontroversial nature, to the Less Wrong audience.
But effective altruism isn't just about that, yes? As a movement, as a philosophy, it's got all sorts of baggage, in the form of fairly specific values and ethical systems (that are assumed, and never really argued for, by EA-ers), like (a specific form of) utilitarianism, belief in things like the moral value of animals, and certain other things. Or, at least — such is the perception of people around here (myself included); and that, I think, is what's behind that 28.6% statistic.
[1] Well, trivial given the background that we, as Lesswrongians who have read and understood the Sequences, are assumed to have.
I haven't watched that TED talk (though I've read some of Peter Singer's writings); I will do that tomorrow.