Money is one measure of social status. People compare themselves favorably or unfavorably to others in their social circles based on their wealth, their earning power, and signals thereof, and they compare their own social circles with other social circles based on the average wealth of their members. Humans crave social status, and this is one of people’s motivations for making money.

Effective altruists attempt to quantify “amount of good done” and maximize it. Once this framing is adopted, “amount of good done” becomes a measure of social status in the same way that money is. Most people who aspire to be effective altruists will be partially motivated by a desire to matter more than other people, in the sense of doing more good. People who join the effective altruism movement may do so partially out of a desire to matter more than people who are not in the movement. 

Harnessing status motivations for the sake of doing the most good can have profound positive impacts. But under this paradigm, effective altruists will generally be motivated to believe that they’re doing more good than other people are. This motivation is not necessarily dominant in any given case, but it’s sufficiently strong to be worth highlighting.

With this in mind, note that effective altruists will be motivated to believe that the activities that they themselves are capable of engaging in have higher value than they actually do, and that activities that others are engaged in have lower value than they actually do. Without effort to counterbalance this motivation, effective altruists’ views of the philanthropic landscape will be distorted, and they’ll be apt to bias others in favor of the areas that use their own core competencies.

I worry that the effective altruist community hasn’t taken sufficient measures to guard against this issue. In particular, I’m not aware of any overt public discussion of it. Independently of whether or not there are examples of public discussion that I’m unaware of, the fact that I’m not aware of any suggests that any discussion that has occurred hasn’t percolated enough.

I’ll refrain from giving specific examples that I see as causes for concern, on account of political sensitivity. The effective altruist community is divided into factions, and Politics is the Mind-Killer. I believe that there are examples of each faction irrationally overestimating the value of its own activities and/or irrationally underestimating the value of other factions’ activities, and I believe that in each case, motivated reasoning of the above type may play a role.

I request that commenters not discuss particular instances in which they believe that this has occurred, or is occurring, as I think that such discussion would reduce collaboration between different factions of the effective altruist community. 

The effective altruist movement is in early stages, and it’s important to arrive at accurate conclusions about effective philanthropy as fast as possible. At this stage in time, it may be that the biggest contribution that members of the community can make is to engender and engage in an honest and unbiased discussion of how best to make the world a better place.

I don't have a very definite proposal for how this can be accomplished. I welcome any suggestions. For now, I would encourage effective altruist types to take pride in being self-skeptical when it comes to favorable assessments of their potential impact relative to other effective altruist types, or relative to people outside of the effective altruist community.

Acknowledgements: Thanks to Vipul Naik and Nick Beckstead for feedback on an earlier draft of this post.

Note: I formerly worked as a research analyst at GiveWell. All views here are my own.

I cross-posted this article to http://www.effective-altruism.com/


Seems kind of obvious? We've got plenty of people running around saying "Perhaps you overestimate your importance".

I agree that people say such things all the time. What I haven't seen very much is

  • People questioning whether they themselves are subject to this influence (as opposed to questioning whether other people are subject to this influence).
  • Meta-level discussion about how to counteract this influence.

On the latter point, I find certain principles from your How To Actually Change Your Mind sequence to be highly relevant and significant, but I don't remember having seen explicit application of these principles to "assessing the relative social impact of different effective altruism interventions" in the public domain.

I wrote a post which is related, except that I thought different people might be more or less influenced by different biases and didn't identify one in particular as the most relevant.

Yes, I vaguely remember having seen this — good point.

It's obvious to people in the rationality community (I'd agree with Jonah that even here, we don't do a good enough job of actually instilling habits).

But the Effective Altruism community is in the process of going... not mainstream, exactly, but at least drawing from different pools of people than the rationality community. Some of those people are coming from places like felicifia.org, which has a fair emphasis on intellectual rigor, but a lot of those people are coming from circles where a lot of ideas we take for granted aren't really common. Over the past few months, there's been an influx of people into the facebook group discussions, and I've become a lot more concerned about the level of careful thinking.

I've been noticing similar issues promoting the NYC Less Wrong group outside of LW-itself lately. On LW there's a shared culture of taking responsibility for your own intellectual rigor, or at the very least, acknowledging when you haven't researched an idea enough to be confident in it. Figuring out how to instill this in newcomers seems pretty important.

Well, of course everyone's going to be thinking they're doing the most effective thing, because they chose to do it based on the fact that it seemed like it'd be the most effective. (Hopefully.)

Different people have different comparative advantages.

Consider the question "is it better to become a doctor, or a banker?" that's been raised by 80K. Someone who's naturally suited to being a doctor will be motivated to believe that becoming a doctor is higher value, and somebody who's naturally suited to being a banker will be motivated to believe that becoming a banker is higher value.

Thinking about one's comparative advantage can be a good heuristic for figuring out how to do the most good. The trouble arises when, e.g., people who are especially good at being doctors (resp. bankers) are motivated to believe that their activity is of higher value, and then try to convince others (who don't have the same comparative advantage) to adopt the same profession because of this.

Also, people can be confused as to what their comparative advantage actually is, and so be in a field that's suboptimal for themselves, and try to get other people to go into the field for the reasons described above.

Absent specific, non-hypothetical examples and empirical evidence, I find this question hard to think or reason about. I have not noticed this problem myself, so I cannot recollect any such examples from my own experience.

I note this as another example of the "Politics is the Mind-Killer" is the Mind-Killer meta-problem. The point of the "Politics is the Mind-Killer" essay (and a correct one) is that we should avoid using tribal-loyalty triggering examples when discussing issues such as mathematics, cognitive biases, and logic that are not fundamentally about issues that touch on tribal identity. Triggering tribal loyalty unnecessarily is bad pedagogy.

However, "Politics is the Mind-Killer" is not a general excuse for avoiding discussion of politics or other matters that touch on tribal or personal identity when those matters are exactly the subject at hand. If rationality cannot come to epistemically correct and instrumentally useful results despite the blinders of tribal loyalty and personal identity, it is weak, impotent, and irrelevant.

The claim of this post is that people have cognitive biases based on personal identity that cause them to reach incorrect conclusions about the relative efficacy of different altruistic actions. If this group is truly rational, then we should be able to calmly discuss the actual issues and resolve them factually or at least work them down to the point where we realize some of us have different fundamental values. For instance, I would not expect us to resolve the question of whether to value future people equally with currently living people, but I would expect us to be able to make plausible estimates as to the number of QALYs (quality adjusted life years) per dollar of different interventions, or at the very least to figure out what information is missing and needs to be collected to answer the question. If we can't do that, if we can't even talk about that, then I have to question what the point of the entire LessWrong project actually is.
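
As a purely illustrative sketch of what such an estimate might look like, here is a minimal back-of-envelope comparison. Every number and intervention below is a made-up placeholder rather than a real cost-effectiveness figure; the only point is to show the shape of a QALYs-per-dollar calculation.

    # Purely illustrative; all figures are made-up placeholders, not real
    # cost-effectiveness estimates for any actual intervention.

    def qalys_per_dollar(cost_per_treatment, qalys_per_treatment):
        """QALYs purchased per dollar spent, given cost and effect per treatment."""
        return qalys_per_treatment / cost_per_treatment

    # Hypothetical intervention A: cheap, with a modest benefit per treatment.
    a = qalys_per_dollar(cost_per_treatment=50.0, qalys_per_treatment=0.2)

    # Hypothetical intervention B: expensive, with a larger benefit per treatment.
    b = qalys_per_dollar(cost_per_treatment=5000.0, qalys_per_treatment=5.0)

    print(f"A: {a:.4f} QALYs per dollar, B: {b:.4f} QALYs per dollar")
    # With these placeholder numbers, A yields 0.0040 and B yields 0.0010,
    # so A looks roughly four times as cost-effective. Missing information
    # (e.g. better cost or effect data) would change these inputs directly.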

Jonah has recently been attempting to persuade Eliezer that Eliezer's comparative advantage is not the FAI research that he is currently doing but instead doing (more) evangelism. Now we have a post explaining how status-signalling-related motivated cognition can cause people to overestimate the value of the altruistic efforts that they happen to personally have chosen. This is almost certainly true: typical human biases work like that in all similar areas, so it would be startling to find that an activity so heavily evolutionarily entangled with signalling motives was somehow immune! I feel it is important, however, to at least make passing acknowledgement of the fact that this exhortation about motivated cognition is itself subject to motive.

Jonah himself acknowledges that people are more likely to suggest motivated cognition as something that the other guy might be suffering from than to apply it to themselves. While in this case there is no overt claim like "... and therefore you should believe the guy I was arguing with is biased and so agree with me instead", and I don't believe Jonah intends anything so crude, the recent context does change the meaning of any given post; at the very least, the context and expected social influence of a post influence how I personally evaluate contributions that I encounter, and I do not currently consider that reading habit a bug.

To be clear the pattern "significant argument --(short time)--> post by one participant which points out a bias that the other participant may have" isn't (always) a cause to reject the post. This one isn't particularly objectionable (a tad obvious but that's ok in discussion). Nevertheless I suggest that for the purpose of making the actual explicit point without distraction it may usually be best to keep such posts in draft form for a couple of weeks and post them later when the context loses relevance. Either that or include a lampshade or disclaimer regarding the relevance to the existing conversation. There is something about acting oblivious that invites scepticism.

  • In writing my post, I had a number of different examples in the back of my mind.
  • Even if I don't think that MIRI's current Friendly AI research is of high value, I believe that there are instances in which people have undervalued Eliezer's holistic output for the reason that I describe in my post.
  • There's a broader context that my post falls into: note that I've made 11 substantive posts over the past 2.5 weeks, about subjects ranging from GiveWell's work on climate change and meta-research, to effective philanthropy in general, to epistemology.
  • You may be right that I should be spacing my posts out in a different way, temporally.

I endorse the lampshade approach significantly more than the delay approach.

More generally, I endorse stating explicitly whatever motivational or cognitive biases may nonobviously be influencing my posting whenever doing so isn't a significant fraction of the effort involved in the post.

For example, right now I suspect I'm being motivated by interpreting wedrifid's comment as a relatively sophisticated way of taking sides in the Jonah/Eliezer discussion he references, and because power struggles make me anxious my instinct is to "go meta" and abstract this issue further away from that discussion.

In retrospect, that isn't really an example; working out that motive and stating it explicitly was a significant fraction of the effort involved in this comment.

For now, I would encourage effective altruist types to take pride in being self-skeptical when it comes to favorable assessments of their potential impact relative to other effective altruist types, or relative to people outside of the effective altruist community.

Yes, I find it remarkable how EAs tend to think their work is obviously vastly more important than that of "non-EAs" (as if such a thing were even well defined). There's not a lot new under the sun, and like most movements, EA is largely a recycling and recombination of things other people have been doing since the dawn of civilization. It may be a good combination, but little in EA is really unique to EA.

All of that said, I think a big reason people think their own work dominates that of others is because they have different values from other people. It's perfectly possible for lots of people to be doing lots of things that are each optimal relative to their own values. You might (perhaps correctly) point out that most EAs have values more similar to each other than my values are to theirs, so my point may apply less broadly than I suggested.

All of that said, I think a big reason people think their own work dominates that of others is because they have different values from other people.

The situation is blurred by the fact that people are motivated to believe that the work that they're doing fulfills their values. For an extreme but vivid case, consider participants in a genocide. It's very hard to imagine that massacring a population reflects their fundamental values, but my impression is that such people often believe that they're doing the "right" thing in some moral sense.

I worry that this might have (much more mild!) incarnations within the EA community.

I’m not aware of any overt public discussion of [this issue].

Katja's recent post?

ETA: It's not the same issue; I didn't read either of you properly. But perhaps the same ballpark.

Can you elaborate? I don't immediately see the connection with effective altruists being motivated to believe that the activities that they engage in are of higher relative value than they actually are.

This feeds back into the earlier discussion about the flexibility of donations vs careers. Hot money donors who switch to apparently better alternatives face less in the way of costs to encourage rationalization. They still have some pressures along these lines, since they don't want to say their previous donations were foolish, and would probably like to be able to point to some new evidence or justification for the switch, but the problem would certainly seem to be smaller.

This is a very good point, which I had not considered. As you know, I've generally erred in the direction of updating too much rather than too little, and so this issue hasn't been salient to me. It's something for me to brood on.

As I said in response to your comment on my earlier post, I think that this problem can partially be mitigated by developing transferable skills and connections that can be applied in a wide variety of contexts.

This seems potentially connected to Goodhart's law.


Good post.

One obvious problem with trying to overcome bias by means of "self-skepticism" is that many of the biases we try to overcome also shape our skeptical attitudes. Here, as elsewhere, adopting the outside view is probably more effective than attempting to find flaws in one's thinking "from the inside".

A possible application for the case at hand is this. Consider the reasons why you chose to work on a particular cause, instead of the many other causes you could have worked for. Are those reasons the same ones that you currently regard as valid? If not, you should increase your credence in the hypothesis that you might be working on the wrong cause, relative to your present beliefs and values, since you might have reached this view as a result of motivated cognition.

I will give an example from my own personal life. I chose to become a vegetarian many years ago, out of concern for the animals that were suffering (in expectation) as a result of my dietary choices. However, as I read and reflected more on the issue, I came to realize that the indirect effects on other sentient beings were much more relevant than the direct effects on the animals themselves. In particular, I thought that the effects of spreading concern for all sentience by abstaining from eating animals might shape the choices made by our descendants, who will have the power to create astronomical amounts of suffering in the Universe. However, this should make me suspicious. Was I really so lucky that my new reasons just so happened to vindicate the diet to which my old reasons had caused me to become deeply attached? Or is this instead the result of motivated cognition on my part? I am still a vegetarian, but because of arguments of this sort I am less convinced that this is what morality requires of me.

[This comment is no longer endorsed by its author]

The effective altruist movement is in early stages, and it’s important to arrive at accurate conclusions about effective philanthropy as fast as possible. At this stage in time, it may be that the biggest contribution that members of the community can make is to engender and engage in an honest and unbiased discussion of how best to make the world a better place.

This philosophy strikes me as remarkably compatible with that of Leverage Research. Are you in contact with those folks at all?

[This comment is no longer endorsed by its author]