Introduction

Jane is an effective altruist: she researches, donates, and volunteers in the highest-impact ways she can find. Jane has been intending to write an effective altruism book for over a year, but hasn't managed to overcome the akrasia. Then Jane meets fellow effective altruist Jessica, whom she is keen to impress. She starts writing with palpable enthusiasm.

In one possible world:

Jane feels guilty that she has an impure motive for writing the book.

In another:

Jane is glad to leverage the motivation to impress Jessica to help her do good.

 

In the past few months, I've heard multiple people mention their use of less noble motivations in order to get valuable things done. It appears to be a common experience among rationalists and EAs, myself included. The way I'm using the terms, a reason for performing some action is the ostensible goal you wish to accomplish, e.g. the goal of reducing suffering. A motivator for that action is an associated reward which makes performing the action seem enticing - “yummy” - e.g. impressing your friends. I use the less common term ‘motivator’ to distinguish the specific motivations I'm discussing from the more general meaning of ‘motivation’.

Many of our goals are multiple steps removed from the actions necessary to achieve them, particularly the broad-scale altruistic ones. The goals are large, abstract, long-term, ill-specified, difficult to see progress on, and unintuitively connected to the action required. ‘I wrote a LessWrong post, is the world more rational yet?’ In contrast, motivators are tangible, immediate, and typically tickle the brain’s reward centres right in the sweet spot. Social approval, enjoyment of the action, money, skills gained, and others all serve as imminent rewards whose immediate anticipation drives us. ‘Woohoo, 77 upvotes!’ Unsurprisingly, we find ourselves turning to these immediate rewards if we want to accomplish something.

Note that a reason - the ostensible goal - can still be the root cause of the desire to perform an altruistic action. For one thing, if I didn't truly care about all the things I say I care about doing, and really only wanted social approval, then why join this particular group of people out of all others? 

 

Embrace or Disgrace?

If motivators are the mechanism by which I'm getting things done, then I want to know exactly what’s going on, what benefits I’m getting, and what costs I'm paying. To date, I have seen people respond in two ways after recognising their motivators for altruistic action: i) by enthusiastically embracing the ability of motivators to spur good action, or ii) by feeling abject shame and guilt at not acting for the right reasons. 

The second response follows from society’s conventional attitude towards anything it deems to be an impure motive: absolute and unrestrained damnation. Few things are considered more evil than doing public good for personal gain. 

We have a visceral reaction to the idea that anyone would make very much money helping other people. Interesting that we don't have a visceral reaction to the notion that people would make a lot of money not helping other people. You know, if you want to make 50 million dollars selling violent video games to kids, go for it. We'll put you on the cover of Wired magazine. But you want to make half a million dollars trying to cure kids of malaria, and you're considered a parasite yourself.
 -- Dan Pallotta on The Way We Think About Charity is Dead Wrong

Anyone who has internalised this cannot acknowledge their motivators without admitting to themselves that they are a bad person. And to the extent that we expect others to have internalised this, we are reluctant to disclose our motivators for fear of censure. Even if you are not morally condemned, acting solely for the benefit of your beneficiary is always considered more praiseworthy than acting for the benefit of the beneficiary as well as for your own gain. So much so that people often consider a charitable act praiseworthy only if the benefactor had no personal gain. Hence the perennially popular question: “Is true altruism ever possible if you’re always getting something out of it?”

In a moment I will assert that society is foolish in this regard, but society’s foolishness typically has an explanation - often that it was an attitude which was once adaptive but is no longer so, or was adaptive in a different context and didn't transfer. Given the vehemence towards impure motives, there might be something to these explanations.

If I consider immediate personal relations, then I find that I would much prefer to be friends with someone who wants to be my friend just because they like me for me rather than with someone who wants to be my friend, but has admitted that she finds being my friend a lot easier because she likes using my swimming pool on these hot summer days. The former friend’s friendship is more unconditional and trustworthy - the latter might desert me as soon as winter comes. That motivators introduce an amount of contingency to one’s allegiances is a point worth noticing.

In contrast, the first response – enthusiastic embrace of motivators – is the consequentialist liberation from being overly concerned with the motives of the actor. What matters are the consequences! And if motivators mean more good things get done, well, then they get the Official Consequentialist Stamp of Approval. I don’t care if you cured malaria solely for profit, I just care that you cured malaria. But this point requires little pushing in these parts. 

It might even be that not only do motivators provide a stronger drive for altruistic action, but they are in fact the only way to get ourselves to act. Even if some parts of our minds can take on long-term, broad-scale, intangible goals, other parts just don’t speak that language. The rider might be able to engage in long term planning, but if you want the elephant to budge, you've got to produce some carrots now.

As an interesting aside, the use of motivators to get things done may be more necessary for effective altruists than for the general altruistic population. Warm fuzzies are great motivators. Directly seeing those you are helping at the local shelter, or looking at a photo of the smiling child you sent money to, might provide a reward immediate enough to require no other. Whereas when you’re donating to cure schistosomiasis or reduce x-risk, the benefit is harder to feel, and you've got to get a thumbs up from your friends instead to feel good.

 

Caveats

While the above may be enough reason to endorse the use of motivators to get things done, there is reason for caution.

Motivated Cognition

Foremost, motivators induce motivated cognition. When selecting altruistic projects, one’s choice becomes distorted from what would actually have the highest impact to what has the strongest motivator, typically whatever will impress people the most. Furthermore, if motivators are accepted, then it is legitimate to include them in the equation. If one project has a greater motivator and is more likely to get done because of it, then even if it prima facie isn't the highest impact, that likelihood of getting it done is a strong factor in its favour. But if this is admissible reasoning, it becomes very easy to abuse: “Well, sure, this isn’t the highest-impact thing I could do, but volunteering at the local shelter is something I feel motivated to do and actually will do, so therefore it’s the thing I should do.”

Pretending to Try

These points have already been identified in the discussion of the ‘pretending to try’ phenomenon.

A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. 
-- Ben Kuhn on Pretending to Try

These random factors will be whatever happens to be the motivators for a person. Nevertheless, it is better that people do something good rather than nothing. Katja Grace argues that though this is what is going on, it is both inevitable and actually positive. I am tempted to quote her entire post, so I suggest that you read all of it:

‘Really trying’: directing all of your effort toward actions that you believe have the highest expected value in terms of the relevant goals [not making decisions based on motivators].

‘Pretending to try‘: choosing actions with the intention of giving observers the impression that you are trying [making decisions based on motivators].

‘Pretending to really try‘: choosing actions with the intention of giving observers the impression that you are trying, where the observers’ standards for identifying ‘trying’ are geared toward a ‘really trying’ model. e.g. they ask whether you are really putting in effort, and whether you are doing what should have highest expected value from your point of view.

-- Katja Grace on In Praise of Pretending to Really Try

 

The proposed solution is that we can leverage the power of motivators and still have people perform the highest-impact actions they can, if we can create community norms whereby the amount of social praise you get is proportional to the strength of your case for the impact of your action.

I like this, but it hasn't happened yet and I suspect there are barriers to making it work completely. Even if your action is selected solely for impact - truly trying - the reasoning behind your selection might be complicated and require time to follow and verify. A few close friends might check your plans and approve wholeheartedly when you pretend to really try, but the broader community won’t hear out all the details specific to your situation, instead continuing to praise only actions which fit the template of good effective altruist behaviour, e.g. taking a high paying job in order to earn to give.

Possibly the best we can do for community norms is to find and spread the best simple principles for deciding whether someone is really trying. The principles involved are unlikely to be adequately nuanced to identify the true optimum all of the time, but I think there’s room to improve over what we've fallen into so far.  To date, I see praise being given primarily for actions which are distinctive EA behaviour and signal belonging to the tribe. Conventional altruistic actions like volunteering in the third world aren't distinctly EA and I don’t expect them to get much praise, even if such actions were the highest impact for a particular person. More likely, a person doing something which doesn't fit the EA mould will be interrogated for failure to conform to what EAs are supposed to do.

 

Neglected Tasks

We can concretely see the impact a reliance on motivators has by noticing the many neglected tasks which result. High value actions which are not prestigious go undone because all they've got going for them is their pure altruistic impact. 

The Centre for Effective Altruism has had a surprising amount of trouble finding people to do whatever important work needs doing when it isn't research or communications. These things include: organising insurance, bookkeeping and making payments, maintaining our databases, making deliveries, ordering equipment, finding and managing places for people to live, random requests (e.g. cutting keys), receiving and processing mail, cleaning, organising food and office events, etc.

It's a bit of a shame that people seem willing to do whatever is most important... except whenever it isn't inherently fun or prestigious!
-- Robert Wiblin

 

. . . I've had 200 volunteers offer to do work for Singularity Institute. Many have claimed they would do "anything" or "whatever helped the most". SEO is clearly the most valuable work. Unfortunately, it's something "so mundane", that anybody could do it... therefore, 0 out of 200 volunteers are currently working on it. This is even after I've personally asked over 100 people to help with it.
-- Louie Helm

CFAR have made similar comments.

This is a serious issue for a community that claims to take maximising impact seriously.

 

Suggestions

Motivators are sufficiently powerful, if not unavoidable, that we should allow ourselves to work with them despite their dangers. The question becomes how to use them while minimising their pernicious effects. The ‘pretending to try’ discussion concerns the community collectively, but I am interested in how individuals should approach their own motivators. 

I have a few ideas so far. The aim of these techniques is to limit the influence motivators have on our selection of altruistic projects, even if we allow or welcome them once we're onto implementing our plans.

Awareness

When choosing a course of action, pay attention to what your motivators might be. Ask ‘what are the personal benefits I get out of this?’, ‘how much are these influencing my decision?’ and ‘if this action A lacked the consequence of personal benefit B, would I still do it?’

Self-Honesty

Attempting genuine awareness requires a high degree of self-honesty. You have to be willing to admit that in all likelihood you already have motivators and that they influence your decisions, and to acknowledge that even when you are trying to do good for others, you are interested in your own gain. If this admission is hard, I suggest remembering that this is how people work: everyone else is the same, and the only difference is that you’re being honest with yourself.

Choose, then Motivate

One strategy is to fully acknowledge to yourself that you want some immediate personal gain from your actions, but to delay thinking about that personal gain until after you have decided what to do based solely upon expected impact. Once you've identified the highest-impact action, brainstorm ways to find motivators. This might involve nothing more than developing a good framing for your actions which makes you look very noble indeed. And most of the time that should be doable if you genuinely have a good reason.

Optimise Someone Else’s Altruism

One more way to limit the influence motivators have over your decision making is to pretend that you are deciding what someone else – who is in exactly your situation with exactly your talents – should do to maximise their impact. You are advising this other person, whose interests you don’t care for because they are not you, on how they might accomplish the most towards their goals.

Really Caring

Probably the best way to ensure that you really try is to ensure that you really care. If you focus on your reason for action - the outcome it is really about - then petty things like other people's praise will feel far less important than actually accomplishing your true goal.

Bring this feeling of caring to the fore often. Are you trying to cure malaria? Keep a card with various malaria statistics on your desk, read it often, and remind yourself that you want to stop those deaths, which are happening right now. Care about the Far Future? Imagine your own fun-theoretic utopia, visualise it, and think about how good it would be to get there.


Conclusion

This is one attempt at getting a handle on motivators, and I am unsure about much of it. There will be other angles to view this from, things I haven’t thought of, and mistakes in some of my assumptions. Plus, variation in human minds is astounding. Though many will experience motivation the way that I do, others will find what I'm reporting very alien. From them I would especially like to hear.

What I am sure about is that if we want to live up to our principles of doing what is truly most effective, we cannot ignore the factors driving our behaviour. Here's hoping that we can do what we really need to do and feel maximally good about it too.

 

 

Acknowledgements: I owe an enormous thank you to tkadlubo and shokwave for thoroughly editing this post. 


Comments

it's a bit of a shame that people seem willing to do whatever is most important... except whenever it isn't inherently fun or prestigious!

Back in 2009, SIAI needed spam filtering. I took it on and manually filtered spam and also installed the Akismet spam filter, even though my skill level would have allowed me to do more sophisticated tasks. But that's what was needed.

I hereby claim retroactive social status for not insisting on only doing high status tasks :-)

I agree with this claim, though I may have no right even to speak of it.

The second response follows from society’s conventional attitude towards anything it deems to be an impure motive: absolute and unrestrained damnation.

If we teach people that realistic motivation is evil, only evil people will have realistic motivation.

This is actually a very poisonous meme for the society, because it convinces people that "win/win" situations do not exist, because as long as you "win", it magically makes the other person "not win", regardless of all other evidence.

This is a really excellent post!

I agree the revulsion against ulterior motives for altruism is somewhat detrimental but also somewhat rational. Using ulterior motives seems often like a good idea, but genuine caring can be good to cultivate too because it may be more robust against your pursuits changing when the next big thing comes along.

Two examples come to mind of people not doing things because of insufficient recognition:

  • Wikipedia contributions: People sometimes write blog posts or essays almost exclusively summarizing factual content. Such summaries could be added to Wikipedia and presumably would have had much bigger impact there, but one reason people don't contribute to Wikipedia is lack of authorship credit. I've tried to get around this barrier by compiling a list of my Wikipedia contributions and sharing the more important ones on Facebook.
  • Google Grants: I think signing up charities for Google Grants can be a high-impact activity, but I can only do so much of it because it's not the most interesting project, and it doesn't currently have high status in the EA community.

In other areas of life, I've also seen lots of examples where people are reluctant to help others because they won't get enough credit for helping. One way to help address this is to give acknowledgements to others (like you did at the end of your post!). Another thing that sometimes helps me is remembering that "we're all in this together," and picturing my sense of "ownership" as extending over the achievements of the whole group rather than just myself. ("There's no 'i' in 'team'".)

Not wanting to do tedious manual chores can be somewhat sensible if it's not your comparative advantage. It would be better to earn money and hire someone else to do them, unless you'd be doing them in your leisure time or you don't otherwise have high earning/research/outreach potential.

Several of the links in this post point to Google redirects rather than directly to the actual website. Could you fix this please? Thank you!

Thanks! Fixed.

Also two redundant sentences:

I have a few ideas so far. The aim of these techniques is to limit the influence motivators have on our selection of altruistic projects, even if we allow or welcome them once we're onto implementing our plans.

The aim of these techniques is to limit the influence of motivators have when we are deciding which actions to take, even if we allow or welcome then once we’re onto implementing our plans.

I think an important part of why people are distrustful of people who accomplish altruistic ends acting on self-serving motivations is that it's definitely plausible that these other motivations will act against the interest of the altruistic end at some point during the implementation phase.

To use your example, if someone managed to cure malaria and make a million dollars doing it, and the cure was available to everyone or it effectively eradicated the disease from everywhere, that would definitely be creating more net altruistic utility than if someone made a million dollars selling video games (I like video games, but agree that their actual utility for most people's preferences/needs is pretty low compared to curing malaria). I would be less inclined to believe this if the person who cured malaria made their money by keeping the cure secret and charging enough for it that any number of people who needed it were unable to access it, with the loss in net altruism quantified by the number of people who were in this way prevented from alleviating their malaria.

Furthermore, if this hypothetical self-interested malaria curer were also to patent the cure and litigate aggressively (or threaten to) against other cures, or otherwise somehow intentionally prevent other people from producing a cure, and they are effective in doing so, the net utility of coming up with the cure could drop below zero, since they may well have prevented someone else who is more "purely" altruistic from coming up with a cure independently and helping more people than they did.

These are pretty plausible scenarios, exactly because the actions demanded by optimizing the non-altruistic motivators can easily diverge from the actions demanded by optimizing the altruistic end, even if the original intent was supposedly the latter. It's particularly plausible in the case of profit motive, because although it is not always the case that the best way to turn a profit is anti-altruistic, often the most obvious and easy-to-implement ways to do so are, as is the case with the example I gave.

That's not to say we should intrinsically be wary of people who manage to benefit themselves and others simultaneously, nor is it to say that a solution that isn't maximizing altruistic utility can't still be a net good, but the less-than-zero utility case is, I would argue, common enough that it's worth mentioning. People don't solely distrust selfishly-motivated actors for archaic or irrational reasons.


Good points.

It seems to me that people are also skeptical of those who claim to accomplish altruistic ends acting on self-serving motivations because of a common “rule of thumb”:

Benefits to charity are taken directly from the potential benefits of the donor.

Religion may be the main promoter of this belief. For example, the crucifixion of Jesus teaches the lesson that the greatest good came from the greatest sacrifice.

This assumption can only exist if charity payoffs from all (or a portion of) potential actions are unknown. If we can quantify these payoffs, perhaps we can eliminate the core uncertainty that spawned this rule of thumb, and, therefore, encourage optimal allocation of charity resources. That’s a big IF, I know.

I think your discussion about how to deal with the issues you bring up is good, but this also reminds me why I'm a bit annoyed with the whole EA movement taking over LW-dom in general. Basically, it looks to me awfully like a lot of people who have been tricked into pretending to care about something they don't actually care about, and using all their powers of rationality to maintain said pretense.

How can we hack the problem that Robert Wiblin and Louie Helm complain of above? Would it be worth posting an Unglamourous Achievements thread, where we upvote the contributions to making a difference that are the least shiny and most workmanlike - so "I took some stuff to the photocopy shop for CFAR" gets lots of votes but "I wrote a blog post about X" gets fewer?

by feeling abject shame and guilt at not acting for the right reasons.

(raises hand) I'm in this camp. This post was worth reading just to see that I'm not the only one to suffer this form of moral brain damage. It's hard to escape; attempts to argue against guilt are easy to dismiss as motivated reasoning.

ETA: I agonize about such things on a regular basis, and I think I'll be rereading this post a few times to see what comes out of it. You seem to be aiming it mostly at EAs; I'm not one, but I thought you'd like to know it's having an effect outside that audience. I'm glad you wrote it and I hope to see more.

Optimise Someone Else’s Altruism: One more way to limit the influence motivators have over your decision making is to pretend that you are deciding what someone else – who is in exactly your situation with exactly your talents – should do to maximise their impact. You are advising this other person, whose interests you don’t care for because they are not you, on how they might accomplish the most towards their goals.

What if we actually have someone else optimize for us? As in, describe one's situation to friends (who understand the basic idea of what you're trying to do) and ask them to tell you which things they think would have the highest impact. Outsourcing the reasoning to someone less biased, or at least differently-biased. Maybe talking to a few different people and then picking an average yourself.

An obvious problem with this is that a third party (or even a second party) might be unwilling to suggest courses of action that they know would be costly to you, as it might be perceived as making a demand or setting an expectation which they do not wish to do.

This is a good point. I think how much of an issue it is would depend on what kind of relationship you have with your advisors. (I know that in general, some of my friends are a lot more willing and able to substitute something else for the typical social norms.)

I have upvoted your post in the hope that it will contribute to your positive feelings about having written it, so that you will continue to write intelligent and thoughtful posts in future.

You are very kind, good sir.

Do me one more favour - share a thought you have in response to something I wrote. There is much to still be said, but there has been no discussion.

What parts of this thought-space are you still most confused about?

I've found it interesting to ask the question "if I could have any goals, then what goals would I have?" - gut-checking this with various forms of button test.

create community norms whereby the amount of social praise you get is proportional to the strength of your case for the impact of your action.

Agreed. We need more thinking/work on this. "Thumbs up", for example, don't seem to cut it, because some things are so easy to like, whether or not they actually have real impact, that likes are not at all proportional to merit.