Summary: The psychology of charitable giving offers three pieces of advice to those who want to give to charity and to the charities that want to receive it: enjoy the happiness that giving brings, commit future income, and realize that asking for time increases the odds of getting money.

One Saturday morning in 2009, an unknown couple walked into a diner, ate their breakfast, and paid their tab. They also paid the tab for some strangers at another table. 

And for the next five hours, dozens of customers got into the joy of giving and paid the favor forward.

This may sound like a movie, but it really happened.

But was it a fluke? Is the much-discussed link between happiness and charity real, or is it one of the 50 Great Myths of Popular Psychology invented to sell books that compete with The Secret?

Several studies suggest that giving does bring happiness. One study found that asking people to commit random acts of kindness can increase their happiness for weeks.1 And at the neurological level, giving money to charity activates the reward centers of the brain, the same ones activated by everything from cocaine to great art to an attractive face.2

Another study randomly assigned participants to spend money either on themselves or on others. As predicted, those who spent money helping others were happier at the end of the day.3

Other studies confirm that just as giving brings happiness, happiness brings giving. A 1972 study showed that people are more likely to help others if they have recently been put in a good mood by receiving a cookie or finding a dime left in a payphone.4 People are also more likely to help after they read something pleasant,5 or when they are made to feel competent at something.6

In fact, deriving happiness from giving may be a human universal.7 Data from 136 countries shows that spending money to help others is correlated with happiness.8

But correlation does not imply causation. To test for causation, researchers randomly assigned participants from two very different cultures (Canada and Uganda) to write about a time when they had spent money on themselves (personal spending) or on others (prosocial spending). Participants reported their happiness levels before and after the writing exercise. As predicted, those who wrote (and thought) about a time when they had engaged in prosocial spending saw greater increases in happiness than those who wrote about a time when they had spent money on themselves.

So does happiness run in a circular motion?

This, too, has been tested. In one study,9 researchers asked each subject to describe the last time they spent either $20 or $100 on themselves or on someone else. Next, researchers had each participant report their level of happiness, and then predict which future spending behavior ($5 or $20, on themselves or others) would make them happiest.

Subjects assigned to recall prosocial spending reported being happier than those assigned to recall personal spending. Moreover, this reported happiness predicted the future spending choice, but neither the purchase amount nor the purchasing target (oneself or others) did. So happiness and giving do seem to reinforce each other.

So, should charities remind people that donating will make them happy?

This, alas, has not been tested. But for now we might guess that just as people generally do things they believe will make them happier, they will probably give more if persuaded by the (ample) evidence that generosity brings happiness.

Lessons for optimal philanthropists: Read the studies showing that giving brings happiness. (Check the footnotes below.) Pick out an optimal charity in advance, notice when you're happy, and decide to give them money right then.

Lessons for optimal charities: Teach your donors how to be happy. Remind them that generosity begets happiness.

 

Precommitment

Ulysses did not get past the beautiful but dangerous Sirens with sheer willpower. Rather, he knew his weakness and precommitted to sail past the Sirens: he had himself tied to his ship's mast.

We all know the power of precommitment. Though many gym memberships remain unused, people do spend more time at the gym if they purchase a gym membership than if they pay per visit.10 Can precommitment work for giving to charity, too?

Yes, it can. In one study, donors were asked to increase their monthly contributions either immediately or two months in the future. One year later, the increase in donations was 32% higher for the group asked to precommit, and donor cancellation rates were identical (and very low) in both groups.11

Does it matter whether a charitable person precommits to donate money they already have vs. money they don't have yet?

Apparently it does. In one experiment, participants were entered into a raffle, with a chance to win $25. Participants had to decide in advance whether to donate the money to United Way or receive it in cash. Nearly 40% of the participants opted to precommit the potential winnings to charity. In another experiment, researchers asked subjects to imagine they had just won the lottery. Then, some were asked to donate some of their 'winnings' immediately, while others were asked to donate their 'winnings' in two months. Surprisingly, those asked to donate current 'winnings' later actually gave less.12

This suggests that pledging to donate current earnings later may be less motivating than donating current earnings now, while pledging to donate future earnings later should work well. (Of course, money is fungible. The donated $100 might as well be from today's paycheck as from the next one. But charities should frame requests for precommitment in terms of future earnings, like Giving What We Can does.)

Precommitment seems to work best when it creates psychological distance between donors and their money.13 The United Way allows donors to give via paycheck donations; because donors never feel like they have that money, they never face the pain of parting with it.

The same principle may explain the success of affinity credit cards. Affinity cards allow consumers to precommit their reward points to benefit a chosen charity. Donors never experience the pain of parting with other things that reward points could otherwise purchase (flights, etc.). As an aspiring optimal philanthropist, I use an affinity card that gives 1%-10% cash back to the Singularity Institute (plus $50 per new card signup). As a lazy optimal philanthropist, I'm glad it took me only four minutes to sign up.

Lessons for optimal philanthropists: Precommit. Use paycheck deduction and affinity cards to give money. Pledge future earnings.

Lessons for optimal charities: Ask donors to precommit to donate future earnings. Offer an affinity card. Offer paycheck deduction donations if possible.

 

Time vs. Money

In one creative study, researchers asked subjects to read some information about a fictional non-profit, the "American Lung Cancer Association." Subjects were then told that this organization was holding a fundraising event. Half the subjects were asked how much time they would like to donate (a time-ask). The other subjects were not asked about volunteering their time. Next, both groups were asked how much money they would like to donate (a money-ask). Those who first got a time-ask gave more money when asked for money ($36.44 vs. $24.46). Asking donors for time resulted in them giving more money!

Researchers also conducted a field experiment by partnering with HopeLab, a Bay Area charity that aims to improve the quality of life for children with chronic illnesses. A researcher representing HopeLab visited college campuses and waited outside a classroom full of students. When the students emerged, the researcher asked them individually whether they were willing to take part in a 30-minute study in exchange for $10.

Those who agreed read an introduction to HopeLab. Then, a third of them were asked how much they would like to give time to HopeLab, another third were asked how much they would like to donate to HopeLab, and a control group was asked no questions. Finally, all groups were asked their impressions of HopeLab, along with 20 minutes of filler questions.

When exiting the study, participants encountered the researcher (representing HopeLab) next to a box labeled 'HopeLab Donations.' The researcher paid each participant with ten $1 bills and gave them a flyer with details about volunteering for HopeLab. Researchers tracked the amount donated and which participants volunteered during the next month.

Subjects in the time-ask-first condition were the most generous, donating $5.85 of their $10, compared to $4.42 for those in the no-ask condition and $3.07 for those in the money-ask-first condition. Subjects in the time-ask-first condition also volunteered the most (7% gave time, averaging 6.5 hours), compared to those in the money-ask-first condition and the no-ask condition (1.6% each).14

Why do we see this 'Time-Ask Effect'? Perhaps it is because thinking about spending time on something activates a mindset of emotional meaning and satisfaction, allowing a donor to connect emotionally with a charity, whereas thinking about spending money activates a purely instrumental mindset.15 Whatever the reason, asking for time before money may result in more of both.

Lessons for optimal philanthropists: Volunteer your time to an optimal charity. You may soon find yourself giving time and money.

Lessons for optimal charities: Ask supporters for time before you ask them for money.

 

Multiplying Your Impact

Optimal philanthropy is a new but obvious idea. Spreading the meme at this early stage is a fairly optimal act in itself. 

Giving to optimal charities instead of average charities can multiply one person's impact 10, 100, or maybe 1000 times. Now multiply that change in impact by a hundred, thousand, or million people who have been persuaded by the simple math and equipped with the psychology of giving.16
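
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a figure from the studies above.

```python
# Hypothetical numbers chosen only to illustrate the multiplication.
impact_multiplier = 100      # assumed: an optimal charity does 100x the good per dollar
people_persuaded = 10_000    # assumed: donors persuaded to switch charities
dollars_per_person = 1_000   # assumed: annual giving per persuaded donor

# Extra good done, in "average-charity dollar equivalents", from switching.
extra_impact = (impact_multiplier - 1) * people_persuaded * dollars_per_person
print(f"{extra_impact:,}")   # 990,000,000
```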

That's a big impact.

So, contact me at OptimalPhilanthropy@gmail.com and precommit some of your time to working with a network of people to spread the meme of optimal philanthropy. :)

Or if you haven't got time for email, sign up for an affinity card.

The world thanks you.

 

 

 

Notes

1 Lyubomirsky et al. (2004).

2 Harbaugh et al. (2007).

3 Dunn et al. (2008).

4 Isen & Levin (1972).

5 Aderman (1972).

6 Harris & Huang (1973); Kazdin & Bryan (1971).

7 On human universals, see Norenzayan & Heine (2005).

8 Aknin et al. (2010).

9 Anik et al. (2010).

10 Della Vigna & Malmendier (2006); Gourville & Soman (1998).

11 Breman (2006).

12 Meyvis et al. (2010).

13 Meyvis et al. (2010). See the work on construal level theory: Trope & Liberman (2003); Liberman et al. (2007).

14 Liu & Aaker (2008).

15 Liu (2010).

16 For overviews, see Oppenheimer & Olivola (2010); Andreoni (2006); Bekkers & Wiepking (2007); Small & Simonsohn (2008); Reed et al. (2007).

  

References

Aderman (1972). Elation, depression, and helping behavior. Journal of Personality and Social Psychology, 24: 91-101.

Aknin, Barrington-Leigh, Dunn, Helliwell, Biswas-Diener, Kemeza, Nyende, Ashton-James, & Norton (2010). Prosocial spending and well-being: cross-cultural evidence for a psychological universal? NBER Working Paper 16415. National Bureau of Economic Research.

Andreoni (2006). Philanthropy. In Kolm & Ythier (eds.), Handbook of the Economics of Giving, Altruism, and Reciprocity, Vol. 2 (pp. 1201-1269). North Holland.

Anik, Aknin, Norton, & Dunn (2010). Feeling good about giving: The benefits (and costs) of self-interested charitable behavior. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 3-14). Psychology Press.

Armstrong, Carpenter, & Hojnacki (2006). Whose deaths matter? Mortality, advocacy, and attention to disease in the mass media. Journal of Health Politics, Policy and Law, 31: 729-772.

Bekkers & Wiepking (2007). Generosity and philanthropy: A literature review.

Breman (2006). Give More Tomorrow: A Field Experiment on Intertemporal Choice in Charitable Giving. Working paper, Stockholm School of Economics.

Della Vigna & Malmendier (2006). Paying not to go to the gym. American Economic Review, 96: 694-719.

Dunn, Aknin, & Norton (2008). Spending money on others promotes happiness. Science, 319: 1687-1688.

Eisensee & Stromberg (2007). News floods, news droughts, and U.S. disaster relief. Quarterly Journal of Economics, 122: 693-728.

Gourville & Soman (1998). Payment depreciation: The behavioral effects of temporally separating payments from consumption. Journal of Consumer Research, 25: 160-174.

Harbaugh, Mayr, & Burghart (2007). Neural responses to taxation and voluntary giving reveal motives for charitable donations. Science, 316: 1622-1625.

Harris & Huang (1973). Helping and the attribution process. Journal of Social Psychology, 90: 291-297.

Isen & Levin (1972). The effect of feeling good on helping: Cookies and kindness. Journal of Personality and Social Psychology, 21: 384-388.

Kazdin & Bryan (1971). Competence and volunteering. Journal of Experimental Social Psychology, 7: 87-97.

Liberman, Trope, & Stephan (2007). Psychological distance. In Kruglanski & Higgins (eds.), Social Psychology: Handbook of Basic Principles, 2nd edition. Guilford Press.

Liu & Aaker (2008). The happiness of giving: The time-ask effect. Journal of Consumer Research, 35: 543-547.

Liu (2010). The benefits of asking for time. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 201-214). Psychology Press.

Lyubomirsky, Tkach, & Sheldon (2004). Pursuing sustained happiness through random acts of kindness and counting one's blessings: Tests of two six-week interventions. Unpublished data, Department of Psychology, University of California, Riverside.

Meyvis, Bennett, & Oppenheimer (2010). Precommitment to charity. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 35-48). Psychology Press.

Norenzayan & Heine (2005). Psychological universals: What are they and how can we know? Psychological Bulletin, 131: 763-784.

Oppenheimer & Olivola, eds. (2010). The Science of Giving: Experimental Approaches to the Study of Charity. Psychology Press.

Reed, Aquino, & Levy (2007). Moral identity and judgments of charitable behaviors. Journal of Marketing, 71: 178-193.

Slovic (2007). 'If I look at the mass I will never act': Psychic numbing and genocide. Judgment and Decision Making, 2: 79-95.

Small & Simonsohn (2008). Friends of victims: Personal experience and prosocial behavior. Journal of Consumer Research, 35: 532-542.

Trope & Liberman (2003). Temporal construal. Psychological Review, 110: 403-421.

Comments

Good picture. Together, we can punch the sun!

I'd be hesitant to generalize from normal people's motivations for giving to those of optimal philanthropists.

Do you think advocating optimal philanthropy is likely to yield greater returns than more direct ways to reduce existential risk? I could see it going either way, and it's hard to figure out what calculations to do to find out.

I decided to adopt "Together, we can punch the sun!" as a personal motto even before I scrolled back up and saw the relevant photo.

Now I just need to decide what it's a motto for.

CronoDAS:
It probably has something to do with that unpublished Gurren Lagann crossover fic. ;)

I am also co-authoring a journal article and a popular pamphlet which make the case for x-risk reduction as the most optimal philanthropic venture. :)

Three cheers for sun-punching.

To give you something to argue against, consider the position that "saving the world" spreads because it acts as a superstimulus to do-gooders. There's no credible evidence that aiming at saving the world has any effect on the probability of the world ending. By contrast, "the end is nigh" placard syndrome is well known - and it diverts resources from other potentially-useful tasks.

Giles:
X-risk reduction didn't really act as a superstimulus to me (I had to convince myself). To accept that x-risk reduction is a massive opportunity, I also needed to accept both that x-risk was a massive problem and that I was going to hold a non-mainstream worldview for the foreseeable future. So, there was more bad stuff to think about on this issue than good stuff - it was more ugh field than superstimulus. That's just me though; n=1.
timtyler:
Superstimuli do not have to be positive. Traditional religions spread by invoking eternal damnation. The End of Days groups spread their message by invoking eternal oblivion.

As for holding non-mainstream views, that too is a typical cult phenomenon. Weird beliefs act as markers of group membership. They show which tribe you belong to, so the ingroup can identify you. Normally, the more crazy and weird the beliefs, the harder the signal is to convincingly fake.

Without meaning to doubt your powers of introspection, people don't necessarily have to be aware of being influenced by superstimuli. Sometimes, if the stimulus becomes conscious, the effect is reduced. So, for example, lipstick can be overdone, and often works best at a subliminal level. In the case of The End of Days groups, the superstimulus is pretty obvious, but its effect on any particular individual may not be.

Anyway, you can look to the left and see large positive utility, to the right and see large negative utility - but then you have to draw your own conclusions about why you are seeing those things.
nazgulnarsil:
An economist might say that when you punch something, you get less of it. I want more giant sources of negative entropy for my use :(

I would like to see a thorough analysis of how someone raising funds can use the tricks from Cialdini's Influence to effectively contribute to charity. Even those without funds could use that sort of lesson to contribute meaningfully.

[anonymous]:

Darn you, comment retraction mechanism.

[This comment is no longer endorsed by its author]
jsalvatier:
I think a Donor Advised Fund is what you're looking for. I recently set one up with Fidelity on Carl Shulman's advice.

Why do we see this 'Time-Ask Effect'? Perhaps it is because thinking about spending time on something activates a mindset of emotional meaning and satisfaction, allowing a donor to connect emotionally with a charity, whereas thinking about spending money activates a purely instrumental mindset.15 Whatever the reason, asking for time before money may result in more of both.

I must be more cynical than you. I'd think that if people said "yes", then they've already committed themselves to the organization and so would give money, and/or if they said no, they would be feeling unpleasantly non-altruistic and would give money to assuage their conscience. Did the studies show differences in the money-ask broken down by whether they said yes or no to the time-ask?

Also, is there any data on whether people feel happier only after donating to fuzzy charities like the local animal shelter, or whether they'll also feel happier donating to something very abstract like SIAI?

lukeprog:
Yes. And the data suggesting the emotional hypothesis I gave are many and very detailed. But there's no way I can summarize it in a paragraph. The chapter on this in The Science of Giving is good.
[anonymous]:

About giving making you happy. I don't understand the research. I looked at Dunn's paper, but don't get the claim. They report (p. 5) that the result for participants asked to spend $5 or $20 on others/themselves is [mean = .18, SD = .62] / [mean = -.19, SD = .66], respectively. What's the scale they use or the real distribution? (I can't figure it out from the paper alone.) Isn't this a huge SD? I also looked at Amazon's preview of The Science of Giving, but it includes no numbers whatsoever.

I ask because Dunn and Anik report significant improvements even...

Unnamed:
The Dunn article was published in Science, which means that most of the details are in the supplemental materials. Here's the relevant part: So happiness was measured with 11 items, 1 directly asking about happiness and 10 asking about positive emotions. Each item was rescaled so that the average of all subjects on that item was 0 and the standard deviation was 1. Then the 11 items were averaged together. Those who were instructed to give to others scored .37 points higher on that composite happiness measure at the end of the day than the spend on self group, controlling for scores on that composite happiness measure at the beginning of the day. Since the SD of that composite measure was about .64, that means that they were about .6 SD's happier, which is generally considered a "medium" effect size.
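
(For readers who want to see the arithmetic, here is a minimal sketch of the standardize-then-average procedure and the effect-size calculation described above. The data are made up; only the .37 and .64 figures come from the comment.)

```python
# Illustrative sketch: z-score each happiness item, average into a composite,
# then express a group difference in standard-deviation units.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_items = 100, 11

# Made-up raw happiness ratings on 1-7 scales (purely illustrative).
raw = rng.integers(1, 8, size=(n_subjects, n_items)).astype(float)

# Rescale each item to mean 0 and SD 1 across subjects, then average the 11 items.
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)
composite = z.mean(axis=1)   # composite SD is below 1 unless items correlate perfectly

# A 0.37-point group difference on a composite with SD ~0.64 is ~0.6 SDs.
print(round(0.37 / 0.64, 2))  # 0.58
```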
taryneast:
I'd count myself as non-neurotypical (and just one data point, of course) but... I agree that RAOK are short-lived - but that short-lived time is fun enough to keep me doing it every so often. I think it also helps to see that kind of thing as a sort of game. Making it fun makes me happy when I do it more frequently (though admittedly not very often).

As to giving to charities. I don't have a regular charity donation because that is boring. I do, however, randomly give a year's worth of donation to charities that strike my fancy (Sea Shepherds, SIAI, Methuselah Foundation and more). Perhaps non-optimal from their perspective, but it increases my happiness. Perhaps I'm aiming more at warm-fuzzies than utilons, but it works for me. One of the other commenters speaks of how to combine this effectively - i.e. monthly setting aside the cash in a "charity account" then being able to donate from this at will - which sounds like a good strategy for keeping it more fun, while still maintaining your pre-committed optimal give-rate.

As to giving in general. I realised a couple of years back that I was a bit of a tight-fist... the kind of person that never bought a round of drinks - and I have been actively working to change that behaviour pattern (e.g. by shouting lunch for my friends every so often, buying a plate of chips at a meetup etc). Even though I haven't changed very much yet, I have actually noticed a marked increase in happiness - albeit fleeting... there's a nice warm-fuzzy you get from spreading largesse.

...but the long-term effects are that I can now consider myself not to be so tight-fisted. My definition of myself is changing to one for which I have far more respect. That alone is worth the effort (for me).

What is the optimal time to donate to a charity?

As soon as funds become available? At set time intervals? After saving, accumulating interest, and waiting for a potentially larger than normal impact period?

[anonymous]:
To answer this, you would need optimal stopping time models of the utility obtained from giving vs. the utility obtained from other ventures with the money.

This post made my day, which was already one of my best days of the summer, EVEN BETTER. Thank you. EDIT: I hope I get approved though, I only earn ~$10,000 a year and live at home/dorm.

So, should charities remind people that donating will make them happy?

I worry that this might actually have the opposite effect.

There was a time when I helped people due to an explicit goal to feel good about myself, and less because I cared about them. Over time, it changed to me helping them because I wanted to. But before it did, I actually got very little satisfaction out of helping them, because I knew I was just doing it for selfish reasons. I remember complaining about this to someone.

[anonymous]:
Interesting. So it's essentially a Catch-22? You will feel happier if you donate, but only if you don't do it to feel happier. If this is true, wouldn't advertising the happiness aspect hurt donations?
fburnaby:
Either way, it seems easily testable.

I have seen a similar idea, presented in a much more cynical form. Essentially arguing that people give to charity because it makes them feel less bad about doing bad things during the rest of their day.

MatthewBaker:
Cynicism has its place alongside cautious optimism.

While donating may make people happier, is anything known about the donating habits of people who exhibit above-average happiness (controlling, of course, for how difficult it is to measure happiness)?

It would be interesting to see if it goes both ways.

I just signed up for the SI credit card. My current one gave me no benefits, so this was a no-brainer move, and it took less than 10 minutes.

Rain:
It's also a way to visibly affiliate with the SingInst brand. I wonder if they receive any money from the SingInst shirts and stuff on Zazzle. I'm also curious whether, seeing the SingInst logo every time you pull out the card to pay for something, you'll get a guilt twinge at realizing the potential utility tradeoffs.
Armok_GoB:
That last one is a good reason to put rationality and SIAI symbolism everywhere that you might make decisions or be subject to akrasia: by your computer, on your fridge, on your wallet...
katydee:
Good advice. In this spirit, the SIAI logo is now my desktop image. On a related note, is there a larger copy of the logo anywhere? The one I could find is a little crude when blown up to desktop resolution.

So where can I find anecdotes about how awesome and fun it is to be saving the world through FAI research and how rewarding it is to see your work have a direct impact, so I have something vicariously available to imagine when you ask me to donate my time?

DSimon:
Happy post-singularity science fiction, maybe.
taryneast:
Surely Eliezer has some of that lying around somewhere...
Armok_GoB:
Is there any reason anecdotes you can just make up yourself would be less effective?
dugancm:
I don't know what donating my time to SI would entail other than writing, so find it difficult to imagine in a positive frame. I may be able to get around this by training myself on the five-second level to instead mentally contrast a charity's desired future outcomes with the present (or your favorite charity's desired future outcomes, when tempted to switch) when asked, but how many others in my position will do so?
[anonymous]:
Looks like there are a lot at the Rationality Boot Camp Blog.

I think I would consider a charity saying "Give, it'll make you happy!" to be really suspicious...

DSimon:
There are more diplomatic ways of putting it. For example: "Help those in need, it'll make you happy, studies X, Y, and Z prove it. You can become happier by donating to a worthy charity, and we humbly suggest our own..."
taryneast:
nah - just sounds like even more weasel words... sorry.

And why would giving away money to charities be a good idea? Returns on investment of almost all of them are extremely close to none, and most people are horrible at identifying the exceptions.

For every effective cause like let's say polio eradication, Wikileaks, and whatever GiveWell considers good, there's thousands of charities that essentially waste your money. Especially SIAI.

You're trying to use methods of rationality to come up with the best way to appeal to emotions.

Hul-Gil:
I don't know if this a taboo subject or what, but I'm curious. What makes you include SIAI in this category? (If you'd rather not discuss it on LessWrong, you can e-mail me at mainline dot express at gmail.)
taw:
Donating to SIAI is pure display of tribal affiliation, and these are a zero sum game. They have nothing to show for it, and there's not even any real reason to think this reduces rather than increasing existential risk. If you really care about reducing existential risk, seed vaults and asteroid tracking are two obvious programs that both definitely work at decreasing the risk, and don't cost much.
[anonymous]:

Just weighing in here:

SIAI is an organization built around a particular set of theories about AI -- theories not all AI researchers share. If SIAI's theories are right, they are the most important organization in the world. If they're wrong, they're unimportant.

The field of AI has been littered with (metaphorical) corpses since the 1960's. If an AI researcher tells you any theory, you have a very, very strong prior for believing it is false -- especially if it concerns "general" intelligence or "human-level" intelligence. So, Eliezer is probably wrong just like everyone else. That's not a particular criticism of him; it still puts him in august company.

So my particular position is that I'm not giving to SIAI until I'm worth enough financially that I can ask a few hours of Eliezer's time, and get a better idea of whether the theories are correct.

What I don't like is the suggestion I get from your posts that somehow SIAI is the work of self-deluded charlatans. I know what charlatanism sounds like -- I've had dear friends get halo effects around their pet ideas. I know what it sounds like when someone is just trying to get me to support the team and is play...

So my particular position is that I'm not giving to SIAI until I'm worth enough financially that I can ask a few hours of Eliezer's time, and get a better idea of whether the theories are correct.

I don't think this matches up with your rejection. Even if you were an expert in the fields Eliezer is working in, it sounds like that wouldn't give you the ability to give any of his ideas a positive seal of approval, since many people worked on ideas for long times without seeing what was wrong with them. It also seems like a few hours to hash out disagreements is a very low estimate. How long do you think Eliezer and Robin Hanson have spent debating their theories, while becoming no closer to resolution?

The scenario you paint- that you get rich enough for Eliezer to wager a few hours of his time on reassuring you- does not sound like one designed to determine the correctness of the theories instead of giving you as much emotional satisfaction as possible.

I should make clear I do not mean to condemn, rather to provoke introspection; it is not clear to me there is a reason to support SIAI or other charities beyond emotional satisfaction, and so it may be wise to pursue opportunities like this without being explicit that's the compensation you expect from charities.

[anonymous]:
Clearly a few hours wouldn't be enough for me to get a level of knowledge comparable to experts. It could definitely move my probability estimate a lot.
Dr_Manhattan:
There are really three separate things SIAI is working on in the AI area: one is decision theory suitable for controlling a self-modifying intelligent agent in a way that preserves the original goals. Another is deciding what those goals are (CEV). The third is actually implementing the agent design. They have published papers on the first two (CEV and decision theory), and you do not need Eliezer's time to evaluate the results; to me they seem very valuable, even if they are not ultimate solutions to the problem. Their AGI research, if any, remains unpublished (I believe on purpose). Whether (or more likely, how much) these two successes contribute to ex-risk largely depends on the context, which is the possibility of imminent development of AGI. Perhaps Eliezer can be helpful here, though I'd prefer to get this data independently.

ETA: Personally I've given some money to SI, but it's largely based on previous successes and not on a clear agenda of future direction. I'm ok with this, but it's possibly sub-optimal for getting others to contribute (or getting me to contribute more).
[anonymous]:
I should probably reread the papers. My brain tends to go "GAAAH" at the sight of game theory. I'm probably a bit biased because of that.
multifoliaterose:
This strikes me as a false dichotomy. It seems unlikely that the theories are all right or all wrong. Also, most important in the world vs. unimportant by what metric? They could be wrong about some crucial things and be unlikely to come around to more accurate views, but carry high utilitarian expected value on the possibility that they do. I agree that taw has been unfairly critical of SIAI and that SIAI people may well be closer to the mark than mainstream AGI theorists (in fact I think this more likely than not).
[anonymous]:
The main claim that needs to be evaluated is "AI is an existential risk," and the various hypotheses that would imply that it is. If the kind of AI that poses existential risk is vanishingly unlikely to be invented (which is what I tend to believe, but I'm not super-confident) then SIAI is working to no real purpose, and has about the same usefulness as a basic research organization that isn't making much progress. Pretty low priority.
komponisto:
Are you considering other effects SIAI might have, besides those directly related to its primary purpose? In my opinion, Eliezer's rationality outreach efforts alone are enough to justify its existence. (And I'm not sure they would be as effective without the motivation of this "secret agenda".)
[anonymous]:
Interesting. Why do you think so?

Donating to SIAI is pure display of tribal affiliation

That just isn't true. It is partially a display of tribal affiliation.

They have nothing to show for it, and there's not even any real reason to think this reduces rather than increasing existential risk.

Even if the SIAI outright increased existential risk that would not mean donations were purely displays of affiliation. It would mean that all those who donated partially for practical instrumental reasons were mistaken and making a poor choice. It would not make their act any more purely an affiliation symbol.

If I was to donate (more) to the SIAI it would be a mix of:

  • Tribal affiliation.
  • Reciprocation. (They gave me a free bootcamp and airplane ticket.)
  • Actually not having a better idea of a way to not die.
taryneast:
Sounds interesting. Do you have links for charities of this sort that you recommend?
mstevens:
I'm a big fan of the very loosely related http://longnow.org/ although their major direct project is building a very nice clock. They definitely try to promote the kind of thinking that will result in things like seed vaults, though (I'm a member).

My personal estimate is that better environmental and energy policies would reduce existential risk, but I haven't seen any appealing organisations in this area.
taryneast:
So am I :) Just got my steel card last week, actually. I had a wonderful moment several months back when I was wandering about in the science museum in London... and stumbled across their prototype clock... SO cool!
lessdazed:
What's more, the tribal affiliation might not be a "display" to others. Hence wedrifid leaving that word out of his bullet point list.
MatthewBaker:
Um... The return on SIAI so far is well worth it for me :). Can you give me specific examples of how you consider SIAI to waste money? Spreading knowledge of cryonics alone is worth it from an altruistic standpoint and FAI theory development from a selfish one.
taw:
So it's just an awfully convenient coincidence that the charity to donate to best display tribal affiliation to the lesswrong crowd, and the charity to donate to best save the world, just happen to be the same one? What a one in a billion chance! Outside view says they're not anything like that, and they have zero to show for it as a counterargument.

If you absolutely positively have to spend money on existential risk (not that I'm claiming this is a good idea, but if you have to), asteroids are known to cause mass extinctions, with roughly a 1:50,000,000 chance each year. That's 1:500,000 per century, not really negligible. And you can make some real difference by supporting asteroid tracking programs.
bgaesop:
No, that's not it at all. If, as people here like to believe (and may or may not be true), the LWers are very rational and good at picking things that have very high expected value as things to start or donate to, then it makes sense that one of them (Eliezer) would create an organization that would have a very high expected value to have exist (SIAI) and the rest of the people here would donate to it. If that is the case, that SIAI is the best charity to donate to in terms of expected value (which it may or may not be), then it would also be the best charity to best donate to in order to display tribal affiliations (which it definitely is). So if you accept that people on LW are more rational than average, then them donating so much to SIAI should be taken as weak evidence that SIAI is a really good charity to donate to.

I was under the impression that those already had sufficient resources? Could you link to some more information on this subject, please? I agree that asteroids are a more obviously important issue than the Singularity.
taw:
I didn't downvote you, but what you're saying is essentially "if you accept our tribe is the most awesome and smartest, then it makes sense to donate to our tribal charity". Which is something every single group would say, in slight variation.

Here's a results chart for various asteroid tracking efforts. Catalina Sky Survey seems to be doing most of the work these days, and you can probably donate to University of Arizona and have that money go to CSS somehow. I'm not really following this too closely, I'm mostly glad that some people are doing something here.
bgaesop:
Thanks! I upvoted you.

Well yeah; that's why you should examine the evidence and not just do what everyone else does. So let's look at the beliefs of all the Singularitarians on LW as evidence.

What would we expect to see if LW is just an arbitrary tribe that picked a random cause to glom around? I suspect we would see that not many people in the world, and particularly not high-status people and organizations, would pay attention to the Singularity. I predict that everyone on LW would donate money to SIAI and shun people who don't donate or belittle SIAI.

Now what would we see if LW is in fact a group of high-quality rationalists and the world, in general, is too blinded by various biases to think rationally about low-probability, high-impact events? Well, most people, including high-status people (but perhaps not some academics) wouldn't talk about it. People on LW would donate money to SIAI because they did the calculation and decided it was the highest expected value. And they would probably shun the people who disagree, because they're still humans.

Those two situations look awfully similar to me. My point is, I certainly don't think that you can use LW's enthusiasm about SIAI compared to the general public as a strike against LW or SIAI.

I'm not finding anything there indicating that they're hurting for funding, but perhaps I'm missing it.
MatthewBaker:
I honestly believe that the Singularity is a greater threat than asteroids to the human race. Either an asteroid will be small enough that we can destroy it or it's too big to stop. Once you make an asteroid big enough to cause risk to humanity, it's also a lot easier to find and destroy. However, a positive singularity isn't valued enough and a negative singularity isn't feared enough among humanity, unlike asteroid deflection efforts, and that's why I focus on SIAI.
taw:
You actually need to detect these asteroids decades in advance for our current technology to stand any chance, and we currently don't do that. More detection efforts mean tracking smaller asteroids than otherwise, but more importantly tracking big asteroids faster. Arbitrarily massive asteroid can be moved off course very easily given enough time to do so. That's the plan, not "destroying" them.
MatthewBaker:
Still, considering there's a very low chance of a large asteroid strike, and the most quoted figure I've heard is that more than 75% of NEO objects of dangerous size are being tracked, I think a negative singularity is more likely to happen in the next 200 years than an asteroid strike. However, it is a good point that donating money to NEO tracking could be a good charitable donation as well; I just don't think it's on the same order of magnitude as the danger of a uFAI.
taw:
With asteroid strikes everybody agrees on the risk to within an order of magnitude or two. We have a lot of historical data about asteroid strikes of various sizes, can use the power-law distribution to smooth it a bit, etc. With UFAI, people's estimates are about as divergent as with the Second Coming of Jesus Christ, ranging from impossible even in theory, through essentially impossible, all the way to almost certain.
nazgulnarsil:
Money spent on mind uploading is a better defense against asteroids than asteroid detection. At least for me.
rwallace:
In particular, for donation to a particular charity to be a good idea, two conditions have to hold:

  1. The sign of the expected utility has to be positive rather than negative.
  2. The magnitude has to be greater than the expected utility of purchasing goods and services in the usual way (which generates benefit not only to you, but to your trading partners, to their trading partners, etc.).

It is only moderately unlikely for either condition alone to be true, but it is very unlikely for both conditions to be true simultaneously.
shokwave:
The studies in the main post suggest that it brings more happiness than spending it on yourself, for small amounts relative to the amount you currently spend on yourself. Bringing happiness is what makes it a pretty good idea.
taw:
To be honest all these laboratory tests and happiness questionnaires seem fairly dubious methodologically. What's the best research here?

Why do we see this 'Time-Ask Effect'?

I think there is at least one possibility that I haven't yet seen mentioned here.

If I personally was given the choice between a charity asking for time, and a charity asking for money - I would consider the one asking for time to be more legitimate than the money-only charity.

So many people ask for your money these days and you basically don't know where it's all going... whether it's effective or not. But if a charity is even set up such that it can take time-donations (even if I don't actually do it myself), then ...

The "Multiply your Impact" section seems like it was just slipped in. Optimal for what? "Human beings" is not specific. Is my charity budget better spent supporting institutions whose work I approve of, extending lifespans in Africa, or giving me a reputation as a generous spender among my friends? If the main reason to give is because it will make me happier, then isn't optimal philanthropy what makes me happiest, not what does the most good?

taryneast:
The trick is to find charities that align both of these goals. :)
Vaniver:
The least convenient possible world is one that involves tradeoffs between desires.
taryneast:
Only if your desires require compromises to match up. In which case - yes. Otherwise - if you do manage to find a match - it's a double-win! So IMO worth at least looking on the off-chance.
[anonymous]:

Let's say you had two choices about how to view the world:

  1. Giving to charity is a wonderful thing to do. Giving makes you happy and not giving makes you feel sad.
  2. Giving to charity is a stupid thing to do. Giving makes you feel like a rube who is getting conned and not giving makes you happy for being smart enough to avoid it.

And let's also assume that you value money. Which way of viewing the world is better? Well, I think it's obvious that 2 is better, because you get to feel good about yourself and keep your money. Shouldn't LWers therefore try as best they can to achieve this viewpoint? Isn't that a better way to go through life? It's certainly not impossible. I've done it. You can too!

CuSithBell:
Removing a preference can interfere with satisfying that preference.
nshepperd: