
Study: In giving charity, let not your right hand...

3 homunq 22 August 2014 10:23PM

So, here's the study¹:

It's Remembrance Day in Canada. As any good Canadian knows, you're supposed to wear a poppy to show you support the veterans (it has something to do with "In Flanders Fields"). As people enter a concourse at the university, a person there does one of three things: gives them a poppy to wear on their clothes; gives them an envelope to carry and tells them (truthfully) that there's a poppy inside; or gives them nothing. Then, after they've crossed the concourse, another person asks them if they want to put donations in a box to support Canadian war veterans.

Who do you think gives the most?

...

If you guessed that it's the people who got the poppy inside the envelope, you're right. 78% of them gave, for an overall average donation of $0.86. That compares to 58% of the people wearing the poppy, for an average donation of $0.34; and 56% of those with no poppy, for an average of $0.15.

Why did the envelope holders give the most? Unlike the no-poppy group, they had been reminded of the expectation of supporting veterans; but unlike the poppy-wearers, they hadn't been given an easy, cost-free means of demonstrating their support.

I think this research has obvious applications, both to fundraising and to self-hacking. It also validates the Bible verse (Matthew 6:3) that gives this article its title.

¹ Kristofferson, K., White, K., & Peloza, J. (2014). "The Nature of Slacktivism: How the Social Observability of an Initial Act of Token Support Affects Subsequent Prosocial Action." Journal of Consumer Research.

Comment author: keen 13 August 2014 04:28:10PM 2 points [-]

Perhaps I phrased my template too formally. Though as I search for examples, I notice that different uses of the word "guy" would require various replacements ("person," "someone," or "the one") in order to sound natural.

Really, I begin to think it would be simpler to alter our culture so that nobody expects "guy" to imply "male".

Comment author: homunq 22 August 2014 04:17:13PM 1 point [-]

That's simpler to say, but not at all simpler to do.

Comment author: Thrasymachus 03 August 2014 09:34:44PM 1 point [-]

Thanks for this important spot - I don't think it is a nitpick at all. I'm switching jobs at the moment, but I'll revise the post (and diagrams) in light of this. It might be a week though, sorry!

Comment author: homunq 22 August 2014 04:04:38PM *  0 points [-]

Bump.

(I realize you're busy, this is just a friendly reminder.)

Also, I added one clause to my comment above: the bit about "imperfectly measured", which is of course usually the case in the real world.

Comment author: homunq 02 August 2014 05:58:39PM *  7 points [-]

Great article overall. Regression to the mean is a key fact of statistics, and far too few people incorporate it into their intuition.

But there's a key misunderstanding in the second-to-last graph (the one with the drawn-in blue and red "outcome" and "factor"). The black line, indicating a correlation of 1, corresponds to nothing in reality. The line that reflects the true relationship runs from the vertical tangent point on the right (marked) to the vertical tangent point on the left (unmarked). If causality indeed runs from "factor" (height) to "outcome" (skill), that's how much extra skill an extra helping of height will give you. Thus, the diagonal red line should follow this direction, not run parallel to the 45-degree black line. If you draw this line, you'll notice that each point on it is equally far, vertically, from the top and bottom of the elliptical "envelope" (which is, of course, not a true envelope for all the probability mass, just an indication that probability density is higher at any point inside than at any point outside).
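To see this numerically, here's a minimal simulation sketch (the correlation of 0.6 and the unit variances are my assumptions for illustration, not numbers from the post): with standardized variables, the least-squares prediction line through the cloud has slope r, not 1, i.e. it follows the tangent-point line rather than the 45-degree line.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.6                      # assumed correlation, purely illustrative
n = 100_000

# Bivariate-normal "factor" and "outcome" with correlation r, unit variances
factor = rng.standard_normal(n)
outcome = r * factor + np.sqrt(1 - r**2) * rng.standard_normal(n)

slope, intercept = np.polyfit(factor, outcome, 1)
print(slope)  # ~0.6: the prediction line has slope r, not 1
```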

Things are a little more complex if the correlation is due to a mutual cause, "reverse" causation (from "outcome" to "factor"), or if "factor" is imperfectly measured. In that case, the line connecting the vertical tangents may not correspond to anything in reality, though it's still what you should follow to get the "right" (minimum expected squared error) answer.

This may seem to be a nitpick, but to me, this kind of precision is key to getting your intuition right.

Comment author: Davidmanheim 13 January 2014 05:32:12PM 0 points [-]

Thank you for the clarification; despite this, cardinal utility is difficult because it assumes that we care about different preferences the same amount, or definably different amounts.

Unless there is a commodity that can adequately represent preferences (like money) and a fair redistribution mechanism, we still have problems maximizing overall welfare.

Comment author: homunq 16 March 2014 12:43:09PM 1 point [-]

No argument here. It's hard to build a good social welfare function in theory (i.e., even if you can assume away information limitations), and harder in practice (with people actively manipulating it). My point was that it is a mistake to think that Arrow showed it was impossible.

(Also: I appreciate the "thank you", but it would feel more sincere if it came with an upvote.)

Comment author: Davidmanheim 03 December 2013 05:31:46PM *  4 points [-]

(Edit, later: this is related to the top-level replies by CarlShulman and V_V, but I think it's a more general issue, or at least a more general way of putting the same issues.)

I'm wondering about a different effect: over-quantification and false precision leading to bad choices in optimization as more effort goes into the most efficient utility maximization charities.

If we have metrics, and we optimize for them, anything our metrics distort or exclude will be distorted or excluded from our conversation to an exaggerated degree. For instance, if we agree that maximizing human health is important, and use evidence showing that something like fighting disease or hunger has a huge positive effect on human health, we can easily optimize our way into significant population growth, then a crash due to later resource constraints or food-production volatility, killing billions. (It is immaterial whether this describes reality; the phenomenon of myopic optimization still stands.)

Given that we advocate optimizing, are we, as rationalists, likely to fall prey to this sort of behavior when we pick metrics? If we don't understand the system more fully, the answer is probably yes; there will always be unanticipated side-effects in incompletely understood systems, by definition, and the more optimized a system becomes, the less stable it is to shocks.

More diversity of investment across lower-priority goals and alternative ideals (meaning less optimization, as currently occurs) seems likely to mitigate these problems.

Comment author: homunq 21 December 2013 07:29:23PM *  2 points [-]

I think you've done better than CarlShulman and V_V at expressing what I see as the most fundamental problem with EA: the fact that it is biased towards the easily- and short-term-measurable, while (it seems to me) the most effective interventions are often neither.

In other words: how do you avoid the pathologies of No Child Left Behind, where "reform" becomes synonymous with optimizing to a flawed (and ultimately, costly) metric?

The original post touches on this issue, but not at all deeply.

Comment author: Davidmanheim 03 December 2013 05:44:07PM 3 points [-]

Many of these issues seem related to Arrow's impossibility theorem; if groups have genuinely different values, and we optimize for one set and not another, ants get tiny apartments and people starve, or we destroy the world economy because we discount too much, etc.

To clarify, I think LessWrong thinks most issues are simple, because we know little about them; we want to just fix it. As an example, poverty isn't solved for good reasons: it's hard to balance incentives and growth and to deal with heterogeneity; there are absolute limits on current wealth and the ability to move it around; and nations and individuals have competing priorities. It's not unsolved because people are too stupid to give money to feed-the-poor charities. We underestimate the rest of the world because we're really good at one thing, and think everyone is stupid for not being good at it - and even if we're right, we're not good at (understanding) many other things, and some of those things matter for fixing these problems.

Comment author: homunq 21 December 2013 07:23:09PM 4 points [-]

Note: Arrow's Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social welfare function which meets all of Arrow's "impossible" criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow's theorem is restricted to purely ordinal preferences.)
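For concreteness, here's a minimal sketch (the ballots are hypothetical): rank alternatives by summed cardinal utility, as in range voting. Pareto and non-dictatorship hold trivially, and independence of irrelevant alternatives holds because each alternative's total depends only on the utilities assigned to that alternative.

```python
# Hypothetical cardinal ballots: voter -> {alternative: utility}
utilities = {
    "alice": {"A": 0.9, "B": 0.4, "C": 0.1},
    "bob":   {"A": 0.2, "B": 0.8, "C": 0.3},
    "carol": {"A": 0.5, "B": 0.6, "C": 0.2},
}

def social_ranking(utilities):
    alternatives = next(iter(utilities.values()))
    totals = {a: sum(ballot[a] for ballot in utilities.values())
              for a in alternatives}
    # Each total depends only on that alternative's own utilities, so
    # adding or removing an unrelated alternative can't flip a pairwise order.
    return sorted(totals, key=totals.get, reverse=True)

print(social_ranking(utilities))  # ['B', 'A', 'C']
```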

Comment author: DylanEvans 12 December 2013 06:31:34PM 5 points [-]

To my mind, the worst thing about the EA movement are its delusions of grandeur. Both individually and collectively, the EA people I have met display a staggering and quite sickening sense of their own self-importance. They think they are going to change the world, and yet they have almost nothing to show for their efforts except self-congratulatory rhetoric. It would be funny if it wasn't so revolting.

Comment author: homunq 21 December 2013 07:15:16PM 1 point [-]

Upvoted because I think this is a real issue, though I'm far from sure whether I'd put it at "worst".

Comment author: SaidAchmiz 07 December 2013 08:05:26AM 1 point [-]

Playing the devil's advocate is when Alice is arguing for some position, and Bob is arguing against it, even though he does not actually disagree with Alice (perhaps because he wants to help Alice strengthen her arguments, clarify her views, etc.).

Hypothetical apostasy is when Alice plays her own devil's advocate, in essence, with no Bob involved.

Comment author: homunq 21 December 2013 01:01:43PM *  1 point [-]

... And that is not a new idea either. "Allow me to play the devil's advocate for a moment" is a thing people say even when they are expressing support before and after that moment.

Comment author: homunq 30 October 2013 05:26:59PM -1 points [-]

Your probability theory here is flawed. The question is not about P(A&B), the probability that both are true, but about P(A|B), the probability that A is true given that B is true. If A is "has cancer" and B is "cancer test is positive", then we calculate P(A|B) as P(B|A)P(A)/P(B); that is, if there's a 1/1000 chance of cancer and the test is right 99/100, then P(A|B) is (.99 × .001) / (.001 × .99 + .999 × .01), which is about 1 in 10.
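For anyone who wants to check the arithmetic, here's a minimal sketch using the numbers above:

```python
p_cancer = 0.001            # P(A): prior probability of cancer
p_pos_given_cancer = 0.99   # P(B|A): test is right 99/100
p_pos_given_healthy = 0.01  # false-positive rate

# P(B) by the law of total probability
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(p_cancer_given_pos)  # ~0.090, i.e. about 1 in 10
```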

Comment author: homunq 03 November 2013 11:08:30AM 0 points [-]

Can anyone explain why the parent was downvoted? I don't get it. I hope there's a better reason than the formatting fail.
