Reposted from a few days ago, noting that jsalvatier (kudos to him for putting up the prize money, very community spirited) has promised $100 to the winner, and I have decided to set a deadline of Wednesday 1st December for submissions, as my friend has called me and asked me where the article I promised him is. This guy wants his god-damn rationality already, people!
My friend is currently in a potentially lucrative management consultancy career, but is considering getting a job in eco-tourism because he "wants to make the world a better place" and we got into a debate about Efficient Charity, Roles vs. Goals, and Optimizing versus Acquiring Warm Fuzzies.
I thought that there would be a good article here that I could send him to, but there isn't. So I've decided to ask people to write such an article. What I am looking for is an article that is less than 1800 words long, and explains the following ideas:
- Charity should be about actually trying to do as much expected good as possible for a given amount of resources (time, money), in a quantified sense: "5,000 lives saved in expectation", not "we made a big difference".
- The norms and framing of our society regarding charity currently get it wrong: people send lots of money to charities that do far less good than other charities. The inefficiency here is very large; GiveWell estimates the gap at a factor of at least 1,000. Our norm of ranking charities by the percentage spent on overheads is very, very silly.
- It is usually better to work a highly-paid job and donate the money, because if you work for a charity you mostly replace the person who would have been hired had you not applied.
- Our instincts will tend to tempt us to optimize for signalling; this is to be resisted unless (or to the extent that) it is what you actually want to do. Our instincts will also tend to want to optimize for "Warm Fuzzies". These should be purchased separately from actual good outcomes.
- Our human intuition about how to allocate resources is extremely bad. Moreover, since charity is typically for the so-called benefit of someone else, you, the donor, usually don't get to see the result. Lacking this feedback from experience, one tends to make all kinds of gigantic mistakes.
but without using any unexplained LW jargon (utilons, Warm Fuzzies, optimizing). Linking to posts explaining jargon is NOT OK. Just don't use any LW jargon at all. I will judge the winner based upon these criteria and the score that the article gets on LW. Maybe the winning article will not rigidly meet all criteria: there is some flexibility. The point of the article is to persuade people who are, at least to some extent, charitable and who are smart (university educated at a top university or equivalent) to seriously consider investing more time in rationality when they want to do charitable things.
So, I know it's wise to purchase warm fuzzies and utilons separately, but it just so happens that I get a significant quantity of warm fuzzies from saving hundreds of lives. I'm weird like that.
Anyway, suppose (against all evidence) that utilities are ordinally intercomparable. Suppose further that the relevant chunk of my utility function is U(charity) = U(fuzzies) + U(altruism), where U(fuzzies) = ln(# of lives saved), and U(altruism) = (net utility of saved life to owner) * (my discount rate for the utility of strangers). Let's say the typical life saved by charities is worth 30,000 utilons to its owner, and that my discount rate for strangers' utility is 1/100,000.
So, if I save 200 lives, I get ln(200) + (30,000 × 200 / 100,000) ≈ 65 utilons for me. If I save 2,000 lives, I get ln(2,000) + (30,000 × 2,000 / 100,000) ≈ 607 utilons for me. My original point was going to be that I do get diminishing marginal returns to charity, but apparently given my assumptions they diminish so slowly as to be practically constant, and so I will shut up and pick just one charity in so far as I can find the willpower to do so.
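For anyone who wants to poke at the assumptions, the arithmetic above can be sketched in a few lines of Python. The parameter values (30,000 utilons per life, a 1/100,000 discount rate for strangers) are the post's stipulated numbers, not measurements:

```python
import math

def my_utility(lives_saved,
               value_per_life=30_000,       # assumed utilons of a saved life to its owner
               stranger_discount=1/100_000): # assumed discount rate for strangers' utility
    """Personal utility from charity: log-shaped warm fuzzies plus
    linear (discounted) altruistic value."""
    fuzzies = math.log(lives_saved)
    altruism = value_per_life * lives_saved * stranger_discount
    return fuzzies + altruism

print(my_utility(200))    # ≈ 65.3 utilons
print(my_utility(2_000))  # ≈ 607.6 utilons
```

Note why the returns barely diminish: the marginal fuzzies from one more life are 1/n (tiny for large n), while the marginal altruism term is a constant 0.3 utilons per life, so the linear term dominates almost immediately.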
Hooray for accidentally proving yourself wrong with back of the envelope calculations.