Viliam_Bur comments on Giving What We Can, 80,000 Hours, and Meta-Charity - Less Wrong

44 Post author: wdmacaskill 15 November 2012 08:34PM


Comment author: Gedusa 10 November 2012 12:22:36AM 15 points

Possible consideration: meta-charities like GWWC and 80k cause donations to causes that one might not think are particularly important. E.g., I think x-risk research is the highest-value intervention, but most of the money moved by GWWC and 80k goes to global poverty or animal welfare interventions. So if the proportion of money moved to causes I cared about were small enough, or the meta-charity didn't multiply my money much anyway, then I should give directly (or start a new meta-charity in the area I care about).

A bigger possible problem would arise if I took arguments like the poor meat-eater problem to be sound. In that case, donating to e.g. 80k could cause a lot of harm even though it would move a lot of money to animal welfare charities, because it would also move so much to poverty relief, which I might think is a bad thing. It seems like there are probably a few other situations like this around.

Do you have figures on what the return to donation (or volunteer time) is for 80,000 Hours? I.e., is it similar to GWWC's $138 of donations moved per $1 of time invested? It would be helpful to know, so I could calculate how much I would expect to go to the various causes.
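The break-even test implicit in this comment can be sketched in a few lines: giving via a meta-charity beats giving directly only if the money it moves to the causes you value exceeds what your dollar would have done on its own. This is a hypothetical illustration, not an endorsed model; the function name and all inputs except GWWC's claimed ~$138-per-$1 figure are made up for the example.

```python
def meta_beats_direct(multiplier, fraction_to_my_cause):
    """Return True if donating $1 to the meta-charity moves more than $1
    to the donor's preferred causes.

    multiplier: dollars of donations moved per dollar given to the
        meta-charity (e.g. GWWC's claimed ~$138 per $1).
    fraction_to_my_cause: share of that moved money going to causes
        the donor actually values.
    """
    return multiplier * fraction_to_my_cause > 1.0

# With a 138x multiplier, even 1% flowing to one's preferred cause
# beats a direct $1 donation (138 * 0.01 = 1.38 > 1).
print(meta_beats_direct(138, 0.01))  # True
# A weak multiplier with a small fraction does not (5 * 0.10 = 0.5).
print(meta_beats_direct(5, 0.10))    # False
```

This treats all cause areas other than one's own as worthless, which is the comment's worst case; assigning them partial value only raises the meta-charity's side of the inequality.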

Comment author: Viliam_Bur 11 November 2012 05:40:57PM 8 points

This probably sounds horrible, but "saving human lives" in some contexts is an applause light. We should be able to think beyond that.

As a textbook example, saving Hitler's life at a specific moment in an alternate history would create more harm than good, regardless of how much or how little money it cost.

Even if we value all human lives as intrinsically equal, we can still ask what the expected consequences of saving this specific person will be. Is he or she more likely to help other people, or to harm them? That is a multiplier on my intervention, and the consequences of the consequences of my actions are still consequences of my actions, even when I am not aware of them.
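The multiplier idea above amounts to a simple expected-value calculation: the value of saving a life is its intrinsic value plus the probability-weighted downstream effects of the person's future actions. A minimal sketch, with all numbers purely illustrative (they are not estimates of anything):

```python
def expected_value_of_saving(intrinsic_value, p_helps, helper_effect,
                             p_harms, harm_effect):
    """Intrinsic value of the life saved, adjusted by the expected
    downstream effects (the 'multiplier' on the intervention)."""
    return intrinsic_value + p_helps * helper_effect - p_harms * harm_effect

# Same intrinsic value of a life, different downstream expectations:
print(round(expected_value_of_saving(1.0, 0.8, 0.5, 0.1, 0.5), 2))  # 1.35
print(round(expected_value_of_saving(1.0, 0.2, 0.5, 0.6, 0.5), 2))  # 0.8
```

Note that in both illustrative cases the result stays positive, which matches the comment's own conclusion below that curing malaria does more good than harm even in war-torn regions.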

Don't just tell me that I saved a hypothetical person from malaria. Tell me whether that person is likely to live a happy life and contribute to the happy lives of their neighbors, or whether I have most likely provided another soldier for the next genocide.

Even in areas with frequent wars and human rights violations, curing malaria does more good than harm. (To counter the status quo bias: imagine healthy people suffering from the war or genocide. Would sending tons of malaria-infected mosquitoes make the situation better or worse?) But perhaps something else, like education or a change of government that could reduce war, would be better in the long term, even if in the short term it means fewer "lives saved per dollar".

Of course, as is the usual problem with consequentialism, it is pretty difficult to predict the consequences of our actions.