During a recent discussion with komponisto about why my fellow LWers are so interested in the Amanda Knox case, his answers made me realize that I had been asking the wrong question. After all, feeling interest or even outrage after seeing a possible case of injustice seems quite natural, so perhaps the better question to ask is why I am so uninterested in the case.
Reflecting upon that, it appears that I've been doing something like Eliezer's "Shut Up and Multiply", except in reverse. Both of us noticed the obvious craziness of scope insensitivity and tried to make our emotions work more rationally. But whereas he decided to multiply his concern for individual human beings by the population size into an enormous concern for humanity as a whole, I did the opposite. I noticed that my concern for humanity is limited, and therefore decided that it's crazy to care much about random individuals I happen to come across. (Although I probably hadn't consciously thought about it in this way until now.)
The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought. It can't be the case that both of these ways to change how our emotions work are the right thing to do, but the apparent symmetry between them seems hard to break.
What ethical principles can we use to decide between "Shut Up and Multiply" and "Shut Up and Divide"? Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
And an interesting meta-question arises here as well: how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist? And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they've become an essential part of us?
If the recipients are highly functional and creative thereafter, they should make money. If they make money, even if you don't want it, they can pay you back.
I do approve of charity that gives to things which go on to create more than was invested. An example would be investing in basic research that isn't going to pay off until decades later. Investing in that is, I think, one of the most commendable charitable acts.
Most charity, however, is not that. It is more of a charitable indulgence: spending money on something that is emotionally appealing but never provides a return, neither to the giver nor to anyone else.
I despise the travesty of such acts being framed as morally valuable charity, rather than as an indulgent throwing of resources away.
Well, if you want to say that curing a TB patient to have a mostly happy life with low economic productivity in tradables is a despicable "travesty" and an "indulgent" waste of resources (and not because the return on investment could be used to do more good later), you can use words that way.
But in the future it would be nice to make it plain when your bold conclusions about "cost-benefit analysis" depend so profoundly on normative choices, like not caring about the lives or welfare of the powerless, rather than on any interesting empirical considerations or arguments relevant to folk who do care.