During a recent discussion with komponisto about why my fellow LWers are so interested in the Amanda Knox case, his answers made me realize that I had been asking the wrong question. After all, feeling interest or even outrage after seeing a possible case of injustice seems quite natural, so perhaps the better question is why I am so uninterested in the case.
Reflecting upon that, it appears that I've been doing something like Eliezer's "Shut Up and Multiply", except in reverse. Both of us noticed the obvious craziness of scope insensitivity and tried to make our emotions work more rationally. But whereas he decided to multiply his concern for individual human beings by the size of humanity into an enormous concern for humanity as a whole, I did the opposite. I noticed that my concern for humanity is limited, and therefore decided that it's crazy to care much about random individuals I happen to come across. (Although I probably hadn't consciously thought about it in this way until now.)
The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought. It can't be the case that both of these ways to change how our emotions work are the right thing to do, but the apparent symmetry between them seems hard to break.
What ethical principles can we use to decide between "Shut Up and Multiply" and "Shut Up and Divide"? Why should we derive our values from our native emotional responses to seeing individual suffering, rather than from the equally human paucity of response to seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
And an interesting meta-question arises here as well: how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist? And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they've become an essential part of us?
Toby, ignoring donations to SIAI and possibly FHI, I'm still very skeptical of your claims. GiveWell has done analysis strongly indicating that the cheapest lives to save actually cost between $1K and $2K, but one would have to search for a long time to find GiveWell, and much longer to do GiveWell's analysis yourself. Evaluating GiveWell is intermediate in difficulty, and most people lack the cognitive abilities to do even that.
Furthermore, the lives in question are fairly low value compared to our own lives. I have no qualms in saying that, if I were purely selfish, I'd unhesitatingly play Russian roulette with five of six chambers loaded rather than be economically, physically, and mentally reduced to the condition of a typical tuberculosis victim, regardless of what happiness researchers may say about them. Note that I have lived in the third world and have known such people, so it's not just distance that makes me say that.
I have some feel for the odds against snake eyes, and with more hesitation I'd go for that gamble too. In any event, I have more feel for that than I do for what giving up essentially all my human capital would mean from the inside.
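To put rough numbers on those two gambles, here's a quick sketch under one reading of them: the five-chamber roulette means surviving only if the single empty chamber comes up, and the snake-eyes bet means surviving only on a roll of double ones.

```python
from fractions import Fraction

# Russian roulette with five of six chambers loaded:
# survive only if the single empty chamber comes up.
p_death_roulette = Fraction(5, 6)          # ~83% chance of death

# Snake-eyes gamble: survive only if both dice come up 1.
p_death_snake_eyes = 1 - Fraction(1, 36)   # 35/36, ~97% chance of death

print(float(p_death_roulette), float(p_death_snake_eyes))
```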
Anyway, based on the numbers I just gave, saving a life of quality comparable to my own would cost more like $50K. Would I spend $50K to save my life? Hell yes. To avoid a 1% chance of death? Maybe. Let's try that again like a behavioral economist. To reduce my chance of death in the next 10 years by half? Not so sure. I'm a 31-year-old male, so ignoring other considerations that would constitute roughly a 1% absolute risk of death (see the SSA actuarial table: http://www.ssa.gov/OACT/STATS/table4c6.html). Other considerations probably halve it already, so make it 15 years and it's still borderline. Though inclined to consider it somewhat for altruistic reasons, I don't pay for cryonics, which is pretty much pure selfish survival along the above lines and which would be considerably cheaper. This leads me to conclude that I would have to be more than 1% altruistic to spend on third-world aid, not 0.01% as you suggest.
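To spell out that last step as a back-of-envelope calculation: the $1K-$2K and $50K figures are from above, while the ~30x quality discount and the ~$5M implied value of my own life are the assumptions that fall out of them.

```python
# Cost side: cheapest lives to save, discounted for quality of life.
cost_per_life_saved = 1_500              # GiveWell-style $1K-$2K estimate
quality_discount = 30                    # rough factor implied by the $50K figure
cost_per_comparable_life = cost_per_life_saved * quality_discount   # ~$45K-$50K

# Selfish side: $50K is "borderline" for roughly a 1 percentage-point
# reduction in my own risk of death (half of a 31-year-old male's ~2%
# ten-year mortality, per the SSA table).
implied_value_of_own_life = 50_000 / 0.01    # ~$5M

# How much weight a stranger's (quality-comparable) life must carry,
# relative to my own, before donating beats equivalent selfish spending.
altruism_needed = cost_per_comparable_life / implied_value_of_own_life
print(f"{altruism_needed:.1%}")    # ~1%, not 0.01%
```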