During a recent discussion with komponisto about why my fellow LWers are so interested in the Amanda Knox case, his answers made me realize that I had been asking the wrong question. After all, feeling interest or even outrage after seeing a possible case of injustice seems quite natural, so perhaps a better question to ask is why am I so uninterested in the case.
Reflecting upon that, it appears that I've been doing something like Eliezer's "Shut Up and Multiply", except in reverse. Both of us noticed the obvious craziness of scope insensitivity and tried to make our emotions work more rationally. But whereas he decided to multiply his concern for individual human beings by the population size to an enormous concern for humanity as a whole, I did the opposite. I noticed that my concern for humanity is limited, and therefore decided that it's crazy to care much about random individuals that I happen to come across. (Although I probably haven't consciously thought about it in this way until now.)
The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought. It can't be the case that both of these ways to change how our emotions work are the right thing to do, but the apparent symmetry between them seems hard to break.
What ethical principles can we use to decide between "Shut Up and Multiply" and "Shut Up and Divide"? Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
And an interesting meta-question arises here as well: how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist? And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they've become an essential part of us?
I suppose another point to add is that "aid" is worth "one life." The actual specific life doesn't matter as long as one life is being redeemed with the aid.
If you do this, then the value of aid could be forecast to include scenarios where the cost of aid decreases or the amount available to spend on aid increases. It would be okay to spend $10 to get $20 and then turn it into 2 lives saved. So the question becomes: 1 life now, or 2 lives later.
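To make that comparison concrete, here is a minimal sketch, assuming the illustrative numbers above ($10 saves one life, and the $10 can be doubled before it is spent); these are the text's toy figures, not real estimates:

```python
# Toy comparison: spend $10 on aid now, or grow it to $20 and spend later.
# The $10-per-life cost and the doubling are illustrative assumptions.
COST_PER_LIFE = 10  # assumed dollars needed to save one life


def lives_saved(dollars, cost_per_life=COST_PER_LIFE):
    """Lives saved by spending `dollars` on aid at a fixed cost per life."""
    return dollars // cost_per_life


spend_now = lives_saved(10)        # 1 life now
spend_later = lives_saved(10 * 2)  # 2 lives later, if the $10 doubles first
print(spend_now, "life now vs.", spend_later, "lives later")
```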
Okay, yeah, this makes it work. The trick is valuing $10 at one life. If you are getting less than one life for $10, then you are getting robbed. Or, more accurately, you are saying that whatever you did get for $10 is worth the same as a life.
$10 is just a number. We could put in $X.
So... does this mean anything? If 1 life is $X and a random material thing costs $X, then of course they cost the same. By definition, they have the same dollar value.
Does the question become how much moral value you can get per dollar? In that case, the best moral value per dollar comes from spending all of your dollars on lives saved. This throws us back into the field of value systems, but in a way that makes sense.
Okay, so does this actually answer my original question?
It answers the question by saying that, since $10 can be spent to save 1 life but you are instead spending the money on a latte, you value a latte as much as you value 1 life. But this is a tautology. The next step is saying, "Therefore, you are killing someone by not saving them and buying a latte instead."
The implication could be that if X equals Y in one value system, then X equals Y in all value systems. But this is obviously false.
The implication could be that you should spend all dollars in a way that maximizes moral value. Or, more accurately, it is more moral to trade dollars for higher moral value. The inverse would be that it is less moral to trade dollars for lower moral value.
I can see the jump from this to the statement, "[E]very time you get your hair cut, or go to a movie, or drink a Starbucks latte, you're killing someone."
The reason my initial criticism actually fails is that a latte costs $10 plus the time it takes to earn $10. By the time I get another $10, someone dies.
The next person who touches the $10 has the same moral weight because time is ticking away. This is why we have the question of whether 1 life now is better than 2 lives later. If the particular 1 life now were included in the 2 lives later, the answer would be trivial. The actual question is: 1 life now, or 2 lives and 1 death later.
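Extending the earlier sketch under the same toy assumptions, plus the added assumption that exactly one person dies during the waiting period, the adjusted comparison looks like this:

```python
# The delay itself costs a life, so the comparison is not 1 vs. 2
# but 1 vs. (2 saved minus 1 lost while waiting). All numbers are
# the illustrative figures from the text, not real estimates.
COST_PER_LIFE = 10        # assumed dollars needed to save one life
DEATHS_WHILE_WAITING = 1  # assumed: one person dies while the $10 doubles

lives_now = 10 // COST_PER_LIFE                                 # 1 saved now
lives_later = (10 * 2) // COST_PER_LIFE - DEATHS_WHILE_WAITING  # 2 saved - 1 death

print("spend now:", lives_now, "net life saved")
print("wait and double:", lives_later, "net life saved")
```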
So the specific answer to my question:
The next person to touch the $10 doesn't matter because the real value being spent behind the scenes is the hour-value.
Tada! There was an answer.
And it was so close to something Dustin said. If only he had said "one hour" instead of "nothing."