During a recent discussion with komponisto about why my fellow LWers are so interested in the Amanda Knox case, his answers made me realize that I had been asking the wrong question. After all, feeling interest or even outrage after seeing a possible case of injustice seems quite natural, so perhaps a better question to ask is why am I so uninterested in the case.
Reflecting upon that, it appears that I've been doing something like Eliezer's "Shut Up and Multiply", except in reverse. Both of us noticed the obvious craziness of scope insensitivity and tried to make our emotions work more rationally. But whereas he decided to multiply his concern for individual human beings by the population size into an enormous concern for humanity as a whole, I did the opposite. I noticed that my concern for humanity is limited, and therefore decided that it's crazy to care much about random individuals that I happen to come across. (Although I probably hadn't consciously thought about it in this way until now.)
The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought. It can't be the case that both of these ways to change how our emotions work are the right thing to do, but the apparent symmetry between them seems hard to break.
What ethical principles can we use to decide between "Shut Up and Multiply" and "Shut Up and Divide"? Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
And an interesting meta-question arises here as well: how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist? And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they've become an essential part of us?
Why do we have to decide between them? Long before I ever heard of "Shut Up and Multiply," I used a test that produced the same results, but worked equally well for "Shut Up and Divide." My general statement was, "Be consistent." I would put things in the appropriate context and make sure to apply similar value functions regardless of size or scope - or, perhaps phrased better, make sure my consistently applied value function genuinely accounted for size and scope.
From where should we derive our values? Well, we've got the option of using what's already there (the value function implemented in the human brain), or we have the option of appealing to something else, or we can just apply our reason and alter the function as needed. It seems to me that we don't really have access to that "something else," so I doubt we have a choice on this part. Our natural empathic hardwiring will shoot off all kinds of flares when we see suffering up close and personal, and will fail to activate when it should on the larger scale. We can still place deliberate hacks into the value function to try to correct the scope insensitivity. The function was arbitrary in the first place, so there's no conflict other than ease of application.
How much of our values come from hardwiring as opposed to reasoned thought? Well, probably however much we haven't put thought into. For most people, I expect this to be a large portion. However, once we've thought about it, and applied our value function to our values themselves, we can label them good or bad, and work at adding more deliberate hacks to the arbitrary, evolution-designed, hardwired values. I see it this way: one piece of the function, one item on the list of human morality, is "this list may change or update as needed," or, "this function is subject to revision based upon its output when run against itself." Again, the ease of doing this is a more interesting debate, in my opinion.
If by "essential" you mean "someone without it would not be human," then I grant that it's possible. But if you mean "we can't change it," then I would disagree. We can change our values, now and certainly in the future as we begin rewiring things on a more fundamental level. I see it as another question of definitions: if we change ourselves "for the better," are we "driving the human race extinct," or "continuing as human and more"? It seems that practical reality won't care either way.