During a recent discussion with komponisto about why my fellow LWers are so interested in the Amanda Knox case, his answers made me realize that I had been asking the wrong question. After all, feeling interest or even outrage after seeing a possible case of injustice seems quite natural, so perhaps a better question to ask is why I am so uninterested in the case.
Reflecting upon that, it appears that I've been doing something like Eliezer's "Shut Up and Multiply", except in reverse. Both of us noticed the obvious craziness of scope insensitivity and tried to make our emotions work more rationally. But whereas he decided to multiply his concern for individual human beings by the population size to an enormous concern for humanity as a whole, I did the opposite. I noticed that my concern for humanity is limited, and therefore decided that it's crazy to care much about random individuals that I happen to come across. (Although I probably haven't consciously thought about it in this way until now.)
The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought. It can't be the case that both of these ways to change how our emotions work are the right thing to do, but the apparent symmetry between them seems hard to break.
What ethical principles can we use to decide between "Shut Up and Multiply" and "Shut Up and Divide"? Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response to seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
And an interesting meta-question arises here as well: how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist? And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they've become an essential part of us?
I haven't read the other comments here and I know this post is >10yrs old, but…
For me, (what I'll now call) effective-altruism-like values are mostly second-order, in the sense that much of my revealed behavior shows that, a lot of the time, I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I do something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don't really detect in myself a symmetrical second-order want to NOT want to help strangers. So that's one thing that "Shut Up and Multiply" has over "Shut Up and Divide," at least for me.
That said, I realize now that I'm often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor's occasional desire to help strangers and suggest they generalize it, but I don't symmetrically appeal to their clearer and more common disinterest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that's a more complicated conversation.
What do you think your second-order "want to want to help" desire is based on, or where it came from? For example, one possibility is that someone previously appealed to your occasional (first-order) desire to help strangers and suggested you generalize it, which caused you to have a cached thought that that's what you "should" do. I mean, this seems to be exactly what Peter Singer's Drowning Child argument tries to do, and a lot of people cite it as their introduction/conversion to EA. (And you also say that you personally did it to others.)
Or suppose you only have y...