Perhaps I should have been more specific than to use a vague term like "morality". Replace it with CEV, since that should be the sum total of all your values.
Most people value happiness, so let me use that as an example. Even if I value my own happiness 1000x more than other people's happiness, if there are more than 1000 other people in the world, then the vast majority of my concern for happiness is still external to myself. One could do this same calculation for all other values, and add them up to get CEV, which is likely to be weighted toward others for the same reason that happiness is.
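To make that arithmetic concrete, here's a minimal sketch of the weighting, assuming a toy utility function with a 1000x weight on my own happiness and a weight of 1 on each other person's (the specific numbers are only illustrative):

```python
# Toy model: my concern for my own happiness vs. my total concern for everyone else's.
self_weight = 1000      # I value my own happiness 1000x a stranger's
other_weight = 1        # weight on each other person's happiness

for n_others in [100, 1000, 10_000, 1_000_000]:
    external = n_others * other_weight
    share = external / (external + self_weight)
    print(f"{n_others:>9} other people -> {share:.2%} of my concern is external")

# Once there are more than 1000 other people, the external share passes 50%
# and keeps climbing toward 100%.
```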
Of course, perhaps some people legitimately would prefer 3^^^3 dust specks in people's eyes to their own death. And perhaps some people's values aren't coherent, such as preferring A to B, B to C, and C to A. But if neither of these is the case, then replacing one's self with a more efficient agent maximizing the same values should be a net gain in most cases.
Thanks for letting me know that CEV is obsolete. I'll have to look into the details. However, I don't think our disagreement is in that area.
Agreed, but the argument works just as well for decreases in happiness as for possible increases. Even someone who valued their own happiness 1000x more than other people's would still prefer to suffer themselves rather than have 1001 people suffer. If they also value their own life 1000x as much as other people's lives, they would be willing to die to prevent 1001+ deaths. And if you added up the total number of utils of happiness according to their utility function, 99.9999% of the happiness they value would be happiness in other people, assuming there are on the order of billions of people and that they bite the bullet on the repugnant conclusion. (For simplicity's sake.)
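A quick back-of-the-envelope version of that figure, under the same toy assumptions (a 1000x self-weight, one unit of concern per other person, and a population of roughly seven billion):

```python
# Fraction of total valued happiness that lives in other people.
self_weight = 1000
n_others = 7_000_000_000          # rough order of magnitude

external_share = n_others / (n_others + self_weight)
print(f"{external_share:.6%}")    # ~99.999986% of valued happiness is in others
```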
But all that's really just to argue that, for many people, there are things worth dying for. My central argument looks something like this:
There are things worth dying for. Losing something valuable, such as by suppressing a biased emotion, is less bad than dying. If suppressing emotional empathy boosts the impact of cognitive empathy (I'm not sure it does) enough to achieve something worth dying for, then one should do so.
But I'm not sure things are so dire. The argument gets more charitable when re-framed as boosting cognitive empathy instead. In reality, I think what's actually going on is that empathy triggers either something like near-mode thinking or far-mode thinking, and these two possibilities are what lead to "emotional empathy" and "cognitive empathy". If so, then "discarding [emotional] empathy" seems far less worrying: it's just a cognitive habit. In principle, though, if sacrificing something more really were necessary for the greater good, then that would outweigh the personal loss.
There are other things you value besides happiness, which can also be hyper-satisfied at the cost of abandoning other values. Maybe you really love music, and funding poor Western artists instead of saving the global poor from starvation would increase the production of your favorite sub-genre by 1000x. Maybe you care about making humanity an interplanetary species, and giving your savings to SpaceX instead of the AMF could make it come true. If only that pesky emotion of empathy didn't distract you all the time.
How can you choose one value to maximize?
Furthermore, 'increasing happiness' probably isn't a monolithic value; it has divisions and subgoals. And most likely, there are also multiple emotions and instincts that make you value them. Maybe you somewhat separately value saving people's lives, separately value reducing suffering, separately value increasing some kinds of freedom or equality, separately value helping people in your own country vs. the rest of the world.
If you could choose to hyper-satisfy one sub-value at the expense of all the others, which would you choose? Saving all the lives, but letting them live in misery? Eliminating pain, but not caring when people die? Helping only people of one gender, one faith, or one ethnicity?
The answer might be to find other people who care about the same set of values as you do. Each would agree to work on one thing only, and gain the benefits of so specializing. (If you could just pool and divide your resources, the problem would be solved already.) Your emotions would still be satisfied, because you'd know you're achieving all your values: if you withdrew from the partnership, the others would adjust their funding in a way that would (necessarily) defund each project in proportion to how much you value it. So you wouldn't need to 'discard' your emotions.
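As a rough sketch, here is the pooled-budget baseline from the parenthesis above (the specialization scheme would be arranged so that withdrawing has the same net effect); the participants, causes, and weights are made up purely for illustration:

```python
# Toy model of pooled giving: each person splits their budget across causes
# in proportion to how much they value each cause.
def allocation(budgets, weights):
    """Total funding per cause once everyone's budget is pooled."""
    totals = {}
    for person, budget in budgets.items():
        w = weights[person]
        norm = sum(w.values())
        for cause, weight in w.items():
            totals[cause] = totals.get(cause, 0) + budget * weight / norm
    return totals

# Hypothetical participants and value weights (purely illustrative).
weights = {
    "alice": {"health": 3, "space": 1},
    "bob":   {"health": 1, "space": 3},
}
budgets = {"alice": 1000, "bob": 1000}

print(allocation(budgets, weights))
# -> {'health': 1000.0, 'space': 1000.0}

print(allocation({"bob": 1000}, {"bob": weights["bob"]}))
# -> {'health': 250.0, 'space': 750.0}
# If Alice withdraws, each cause loses exactly her share, i.e. in proportion
# to how much *she* values it.
```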
I do think all this is unnecessary in practice, because there aren't large benefits to be gained by discarding some emotions and values.