This is actually something I've been trying to articulate for a long time. It's fantastic to finally have a scientific name for it (emotional vs. cognitive empathy), along with a significantly different perspective.
I'd be inclined to share this outside the rationalist community. Ideally, I or someone else would weave most of the same concepts into a piece aimed at intellectuals in general. (NOT someone associated directly with EA, though, and not with too much direct discussion of EA, because we wouldn't want to taint it by making us look like a bunch of straw Vulcans.)
However, this is well written and might suffice for that purpose. The only things I think would confuse random people linked to this are the little Hanson sitting on your shoulder, the EY empathy/saving-the-world bit, and the mention of artificial intelligence. It might also not be clear that your argument is quite narrow in scope. (You're only criticizing some forms of emotional empathy, not all forms, and not cognitive empathy. You aren't, for instance, arguing against letting emotional empathy encourage us to do good in the first place, but only against letting it overpower the cognitive empathy that would let us do good effectively.)
So, does anyone have any thoughts as to whether linking non-nerds to this would still be a net positive? I guess the value of information is high here, so I can share with a few friends as an experiment. Worst case is I spend a few idiosyncrasy credits/weirdness points.
The argument that I was making, or maybe just implying, is a version of the argument for deontological ethics. It rests on two lemmas: (1) you will make mistakes; (2) no one is a villain in his own story.
To unroll a bit, people who do large-scale evil do not go home to stroke a white cat and cackle at their own evilness. They think they are the good guys and that they do what's necessary to achieve their good goals. We think they're wrong, but that's an outside view. As has been pointed out, the road to hell is never in need of repair.
Given this, it's useful to have firebreaks: boundaries which serve to stop really determined people who think they're doing good from doing too much evil. A major firebreak is emotional empathy -- it serves as a check on runaway optimization processes, which are, of course, subject to the Law of Unintended Consequences.
And, besides, I like humans more than I like optimization algorithms :-P
How about this: doing evil (even inadvertently) requires coercion. Slavery, Nazis, tying a witch to a stake, you name it. Nothing effective altruists currently do is coercive (except to mosquitoes), so we're probably good. However, if we come up with a world-improvement plan that requires coercing somebody, we should (a) hear their take on it and (b) empathize with them for a bit. This isn't a 100% perfect plan, but it seems like a decent framework.