The idea that the more you understand someone, the less you have to empathize with them, sounds right, but it only lowers the lower bound on empathy. You can still choose to add more empathy on top if you want to.
Actually, you can decide not to empathize even with people you don't understand well, but then you don't have any decent method of predicting them, and they end up looking innately evil or something.
steven0461 (comment under "Preference For (Many) Future Worlds"):
Yvain ("Behaviorism: Beware Anthropomorphizing Humans"):
Eliezer ("Sympathetic Minds"):
So, what if the more we understand something, the less we tend to anthropomorphize it, and the less we empathize or sympathize with it? See this post for some possible examples. Or consider Yvain's blue-minimizing robot: at first we might empathize or even sympathize with its apparent goal of minimizing blue, at least until we understand that it's just a dumb program. We still sympathize with the predicament of the human-level side module inside that robot, but perhaps only until we can understand it as something other than a "human-level intelligence"? Should we keep carrying forward behaviorism's program of de-anthropomorphizing humans, knowing that it might (or probably will) reduce our empathy/sympathy towards others?