
I'm confused by this post, and don't quite understand what its argument is.

Yes, emotional empathy does not optimize effective altruism, or your moral idea of good. But this is true of lots of emotions, desires and behaviors, including morally significant ones. You're singling out emotional empathy, but what makes it special?

If I buy an expensive gift for my father's birthday because I feel that fulfills my filial duty, you probably wouldn't tell me to de-emphasize filial piety and focus more on cognitive empathy for distant strangers. In general, I don't expect you to suggest people should spend all their resources on EA. Usually people designate a donation amount and then optimize the donation target, and it doesn't much matter what fuzzies you're spending your non-donation money on. So why de-fund emotional empathy in particular? Why not purchase fuzzies by buying treats for kittens, rather than by reducing farm meat consumption?

Maybe your point is that emotional empathy feels morally significant and when we act on it, we can feel that we fulfilled our moral obligations. And then we would spend less "moral capital" on doing good. If so, you should want to de-fund all moral emotions, as long as this doesn't compromise your motivations for doing good, or your resources. Starting with most forms of love, loyalty, cleanliness and so on. Someone who genuinely feels doing good is their biggest moral concern would be a more effective altruist! But I don't think you're really suggesting e.g. not loving your family any more than distant strangers.

Maybe your main point is that empathy is a bias relative to your conscious goals:

When choosing a course of action that will make the world a better place, the strength of your empathy for victims is more likely to lead you astray than to lead you truly.

But the same can be said of pretty much any strong, morally entangled emotion. Maybe you don't want to help people who committed what you view as a moral crime, or who if helped will go on to do things you view as bad, or helping whom would send a signal to a third party that you don't want to be sent. Discounting such emotions may well match your idea of doing good. But why single out emotional empathy?

If people have an explicit definition of the good they want to accomplish, they can ignore all emotions equally. If they don't have an explicit definition, then it's just a matter of which emotions they follow in the moment, and I don't see why this one is worse than the others.

Maybe your point is that emotional empathy feels morally significant and when we act on it, we can feel that we fulfilled our moral obligations.

This actually has a name. It's called moral licensing.

Yes, emotional empathy does not optimize effective altruism, or your moral idea of good. But this is true of lots of emotions, desires and behaviors, including morally significant ones. You're singling out emotional empathy, but what makes it special?

I agree with you that nothing makes them special. But you seem to view this as a reductio ad absurdum. Doing the same for all other emotions which might bias us or get in the way of doing what’s moral would not lead to a balanced lifestyle, to say the least.

But we could just as easily bite that bullet. Why should we expect optimizing purely for morality to lead to a balanced lifestyle? Why wouldn’t the 80/20 rule apply to moral concerns? Under this view, one would do best to amputate most of the parts of one’s mind that make them human, and add parts to become a morality maximizer.

Obviously this would cause serious problems in reality, and may not actually be the best way to maximize morality even if it were possible. It's just a spherical-cow-in-a-vacuum sort of concept.

Why wouldn’t the 80/20 rule apply to moral concerns?

If the 80/20 rule applies to moral concerns, why do you think that getting rid of empathy is part of the 20% that does 80%?

Even if it were the best way to maximize morality, why would you want to maximize it?

Human values are complex. Wanting to maximize one at the expense of all others implies it already is your sole value. Of course, humans don't exactly converge on the subgoal of preserving their values, so the right words can convince (and have convinced) people to follow many single values.

Perhaps I should have been more specific than to use a vague term like "morality". Replace it with CEV, since that should be the sum total of all your values.

Most people value happiness, so let me use that as an example. Even if I value my own happiness 1000x more than other people's happiness, if there are more than 1000 people in the world, then the vast majority of my concern for happiness is still external to myself. One could do this same calculation for all other values, and add them up to get CEV, which is likely to be weighted toward others for the same reason that happiness is.

Of course, perhaps some people legitimately would prefer 3^^^3 dust specks in people's eyes to their own death. And perhaps some people's values aren't coherent, such as preferring A to B, B to C, and C to A. But if neither of these is the case, then replacing one's self with a more efficient agent maximizing the same values should be a net gain in most cases.

I don't believe a CEV exists or, if it does, that I would like it very much. Both were poorly supported assumptions of the CEV paper. For related reasons, as the Wiki says, "Yudkowsky considered CEV obsolete almost immediately after its publication in 2004". I'm not sure why people keep discussing CEV (Nick Tarleton, and other links on the Wiki page) but I assume there are good reasons.

One could do this same calculation for all other values, and add them up to get CEV,

That doesn't sound like CEV at all. CEV is about extrapolating new values which may not be held by any actual humans. Not (just) about summing or averaging the values humans already hold.

Getting back to happiness: it's easy to say we should increase happiness, all else being equal. It's not so obvious that we should increase it at the expense of other things, or by how much. I don't think happiness is substantially different in this case from morality.

Thanks for letting me know that CEV is obsolete. I'll have to look into the details. However, I don't think our disagreement is in that area.

it's easy to say we should increase happiness, all else being equal. It's not so obvious that we should increase it at the expense of other things

Agreed, but the argument works just as well for decreases in happiness as for possible increases. Even someone who valued their own happiness 1000x more than that of others would still prefer to suffer themselves than to have 1001 people suffer. If they also value their own life 1000x as much as other people's lives, they would be willing to die to prevent 1001+ deaths. If you added up the total number of utils of happiness according to their utility function, 99.9999% of the happiness they value would be happiness in other people, assuming there are on the order of billions of people and that they bite the bullet on the repugnant conclusion. (For simplicity's sake.)
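To make that last figure concrete, here is a minimal sketch of the arithmetic (a toy model, assuming one unit of happiness per person, a 1000x weight on one's own, and a rough world population of 7 billion; the specific numbers are illustrative assumptions, not taken from the comment above):

```python
# Toy check of the claim above: even with a 1000x bias toward your own
# happiness, almost all the happiness you value sits in other people.
# Assumptions: one "unit" of happiness per person, ~7 billion people.
own_weight = 1000                 # my own happiness, weighted 1000x
population = 7_000_000_000        # rough world population (assumption)

total_weighted = own_weight + (population - 1)  # everyone else weighted 1x
others_share = (population - 1) / total_weighted

print(f"{others_share:.5%}")      # ~99.99999%, in line with the 99.9999% figure
```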

But all that's really just to argue that there are things worth dying for, in the case of many people. My central argument looks something like this:

There are things worth dying for. Losing something valuable, like by suppressing a biased emotion, is less bad than dying. If suppressing emotional empathy boosts the impact of cognitive empathy (I'm not sure it does) enough to achieve something worth dying for, then one should do so.

But I'm not sure things are so dire. The argument gets more charitable when re-framed as boosting cognitive empathy instead. In reality, I think what's actually going on is that empathy triggers either something like near-mode thinking or far-mode thinking, and these two possibilities are what lead to "emotional empathy" and "cognitive empathy". If so, then "discarding [emotional] empathy" seems far less worrying. It's just a cognitive habit. In principle though, if sacrificing something more actually were necessary for the greater good, then that would outweigh the personal loss.

There are other things you value besides happiness, which can also be hyper-satisfied at the cost of abandoning other values. Maybe you really love music, and funding poor Western artists instead of saving the global poor from starvation would increase the production of your favorite sub-genre by 1000x. Maybe you care about making humanity an interplanetary species, and giving your savings to SpaceX instead of the AMF could make it come true. If only that pesky emotion of empathy didn't distract you all the time.

How can you choose one value to maximize?

Furthermore, 'increasing happiness' probably isn't a monolithic value, it has divisions and subgoals. And most likely, there are also multiple emotions and instincts that make you value them. Maybe you somewhat separately value saving people's lives, separately value reducing suffering, separately value increasing some kinds of freedom or equality, separately value helping people in your own country vs. the rest of the world.

If you could choose to hyper-satisfy one sub-value at the expense of all the others, which would you choose? Saving all the lives, but letting them live in misery? Eliminating pain, but not caring when people die? Helping only people of one gender, or of one faith, or one ethnicity?

The answer might be to find other people who care about the same set of values as you do. Each will agree to work on one thing only, and gain the benefits of so specializing. (If you could just pool and divide your resources the problem would be solved already.) But your emotions would still be satisfied from knowing you're achieving all your values; if you withdraw from the partnership, the others would adjust their funding in a way that would (necessarily) defund each project proportionally to how much you value it. So you wouldn't need to 'discard' your emotions.

I do think all this is unnecessary in practice, because there aren't large benefits to be gained by discarding some emotions and values.

I agree with you on the complexity of value. However, perhaps we are imagining the ideal way of aggregating all those complex values differently. I absolutely agree that the simple models I keep proposing for individual values are spherical cows, and ignore a lot of nuance. I just don't see things working radically differently when the nuance is added in, and the values aggregated.

That sounds like a really complex discussion though, and I don't think either of us is likely to convince the other without a novel's worth of text. However, perhaps I can convince you that you already are suppressing some impulses, and that this isn't always disastrous. (Though it certainly can be, if you choose the wrong ones.)

there aren't large benefits to be gained by discarding some emotions and values.

Isn't that what akrasia is?

If I find that part of me values one marshmallow now at the expense of 2 later, and I don't endorse this upon reflection, wouldn't it make sense to try and decrease such impulses? Removing them may be unnecessarily extreme, but perhaps that's what some nootropics do.

Similarly, if I were to find that I gained a sadistic pleasure from something, I wouldn't endorse that outside of well defined S&M. If I had an alcoholism problem, I'd similarly dislike my desire for alcohol. I suspect that strongly associating cigarettes with disgust is helpful in counteracting the impulse to smoke.

If I understand correctly, some Buddhists try to eliminate suffering by eliminating their desires. I find this existentially terrifying. However, I think that boosting and suppressing these sorts of impulses is precisely what psychologists call conditioning. A world where no one refines or updates their natural impulses is just as unsettling as the Buddhist suppression of all values.

So, even if you don't agree that there are cases where we should suppress certain pro-social emotions, do you agree with my characterization of antisocial emotions and grey area impulses like akrasia?

(I'm using values, impulses, emotions, etc fairly interchangeably here. If what I'm saying isn't clear, let me know and I can try to dig into the distinctions.)

I think I understand your point better now, and I agree with it.

My conscious, deliberative, speaking self definitely wants to be rid of akrasia and to reduce time discounting. If I could self modify to remove akrasia, I definitely would. But I don't want to get rid of emotional empathy, or filial love, or the love of cats that makes me sometimes feed strays. I wouldn't do it if I could. This isn't something I derive from or defend by higher principles, it's just how I am.

I have other emotions I would reduce or even remove, given the chance. Like anger and jealousy. These can be moral emotions no less than empathy - righteous anger, justice and fairness. It stands to reason some people might feel this way about any other emotion or desire, including empathy. When these things already aren't part of the values their conscious self identifies with, they want to reduce or discard them.

And since I can be verbally, rationally convinced to want things, I can be convinced to want to discard emotions I previously didn't.

It's a good thing that we're very bad at actually changing our emotional makeup. The evolution of values over time can lead to some scary attractor states. And I wouldn't want to permanently discard one feeling during a brief period of obsession with something else! Because actual changes take a lot of time and effort, we usually only go through with the ones we're really resolved about, which is a good condition to have. (Also, how can you want to develop an emotion you've never had? Do you just end up with very few emotions?)

Agreed. I'll add two things that support your point, though.

First, the Milgram experiment seems to suggest that even seemingly antisocial impulses like stubbornness can be extremely valuable. Sticking to core values rather than conforming likely led more people to resist the Nazis.

Also, I didn't bring it up earlier because it undermines my point, but apparently sociopaths have smaller amygdalas than normal, while kidney donors have larger ones, and empathy is linked to that region of the brain. So, we probably could reduce or remove emotional empathy and/or cognitive empathy if we really wanted to. However, I'm not at all inclined to inflict brain damage on myself, even if it could somehow be targeted enough to not interfere with cognitive empathy or anything else.

So, more generally, even reversible modification worries me, and the idea of permanently changing our values scares the shit out of me. For humanity as a whole, although not necessarily small groups of individuals as a means to an end, I don't endorse most modifications. I would much rather we retain a desire we approve of but which the laws of physics prevent us from satisfying, than to remove that value and be fulfilled.

I think this is the first article in a long time that straight up changed my opinion in a significant way. I always considered empathy a universally good thing – in all forms. In fact I held it as one of the highest values. But the logic of the article is hard to argue with.

I still tentatively disagree that it [emotional empathy] is inherently bad. Following what I read, I'd say it's harmful because it's overvalued/misunderstood. The solution would be to recognize that it's an egoistical thing – as I'm writing this I can confirm that I think this now. Whereas cognitive empathy is the selfless thing.

Doing more self-analysis, I think I already understood this on some level, but I was holding the concept of empathy in such high regard that I wasn't able to consciously criticize it.

I think this article is something that people outside of this community really ought to read.

I think this article is something that people outside of this community really ought to read.

Interesting. Why people outside of this community? I find it is actually the LW and EA communities that place an exorbitant amount of emphasis on empathy. Most of those I know outside of the rationalist community understand the healthy tradeoff between charitable action and looking out for oneself.

My observation is that people who are smart generally try to live more ethically, but usually have skewed priorities; e.g. they'll try to support the artists they like and to be decent in earning their money, when they'd fare better just worrying less about all that and donating a bit to the right place every month. Quantitative utility arguments are usually met with rejection.

LW's, on the other hand, seem to be leaning in that direction anyway. Though I'm fairly new to the community, so I could be wrong.

I wouldn't show it to people who lack a "solid" moral base in the first place. They would probably fare better keeping every shred of empathy they have (thinking of how much discrimination still exists today).

It sounds like you are still clinging to the idea that emotional empathy is a qualitatively good thing... motivated thinking?

This doesn't entirely match my impression of the LW community. (I know much less about the non-LW EA community.) What are you basing this on? Were there major LW posts about empathy, or LW Survey questions, or something else?

Thank you, this is the biggest compliment I could hope for.

I worry whenever I write anything that could fall into bravery debate territory. I worry that for some readers it would sound stale and obvious, or be the precise opposite of the advice they need, while others would reject it in disgust after reading two lines. I write about things that hit me in the right spot: ideas I was on the precipice of and something pushed me over. And then I hope that I'll find at least a few readers who are in the same spot I am.

So, if emotional empathy should be discarded, why should I help all those strangers? The only answer that the link suggests is "social propriety".

But social propriety is a fickle thing. Sometimes it asks you to forgive the debts of the destitute, and sometimes it asks you to burn the witches. Without empathy, why shouldn't you cheer at the flames licking the evil witch's body? Without empathy, if there are some kulaks or Juden standing in the way of the perfect society, why shouldn't you kill them in the most efficient manner at your disposal?

gjm

The article distinguishes between "emotional empathy" ("feeling with") and "cognitive empathy" ("feeling for"), and it's only the former that it (cautiously) argues against. It argues that emotional empathy pushes you to follow the crowd urging you to burn the witches, not merely out of social propriety but through coming to share their fear and anger.

So I think the author's answer to "why help all those strangers?" (meaning, I take it, something like "with what motive?") is "cognitive empathy".

I'm not altogether convinced by either the terminology or the psychology, but at any rate the claim here is not that we should be discarding every form of empathy and turning ourselves into sociopaths.

With empathy, it turns out that Germans were much more likely to empathize with other Germans than with Juden. With empathy, everyone was cheering as the witches burned.

Moral progress is the progress of knowledge. Slavers in the antebellum South convinced themselves that they were doing the slaves a favor because the latter couldn't survive by themselves in an advanced economy. A hundred years later, they changed their minds more than they changed their hearts. We (some of us) have learned that coercion is almost always bad, that world-saving plans involving a lot of coercion tend to fail, and that preserving people's freedom (blacks, witches and Jews included) increases everyone's welfare.

Is empathy part of one's motivation to even pursue moral progress? Perhaps, but if so it's a very deep part of us that will never be discarded. All I'm saying is that whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.

With empathy, it turns out that Germans were much more likely to empathize with other Germans than with Juden. With empathy, everyone was cheering as the witches burned.

This first required, basically, deciding that something which looks like a person is actually not one, and so is not worthy of empathy. That is not a trivial barrier to overcome. Without empathy to start with, burning witches is much easier.

Moral progress is the progress of knowledge.

This is a very... contentious statement. There are a lot of interesting implications.

All I'm saying is that whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.

And that is what I'm strongly disagreeing with.

You are essentially saying that once you've decided on a course of action, you should turn yourself into a sociopath.

gjm

You are essentially saying that once you've decided on a course of action, you should turn yourself into a sociopath.

Sounds terrible! But, wait, once you've decided on a course of action. The main problem with sociopaths is that they do horrible things and do them very effectively, right? Someone who chooses what to do like a non-sociopath and then executes those plans like a sociopath may sound scary and creepy and all, but it's not at all clear that it's actually a bad idea.

(I am not convinced that Jacobian is actually arguing that you decide on a course of action and then turn yourself into a sociopath. But even that strawman version of what he's saying is, I think, much less terrible than you obviously want readers to think it is.)

But, wait, once you've decided on a course of action.

You are misreading Jacobian. Let me quote (emphasis mine):

whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.


but it's not at all clear that it's actually a bad idea.

Such people are commonly called "fanatics".

gjm

You are misreading Jacobian

Plausible guess, but actually my error was different: I hadn't noticed the bit of Jacobian's comment you quote there; I read what you wrote and made the mistake of assuming it was correct.

Those words "once you've decided on a course of action" were your words. I just quoted them. It does indeed appear that they don't quite correspond to what Jacobian wrote, and I should have spotted that, but the original misrepresentation of Jacobian's position was yours rather than mine.

(But I should make clear that you misrepresented Jacobian's position by making it look less unreasonable and less easy for you to attack, so there's something highly creditable about that.)

I am afraid I cannot claim here any particularly noble motives.

In Jacobian's text there are, basically, two decision points: the first one is deciding to do good, and the second one is deciding on a course of action. You lose empathy in between them. There are (at least) two ways to interpret this. In one, when you decide to "do good", you make just a very generic decision to do some unspecified good. All the actual choices are at the "course of action" point. In the other, at the first decision point you already decide what particular good you want to work towards, and then the second decision point is just the details of implementation.

I didn't want to start dissecting Jacobian's post at this level of detail, so I basically simplified it by saying that you lose your empathy before making some (but not necessarily all) choices. I don't know if you want to classify it as "technically incorrect" :-/

You still haven't made a single argument in favor of emotional empathy, other than conflating lack of emotional empathy with, in order of appearance: Stalinism, Nazism, witch hunting, fanaticism. None of this name calling was supported by any evidence re:empathy.

The argument that I was making or, maybe, just implying is a version of the argument for deontological ethics. It rests on two lemmas: (1) You will make mistakes; (2) No one is a villain in his own story.

To unroll a bit, people who do large-scale evil do not go home to stroke a white cat and cackle at their own evilness. They think they are the good guys and that they do what's necessary to achieve their good goals. We think they're wrong, but that's an outside view. As has been pointed out, the road to hell is never in need of repair.

Given this, it's useful to have firebreaks, boundaries which serve to stop really determined people who think they're doing good from doing too much evil. A major firebreak is emotional empathy -- it serves as a check on runaway optimization processes which are, of course, subject to the Law of Unintended Consequences.

And, besides, I like humans more than I like optimization algorithms :-P

How about: doing evil (even inadvertently) requires coercion. Slavery, Nazis, tying a witch to a stake, you name it. Nothing effective altruists currently do is coercive (except to mosquitoes), so we're probably good. However, if we come up with a world improvement plan that requires coercing somebody, we should A) hear their take on it and B) empathize with them for a bit. This isn't a 100% perfect plan, but it seems to be a decent framework.

gjm

Some argument along these lines may work; but I don't believe that doing evil requires coercion.

Suppose that for some reason I am filled with malice against you and wish to do you harm. Here are some things I can do that involve no coercion.

I know that you enjoy boating. I drill a small hole in your boat, and the next time you go out on the lake your boat sinks and you die.

I know that you are an alcoholic. I leave bottles of whisky around places you go, in the hope that it will inspire you to get drunk and get your life into a mess.

The law where we live is (as in many places) rather overstrict and I know that you -- like almost everyone in the area -- have committed a number of minor offences. I watch you carefully, make notes, and file a report with the police.

I get to know your wife, treat her really nicely, try to give her the impression that I have long been nursing a secret yearning for her. I hope that some day if your marriage hits an otherwise-navigable rocky patch, she will come to me for comfort and (entirely consensually) leave you for me.

I discover your political preferences and make a point of voting for candidates whose values and policies are opposed to them.

I put up posters near where you live, accusing you of horrible things that you haven't in fact done.

I put up posters near where you live, accusing you of horrible things that you have in fact done.

None of these involves coercion unless you interpret that word very broadly. Several of them don't, so far as I can see, involve coercion no matter how broadly you interpret it.

So if you want to be assured of not doing evil, you probably need more firebreaks besides "no coercion".

I agree with gjm that evil does not necessarily require coercion. Contemplate, say, instigating a lynching.

The reason EAs don't do any coercion is because they don't have any power. But I don't see anything in their line of reasoning which would stop them from coercing other people in case they do get some power. They are not libertarians.

I completely agree: asking people to discard moral emotions is rather like asking rational agents to discard top goals!

Wikipedia says that "body-counts of modern witch-hunts by far exceed those of early-modern witch-hunting", referencing: Behringer, Wolfgang (2004). Witches and Witch-Hunts: A Global History. Cambridge: Polity Press.

My point being that our emotional empathy is already out of tune with social propriety, if you consider the social norms typical around the world and not just among rich, Western populations. Let alone the norms common in the West for most of its existence, and so perhaps again in the future.

This post doesn't have much that addresses the "expanding circle" case for empathy, which goes something like this:

Empathy is a powerful tool for honing in on what matters in the world. By default, people tend to use it too narrowly. We can see that in many of the great moral failings of the past (like those mentioned here) which involved people failing to register some others as an appropriate target of empathy, or doing a lousy job of empathizing which involved making up stories more than really putting oneself in their shoes, or actively working to block empathy by dehumanizing them and evoking disgust, fear, or other emotions. But over time there has been moral progress as societies have expanded the circle of who people habitually feel empathy for, and developed norms and institutions to reflect their membership in that circle of concern. And it is possible to do better than your societal default if you cultivate your empathy, including the ability to notice the blind spots where you could be empathizing but are not (and the ability to then direct some empathy towards those spots). This could include people who are far away or across some boundary, people in an outgroup who you might feel antagonistic towards, people who have been accused of some misdeed, people and nonhumans that are very different from you, those who are not salient to you at the moment, those who don't exist yet, those who are only indirectly affected by your actions, etc.

I am very much in favor of "expanding the circle of empathy". My thesis is that this consists of supplanting your emotional empathy (who your heart beats in harmony with naturally) with cognitive empathy (who your brain tells you is worthy of empathy even if you don't really feel their Tajik feelings).

I think that "supplant" is not the right move. I do agree that having a wide circle does not require going around feeling lots of emotional empathy for everyone, but I think that emotional empathy helps with getting the circle to expand. A one-time experience of emotional empathy (e.g., from watching a movie about an Iranian family) can lead to a permanent expansion in the circle of concern (e.g., thinking of the Tajiks as people who count, even if you don't actively feel emotional empathy for them in the moment).

A hypothesis: counterfactual emotional empathy is important for where you place your circle of concern. If I know that I would feel emotional empathy for someone if I took the time to understand their story from their perspective, then I will treat them as being inside the circle even if I don't actually go through the effort to get their point of view and don't have the experience of feeling emotional empathy for them.

This is a tangent, but:

You know that “four delicious tiny round brown glazed Italian chocolate cookies” is the only proper way to order these adjectives.

There are definitely some ordering rules, but I am not convinced they are nearly as universal or as complex as this suggests. See the Language Log on this subject.

This is actually something I've been trying to articulate for a long time. It's fantastic to finally have a scientific name for it (emotional vs. cognitive empathy), along with a significantly different perspective.

I'd be inclined to share this outside the rationalist community. Ideally, I or someone else would weave most of the same concepts into a piece with intellectuals in general as the target audience. (NOT someone associated directly with EA though, and not with too much direct discussion of EA, because we wouldn't want to taint it as a bunch of straw Vulcans.)

However, this is well written and might suffice for that purpose. The only things I think would confuse random people linked to this would be the little Hanson sitting on your shoulder, the EY empathy/saving the world bit, and the mention of artificial intelligence. It might also not be clear that your argument is quite narrow in scope. (You're only criticizing some forms of emotional empathy, not all forms, and not cognitive empathy. You aren't, for instance, arguing against letting emotional empathy encourage us to do good in the first place, but only against letting it overpower the cognitive empathy that would let us do good effectively.)

So, does anyone have any thoughts as to whether linking non-nerds to this would still be a net positive? I guess the value of information is high here, so I can share with a few friends as an experiment. Worst case is I spend a few idiosyncrasy credits/weirdness points.

I'm actually not a fan of the bit I've written about Eliezer, I should probably remove it if that will allow you to share it with more people. That paragraph doesn't do a lot for the piece.