VAuroch comments on On Caring - Less Wrong
I accept all the arguments for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I'm in Daniel's position up through chunk 4, and reach the state of mind where
and find it literally unbearable. All of a sudden, it's clear that to be a good person is to accept the weight of the world on your shoulders. This is where my path diverges; EA says "OK, then, that's what I'll do, as best I can"; from my perspective, it's swallowing the bullet. At this point, your modus ponens is my modus tollens; I can't deal with what the argument would require of me, so I reject the premise. I concluded that I am not a good person and won't be for the foreseeable future, and limited myself to the weight of my chosen community and narrowly-defined ingroup.
I don't think you're wrong to try to convert people to EA. It does bear remembering, though, that not everyone is equipped to deal with this outlook, and some people will find that trying to shut up and multiply is lastingly unpleasant, such that an altruistic outlook becomes significantly aversive.
This is why I prefer to frame EA as something exciting, not burdensome.
Exciting vs. burdensome seems to be a matter of how you think about success and failure. If you think "we can actually make things better!", it's exciting. If you think "if you haven't succeeded immediately, it's all your fault", it's burdensome.
This just might have more general application.
If I'm working at my capacity, I don't see how it's my fault for not having the world fixed immediately. I can't do any more than I can do and I don't see how I'm responsible for more than what my efforts could change.
From my perspective, it's "I have to think about all the problems in the world and care about them." That's burdensome. So instead I look vaguely around for 100% solutions to these problems, things where I don't actually need to think about people currently suffering (as I would in order to determine how effective incremental solutions are), things sufficiently nebulous and far-in-the-future that I don't have to worry about connecting them to people starving in distant lands.
Do we have any data on which EA pitches tend to be most effective?
I've read that. It's definitely been the best argument for convincing me to try EA that I've encountered. Not convincing, currently, but more convincing than anything else.
I've seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.
Once you've decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?
Peter Singer, to take one prominent example, argues that whether you do or not (and most people do), morally you cannot. To buy an expensive pair of shoes (he says) is morally equivalent to killing a child. Yvain has humorously suggested measuring sums of money in dead babies. At least, I think he was being humorous, but he might at the same time be deadly serious.
I always find it curious how people forget that equality is symmetrical and works in both directions.
So, killing a child is morally equivalent to buying an expensive pair of shoes? That's interesting...
See also http://xkcd.com/1035/, last panel.
One man's modus ponens... I don't lose much sleep when I hear that a child I had never heard of before was killed.
No, except by interpreting the words "morally equivalent" in that sentence in a way that nobody does, including Peter Singer. Most people, including Peter Singer, think of a pair of good shoes (or perhaps the comparison was to an expensive suit, it doesn't matter) as something nice to have, and the death of a child as a tragedy. These two values are not being equated. Singer is drawing attention to the causal connection between spending your money on the first and not spending it on the second. This makes buying the shoes a very bad thing to do: its value is that of (a nice thing) - (a really good thing); saving the child has the value (a really good thing) - (a nice thing).
The only symmetry here is that of "equal and opposite".
Did anyone actually need that spelled out?
These verbal contortions do not look convincing.
The claimed moral equivalence is between buying shoes and killing -- not saving -- a child. It's also claimed equivalence between actions, not between values.
A lot of people around here see little difference between actively murdering someone and standing by while someone is killed while we could easily save them. This runs contrary to the general societal views that say it's much worse to kill someone by your own hand than to let them die without interfering. Or even if you interfere, but your interference is sufficiently removed from the actual death.
For instance, what do you think George Bush Sr's worst action was? A war? No; he enacted an embargo against Iraq that extended over a decade and restricted basic medical supplies from going into the country. The infant mortality rate jumped up to 25% during that period, and other people didn't fare much better. And yet few people would think an embargo makes Bush more evil than the killers at Columbine.
This is utterly bizarre on many levels, but I'm grateful too -- I can avoid thinking of myself as a bad person for not donating any appreciable amount of money to charity, when I could easily pay to cure a thousand people of malaria per year.
When you ask how bad an action is, you can mean (at least) two different things.
Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn't some specific person who's dying. So actually killing someone is "worse", if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there's no difference in harm done.
In some contexts this sort of character evaluation really is what you care about. If you want to know whether someone's going to be safe and enjoyable company if you have a drink with them, you probably do prefer someone who'd put in place an embargo that kills millions rather than someone who would shoot dozens of schoolchildren.
That's perfectly consistent with (1) saying that in terms of actual harm done spending money on yourself rather than giving it to effective charities is as bad as killing people, and (2) attempting to choose one's own actions on the basis of harm done rather than evidence of character.
But this recurses until all the leaf nodes are "how much harm does it do?", so it's exactly equivalent to asking how much harm we expect this person to inflict over the course of their life.
By the same token, it's easier to kill people far away and indirectly than up close and personal, so someone using indirect means and killing lots of people will continue to have an easy time killing more people indirectly. So this doesn't change the analysis that the embargo was ten thousand times worse than the school shooting.
For an idealized consequentialist, yes. However, most of us find that our moral intuitions are not those of an idealized consequentialist. (They might be some sort of evolution-computed approximation to something slightly resembling idealized consequentialism.)
That depends on the opportunities the person in question has to engage in similar indirectly harmful behaviour. GHWB is no longer in a position to cause millions of deaths by putting embargoes in place, after all.
For the avoidance of doubt, I'm not saying any of this in order to deny (1) that the embargo was a more harmful action than the Columbine massacre, or (2) that the sort of consequentialism frequently advocated (or assumed) on LW leads to the conclusion that the embargo was a more harmful action than the Columbine massacre. (It isn't perfectly clear to me whether you think 1, or think 2-but-not-1 and are using this partly as an argument against full-on consequentialism.)
But if the question is "who is more evil, GHWB or the Columbine killers?", the answer depends on what you mean by "evil" and most people most of the time don't mean "causing harm"; they mean something they probably couldn't express in words but that probably ends up being close to "having personality traits that in our environment of evolutionary adaptedness correlate with being dangerous to be closely involved with" -- which would include, e.g., a tendency to respond to (real or imagined) slights with extreme violence, but probably wouldn't include a tendency to callousness when dealing with the lives of strangers thousands of miles away.
Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive.
I'm somewhat persuaded by arguments that choices not made, which have consequences, like X preventably dying, can have moral costs.
Not INFINITELY EXPLODING costs, which is what you need in order to experience the full brunt of responsibility of "We are the last two people alive, and you're dying right in front of me, and I could help you, but I'm not going to." when deciding to buy shoes or not, when there are 7 billion of us, and you're actually dying over there, and someone closer to you is not helping you.
In case anyone else was curious about this, here's a quote:
Oops.
Under utilitarianism, every instance of buying an expensive pair of shoes is the same as killing a child, but not every case of killing a child is equivalent to buying an expensive pair of shoes.
Are some cases of killing a child equivalent to buying expensive shoes?
Those in which the way you kill the child is by spending money on luxuries rather than saving the child's life with it.
Do elaborate. How exactly does that work?
For example, I have some photographic equipment. When I bought, say, a camera, did I personally kill a child by doing this?
(I have the impression that you're pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we're discussing. But I'm going to take what you say at face value anyway.)
The context here is the idea (stated forcefully by Peter Singer, but he's by no means the first) that you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things, and that spending money on luxuries is ipso facto choosing not to give it to effective charities.
In which case: if you spent, say, $2000 on a camera (some cameras are much cheaper, some much more expensive) then that's comparable to the estimated cost of saving one life in Africa by donating to one of the most effective charities. In which case, by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died.
(Not necessarily specifically a child. It may be more expensive to save children's lives, in which case it would need to be a more expensive camera.)
Of course there isn't a specific child you have killed all by yourself personally, but no one suggested there is.
So, that was the original claim that Richard Kennaway described. Your objection to this wasn't to argue with the moral principles involved but to suggest that there's a symmetry problem: that "killing a child is morally equivalent to buying an expensive luxury" is less plausible than "buying an expensive luxury is morally equivalent to killing a child".
Well, of course there is a genuine asymmetry there, because there are some quantifiers lurking behind those sentences. (Singer's claim is something like "for all expensive luxury purchases, there exists a morally equivalent case of killing a child"; your proposed reversal is something like "for all cases of killing a child, there exists a morally equivalent case of buying an expensive luxury".) Hence pianoforte611's response.
You seemed happy to accept an amendment that attempts to fix up the asymmetry. And (I assumed) you were still assuming for the sake of argument the Singer-ish position that buying luxury goods is like killing children, and aiming to show that there's an internal inconsistency in the thinking of those who espouse it because they won't accept its reversal.
But I think there isn't any such inconsistency, because to accept the Singer-ish position is to see spending money on luxuries as killing people because the money could instead have been used to save them, which means that there are cases in which one kills a child by spending money on luxuries.
Your argument against the reversed Singerian principle seems to me to depend on assuming that the original principle is wrong. Which would be fair enough if you weren't saying that what's wrong with the original principle is that its reversal is no good.
Nope. I express my rhetorical contempt in, um, more obvious ways. It's not exactly that I don't understand, it's rather that I see multiple ways of proceeding and I don't know which one you have in mind (you, of course, do).
By the way, as a preface I should point out that we are not discussing "right" and "wrong" which, I feel, are anti-useful terms in this discussion. Morals are value systems and they are not coherent in humans. We're talking mostly about implications of certain moral positions and how they might or might not conflict with other values.
Yes, I accept that.
Not quite. I don't think you can make a causal chain there. You can make a probabilistic chain of expectations with a lot of uncertainty in it. Averages are not equal to specific actions -- for a hypothetical example, choosing a lifestyle which involves enough driving so that in 10 years you drive the average number of miles per traffic fatality does not mean you kill someone every 10 years.
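The distinction between an average and a specific act can be made concrete with a toy calculation. The numbers below are invented for illustration, not real accident statistics; the point is that an expected value of one fatality still leaves a substantial chance of zero, if fatalities are modeled as (roughly) Poisson events:

```python
import math

# Hypothetical population-level rate: one traffic fatality per this many
# miles driven (an invented figure, purely for illustration).
miles_per_fatality = 100_000_000

# Suppose you drive exactly that many miles over 10 years.
miles_driven = 100_000_000

# Your *expected* number of fatalities is 1.0 ...
expected_deaths = miles_driven / miles_per_fatality
print(expected_deaths)  # -> 1.0

# ... but under a Poisson model, the chance you caused *no* deaths at all
# is exp(-1), about 37%. Expectation 1 is not the same as certainty 1.
p_no_deaths = math.exp(-expected_deaths)
print(round(p_no_deaths, 2))  # -> 0.37
```

This is the sense in which "on average you kill someone every 10 years" differs from "you have killed someone": the former is a statement about expectations, the latter about a specific causal chain.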
However in this thread I didn't focus on that issue -- for the purposes of this argument I accepted the thesis and looked into its implications.
Correct.
It's not an issue of plausibility. It's an issue of bringing to the forefront the connotations and value conflicts.
Singer goes for shock value by putting an equals sign between what is commonly considered heinous and what's commonly considered normal. He does this to make the normal look (more) heinous, but you can reduce the gap from both directions -- making the heinous more normal works just as well.
I am not exactly proposing it, I am pointing out that the weaker form of this reversal (for some cases) logically follows from Singer's proposition, and if you don't think it does, I would like to know why it doesn't.
Well, to accept the Singer position means that you kill a child every time you spend the appropriate amount of money (and I don't see what "luxuries" have to do with it -- you kill children by failing to max out your credit cards as well).
In common language, however, "killing a child" does not mean "fail to do something which could, we think, on the average, avoid one death somewhere in Africa". "Killing a child" means doing something which directly and causally leads to a child's death.
No. I think the original principle is wrong, but that's irrelevant here -- in this context I accept the Singerian principle in order to more explicitly show the problems inherent in it.
Now that the funding gap of the AMF has closed, I'm not sure this is still the case.
Presumably if you stole a child's lunch money and bought a pair of shoes with it
The biggest problem I have with 'dead baby' arguments is that I value babies significantly below high-functioning adults. Given the opportunity to save one or the other, I would pick the adult, and I don't find that babies have a whole lot of intrinsic value until they're properly programmed.
If you don't take care of babies, you'll eventually run out of adults. If you don't have adults, the babies won't be taken care of.
I don't know what a balanced approach to the problem would look like.
NancyLebovitz:
RichardKennaway:
Richard's question is a good one, but even if there's no good answer it's a psychological fact that people can get convinced that they should redirect their existing donations to cost-effective charities but not that charity should crowd out other spending - and that this is an easier sell. So the framing of EA that Nancy describes has practical value.
I'm not sure why one would optimize your charitable donations for QALYs/utilons if your goal wasn't improving the world. If you care about acquiring warm fuzzies, and donating to marginally improve the world is a means toward that end, then EA doesn't seem to affect you much, except by potentially guilting you into no longer considering lesser causes virtuous in the sense that creates warm fuzzies for you.
For me the idea of EA just made those lesser causes not generate fuzzies anymore, no guilt involved. It's difficult to enjoy a delusion you're conscious of.
Understanding the emotional pain of others, on a non-verbal level, can lead in at least two directions, which I've usually seen called "sympathy" and "personal distress" in the psych literature. Personal distress involves seeing the problem (primarily, or at least importantly) as one's own. Sympathy involves seeing it as that person's. Some people, including Albert Schweitzer, claim(ed) to be able to feel sympathy without significant personal distress, and as far as I can see that seems to be true. Being more like them strikes me as a worthwhile (sub)goal. (Until I get there, if ever - I feel your pain. Sorry, couldn't resist.)
Hey I just realized - if you can master that, and then apply the sympathy-without-personal-distress trick to yourself as well, that looks like it would achieve one of the aims of Buddhism.
If you do this, would not the result be that you do not feel distress from your own misfortunes? And if you don't feel distress, what, exactly, is there to sympathize with?
Wouldn't you just shrug and dismiss the misfortune as irrelevant?
If you could switch off pain at will would you consider the tissue damage caused by burning yourself irrelevant?
I would not. This is a fair point.
Follow-up question: are all things that we consider misfortunes similar to the "burn yourself" situation, in that there is some sort of "damage" that is part of what makes the misfortune bad, separately from and additionally to the distress/discomfort/pain involved?
Consider a possible invention called a neuronic whip (taken from Asimov's Foundation series). The neuronic whip, when fired at someone, does no direct damage but triggers all of the "pain" nerves at a given intensity.
Assume that Jim is hit by a neuronic whip, briefly and at low intensity. There is no damage, but there is pain. Because there is pain, Jim would almost certainly consider this a misfortune, and would prefer that it had not happened; yet there is no damage.
So, considering this counterexample, I'd say that no, not every possible misfortune includes damage. Though I imagine that most do.
No need for sci-fi.
Much of what could be called damage in this context wouldn't necessarily happen within your body, you can take damage to your reputation for example.
You can certainly be deluded about receiving damage especially in the social game.
That is true; but it's enough to have a single counterexample, so I can simply specify the neuronic whip being used under circumstances where there is no social damage (e.g. the neuronic whip was discharged accidentally, and no one knew Jim was there to be hit by it).
Yes. I didn't mean to refute your idea in any way and quite liked it. Forgot to upvote it though. I merely wanted to add a real world example.
Let's say you cut your finger while chopping vegetables. If you don't feel distress, you still feel the pain. But probably less pain: the CNS contains a lot of feedback loops affecting how pain is felt. For example, see this story from Scientific American. So sympathize with whatever relatively-attitude-independent problem remains, and act upon that. Even if there would be no pain and just tissue damage, as hyporational suggests, that could be sufficient for action.
Huh, that sounds like the sympathy/empathy split, except I think reversed; empathy is feeling pain from others' distress vs. sympathy is understanding others' pain as it reflects your own distress. Specifically mitigating 'feeling pain from others' distress' as applied to a broad sphere of 'others' has been a significant part of my turn away from an altruistic outlook; this wasn't hard, since human brains naturally discount distant people and I already preferred getting news through text, which keeps distant people's distress viscerally distant.
Here's a weird reframing. Think of it like playing a game like Tetris or Centipede. Yep, you are going to lose in the end, but that's not an issue. The idea is to score as many points as possible before that happens.
If you save someone's life on expectation, you save someone's life on expectation. This is valuable even if there are lots more people whose lives you could hypothetically save.
But you don't have to bear it alone. It's not as if one person has to care about everything (nor each single person has to care for all).
Maybe the multiplication (in the example the care for a single bird multiplied by the number of birds) should be followed by a division by the number of persons available to do the caring (possibly adjusted by the expected amount of individual caring).
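The multiply-then-divide proposal above can be sketched as a toy calculation. The function and its numbers are hypothetical, purely to make the arithmetic of the suggestion concrete:

```python
# Toy model of "shared caring": scale one unit of care by the number of
# cases (the shut-up-and-multiply step), then divide the total across
# everyone available to do the caring (the proposed division step).
def shared_burden(care_per_case, num_cases, num_carers):
    total_care = care_per_case * num_cases
    return total_care / num_carers

# E.g. caring "1 unit" about each of 1,000 oil-soaked birds,
# shared among 50 would-be rescuers:
print(shared_burden(1.0, 1000, 50))  # -> 20.0
```

On this framing, the burden each person faces grows with the scale of the problem but shrinks with the size of the community sharing it, which is the intuition the comment is gesturing at.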
That's one way for people to become religious.
I'm not sure what point is being made here. Distributing burdens is a part of any group, why is religion exceptional here?
Theory of mind, heh... :-)
The point is that if you actually believe in, say, Christianity (that is, you truly internally believe and not just go to church on Sundays so that neighbors don't look at you strangely), it's not your church community which shares your burden. It's Jesus who lifts this burden off your shoulders.
Ah, that's probably not what the parent meant then. What he was referring to was analogous to sharing your burden with the church community (or, in context, the effective altruism community).
Yes, of course. I pointed out another way through which you don't have to bear it alone.
Ah, I understand. Thanks for clearing up my confusion.
Intellectually, I know that you are right; I can take on some of the weight while sharing it. Intuitively, though, I have impossibly high standards, for myself and for everything else. For anyone I take responsibility for caring for, I have the strong intuition that if I was really trying, all their problems would be fixed, and that they have persisting problems means that I am inherently inadequate. This is false. I know it is false. Nonetheless, even at the mild scales I do permit myself to care about, it causes me significant emotional distress, and for the sake of my sanity I can't let it expand to a wider sphere, at least not until I am a) more emotionally durable and b) more demonstrably competent.
Or in short, blur out the details and this is me:
Also, I forget which post (or maybe HPMOR chapter) I got this from, but... it is not useful to assign fault to a part of the system you cannot change, and dividing by the size of the pre-existing altruist (let alone EA) community still leaves things feeling pretty huge.
It's Harry talking about Blame, chapter 90. (It's not very spoily, but I don't know how the spoiler syntax works and failed after trying for a few minutes)
I don't think I understand what you wrote there, AnthonyC; world-scale problems are hard, not immutable.
Having a keen sense for problems that exist, and wanting to demolish them and fix the place from which they spring is not an instinct to quash.
That it causes you emotional distress IS a problem, insofar as you have the ability to perceive and want to fix the problems in absence of the distress. You can test that by finding something you viscerally do not care for and seeing how well your problem-finder works on it; if it's working fine, the emotional reaction is not helpful, and fixing it will make you feel better, and it won't come at the cost of smashing your instincts to fix the world.
"A part of the system that you cannot change" is a vague term (and it's a vague term in the HPMOR quote as well). We think we know what it means, but then you can ask questions like "if there are ten things wrong with the system and you can change only one, but you get to pick which one, which ones count as a part of the system that you can't change?"
Besides, I would say that the idea is just wrong. It is useful to assign fault to a part of the system that you cannot change, because you need to assign the proper amount of fault as well as just assigning fault, and assigning fault to the part that you can't change affects the amounts that you assign to the parts that you can change.
Ditto, though I diverged differently. I said, "Ok, so the problems are greater than available resources, and in particular greater than resources I am ever likely to be able to access. So how can I leverage resources beyond my own?"
I ended up getting an engineering degree and working for a consulting firm advising big companies what emerging technologies to use/develop/invest in. Ideal? Not even close. But it helps direct resources in the direction of efficiency and prosperity, in some small way. I have to shut down the part of my brain that tries to take on the weight of the world, or my broken internal care-o-meter gets stuck at "zero, despair, crying at every news story." But I also know that little by little, one by one, painfully slowly, the problems will get solved as long as we move in the right direction, and we can then direct the caring that we do have in a more concentrated way afterwards. And as much as it scares me to write this, in the far future, when there may be quadrillions of people? A few more years of suffering by a few billion people here and now won't add or subtract much from the total utility of human civilization.
Super relevant slatestarcodex post: Nobody Is Perfect, Everything is Commensurable.
Read that at the time and again now. Doesn't help. Setting threshold less than perfect still not possible; perfection would itself be insufficient. I recognize that this is a problem but it is an intractable one and looks to remain so for the foreseeable future.
But what about the quantitative way? :(
Edit: Forget that... I finally get it. Like, really get it. You said:
Oh, my gosh... I think that's why I gave up Christianity. I wish I could say I gave it up because I wanted to believe what's true, but that's probably not true. Honestly, I probably gave it up because having the power to impact someone else's eternity through outreach or prayer, and sometimes not using that power, was literally unbearable for me. I considered it selfish to do anything that promoted mere earthly happiness when the Bible implied that outreach and prayer might impact someone's eternal soul.
And now I think that, personally, being raised Christian might have been an incredible blessing. Otherwise, I might have shared your outlook. But after 22 years of believing in eternal souls, actions with finite effects don't seem nearly as important as they probably would had I not come from the perspective that people's lives on earth are just specks, just one-infinitieth of their total existence.