On Caring
This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.
1
I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".
Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million Earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius… I would still just feel like that's a lot of Earths.
The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.
I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.
This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.
For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.
The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.
Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.
And this is a problem.
2
It's a common trope that courage isn't about being fearless, it's about being afraid but doing the right thing anyway. In the same sense, caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.
My internal care-o-meter was calibrated to deal with about a hundred and fifty people, and it simply can't express the amount of caring that I have for billions of sufferers. The internal care-o-meter just doesn't go up that high.
Humanity is playing for unimaginably high stakes. At the very least, there are billions of people suffering today. At the worst, there are quadrillions (or more) potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, the experience and art and beauty that is possible in the future, depends upon the present.
When you're faced with stakes like these, your internal caring heuristics — calibrated on numbers like "ten" or "twenty" — completely fail to grasp the gravity of the situation.
Saving a person's life feels great, and it would probably feel just about as good to save one life as it would feel to save the world. It surely wouldn't be many billion times more of a high to save the world, because your hardware can't express a feeling a billion times bigger than the feeling of saving a person's life. But even though the altruistic high from saving someone's life would be shockingly similar to the altruistic high from saving the world, always remember that behind those similar feelings there is a whole world of difference.
Our internal care-feelings are woefully inadequate for deciding how to act in a world with big problems.
3
There's a mental shift that happened to me when I first started internalizing scope insensitivity. It is a little difficult to articulate, so I'm going to start with a few stories.
Consider Alice, a software engineer at Amazon in Seattle. Once a month or so, those college students who show up on street corners with clipboards, looking ever more disillusioned as they struggle to convince people to donate to Doctors Without Borders. Usually, Alice avoids eye contact and goes about her day, but this month they finally manage to corner her. They explain Doctors Without Borders, and she actually has to admit that it sounds like a pretty good cause. She ends up handing them $20 through a combination of guilt, social pressure, and altruism, and then rushes back to work. (Next month, when they show up again, she avoids eye contact.)
Now consider Bob, who has been given the Ice Bucket Challenge by a friend on Facebook. He feels too busy to do the challenge, and instead just donates $100 to ALSA.
Now consider Christine, who is in the college sorority ΑΔΠ. ΑΔΠ is engaged in a competition with ΠΒΦ (another sorority) to see who can raise the most money for the National Breast Cancer Foundation in a week. Christine has a competitive spirit and gets engaged in fund-raising, and gives a few hundred dollars herself over the course of the week (especially at times when ΑΔΠ is especially behind).
All three of these people are donating money to charitable organizations… and that's great. But notice that there's something similar in these three stories: these donations are largely motivated by a social context. Alice feels obligation and social pressure. Bob feels social pressure and maybe a bit of camaraderie. Christine feels camaraderie and competitiveness. These are all fine motivations, but notice that these motivations are related to the social setting, and only tangentially to the content of the charitable donation.
If you asked any of Alice, Bob, or Christine why they aren't donating all of their time and money to these causes that they apparently believe are worthwhile, they'd look at you funny and they'd probably think you were being rude (with good reason!). If you pressed, they might tell you that money is a little tight right now, or that they would donate more if they were a better person.
But the question would still feel kind of wrong. Giving all your money away is just not what you do with money. We can all say out loud that people who give all their possessions away are really great, but behind closed doors we all know that those people are crazy. (Good crazy, perhaps, but crazy all the same.)
This is a mindset that I inhabited for a while. There's an alternative mindset that can hit you like a freight train when you start internalizing scope insensitivity.
4
Consider Daniel, a college student shortly after the Deepwater Horizon BP oil spill. He encounters one of those college students with the clipboards on the street corners, soliciting donations to the World Wildlife Fund. They're trying to save as many oiled birds as possible. Normally, Daniel would simply dismiss the charity as Not The Most Important Thing, or Not Worth His Time Right Now, or Somebody Else's Problem, but this time Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.
He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can. They simply don't have the resources to clean all the available birds. A pathetic young bird flops towards his feet, slick with oil, eyes barely able to open. He kneels down to pick it up and help it onto the table. One of the bird-cleaners informs him that they won't have time to get to that bird themselves, but he could pull on some gloves and could probably save the bird with three minutes of washing.

Daniel decides that he would spend three minutes of his time to save the bird, and that he would also be happy to pay at least $3 to have someone else spend a few minutes cleaning the bird. He introspects and finds that this is not just because he imagined a bird right in front of him: he feels that it is worth at least three minutes of his time (or $3) to save an oiled bird in some vague platonic sense.
And, because he's been thinking about scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of birds: the internal feeling of caring can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about de-oiling lots of birds, he shuts up and multiplies.
Thousands and thousands of birds were oiled by the BP spill alone. After shutting up and multiplying, Daniel realizes (with growing horror) that the amount he actually cares about oiled birds is bounded below by two months of hard work and/or fifty thousand dollars. And that's not even counting wildlife threatened by other oil spills.
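The multiplication Daniel does is simple enough to sketch. As one hedged illustration (the bird count here is my assumption — the essay only says "thousands and thousands" were oiled):

```python
# A rough "shut up and multiply" sanity check of Daniel's estimate.
# The bird count is an illustrative assumption, not a figure from the essay.
birds = 16_000            # "thousands and thousands" of oiled birds
minutes_per_bird = 3      # Daniel's time valuation per bird
dollars_per_bird = 3      # Daniel's money valuation per bird

total_dollars = birds * dollars_per_bird         # 48,000 dollars
total_hours = birds * minutes_per_bird / 60      # 800 hours of washing

print(f"${total_dollars:,} or {total_hours:,.0f} hours")
```

At 40 hours a week, 800 hours is about five months of ordinary work, or roughly two months of genuinely hard, long days — which is the flavor of lower bound Daniel arrives at.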
And if he cares that much about de-oiling birds, then how much does he actually care about factory farming, nevermind hunger, or poverty, or sickness? How much does he actually care about wars that ravage nations? About neglected, deprived children? About the future of humanity? He actually cares about these things to the tune of much more money than he has, and much more time than he has.
For the first time, Daniel sees a glimpse of how much he actually cares, and how poor a state the world is in.
This has the strange effect that Daniel's reasoning goes full-circle, and he realizes that he actually can't care about oiled birds to the tune of 3 minutes or $3: not because the birds aren't worth the time and money (and, in fact, he thinks that the economy produces things priced at $3 which are worth less than the bird's survival), but because he can't spend his time or money on saving the birds. The opportunity cost suddenly seems far too high: there is too much else to do! People are sick and starving and dying! The very future of our civilization is at stake!
Daniel doesn't wind up giving $50k to the WWF, and he also doesn't donate to ALSA or NBCF. But if you ask Daniel why he's not donating all his money, he won't look at you funny or think you're rude. He's left the place where you don't care far behind, and has realized that his mind was lying to him the whole time about the gravity of the real problems.
Now he realizes that he can't possibly do enough. After adjusting for his scope insensitivity (and the fact that his brain lies about the size of large numbers), even the "less important" causes like the WWF suddenly seem worthy of dedicating a life to. Wildlife destruction and ALS and breast cancer are suddenly all problems that he would move mountains to solve — except he's finally understood that there are just too many mountains, and ALS isn't the bottleneck, and AHHH HOW DID ALL THESE MOUNTAINS GET HERE?
In the original mindstate, the reason he didn't drop everything to work on ALS was because it just didn't seem… pressing enough. Or tractable enough. Or important enough. Kind of. These are sort of the reason, but the real reason is more that the concept of "dropping everything to address ALS" never even crossed his mind as a real possibility. The idea was too much of a break from the standard narrative. It wasn't his problem.
In the new mindstate, everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.
Alice and Bob and Christine usually aren't spending time solving all the world's problems because they forget to see them. If you remind them — put them in a social context where they remember how much they care (hopefully without guilt or pressure) — then they'll likely donate a little money.
By contrast, Daniel and others who have undergone the mental shift aren't spending time solving all the world's problems because there are just too many problems. (Daniel hopefully goes on to discover movements like effective altruism and starts contributing towards fixing the world's most pressing problems.)
5
I'm not trying to preach here about how to be a good person. You don't need to share my viewpoint to be a good person (obviously).
Rather, I'm trying to point at a shift in perspective. Many of us go through life understanding that we should care about people suffering far away from us, but failing to. I think that this attitude is tied, at least in part, to the fact that most of us implicitly trust our internal care-o-meters.
The "care feeling" isn't usually strong enough to compel us to frantically save everyone dying. So while we acknowledge that it would be virtuous to do more for the world, we think that we can't, because we weren't gifted with that virtuous extra-caring that prominent altruists must have.
But this is an error — prominent altruists aren't the people who have a larger care-o-meter, they're the people who have learned not to trust their care-o-meters.
Our care-o-meters are broken. They don't work on large numbers. Nobody has one capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.
You don't get to feel the appropriate amount of "care", in your body. Sorry — the world's problems are just too large, and your body is not built to respond appropriately to problems of this magnitude. But if you choose to do so, you can still act like the world's problems are as big as they are. You can stop trusting the internal feelings to guide your actions and switch over to manual control.
6
This, of course, leads us to the question of "what the hell do you do, then?"
And I don't really know yet. (Though I'll plug the Giving What We Can pledge, GiveWell, MIRI, and the Future of Humanity Institute as good starting points.)
I think that at least part of it comes from a certain sort of desperate perspective. It's not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world's 100th biggest problem if you could, but you can't, because there are 99 bigger problems you have to address first.
I'm not trying to guilt you into giving more money away — becoming a philanthropist is really really hard. (If you're already a philanthropist, then you have my acclaim and my affection.) First it requires you to have money, which is uncommon, and then it requires you to throw that money at distant invisible problems, which is not an easy sell to a human brain. Akrasia is a formidable enemy. And most importantly, guilt doesn't seem like a good long-term motivator: if you want to join the ranks of people saving the world, I would rather you join them proudly. There are many trials and tribulations ahead, and we'd do better to face them with our heads held high.
7
Courage isn't about being fearless, it's about being able to do the right thing even if you're afraid.
And similarly, addressing the major problems of our time isn't about feeling a strong compulsion to do so. It's about doing it anyway, even when internal compulsion utterly fails to capture the scope of the problems we face.
It's easy to look at especially virtuous people — Gandhi, Mother Teresa, Nelson Mandela — and conclude that they must have cared more than we do. But I don't think that's the case.
Nobody gets to comprehend the scope of these problems. The closest we can get is doing the multiplication: finding something we care about, putting a number on it, and multiplying. And then trusting the numbers more than we trust our feelings.
Because our feelings lie to us.
When you do the multiplication, you realize that addressing global poverty and building a brighter future deserve more resources than currently exist. There is not enough money, time, or effort in the world to do what we need to do.
There is only you, and me, and everyone else who is trying anyway.
8
You can't actually feel the weight of the world. The human mind is not capable of that feat.
But sometimes, you can catch a glimpse.
Comments (272)
Nice write-up. I'm one of those thoughtful creepy nerds who figured out about the scale thing years ago, and now just picks a fixed percentage of total income and donates it to fixed, utility-calculated causes once a year... and then ends up giving away bits of spending money for other things anyway, but that's warm-fuzzies.
So yeah. Roughly 10% (I actually divide between a few causes, trying to hit both Far Away problems where I can contribute a lot of utility but have little influence, and Nearby problems where I have more influence on specific outcomes) of income, around the end of the year or tax time, every year, in "JUST F-ING DO IT" mode.
This is the only thing I actually object to here. Any choice we make that influences the future at all could be said to reallocate probability between one set of future people and another set. There will only be one real future, though. While I vastly prefer for it to be a good one, I don't consider abortion to be murder, and so I don't feel any moral compulsion to maximize future people, or even to direct the future population towards a particular number. That would imply, to my view, that I'm already deciding the destinies of next year's people, let alone next aeon's, and that's already deeply immoral.
We can safely reason that the typical human, even in the future, will choose existence over non-existence. We can also infer which environments they would like better, and so we can maximise our efforts to leave behind an earth (solar system, universe) that's worth living in, not an arid desert, neither a universe tiled in smiley faces.
While I agree that, since future people will never be concrete entities, like shadowy figures, we don't get to decide on their literary or music tastes, I think we should still try to make them exist in an environment worth living in, and, if possible, get them to exist. In the worst case, they can still decide to exit this world. It's easier in our days than it's ever been!
Additionally, I personally value a universe filled with humans higher than a universe filled with ■.
My own moral intuitions say that there is an optimal number X of human beings to live amongst (perhaps around Dunbar's number, though maybe not if society or anonymity are important) and that we should try to balance between utilizing as much of the universe's energy as possible before heat death and maximizing these ideal groups of size X. I think a universe totally filled with humans would not be very good; it seems somewhat redundant to me, since many of those humans would be extremely similar to each other but use up precious energy. I also think that individuals might feel meaningless in such a large crowd, unable to make an impact or strive for eudaimonia when surrounded by others. We might avoid that outcome by modifying our values about originality or human purpose, but those are values of mine I strongly don't want to have changed.
Bioengineering might lead to humans who are much less similar to each other.
And any number of bioengineering, societal/cultural shifts, and transportation and wealth improvements could help increase our effective Dunbar's number.
That's something I've wondered about, and also what you could accomplish by having an organization of people with unusually high Dunbar's numbers.
Or a breeding population selecting for higher Dunbar's numbers.
Or does that qualify as bioengineering?
I suppose it should count as bioengineering for purposes of this discussion.
Yeah. The problem I see with that is that if humans grow too far apart, we will thwart each other's values or not value each other. Difficult potential balance to maintain, though that doesn't necessarily mean it should be rejected as an option.
Bioengineering makes CEV a lot harder.
Wow this post is pretty much exactly what I've been thinking about lately.
Yup. Been there. Still finding a way to use that ICU-nursing high as motivation for something more generalized than "omg take all the overtime shifts."
Also, I think that my brain already runs on something like virtue ethics, but that the particular thing I think is virtuous changes based on my beliefs about the world, and this is probably a decent way to do things for reasons other than visceral caring. (I mean, I do viscerally care about being virtuous...)
Even they didn't try to take on all the problems in the world. They helped a subset of people that they cared about with particular fairly well-defined problems.
Yes, that is how adults help in real life. In science we chop off little sub-sub-problems we think we can address to do our part to address larger questions whose answers no one person will ever find alone, and thus end up doing enormous work on the shoulders of giants. It works roughly the same in activism.
I think this is a really good post and extremely clear. The idea of the broken care-o-meter is a very compelling metaphor. It might be worthwhile to try to put this somewhere higher-exposure, where people who have money and are not already familiar with the LW memeplex would see it.
I'm open to circulating it elsewhere. Any ideas? I've crossposted it on the EA forum, but right now that seems like lower exposure than LW.
No ideas here, but maybe ping David, Jeff or Julia?
Submitting things to reddit/metafilter/etc. can work surprisingly well.
I'm slightly averse to submitting my own content on reddit, but you (John_Maxwell_IV, to combat the bystander effect, unless you decline) are encouraged to do so.
My preference would be for the Minding Our Way version over the EA forum version over the LW version.
I agree with others that the post is very nice and clear, as most of your posts are. Upvoted for that. I just want to provide a perspective not often voiced here. My mind does not work the way yours does and I do not think I am a worse person than you because of that. I am not sure how common my thought process is on this forum.
Going section by section:
I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know. I cannot bring myself to care (and I don't really want to) about a random person half-way around the world, except in the non-scalable general sense that "it is sad that bad stuff happens, be it to 1 person or to 1 billion people". I care about the humanity surviving and thriving, in the abstract, but I do not feel the connection between the current suffering and future thriving. (Actually, it's worse than that. I am not sure whether humanity existing, in Yvain's words, in a 10m x 10m x 10m box of computronium with billions of sims is much different from actually colonizing the observable universe (or the multiverse, as the case might be). But that's a different story, unrelated to the main point.)
No disagreement there, the stakes are high, though I would not say that a thriving community of 1000 is necessarily worse than a thriving community of 1 googolplex, as long as their probability of long-term survival and thriving is the same.
I occasionally donate modest amounts to this cause or that, if I feel like it. I don't think I do what Alice, Bob or Christine did, and donate out of pressure or guilt.
I spend (or used to spend) a lot of time helping out strangers online with their math and physics questions. I find it more satisfying than caring for oiled birds or stray dogs. Like Daniel, I see the mountain ridges of bad education all around, of which the students asking for help on IRC are just tiny pebbles. Unlike Daniel, I do not feel that I "can't possibly do enough". I help people when I feel like it and I don't pretend that I am a better person because of it, even if they thank me profusely after finally understanding how a free-body diagram works. I do wish someone more capable worked on improving the education system to work better than at 1% efficiency, and I have seen isolated cases of it, but I do not feel that it is my problem to deal with. Wrong skillset.
I have read a fair amount of EA propaganda, and I still do not feel that I "should care about people suffering far away", sorry. (Not really sorry, no.) It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless. I am happy that other people care, in case I am in the situation where I need their help. I am also happy that some people give money to those who care, for the same reason. I might even chip in, if it hits close to home.
I do not feel that I would be a better person if I donated more money or dedicated my life to solving one of the "biggest problems", as opposed to doing what I am good at, though I am happy that some people feel that way; humanity's strength is in its diversity.
Again, one of the main strengths of humankind is its diversity, and the Bell-curve outliers like "Gandhi, Mother Teresa, Nelson Mandela" tend to have more effect than those of us within 1 standard deviation. Some people address "global poverty", others write poems, prove theorems, shoot the targets they are told to, or convince other people to do what they feel is right. No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.
I don't feel the weight of the world. Because it does not weigh on me.
Note: having reread what I wrote, I suspect that some people might find it kind of Objectivist. I actually tried reading Atlas Shrugged and quit after 100 pages or so, getting extremely annoyed by the author belaboring an obvious and trivial point over and over. So I only have a vague idea what the movement is all about. And I have no interest in finding out more, given that people who find this kind of writing insightful are not ones I want to associate with.
Thank you for posting that. My views and feelings about this topic are largely the same. (There goes any chance of my being accepted for a CFAR workshop. :))
On the question of thousands versus gigantic numbers of future people, what I would value is the amount of space they explore, physical and experiential, rather than numbers. A single planetful of humans is worth almost the same as a galaxy of them, if it consists of the same range of cultures and individuals, duplicated in vast numbers. The only greater value in a larger population is the more extreme range of random outliers it makes available.
I feel like I'm somewhere halfway between you and so8res. I appreciate you sharing this perspective as well.
My view is similar to yours, but with the following addition:
I have actual obligations to my friends and family, and I care about them quite a bit. I also care to a lesser extent about the city and region that I live in. If I act as though I instead have overriding obligations to the third world, then I risk being unable to satisfy my more basic obligations. To me, if for instance I spend my surplus income on mosquito nets instead of saving it and then have some personal disaster that my friends and family help bail me out of (because they also have obligations to me), I've effectively stolen their money and spent it on something they wouldn't have chosen to spend it on. While I clearly have some leeway in these obligations and get to do some things other than save, charity falls into the same category as dinner out: I spend resources on it occasionally and enjoy or feel good about doing so, but it has to be kept strictly in check.
Thank you for stating your perspective and opinion so clearly and honestly. It is valuable. Now allow me to do the same, and follow by a question (driven by sincere curiosity):
I think you are.
You are heartless.
Here's my question, and I hope you take the time to answer as honestly as you wrote your comment:
Why?
After all you've rejected to care about, why in the world would you care about something as abstract as "humanity surviving and thriving"? It's just an ape species, and there have already been billions of them. In addition, you clearly don't care about numbers of individuals or quality of life. And you know the heat death of the universe will kill them all off anyway, if they survive the next few centuries.
I don't mean to convince you otherwise, but it seems arbitrary - and surprisingly common - that someone who doesn't care about the suffering or lives of strangers would care about that one thing out of the blue.
I can't speak for shminux, of course, but caring about humanity surviving and thriving while not caring about the suffering or lives of strangers doesn't seem at all arbitrary or puzzling to me.
I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?
It doesn't seem at all arbitrary that I should care about something that affects me greatly more than something that affects me less. Does it seem that way to you?
Yes, rereading it, I think I misinterpreted response 2 as saying it doesn't matter whether a population of 1,000 people has a long future or a population of one googolplex [has an equally long future]. That is, that population scope doesn't matter, just durability and survival. I thought this defeated the usual Big Future argument.
But even so, his 5 turns it around: Practically all people in the Big Future will be strangers, and if it is only "nicer" if they don't suffer (translation: their wellbeing doesn't really matter), then in what way would the Big Future matter?
I care a lot about humanity's future, but primarily because of its impact on the total amount of positive and negative conscious experiences that it will cause.
...Slow deep breath... Ignore inflammatory and judgmental comments... Exhale slowly... Resist the urge to downvote... OK, I'm good.
First, as usual, TheOtherDave has already put it better than I could.
Maybe to elaborate just a bit.
First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous "après nous le déluge" attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and it is predicated on humanity in general having one.
Second, a good life and a bright future for the people I care about does not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happens to them. Other people, including you, care a lot. Good for them.
Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people's lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it's up to them. Live and let live and such.
Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.
It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to "call [you] heartless", you should not then complain of someone else's "inflammatory and judgmental comments" when they take you up on the offer.
And it doesn't seem to me that Hedonic_Treader's response was particularly thoughtless or disrespectful.
(For what it's worth, I don't think your comments indicate that you're heartless.)
It's interesting because people will often accuse a low-status outgroup of "thinking they are better than everyone else".* But I had never actually seen anyone claim that their ingroup is better than everyone else; the accusation was always made of straw... until I saw Hedonic_Treader's comment.
I do sort of understand the attitude of the utilitarian EAs. If you really believe that everyone must value everyone else's life equally, then you'd be horrified by people's brazen lack of caring. It is quite literally like watching a serial killer casually talk about how many people they killed and finding it odd that other people are horrified. After all, each life you fail to save is essentially the same as a murder under utilitarianism.
* I've seen people make this accusation against nerds, atheists, fedora wearers, feminists, left-leaning persons, Christians, etc.
Perhaps to avoid confusion, my comment wasn't intended as an in-group out-group thing or even as a statement about my own relative status.
"Better than" and "worse than" are very simple relative judgments. If A rapes 5 victims a week and B rapes 6, A is a better person than B. If X donates 1% of his income potential to good charities and Y donates 2%, X is a worse person than Y (all else equal). It's a rather simple statement of relative moral status.
Here's the problem: If we pretend - like some in the rationalist community do - that all behavior is morally equivalent and all morals are equal, then there is no social incentive to behave prosocially when possible. Social feedback matters and moral judgments have their legitimate place in any on-topic discourse.
Finally, caring about not caring is self-defeating: one cannot logically judge judgmentalism without being judgmental oneself.
One can judge "judgmentalism on set A" without being "judgemental on set A" (while, of course, still being judgmental on set B).
That's a strawman. I haven't seen anyone say anything like that. What some people do say is that there is no objective standard by which to judge various moralities (that doesn't make them equal, by the way).
Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.
Why is that?
It seems self-termination was the most altruistic way of ending the discussion. A tad over the top, I think.
What do you mean by “morality”? Were the incentives the Heartstone wearer was facing when deciding whether to kill the kitten about morality, or not?
By morality I mean a particular part of somebody's system of values. Roughly speaking, morality is the socially relevant part of the value system (though that's not a hard definition, but rather a pointer to the area where you should search for it).
I expect that's correct, but I'm not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:
and I think people who say (e.g.) that atheists think they're smarter than everyone else would claim that that's what's happening.
I repeat, I agree that these accusations are usually pretty strawy, but it's a slightly more complicated variety of straw than simply claiming that people have said things they haven't. More specifically, I think the usual situation is something like this:
[EDITED to add, for clarity:] By "But so does everyone else" I meant that (almost!) everyone thinks that (many of) the groups they belong to are (to some extent and in some respects) better than others. Most of us mostly wouldn't say so; most of us would mostly agree that these differences are statistical only and that there are respects in which our groups are worse too; but, still, on the whole, if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that's partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).
I do imagine that the first situation is more common, in general, than the second.
This is entirely because of the point:
A group that everyone considers better than others must be a single group, and probably very small; this requirement therefore limits your second scenario to a very small pool of people, while I imagine that your first scenario is very common.
Sorry, I wasn't clear enough. By "so does everyone else" I meant "everyone else considers the groups they belong to to be, to some extent and in some respects, better than others".
Ah, that clarification certainly changes your post for the better. Thanks. In light of it, I do agree that the second scenario is common; but looking closely at it, I'm not sure that it's actually different to the first scenario. In both cases, A thinks her group is better; in both cases, B discerns that fact and calls excessive attention to it.
Well, if I belong to the group of chocolate ice cream eaters, I do think that eating chocolate ice cream is better than eating vanilla ice cream -- by my standards; it doesn't follow that I also believe it's better by your standards or by objective standards (whatever they might be) and feel smug about it.
Sure. Some things are near-universally understood to be subjective and personal. Preference in ice cream is one of them. Many others are less so, though; moral values, for instance. Some even less; opinions about apparently-factual matters such as whether there are any gods, for instance.
(Even food preferences -- a thing so notoriously subjective that the very word "taste" is used in other contexts to indicate something subjective and personal -- can in fact give people that same sort of sense of superiority. I think mostly for reasons tied up with social status.)
I'm actually having difficulty understanding the sentiment "I get annoyed at those who think their morals are better than mine". I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn't everyone think their morals are better than other people's?
That's the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don't really care and certainly don't think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don't, then of course I think my morality is better. Morals differ from tastes in that people believe that it's not just different, but WRONG to not follow them. If you remove that element from morality, what's left? The sentiment "I have these morals, but other people's morals are equally valid" sounds good, all egalitarian and such, but it doesn't make any sense to me. People judge the value of things through their moral system, and saying "System B is as good as System A, based on System A" is borderline nonsensical.
Also, as an aside, I think you should avoid rhetorical statements like "call me heartless if you like" if you're going to get this upset when someone actually does.
I don't.
So if my morality tells me that murdering innocent people is good, then that's not worse than whatever your moral system is?
I know it's possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism), I just never figured anyone actually believed it.
You are confused between two very different statements:
(1) I don't think that my morals are (always, necessarily) better than other people's.
(2) I have no basis whatsoever for judging morality and/or behavior of other people.
What basis do you have for judging others' morality other than your own morality? And if you ARE using your own morality to judge their morality, aren't you really just checking for similarity to your own?
I mean, it's the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I'm evaluating others' beliefs I basically compare them with how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.
Which of course is similar to the argument people sometimes bring up about "moral progress", claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).
My question though is that how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?
No, I don't think so.
Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems.
When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you.
Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However there's no reason to consider your own value system to be the very best there is, especially given that it's your conscious mind that makes such comparisons, but part of morality is submerged and usually unseen by the consciousness. Looking at an exact copy of your own morals you will evaluate them as just fine, but not necessarily perfect.
Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.
This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I'll make two points and see if they move the conversation forward:
1: "There's no reason to consider your own value system to be the very best there is"
This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren't the absolute best there is. The same logic holds true for morals. I know I'm making some mistakes, but I don't know where those mistakes are. On any individual issue, I think I'm right, and therefore logically, if someone disagrees with me, I think they're wrong. This is what I mean by "thinking that one's own morals are the best". I know I might not be right about everything, but I think I'm right about every single issue, even the ones I might really be wrong about. After all, if I were wrong about something, and I were also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary; I have many beliefs I consider to be only approximations, the best of any explanation I have heard so far. Not perfect, but "least wrong").
Which brings me to point 2.
2: "Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were."
I'm absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I've been equivocating between the two, that's why). I know I can't alter my moral beliefs on a whim, but that's because I have no reason to want to. Consider self-modifying to want to murder innocents. I can't do this, primarily because I don't want to, and CAN'T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn't get a million dollars?). I suppose modifying instrumental values into terminal values (which morals are) to enhance motivation is a possible reason, but that's an entirely different can of worms. If I wished I held certain moral beliefs, I already have them. After all, morality is just saying "You should do X". So wishing I had a different morality is like saying "I wish I thought I should do X". What does that mean?
Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.
In short, I'm with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there's a discontinuity either in my reading or your writing.
It's not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems.
So while I wouldn't murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn't seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn't seem to provide any new information.
There's no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn't it be possible to judge most morality on the basis of these common features, making an argument like "wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing"? I think this is basically the point EY makes about the "psychological unity of humankind".
Of course, this dream goes out the window with UFAI and aliens. Let's hope we don't have to deal with those.
I think those similarities are much less strong than EY appears to suggest; see e.g. “Typical Mind and Politics”.
Yes, it should. However, in the hypothetical case involved, the reason is not true; the hypothetical brain does not have the quality "Has empathy and values survival and survival is impaired by murder".
We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and "Has a moral system that prohibits murder" is a quality that successfully creates offspring that typically have the quality "Has a moral system that prohibits murder".
The different quality "Commits wanton murder" is less successful at creating offspring in modern society, because convicted murderers don't get to teach children that committing wanton murder is something to do.
Would you make that a normative statement?
Well, kinda-sorta. I don't think the subject is amenable to black-and-white thinking.
I would consider people who think their personal morals are the very best there is to be deluded and dangerous. However I don't feel that people who think their morals are bad are to be admired and emulated either.
There is some similarity to how smart do you consider yourself to be. Thinking yourself smarter than everyone else is no good. Thinking yourself stupid isn't good either.
So would you say that moral systems that don't think they're better than other moral systems are better than other moral systems? What happens if you now profess the former kind of moral system and agree with the whole statement? :)
In one particular aspect, yes. There are many aspects.
The barber shaves everyone who doesn't shave himself..? X-)
Then every human being in existence is heartless.
You are saying that shminux is "a worse person than you" and also "heartless", but I am not sure what these words mean. How do you measure which person is better as compared to another person? If the answer is, "whoever cares about more people is better", then all you're saying is, "shminux cares about fewer people because he cares about fewer people". This is true, but tautologically so.
All morals are axioms, not theorems, and thus all moral claims are tautological.
Whatever morals we choose, we are driven to choose them by the morals we already have – the ones we were born with and raised to have. We did not get our morals from an objective external source. So no matter what your morals, if you condemn someone else by them, your condemnation will be tautological.
I don't agree.
Yes, at some level there are basic moral claims that behave like axioms, but many moral claims are much more like theorems than axioms.
Derived moral claims also depend upon factual information about the real world, and thus they can be false if they are based on incorrect beliefs about reality.
This is exactly how I feel. I would slightly amend 1 to "I care about family, friends, some other people I know, and some other people I don't know but have some other connection to". For example, I care about people who are where I was several years ago, and I'll offer them help if we cross paths - there are TDT reasons for this. Are they the "best" people for me to help on utilitarian grounds? No, and so what?
I don't disagree, and I don't think you're a bad person, and my intent is not to guilt or pressure you. My intent is more to show some people that certain things that may feel impossible are not impossible. :-)
A few things, though:
This seems like a cop out to me. Given a bunch of people trying to help the world, it would be best for all of them to do the thing that they think most helps the world. Often, this will lead to diversity (not just because people have different ideas about what is good, but also because of diminishing marginal returns and saturation). Sometimes, it won't (e.g. after a syn bio proof of concept that kills 1/4 of the race I would hope that diversity in problem-selection would decrease). "It is best to diversify and hope" seems like a platitude that dodges the fun parts.
I also have this feeling, in a sense. I interpret it very differently, and I am aware of the typical mind fallacy, but I also caution against the "you must be Fundamentally Different" fallacy. Part of the theme behind this post is "you can interpret the internal caring feelings differently if you want", and while I interpret my care-senses differently, I do empathize with this sentiment.
That's not to say that you should come around to my viewpoint, by any means. But if you (or others) would like to try, for one reason or another, consider the following points:
In my case, much of my decision to care about the rest of the world is due to an adjustment upwards of the importance of other people (after noticing that I tend to care significantly about people after I have gotten to know them very well, and deciding that people don't matter less just because I'm not yet close to them). There's also a significant portion of my caring that comes from caring about others because I would want others to care about me if the positions were reversed, and this seeming like the right action in a timeless sense.
Finally, much of my caring comes from treating all of humanity as my in-group (everyone is a close friend, I just don't know most of them yet; see also the expanding circle).
I mess with my brother sometimes, but anyone else who tries to mess with my brother has to go through me first. Similarly there is some sense in which I don't "care" about most of the nameless masses who are out of my sight (in that I don't have feelings for them), but there's a fashion in which I do care about them, in that anyone who fucks with humans fucks with me.
Disease, war, and death are all messing with my people, and while I may not be strong enough to do anything about it today, there will come a time.
There may be a group of people, such that it is possible for any one individual of the group to become my close friend, but where it is not possible for all the individuals to become my close friends simultaneously.
In that case, saying "any individual could become a close friend, so I should multiply 'caring for one friend' by the number of individuals in the group" is wrong. Instead, I should multiply "caring for one friend" by the number of individuals in the group who can become my friend simultaneously, and not take into account the individuals in excess of that. In fact, even that may be too strong. It may be possible for one individual in the group to become my close friend only at the cost of reducing the closeness to my existing friends, in which case I should conclude that the total amount I care shouldn't increase at all.
The point is that the fact that someone happens to be your close friend seems like the wrong reason to care about them.
Let's say, for example, that:
1. If X was my close friend, I would care about X
2. If Y was my close friend, I would care about Y
3. X and Y could not both be close friends of mine simultaneously.
Why should whether I care for X or care for Y depend on which one I happen to end up being close friends with? Rather, why shouldn't I just care about both X and Y regardless of whether they are my close friends or not?
Perhaps I have a limited amount of caring available and I am only able to care for a certain number of people. If I tried to care for both X and Y I would go over my limit and would have to reduce the amount of caring for other people to make up for it. In fact, "only X or Y could be my close friend, but not both" may be an effect of that.
It's not "they're my close friend, and that's the reason to care about them", it's "they're under my caring limit, and that allows me to care about them". "Is my close friend" is just another way to express "this person happened, by chance, to be added while I was still under my limit". There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don't affect their merit as a person, such as living closer to you).
Of course, this sounds bad because of platitudes we like to say but never really mean. We like to say that our friends are special. They aren't; if you had lived somewhere else or had different random experiences, you'd have had different close friends.
I think I would state a similar claim in a very different way. Friends are allies; both of us have implicitly agreed to reserve resources for the use of the other person in the friendship. (Resources are often as simple as 'time devoted to a common activity' or 'emotional availability.') Potential friends and friends might be indistinguishable to an outside observer, but to me (or them) there's an obvious difference in that a friend can expect to ask me for something and get it, and a potential friend can't.
(Friendships in this view don't have to be symmetric: there are people whose complaints I'd listen to without expecting that they'd listen to mine, and the reverse exists as well.)
I think that it's reasonable to call facts 'special' relative to counterfactuals: yes, I would have had different college friends if I had gone to a different college, but I did actually go to the college I went to, and actually did make the friends I did there.
That's a solid point, and to a significant extent I agree.
There are quite a lot of things that people can spend these kinds of resources on that are very effective at a small scale. This is an entirely sufficient basis to justify the idea of friends, or indeed "allies", which is a more accurate term in this context. A network of local interconnections of such friends/allies who devote time and effort to one another is quite simply a highly efficient way to improve overall human well-being.
This also leads to a very simple, unbiased moral justification for devoting resources to your close friends; it's simply that you, more so than other people, are in a unique position to affect the well-being of your friends, and vice versa. That kind of argument is also an entirely sufficient basis for some amount of "selfishness"--ceteris paribus, you yourself are in a better position to improve your own well-being than anyone else is.
However, this is not the same thing as "caring" in the sense So8res is using the term; I think he's using the term more in the sense of "value". For the above reasons, you can value your friends equally to anyone else while still devoting more time and effort to them. In general, you're going to be better able to help your close friends than you are a random stranger on the street.
The way you put it, it seems like you want to care for both X and Y but are unable to.
However, if that's the case then So8res's point carries, because the core argument in the post translates to "if you think you ought to care about both X and Y but find yourself unable to, then you can still try to act the way that you would if you did, in fact, care about both X and Y".
"I want to care for an arbitrarily chosen person from the set of X and Y" is not "I want to care for X and Y". It's "I want to care for X or Y".
Why do you think so? It seems to me the fact that someone is my close friend is an excellent reason to care about her.
I think it depends on what you mean by "care".
If you mean "devote time and effort to", sure; I completely agree that it makes a lot of sense to do this for your friends, and you can't do that for everyone.
If you mean "value as a human being and desire their well-being", then I think it's not justifiable to afford special privilege in this regard to close friends.
By "care" I mean allocating a considerably higher value to this particular human compared to a random one.
Yes, I understand you do, but why do you think so?
I don't think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself. If anything it should be a function of the qualities or the nature of that person, or perhaps even blanket equality.
If I believe that my friends are more valuable, it should be because of the qualities that led to them being my friend rather than simply the fact that they are my friends. However, if that's so then there are many, many other people in the world who have similar qualities but are not my friends.
I assume you would pay your own mortgage. Would you mind paying my mortgage as well?
I can't pay everyone's mortgage, and nor can anyone else, so different people will need to pay for different mortgages.
Which approach works better, me paying my mortgage and you paying yours, or me paying your mortgage and you paying mine?
Personally I see EA* as kind of a dangerous delusion, basically people being talked into doing something stupid (in the sense that they're probably moving away from maximizing their own true utility function to the extent that such a thing exists). When I hear about someone giving away 50% of their income when they're only middle class to begin with I feel more pity than admiration.
* Meaning the extreme, "all human lives are equally valuable to me" version, rather than just a desire to not waste charity money.
I'm not sure what to make out of it, but one could run the motivating example backwards:
"He pictures himself helping the people and wading deep in all that sticky oil, imagines how long he'd endure that, and quickly arrives at the conclusion that he doesn't care that much for the birds really. And would rather prefer to get away from that mess. His estimate of how much it is worth to him to rescue 1000 birds is quite low."
What can we derive from this if we shut-up-and-calculate? If his value for rescuing 1000 birds is $10, then 1 million birds still come out at $10,000. But it could now be zero, if not negative (he'd feel he should get money for saving the birds). Does that mean, if we extrapolate, that he should strive to eradicate all birds? Surely not.
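For what it's worth, the "multiply it out" step here is nothing more than linear scaling, which makes the problem easy to see: the sign and magnitude of the spontaneous valuation completely determine the extrapolated answer. A minimal sketch (the dollar figures are just the hypothetical ones from this thread):

```python
def extrapolated_value(value_per_1000_birds: float, n_birds: int) -> float:
    """Naive shut-up-and-multiply: scale the per-1000-bird valuation linearly."""
    return value_per_1000_birds * (n_birds / 1000)

# A $10 valuation of 1000 birds extrapolates to $10,000 for a million birds.
print(extrapolated_value(10, 1_000_000))   # 10000.0
# But a zero or negative spontaneous valuation extrapolates, by the very same
# multiplication, to indifference or to preferring the birds gone.
print(extrapolated_value(0, 1_000_000))    # 0.0
print(extrapolated_value(-10, 1_000_000))  # -10000.0
```

The multiplication itself is fine; the worry in this comment is that the input it amplifies (the care-o-meter reading) is arbitrary enough that the amplified output is meaningless.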
It appears to mean that our care-o-meter plus system-2 multiplication gives meaningless answers.
Our empathy towards beings is to a large part dependent on socialization and context. Taking it out of its ancestral environment is bound to cause problems I fear individuals can't solve. But maybe societies can.
That sounds like a failure of the thought experiment to me. When I run the bird thought experiment, it's implicitly assumed that there is no transportation cost in/out of the thought experiment, and the negative aesthetic cost from imagining myself in the mess is filtered out. The goal is to generate a thought experiment that helps you identify the "intrinsic" value of something small (not really what I mean, but I'm short on time right now; I hope you can see what I'm pointing at), and obviously mine aren't going to work for everyone.
(As a matter of fact, my actual "bird death" thought experiment is different than the one described above, and my actual value is not $3, and my actual cost per minute is nowhere near $1, but I digress.)
If this particular thought experiment grates for you, you may consider other thought experiments, like considering whether you would prefer your society to produce an extra bic lighter or an extra bird-cleaning on the margin, and so on.
You didn't give details on how or how not to set up the thought experiment. I took it to mean 'your spontaneous valuation when imagining the situation' followed by an objective 'multiplication'. Now my reaction wasn't that of aversion, but I tried to think of possible reactions and what would follow from them.
Nothing wrong with mind hacks per se. I have read your productivity post. But I don't think they help in establishing 'intrinsic' value. For personal self-modification (motivation) they seem to work nicely.
I don't have the internal capacity to feel large numbers as deeply as I should, but I do have the capacity to feel that prioritizing my use of resources is important, which amounts to a similar thing. I don't have an internal value assigned for one million birds or for ten thousand, but I do have a value that says maximization is worth pursuing.
Because of this, and because I'm basically an ethical egoist, I disagree with your view that effective altruism requires ignoring our care-o-meters. I think it only requires their training and refinement, not complete disregard. Saying that we should ignore our actual values and focus on "more rational" values we could counterfactually have is disquieting to me because it seems to involve an underlying nihilism of sorts. Values are orthogonal to rationality, I'm not sure why many people here understand that idea in some cases but ignore it in others. If we're going to get rid of values for not being sufficiently rational or consistent, we might as well delete them all.
Gunnar Zarncke makes a good point as well, one I think complements my argument. There's no standard with which to choose between helping all the birds and helping none, once you've thrown the care-o-meter away.
I understand what you mean by saying values and rationality are orthogonal. If I had a known, stable, consistent utility function, you would be absolutely right.
But 1) my current (supposedly terminal) values are certainly not orthogonal to each other, and may be (in fact, probably are) mutually inconsistent some of the time. Also 2) There are situations where I may want to change, adopt, or delete some of my values in order to better achieve the ones I currently espouse (http://lesswrong.com/lw/jhs/dark_arts_of_rationality/).
I worry that such consistency isn't possible. If you have a preference for chocolate over vanilla given exposure to one set of persuasion techniques, and a preference for vanilla over chocolate given other persuasion techniques, it seems like you have no consistent preference. If all our values are sensitive to aspects of context such as this, then trying to enforce consistency could just delete everything. Alternatively, it could mean that CEV will ultimately worship Moloch rather than humans, valuing whatever leads to amassing as much power as possible. If inefficiency or irrationality is somehow important or assumed in human values, I want the values to stay and the rationality to go. Given all the weird results from the behavioral economics literature, and the poor optimization of the evolutionary processes from which our values emerged, such inconsistency seems probable.
I accept all the arguments for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I'm in Daniel's position up through chunk 4, and reach the state of mind where
and find it literally unbearable. All of a sudden, it's clear that to be a good person is to accept the weight of the world on your shoulders. This is where my path diverges; EA says "OK, then, that's what I'll do, as best I can"; from my perspective, it's swallowing the bullet. At this point, your modus ponens is my modus tollens; I can't deal with what the argument would require of me, so I reject the premise. I concluded that I am not a good person and won't be for the foreseeable future, and limited myself to the weight of my chosen community and narrowly-defined ingroup.
I don't think you're wrong to try to convert people to EA. It does bear remembering, though, that not everyone is equipped to deal with this outlook, and some people will find that trying to shut up and multiply is lastingly unpleasant, such that an altruistic outlook becomes significantly aversive.
I've seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.
I'm not sure why one would optimize your charitable donations for QALYs/utilons if your goal wasn't improving the world. If you care about acquiring warm fuzzies, and donating to marginally improve the world is a means toward that end, then EA doesn't seem to affect you much, except by potentially guilting you into no longer considering lesser causes virtuous in the sense that creates warm fuzzies for you.
For me the idea of EA just made those lesser causes not generate fuzzies anymore, no guilt involved. It's difficult to enjoy a delusion you're conscious of.
Once you've decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?
Peter Singer, to take one prominent example, argues that whether you do or not (and most people do), morally you cannot. To buy an expensive pair of shoes (he says) is morally equivalent to killing a child. Yvain has humorously suggested measuring sums of money in dead babies. At least, I think he was being humorous, but he might at the same time be deadly serious.
I always find it curious how people forget that equality is symmetrical and works in both directions.
So, killing a child is morally equivalent to buying an expensive pair of shoes? That's interesting...
No, except by interpreting the words "morally equivalent" in that sentence in a way that nobody does, including Peter Singer. Most people, including Peter Singer, think of a pair of good shoes (or perhaps the comparison was to an expensive suit; it doesn't matter) as something nice to have, and the death of a child as a tragedy. These two values are not being equated. Singer is drawing attention to the causal connection between spending your money on the first and not spending it on the second. This makes buying the shoes a very bad thing to do: its value is that of (a nice thing) - (a really good thing); saving the child has the value (a really good thing) - (a nice thing).
The only symmetry here is that of "equal and opposite".
Did anyone actually need that spelled out?
These verbal contortions do not look convincing.
The claimed moral equivalence is between buying shoes and killing -- not saving -- a child. It's also claimed equivalence between actions, not between values.
Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive.
I'm somewhat persuaded by arguments that choices not made, which have consequences, like X preventably dying, can have moral costs.
Not INFINITELY EXPLODING costs, which is what you need in order to experience the full brunt of responsibility of "We are the last two people alive, and you're dying right in front of me, and I could help you, but I'm not going to." when deciding to buy shoes or not, when there are 7 billion of us, and you're actually dying over there, and someone closer to you is not helping you.
In case anyone else was curious about this, here's a quote:
Oops.
A lot of people around here see little difference between actively murdering someone and standing by while someone dies when we could easily save them. This runs contrary to the general societal view that it's much worse to kill someone by your own hand than to let them die without interfering. Or even if you do interfere, so long as your interference is sufficiently removed from the actual death.
For instance, what do you think George Bush Sr's worst action was? A war? No; he enacted an embargo against Iraq that extended over a decade and restricted basic medical supplies from going into the country. The infant mortality rate jumped to 25% during that period, and other people didn't fare much better. And yet few people would think the embargo makes Bush more evil than the killers at Columbine.
This is utterly bizarre on many levels, but I'm grateful too -- I can avoid thinking of myself as a bad person for not donating any appreciable amount of money to charity, when I could easily pay to cure a thousand people of malaria per year.
When you ask how bad an action is, you can mean (at least) two different things.
Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn't some specific person who's dying. So actually killing someone is "worse", if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there's no difference in harm done.
In some contexts this sort of character evaluation really is what you care about. If you want to know whether someone's going to be safe and enjoyable company if you have a drink with them, you probably do prefer someone who'd put in place an embargo that kills millions rather than someone who would shoot dozens of schoolchildren.
That's perfectly consistent with (1) saying that in terms of actual harm done spending money on yourself rather than giving it to effective charities is as bad as killing people, and (2) attempting to choose one's own actions on the basis of harm done rather than evidence of character.
But this recurses until all the leaf nodes are "how much harm does it do?", so it's exactly equivalent to asking how much harm we expect this person to inflict over the course of their life.
By the same token, it's easier to kill people far away and indirectly than up close and personal, so someone using indirect means and killing lots of people will continue to have an easy time killing more people indirectly. So this doesn't change the analysis that the embargo was ten thousand times worse than the school shooting.
For an idealized consequentialist, yes. However, most of us find that our moral intuitions are not those of an idealized consequentialist. (They might be some sort of evolution-computed approximation to something slightly resembling idealized consequentialism.)
That depends on the opportunities the person in question has to engage in similar indirectly harmful behaviour. GHWB is no longer in a position to cause millions of deaths by putting embargoes in place, after all.
For the avoidance of doubt, I'm not saying any of this in order to deny (1) that the embargo was a more harmful action than the Columbine massacre, or (2) that the sort of consequentialism frequently advocated (or assumed) on LW leads to the conclusion that the embargo was a more harmful action than the Columbine massacre. (It isn't perfectly clear to me whether you think 1, or think 2-but-not-1 and are using this partly as an argument against full-on consequentialism.)
But if the question is "who is more *evil*: GHWB or the Columbine killers?", the answer depends on what you mean by "evil", and most people most of the time don't mean "causing harm"; they mean something they probably couldn't express in words but that probably ends up being close to "having personality traits that in our environment of evolutionary adaptedness correlate with being dangerous to be closely involved with" -- which would include, e.g., a tendency to respond to (real or imagined) slights with extreme violence, but probably wouldn't include a tendency to callousness when dealing with the lives of strangers thousands of miles away.
Under utilitarianism, every instance of buying an expensive pair of shoes is the same as killing a child, but not every case of killing a child is equivalent to buying an expensive pair of shoes.
Are some cases of killing a child equivalent to buying expensive shoes?
Presumably if you stole a child's lunch money and bought a pair of shoes with it
Those in which the way you kill the child is by spending money on luxuries rather than saving the child's life with it.
Do elaborate. How exactly does that work?
For example, I have some photographic equipment. When I bought, say, a camera, did I personally kill a child by doing this?
(I have the impression that you're pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we're discussing. But I'm going to take what you say at face value anyway.)
The context here is the idea (stated forcefully by Peter Singer, but he's by no means the first) that you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things, and that spending money on luxuries is ipso facto choosing not to give it to effective charities.
In which case: if you spent, say, $2000 on a camera (some cameras are much cheaper, some much more expensive) then that's comparable to the estimated cost of saving one life in Africa by donating to one of the most effective charities. So, by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died.
(Not necessarily specifically a child. It may be more expensive to save children's lives, in which case it would need to be a more expensive camera.)
Of course there isn't a specific child you have killed all by yourself personally, but no one suggested there is.
So, that was the original claim that Richard Kennaway described. Your objection to this wasn't to argue with the moral principles involved but to suggest that there's a symmetry problem: that "killing a child is morally equivalent to buying an expensive luxury" is less plausible than "buying an expensive luxury is morally equivalent to killing a child".
Well, of course there is a genuine asymmetry there, because there are some quantifiers lurking behind those sentences. (Singer's claim is something like "for all expensive luxury purchases, there exists a morally equivalent case of killing a child"; your proposed reversal is something like "for all cases of killing a child, there exists a morally equivalent case of buying an expensive luxury".) Hence pianoforte611's response.
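To make that quantifier structure fully explicit (a rough formalization of my own; writing $p$ for luxury purchases, $k$ for cases of killing a child, and $\sim$ for "is morally equivalent to"):

```latex
\underbrace{\forall p \;\exists k :\; p \sim k}_{\text{Singer's claim}}
\qquad \text{vs.} \qquad
\underbrace{\forall k \;\exists p :\; k \sim p}_{\text{proposed reversal}}
```

Swapping which variable is universally quantified changes the claim entirely: the reversal fails on exactly those killings (e.g. direct murder) that correspond to no act of spending.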
You seemed happy to accept an amendment that attempts to fix up the asymmetry. And (I assumed) you were still assuming for the sake of argument the Singer-ish position that buying luxury goods is like killing children, and aiming to show that there's an internal inconsistency in the thinking of those who espouse it because they won't accept its reversal.
But I think there isn't any such inconsistency, because to accept the Singer-ish position is to see spending money on luxuries as killing people because the money could instead have been used to save them, which means that there are cases in which one kills a child by spending money on luxuries.
Your argument against the reversed Singerian principle seems to me to depend on assuming that the original principle is wrong. Which would be fair enough if you weren't saying that what's wrong with the original principle is that its reversal is no good.
Now that the funding gap of the AMF has closed, I'm not sure this is still the case.
Nope. I express my rhetorical contempt in, um, more obvious ways. It's not exactly that I don't understand, it's rather that I see multiple ways of proceeding and I don't know which one you have in mind (you, of course, do).
By the way, as a preface I should point out that we are not discussing "right" and "wrong", which, I feel, are anti-useful terms in this discussion. Morals are value systems, and they are not coherent in humans. We're talking mostly about the implications of certain moral positions and how they might or might not conflict with other values.
Yes, I accept that.
Not quite. I don't think you can make a causal chain there. You can make a probabilistic chain of expectations with a lot of uncertainty in it. Averages are not equal to specific actions -- for a hypothetical example, choosing a lifestyle that involves enough driving that over 10 years you cover the average number of miles per traffic fatality does not mean you kill someone every 10 years.
However in this thread I didn't focus on that issue -- for the purposes of this argument I accepted the thesis and looked into its implications.
Correct.
It's not an issue of plausibility. It's an issue of bringing to the forefront the connotations and value conflicts.
Singer goes for shock value by putting an equals sign between what is commonly considered heinous and what's commonly considered normal. He does this to make the normal look (more) heinous, but you can reduce the gap from both directions -- making the heinous more normal works just as well.
I am not exactly proposing it; I am pointing out that the weaker form of this reversal (for some cases) logically follows from Singer's proposition, and if you don't think it does, I would like to know why not.
Well, to accept the Singer position means that you kill a child every time you spend the appropriate amount of money (and I don't see what "luxuries" have to do with it -- you kill children by failing to max out your credit cards as well).
In common language, however, "killing a child" does not mean "fail to do something which could, we think, on the average, avoid one death somewhere in Africa". "Killing a child" means doing something which directly and causally leads to a child's death.
No. I think the original principle is wrong, but that's irrelevant here -- in this context I accept the Singerian principle in order to more explicitly show the problems inherent in it.
See also http://xkcd.com/1035/, last panel.
One man's modus ponens... I don't lose much sleep when I hear that a child I had never heard of before was killed.
NancyLebovitz:
RichardKennaway:
Richard's question is a good one, but even if there's no good answer, it's a psychological fact that people can be convinced to redirect their existing donations to cost-effective charities more readily than they can be convinced that charity should crowd out other spending -- the former is an easier sell. So the framing of EA that Nancy describes has practical value.
The biggest problem I have with 'dead baby' arguments is that I value a baby's life significantly below that of a high-functioning adult. Given the opportunity to save one or the other, I would pick the adult; I don't find that babies have a whole lot of intrinsic value until they're properly programmed.
Ditto, though I diverged differently. I said, "Ok, so the problems are greater than available resources, and in particular greater than resources I am ever likely to be able to access. So how can I leverage resources beyond my own?"
I ended up getting an engineering degree and working for a consulting firm advising big companies what emerging technologies to use/develop/invest in. Ideal? Not even close. But it helps direct resources in the direction of efficiency and prosperity, in some small way. I have to shut down the part of my brain that tries to take on the weight of the world, or my broken internal care-o-meter gets stuck at "zero, despair, crying at every news story." But I also know that little by little, one by one, painfully slowly, the problems will get solved as long as we move in the right direction, and we can then direct the caring that we do have in a more concentrated way afterwards. And as much as it scares me to write this, in the far future, when there may be quadrillions of people? A few more years of suffering by a few billion people here and now won't add or subtract much from the total utility of human civilization.
But you don't have to bear it alone. It's not as if one person has to care about everything (nor does each person have to care for all).
Maybe the multiplication (in the example the care for a single bird multiplied by the number of birds) should be followed by a division by the number of persons available to do the caring (possibly adjusted by the expected amount of individual caring).
Intellectually, I know that you are right; I can take on some of the weight while sharing it. Intuitively, though, I have impossibly high standards, for myself and for everything else. For anyone I take responsibility for caring for, I have the strong intuition that if I was really trying, all their problems would be fixed, and that they have persisting problems means that I am inherently inadequate. This is false. I know it is false. Nonetheless, even at the mild scales I do permit myself to care about, it causes me significant emotional distress, and for the sake of my sanity I can't let it expand to a wider sphere, at least not until I am a) more emotionally durable and b) more demonstrably competent.
Or in short, blur out the details and this is me:
Also, I forget which post (or maybe HPMOR chapter) I got this from, but... it is not useful to assign fault to a part of the system you cannot change, and dividing by the size of the pre-existing altruist (let alone EA) community still leaves things feeling pretty huge.
Having a keen sense for problems that exist, and wanting to demolish them and fix the place from which they spring is not an instinct to quash.
That it causes you emotional distress IS a problem, insofar as you have the ability to perceive and want to fix the problems in absence of the distress. You can test that by finding something you viscerally do not care for and seeing how well your problem-finder works on it; if it's working fine, the emotional reaction is not helpful, and fixing it will make you feel better, and it won't come at the cost of smashing your instincts to fix the world.
It's Harry talking about Blame, in chapter 90. (It's not very spoilery, but I don't know how the spoiler syntax works and failed after trying for a few minutes.)
I don't think I understand what you wrote, there AnthonyC; world-scale problems are hard, not immutable.
"A part of the system that you cannot change" is a vague term (and it's a vague term in the HPMOR quote as well). We think we know what it means, but then you can ask questions like "if there are ten things wrong with the system and you can change only one, but you get to pick which one, which ones count as a part of the system that you can't change?"
Besides, I would say that the idea is just wrong. It is useful to assign fault to a part of the system that you cannot change, because you need to assign the proper amount of fault as well as just assigning fault, and assigning fault to the part that you can't change affects the amounts that you assign to the parts that you can change.
That's one way for people to become religious.
I'm not sure what point is being made here. Distributing burdens is a part of any group, why is religion exceptional here?
Theory of mind, heh... :-)
The point is that if you actually believe in, say, Christianity (that is, you truly internally believe and not just go to church on Sundays so that neighbors don't look at you strangely), it's not your church community which shares your burden. It's Jesus who lifts this burden off your shoulders.
Ah, that's probably not what the parent meant then. What he was referring to was analogous to sharing your burden with the church community (or, in context, the effective altruism community).
Yes, of course. I pointed out another way through which you don't have to bear it alone.
Ah, I understand. Thanks for clearing up my confusion.
This is why I prefer to frame EA as something exciting, not burdensome.
I've read that. It's definitely been the best argument for convincing me to try EA that I've encountered. Not convincing, currently, but more convincing than anything else.
Do we have any data on which EA pitches tend to be most effective?
Exciting vs. burdensome seems to be a matter of how you think about success and failure. If you think "we can actually make things better!", it's exciting. If you think "if you haven't succeeded immediately, it's all your fault", it's burdensome.
This just might have more general application.
From my perspective, it's "I have to think about all the problems in the world and care about them." That's burdensome. So instead I look vaguely around for 100% solutions to these problems, things where I don't actually need to think about people currently suffering (as I would in order to determine how effective incremental solutions are), things sufficiently nebulous and far-in-the-future that I don't have to worry about connecting them to people starving in distant lands.
Here's a weird reframing. Think of it as playing a game like Tetris or Centipede. Yep, you are going to lose in the end, but that's not an issue. The idea is to score as many points as possible before that happens.
If you save someone's life on expectation, you save someone's life on expectation. This is valuable even if there are lots more people whose lives you could hypothetically save.
Understanding the emotional pain of others, on a non-verbal level, can lead in at least two directions, which I've usually seen called "sympathy" and "personal distress" in the psych literature. Personal distress involves seeing the problem as (primarily, or at least importantly) one's own. Sympathy involves seeing it as that person's. Some people, including Albert Schweitzer, claim(ed) to be able to feel sympathy without significant personal distress, and as far as I can see that seems to be true. Being more like them strikes me as a worthwhile (sub)goal. (Until I get there, if ever - I feel your pain. Sorry, couldn't resist.)
Hey I just realized - if you can master that, and then apply the sympathy-without-personal-distress trick to yourself as well, that looks like it would achieve one of the aims of Buddhism.
Huh, that sounds like the sympathy/empathy split, except I think reversed: empathy is feeling pain from others' distress, while sympathy is understanding others' pain as it reflects your own distress. Specifically mitigating 'feeling pain from others' distress' as applied to a broad sphere of 'others' has been a significant part of my turn away from an altruistic outlook; this wasn't hard, since human brains naturally discount distant people and I already preferred getting news through text, which keeps distant people's distress viscerally distant.
If you do this, would not the result be that you do not feel distress from your own misfortunes? And if you don't feel distress, what, exactly, is there to sympathize with?
Wouldn't you just shrug and dismiss the misfortune as irrelevant?
If you could switch off pain at will would you consider the tissue damage caused by burning yourself irrelevant?
I would not. This is a fair point.
Follow-up question: are all things that we consider misfortunes similar to the "burn yourself" situation, in that there is some sort of "damage" that is part of what makes the misfortune bad, separately from and additionally to the distress/discomfort/pain involved?
Consider a possible invention called a neuronic whip (taken from Asimov's Foundation series). The neuronic whip, when fired at someone, does no direct damage but triggers all of the "pain" nerves at a given intensity.
Assume that Jim is hit by a neuronic whip, briefly and at low intensity. There is no damage, but there is pain. Because there is pain, Jim would almost certainly consider this a misfortune, and would prefer that it had not happened; yet there is no damage.
So, considering this counterexample, I'd say that no, not every possible misfortune includes damage. Though I imagine that most do.
Much of what could be called damage in this context wouldn't necessarily happen within your body; you can take damage to your reputation, for example.
You can certainly be deluded about receiving damage especially in the social game.
That is true; but it's enough to create a single counterexample, so I can simply specify that the neuronic whip was used under circumstances where there is no social damage (e.g. it was discharged accidentally, and no one knew Jim was there to be hit by it).
Yes. I didn't mean to refute your idea in any way and quite liked it. Forgot to upvote it though. I merely wanted to add a real world example.
No need for sci-fi.
Let's say you cut your finger while chopping vegetables. If you don't feel distress, you still feel the pain. But probably less pain: the CNS contains a lot of feedback loops affecting how pain is felt. For example, see this story from Scientific American. So sympathize with whatever relatively-attitude-independent problem remains, and act upon that. Even if there would be no pain and just tissue damage, as hyporational suggests, that could be sufficient for action.
Thank you for this write-up; I really like how its structure manages to present the evolution of an idea. Agreeing with more or less of the content, I often find myself asking whether I - and seven billion others - could save the world with our own hands. (I am beginning to see utilons even in my work as an artist, but that belongs in a wholly different post.) This is a question for the ones like me, not earning much and - without further and serious reclusion, reinvention and reorientation - not going to earn much, ever: do I a) maximise and donate the small amounts I receive now, b) maximise my future income while minimising donations for now, spending on self-improvement, and donate some highly uncertain, possibly huge sum in the future, or c) use my resources to directly change something now? Let's not make it an overly complex discussion, so feel free to message me instead of commenting.
Concerning Mother Teresa and other saints, I think we all know somebody who was an especially vociferous denier of her sanctity. I think it helps if I model myself as an instinctively selfish creature, and then go on to use my selfish instincts to push myself in a good direction. (I did this - on a small scale - with my smoking problem and told myself: OK, so you wanna smoke, hm? So go on and smoke - when you have won the next competition. So here's what I do whenever I feel the urge: oh, I wanna smoke; oh, I can't, so how do I optimise my chance of smoking? Oh, I should go and work on my project.) I think this technique - however dark-sided and dangerous it may be - can be used to propel myself towards even bigger goals.
Two possible responses that a person could have after recognizing that their care-o-meter is broken and deciding to pursue important causes anyways:
Option 1: Ignore their care-o-meter, treat its readings as nothing but noise, and rely on other tools instead.
Option 2: Don't naively trust their care-o-meter, and put effort into making it so that their care-o-meter will be engaged when it's appropriate, will be not-too-horribly calibrated, and will be useful as they pursue the projects that they've identified as important (despite its flaws).
Parts of this post seem to gesture towards option 2 (like the Daniel story, and section 8), while other parts seem to gesture towards option 1 (like the courage analogy, and section 5).
I definitely don't suggest ignoring the care-o-meter entirely. Emotions are the compass.
Rather, I advocate not trusting the care-o-meter on big numbers, because it's not calibrated for big numbers. Use it on small things where it is calibrated, and then multiply yourself if you need to deal with big problems.
Attempting to process this post in light of being on my anti-anxiety medication is weird.
There are specific parts in your post where I thought 'If I was having these thoughts, it would probably be a sign I had not yet taken my pill today.' and I get the distinct feeling I would read this entirely differently when not on medication.
It's kind of like 'I notice I'm confused' except... In this case I know why I'm confused and I know that this particular kind of confusion is probably better than the alternative (Being a sleep deprived mess from constant worry) so I'm not going to pick at it. Which is not a feeling I usually get, which is why I said it was weird.
However, pretending to view this from the perspective of someone who can handle anxiety effectively, I would say this an excellent post and I upvoted it even though I can't really connect to it on a personal level.
That is the thing that I never got. If I tell my brain to model a mind that cares, it comes up empty. I seem to literally be incapable of even imagining the thought process that would lead me to care for people I don't know.
If anybody knows how to fix that, please tell me.
Obviously your mileage may vary, but I find it helps to imagine a stranger as someone else's family/friend. If I think of how much I care about people close to me, and imagine that that stranger has people who care about them as much as I care about my brother, then I find it easier to do things to help that person.
I guess you could say I don't really care about them, but care about the feelings of caring other people have towards them.
If that doesn't work, here is how I originally thought of it. If a stranger passed by me on the street and collapsed, I would care about their well-being (I know this empirically). I know nothing about them; I only care about them due to proximity. It offends me rationally that my sense of caring is utterly dependent on something as stupid as proximity, so I simply create a rule that says "If I would care about this person if they were here, I have to act like I care if they are somewhere else". Thus, utilitarianism (or something like it).
It's worth noting that another, equally valid rule would be "If I wouldn't care about someone if they were far away, there's no reason to care about them when they happen to be right here". I don't like that rule as much, but it does resolve what I see as an inconsistency.
Thank you. That seems like a good way of putting it. I seem to have problems thinking of all 7 billion people as individuals. I will try to think about people I see outside as having a life of their own even if I don't know about it. Maybe that helps.
Why do you think it needs fixing?
I think this might be holding me back. People talk about "support" from friends and family which I don't seem to have, most likely because I don't return that sentiment.
Holding you back from what?
Also, you said (emphasis mine) "incapable of even imagining the thought process that would lead me to care for people I don't know" -- you do know your friends and family, right?
excellent question. I think I'm on the wrong track and something else entirely might be going on in my brain. Thank you.
What makes you care about caring?
I think this is the OP's point - there is no (human) mind capable of caring that much, because human brains aren't capable of modelling numbers that large properly. If you can't contain such a mind, you can't use your usual "imaginary person" modules to shift your brain into that "gear".
So - until you find a better way! - you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.
Why should I act this way?
To better approximate a perfectly-rational Bayesian reasoner (with your values.)
Which, presumably, would be able to model the universe correctly complete with large numbers.
That's the theory, anyway. Y'know, the same way you'd switch in a Monty Hall problem even if you don't understand it intuitively.
Interesting article; it sounds like a very good introduction to scope insensitivity.
Two points where I disagree:
I don't think birds are a good example of it, at least not for me. I don't care much for individual birds. I definitely wouldn't spend $3 or any significant time to save a single bird. I'm not a vegetarian; it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner. On the other hand, I do care about ecological disasters, massive bird deaths, damage to natural reserves, threats to a whole species, ... So a massive death of birds is something I'm ready to invest resources to prevent, but not the death of a single bird.
I know it's quite taboo here, and most will disagree with me, but to me the answer to problems this big is not charity, even "efficient" charity (which seems a very good idea on paper, but I'm quite skeptical about its reliability), but structural change - politics. I can't help noticing that two of the "especially virtuous people" you named, Gandhi and Mandela, were both active mostly in politics, not in charity. To quote another one often labeled an "especially virtuous person", Martin Luther King: "True compassion is more than flinging a coin to a beggar. It comes to see that an edifice which produces beggars needs restructuring."
I very strongly agree with your point here, but would like to add that the problem of finding a political structure which properly maximises the happiness of the people living under it is a very difficult one, and missteps are easy.
Birds are the classic example, both in the literature and (through the literature) here.
This strikes me as backward reasoning - if your moral intuitions about large numbers of animals dying are broken, isn't it much more likely that you made a mistake about vegetarianism?
(Also, three dollars isn't that high a value to place on something. I can definitely believe you get more than $3 worth of utility from eating a chicken. Heck, the chicken probably cost a good bit more than $3.)
Hey, I just wanted to chime in here. I found the moral argument against eating animals compelling for years but lived fairly happily in conflict with my intuitions there. I was literally saying, "I find the moral argument for vegetarianism compelling" while eating a burger, and feeling only slightly awkward doing so.
It is in fact possible (possibly common) for people to 'reason backward' from behavior (eat meat) to values ("I don't mind large groups of animals dying"). I think that particular example CAN be consistent with your moral function (if you really don't care about non-human animals very much at all) - but by no means is that guaranteed.
That's a good point. Humans are disturbingly good at motivated reasoning and compartmentalization on occasion.
It's also worth mentioning that cleaning birds after an oil spill isn't always even helpful. Some birds, like gulls and penguins, do pretty well. Others, like loons, tend to do poorly. Here are some articles concerning cleaning oiled birds.
http://www.npr.org/templates/story/story.php?storyId=127749940
http://news.discovery.com/animals/experts-kill-dont-clean-oiled-birds.htm
And I know that the oiled birds issue was only an example, but I just wanted to point out that this issue, much like the "Food and clothing aid to Africa" examples you often see, isn't necessarily a good idea even ignoring opportunity cost.
I would like to subscribe to your newsletter!
I've been frustrated recently by people not realizing that they are arguing that if you divide responsibility up until it's a very small quantity, then it just goes away.
Fifty thousand times the marginal utility of a dollar, which is probably much less than the utility difference between the status quo and having fifty thousand dollars less unless Daniel is filthy rich.
Yeah it's actually a huge pain in the ass to try to value things given that people tend to be short on both time and money. (For example, an EA probably rates a dollar going towards de-oiling a bird as negative value due to the opportunity cost, even if they feel that de-oiling a bird has positive value in some "intrinsic" sense.)
I didn't really want to go into my thoughts on how you should try to evaluate "intrinsic" worth (or what that even means) in this post, both for reasons of time and complexity, but if you're looking for an easier way to do the evaluation yourself, consider queries such as "would I prefer that my society produce, on the margin, another bic lighter or another bird deoiling?". This analysis is biased in the opposite direction from "how much of my own money would I like to pay", and is definitely not a good metric alone, but it might point you in the right direction when it comes to finding various metrics and comparisons by which to probe your intrinsic sense of bird-worth.
I'm sympathetic to the effective altruist movement, and when I do periodically donate, I try to do so as efficiently as possible. But I don't focus much effort on it. I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.
For example: - The best way for me to improve math and science education is to work on my own teaching ability. - The best way for me to improve the mental health of college students is to make time to support friends that struggle with depression and suicidal thoughts. - The best way for me to stop racism or sexism is to first learn to recognize and quash it in myself, and then to expose it when I encounter it around me.
Changing my own actions and attitudes is hard, but it's also the one area where I have the most control. And as I've worked on this for the past few years, I've managed to create a positive feedback loop by slowly increasing the size of my care-o-meter. Empathy is a useful habit that can be trained, just as much as rationality can be.
I realize that it's hard to get an accurate sense of the impact a donation can have for someone on the other side of the world. It's possible that I'm being led astray by my care-o-meter to focus on people near at hand. I do in principle care equally about people in other parts of the world, even if my care-o-meter hasn't figured that out yet. So if you'd like to prove to me that I can be more effective by focusing my efforts elsewhere, I'd be happy to listen. (I am a poor grad student, so donating large amounts of money isn't really feasible for me yet, although I do realize I still make far more than the world average.) For now, I'm doing the best that I can in the way that I know how.
To conclude, I wouldn't call myself an effective altruist, but I do count them as allies. And I wouldn't want to convert everyone to my perspective; as others have mentioned already, it's good to have a wide range of different approaches.
I would love to see a splinter group, Efficient Altruism. I have no desire to give as much as I can afford, but I feel VERY strongly about giving as efficiently as I can to the causes I support. When I read (I think from EA itself) about the estimated differences in efficiency between African aid organizations, it changed my whole perspective on charity.
Cross-commented from the EA forum.
First of all, thanks Nate. An engaging outlook on overcoming point-and-shoot morality.
Moral Tribes, Joshua Greene's book, addresses the question of when to do this manual switch. Interested readers may want to check it out.
Some of us - where "us" here means people who are really trying - take your approach. They visualize the sinking ship, the hanging souls silently glaring at them in desperation, they shut up and multiply, and to the extent possible, they let go of the anchoring emotions that are sinking the ship.
They act.
This approach is invaluable, and I see it working for some of the heroes of our age - you, Geoff Anders, Bastien Stern, Brian Tomasik, Julian Savulescu - yet I don't think it's the only way to help a lot. We need all the approaches we can get, so I'll describe the other one, currently a minority approach, best illustrated by Anders Sandberg.
Like those you address, some people really want to care, however, the emotional bias that is stopping them from doing so is not primarily scope insensitivity, but something akin to loss aversion, except it manifests as a distaste for negative motivation and an overwhelming drive for positive motivation. When facing a choice between
they will always pick one of the top two, because those are framed positively. The bottom two may sound more pressing, but they mention negative, undesirable, uncomfortable forces. They are staged in a frame where we feel overpowered by nature: nature is a force trying to change our state into a worse one, and you are asked to join the gatekeepers who will contain the destructive invasion to come.
The top two, however, are not only more cheerful, they are set in a completely different frame: you are given a grandiose vision of a possible future and told you can be part of the force that will sculpt it. What they tell you is: we have the tools for you; join us, and with our amazing equipment we will reshape the earth.
I am one of these people, Stephen Frey, João Fabiano, Anders Sandberg, being some other examples. David Pearce once attentively noticed this underlying characteristic, and jokingly attributed to this category the welcoming name of "Positive Utilitarian".
Some of us, who are driven by this cheerful positive idea, have found a way to continue our efforts in the right lane despite that strong inclination to go towards the riches instead of away from darkness.
We are driven by the awesomeness of it all.
Pretend for an instant the problems of the world are shades, pitch-black shades. They are spread around everywhere. The world is mostly dark. You now find yourself in a world illuminated in exact proportion to the good things it has; all you see around you are faint glimpses of beauty and awesomeness here and there, candles of good intention, and the occasional lamps of concerted effort. What moves you is an exploratory urge. You want to see more, to feel more. Those dark areas are not helping you with that. Since they are problems, your job is to be inventive, to find solutions. You are told that once upon a time it was all dark, until your ancestors were able to ignite the first twigs into a bonfire. Sitting by the fire, you hear wise sages tell stories of the dark age that lies behind us; Hans Rosling, Robert Wright, Jared Diamond and Steve Pinker show how all the gadgets, symbols and technologies we created gave light to all we see now. By now we have lamps of many kinds and shapes, but you know more can be found. With diligence, smarts and help, you know we can beam lasers and create floodlights, we can solve things at scale, we can cause the earth to shine. But you are not stopping there; you are ambitious. You want to harness the sun.
It so happens that there's a million billion billion suns out there, so we too, shut up and multiply.
Why do we look at the world this way, why do we feel energized by this metaphor but not the prevention one? I don't know. As long as both teams continue in this lifelong quest together, and as long as both shut up and multiply, it doesn't matter. At the end of the day, we act alike. I just want to make sure that we get as many as possible, as strong as possible, and set the controls for the heart of the sun.
Did the oil bird mental exercise. Came to conclusion that I don't care at all about anyone else, and am only doing good things for altruistic high and social benefits. Sad.
What is the difference between an altruistic high and caring about other people? Isn't the former what the latter feels like?
If there's no difference we arrive at the general problem of wireheading. I suspect very few people who identify themselves as altruists would choose being wireheaded for altruistic high. What are the parameters that would keep them from doing so?
Yes. Let me change my question. If (absent imaginary interventions with electrodes or drugs that don't currently exist) an altruistic high is, literally, what it feels like when you care about others and act to help them, then saying "I don't care about them, I just wanted the high" is like saying "I don't enjoy sex, I just do it for the pleasure", or "A stubbed toe doesn't hurt, it just gives me a jolt of pain." In short, reductionism gone wrong, angst at contemplating the physicality of mind.
It seems to me you can care about having sex without having the pleasure as well as care about not stubbing your toe without the pain. Caring about helping other people without the altruistic high? No problem.
It's not clear to me where the physicality of mind or reductionism gone wrong enter the picture, not to mention angst. Oversimplification is aesthetics gone wrong.
ETA: I suppose it would be appropriately generous to assume that you meant altruistic high as one of the many mind states that caring feels like, but in many instances caring in the sense that I'm motivated to do something doesn't seem to feel like anything at all. Perhaps there's plenty of automation involved and only novel stimuli initiate noticeable perturbations. It would be an easy mistake to only count the instances where caring feels like something, which I think happened in timujin's case. It would also be a mistake to think you only actually care about something when it doesn't feel like anything.
I was addressing timujin's original comment, where he professed to desiring the altruistic high while being indifferent to other people, which on the face of it is paradoxical. Perhaps, I speculate, noticing that the feeling is a thing distinct from what the feeling is about has led him to interpret this as discovering that he doesn't care about the latter.
Or, it also occurs to me, perhaps he is experiencing the physical feeling without the connection to action, as when people taking morphine report that they still feel the pain, but it no longer hurts.
Brains can go wrong in all sorts of ways.
Because I wouldn't actually care if my actions actually help, as long as my brain thinks they do.
Are you favouring wireheading then? (See hyporational's comment.) That is, finding it oppressively tedious that you can only get that feeling by actually going out and helping people, and wishing you could get it by a direct hit?
I think he wants to do things for which his brain whispers "this is altruistic" right now. It is true that wireheading would lead his brain to whisper that about everything. But from his current position, wireheading is not a benefit, because he values future events according to his current brain state, not his future brain state.
No, just as I eat sweets for sweet pleasure, not for getting sugar into my body, but I still wouldn't wirehead into constantly feeling sweetness in my mouth.
I find this a confusing position. Please expand
Funny thing. I started out expanding this, trying to explain it as thoroughly as possible, and, all of a sudden, it became confusing to me. I guess, it was not a well thought out or consistent position to begin with. Thank you for a random rationality lesson, but you are not getting this idea expanded, alas.
The difference is that there are many actions that help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.
The built-in care-o-meter of your body has known faults and biases, and it measures something often related (at least in classic hunter-gatherer society model) but generally different from actually caring about other people.
I came to the conclusion that I needed more quantitative data about the ecosystem. Sure, birds covered in oil look sad, but would a massive loss of biodiversity on THIS beach affect the entire ecosystem? The real question I had in this thought experiment was "how should I prevent this from happening in the future?" Perhaps nationalizing oil drilling platforms would allow governments to better regulate the potentially hazardous practice. There is a game going on whereby some players are motivated by the profit incentive and others are motivated by genuine altruism, but it doesn't take place on the beach. I certainly never owned an oil rig, and couldn't really competently discuss the problems associated with actual large high-pressure systems. Does anyone here know if oil spills are an unavoidable consequence of the best long-term strategy for human development? That might be important to an informed decision on how much value to place on the cost of the accident, which would inform my decision about how much of my resources I should devote to cleaning the birds.
From another perspective, it's a lot easier to quantify the cost for some outcomes ... This makes it genuinely difficult to define genuinely altruistic strategies for entities experiencing scope insensitivity. And along that line, giving away money because of scope insensitivity IS amoral. It defers judgment to a poorly defined entity which might manage our funds well or deplorably. Founding a cooperative for the purpose of beach restoration seems like a more ethically sound goal, unless of course you have more information about the bird cleaners. The sad truth is that making the right choice often depends on information not readily available, and the lesson I take from this entire discussion is simply how important it is that humankind evolve more sophisticated ways of sharing large amounts of information efficiently, particularly where economic decisions are concerned.
Upvoted for clarity and relevance. You touched on the exact reason why many people I know can't/won't become EAs; even if they genuinely want to help the world, the scope of the problem is just too massive for them to care about accurately. So they go back to donating to the causes that scream the loudest, and turning a blind eye to the rest of the problems.
I used to be like Alice, Bob, and Christine, and donated to whatever charitable cause would pop up. Then I had a couple of Daniel moments, and resolved that whenever I felt pressured to donate to a good cause, I'd note how much I was going to donate and then donate to one of Givewell's top charities.
If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope sensitivity, what observation causes you to believe that you do care about everyone equally?
Serious question; I traverse the reasoning the other way, and since I don't care much about the aggregate six billion people I don't know, I divide and say that I don't care more than one six-billionth as much about the typical person that I don't know.
People that I do know, I do care about- but I don't have to multiply to figure my total caring, I have to add.
I can think of two categories of responses.
One is something like "I care by induction". Over the course of your life, you have ostensibly had multiple experiences of meeting new people, and ending up caring about them. You can reasonably predict that, if you meet more people, you will end up caring about them too. From there, it's not much of a leap to "I should just start caring about people before I meet them". After all, rational agents should not be able to predict changes in their own beliefs; you might as well update now.
The other is something like "The caring is much better calibrated than the not-caring". Let me use an analogy to physics. My everyday intuition says that clocks tick at the same rate for everybody, no matter how fast they move; my knowledge of relativity says clocks slow down significantly near c. The problem is that my intuition on the matter is baseless; I've never traveled at relativistic speeds. When my baseless intuition collides with rigorously-verified physics, I have to throw out my intuition.
I've also never had direct interaction with or made meaningful decisions about billions of people at a time, but I have lots of experience with individual people. "I don't care much about billions of people" is an almost totally unfounded wild guess, but "I care lots about individual people" has lots of solid evidence, so when they collide, the latter wins.
(Neither of these are ironclad, at least not as I've presented them, but hopefully I've managed to gesture in a useful direction.)
Your second category of response seems to say "my intuitions about considering a group of people, taken billions at a time, aren't reliable, but my intuitions about considering the same group of people, one at a time, are". You then conclude that you care because taking the billions of people one at a time implies that you care about them.
But it seems that I could apply the same argument a little differently--instead of applying it to how many people you consider at a time, apply it to the total size of the group. "my intuitions about how much I care about a group of billions are bad, even though my intuitions about how much I care about a small group are good." The second argument would, then, imply that it is wrong to use your intuitions about small groups to generalize to large groups--that is, the second argument refutes the first. Going from "I care about the people in my life" to "I would care about everyone if I met them" is as inappropriate as going from "I know what happens to clocks at slow speeds" to "I know what happens to clocks at near-light speeds".
I'll go a more direct route:
The next time you are in a queue with strangers, imagine the two people behind you (that you haven't met before and don't expect to meet again and didn't really interact with much at all, but they are /concrete/). Put them on one track in the trolley problem, and one of the people that you know and care about on the other track.
If you prefer to save two strangers to one tribesman, you are different enough from me that we will have trouble talking about the subject, and you will probably find me to be a morally horrible person in hypothetical situations.
To address your first category: When I meet new people and interact with them, I do more than gain information- I perform transitive actions that move them out of the group "people I've never met" that I don't care about, and into the group of people that I do care about.
Addressing your second: I found that a very effective way to estimate my intuition would be to imagine a group of X people that I have never met (or specific strangers) on one minecart track, and a specific person that I know on the other. I care so little about small groups of strangers, compared to people that I know, that I find my intuition about billions is roughly proportional; the dominant factor in my caring about strangers is that some number of people who are strangers to me are important to people who are important to me, and therefore indirectly important to me.
An interesting followup to your example of an oiled bird deserving 3 minutes of care came to mind:
Let's assume that there are 150 million suffering people right now, which is a completely wrong random number but a somewhat reasonable order-of-magnitude assumption. A quick calculation estimates that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, then I've got a total of about 15 million care-minutes.
According to even the best possible care-o-meter that I could have, all the problems in the world cannot in total be worth more than 15 million care-minutes - simply because there aren't any more of them to allocate. And in a fair allocation, the average suffering person 'deserves' 0.1 care-minutes of my time, assuming that I don't leave anything at all for the oiled birds. This is a very different meaning of 'deserve' than the one used in the post - but I'm afraid that this is the more meaningful one.
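The arithmetic above can be checked with a quick back-of-the-envelope calculation (the remaining-lifespan and waking-hours figures here are my own assumptions, not from the comment):

```python
# Back-of-the-envelope check of the care-minutes estimate.
# Assumed figures: ~50 years of remaining life, 16 waking hours per day.
years_remaining = 50
waking_minutes = years_remaining * 365 * 16 * 60  # total waking minutes left

suffering_people = 150_000_000  # the comment's rough order-of-magnitude figure

care_minutes_per_person = waking_minutes / suffering_people

print(waking_minutes)                      # 17520000 -- same ballpark as "15 million"
print(round(care_minutes_per_person, 2))   # 0.12 care-minutes each, close to the 0.1 above
```

Any plausible choice of lifespan or waking hours lands in the same place: a fraction of a minute per suffering person, which is the comment's point.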
I know the name is just a coincidence, but I'm going to pretend that you wrote this about me.
I see suffering the whole day in healthcare but I'm actually pretty much numbed to it. Nothing really gets to me, and if it did it could be quite crippling. Sometimes I watch sad videos or read dramatizations of real events to force myself to care for a while, to keep me from forgetting why I show up at work. Reading certain types of writings by rationalists helps too.
You shouldn't get more than glimpses of the weight of the world, or rather you shouldn't let them through the defences, to be able to function.
"Will the procedure hurt?" asked the patient. "Not if you don't sting yourself by accident!" answered the doctor with the needle.
Daniel grew up as a poor kid, and one day he was overjoyed to find $20 on the sidewalk. Daniel could have worked hard to become a trader on Wall Street. Yet he decides to become a teacher instead, because of his positive experiences tutoring a few kids while in high school. But as a high school teacher, he will only teach a thousand kids in his career, while as a trader, he would have been able to make millions of dollars. If he multiplied his positive experience with one kid by a thousand, it still probably wouldn't compare with the joy of finding $20 on the sidewalk times a million.
Because Daniel has been thinking of scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of dollars: the internal feeling of satisfaction with gaining money can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about making lots of money, he shuts up and multiplies the joy of finding $20 by a million....
Um, that's nonsense. His brain does not misreport how much he actually cares -- it's just that his brain thinks that it should care more. It's a conflict between "is" and "should", not a matter of misreporting "is".
After which he goes and robs a bank.
You do realize that what I said is a restatement of one of the examples in the original article, except substituting "caring about money" for "caring about birds"? And snarles' post was a somewhat more indirect version of that as well? Being nonsense is the whole point.
Yes, I do, and I think it's nonsense there as well. The care-o-meter is not broken, it's just that your brain would prefer you to care more about all these numbers. It's like preferring not to have a fever and saying the thermometer is broken because it shows too high a temperature.
Nice try, but even if my utility for oiled birds was as nonlinear as most people's utility for money is, the fact that there are many more oiled birds than I'm considering saving means that what you need to compare is (say) U(54,700 oiled birds), U(54,699 oiled birds), and U(53,699 oiled birds) -- and it'd be a very weird utility function indeed if the difference between the first and the second is much larger than one-thousandth the difference between the second and the third. And even if U did have such kinks, the fact that you don't know exactly how many oiled birds are there would smooth them away when computing EU(one fewer oiled bird) etc.
(IIRC EY said something similar in the sequences, using starving children rather than oiled birds as the example, but I can't seem to find it right now.)
Unless you also care about who is saving the birds -- but you aren't considering saving them with your own hands, you're considering giving money to save them, and money is fungible, so it'd be weird to care about who is giving the money.
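The "uncertainty smooths kinks away" point can be made concrete with a toy model (the step-shaped utility function and all the numbers here are invented for illustration, not from the comment):

```python
def u(n):
    # Toy utility with an artificial kink: a sudden 1000-util drop
    # once the number of oiled birds reaches 54,700.
    return -1000.0 if n >= 54_700 else 0.0

def expected_gain_of_saving_one(lo=50_000, hi=60_000):
    # Expected utility gain from one fewer oiled bird, when the true
    # count is only known to be uniformly distributed over [lo, hi].
    counts = range(lo, hi + 1)
    return sum(u(n - 1) - u(n) for n in counts) / len(counts)

# With certainty at the kink, saving one bird gains 1000 utils, but under
# uncertainty the expected gain collapses to 1000/10001, roughly 0.1:
print(expected_gain_of_saving_one())
```

The kink only pays off at exactly one count out of 10,001 possibilities, so not knowing the exact number of birds averages the cliff into a gentle slope - which is why expected utility over an uncertain count behaves as if the utility function had no kinks at all.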
Nonlinear in what?
Daniel's utility for dollars is nonlinear in the total number of dollars that he has, not in the total number of dollars in the world. Likewise, his utility for birds is nonlinear in the total number of birds that he has saved, not in the total number of birds that exist in the world.
(Actually, I'd expect it to have two components, one of which is nonlinear in the number of birds he has saved and another of which is nonlinear in the total number of birds in the world. However, the second factor would be negligibly small in most situations.)
IOW he doesn't actually care about the birds, he cares about himself.
He has a utility function that is larger when more birds are saved. If this doesn't count as caring about the birds, your definition of "cares about the birds" is very arbitrary.
He has a utility function that is larger when he saves more birds; birds saved by other people don't count.
If it has two components, they do count, just not by much.
I think there are some good points to be made about the care-o-meter as a heuristic.
Basically, let's say that the utility associated with altruistic effort has a term something like this:
U = [relative amount of impact I can have on the problem] * [absolute significance of the problem]
To some extent, one's care-o-meter is a measurement of the latter term, i.e. the "scope" of the problem, and the issue of scope insensitivity demonstrates that it fails miserably in this regard. However, that isn't entirely an accurate criticism, because as a rough heuristic your care-o-meter isn't simply a measure of the second term; it also includes some aspects of the first term. Indeed, if one views the care-o-meter as a "call to action", then it would make much more sense for it to be a heuristic estimate of U than of absolute problem significance.
For example, if your care-o-meter says you care more about your friends than about people far away, or don't care much more about large disasters than smaller ones, then any combination of three things could be going on:
(1) I can't have as much relative impact on those problems.
(2) Those problems are simply less important.
(3) My care-o-meter is simply wrong.
I don't agree at all with (2), and I can see a lot of merit in the suggestion of (3). However, I think that for most people in most of human history, (1) has been relatively applicable. If you, personally, are only capable of helping other people a single person at a time, then it doesn't really matter if that person is a single person who has been hurt, or one out of a million suffering due to a major disaster. Also, you are in a unique position to help your friends more so than other people, and thus it makes plenty of sense to spend effort on your friends more so than on random strangers.
Of course, it is nonetheless true that this kind of care-o-meter miscalibration has always been an issue. At the very least, there have always been people who have had much more power than others, and thus have been able to make larger impacts on larger problems.
More importantly, in modern times (1) is far less true than it used to be for a great many people. It is genuinely possible for many people in the world to have a significant impact on what you refer to as distant invisible problems, and thus good care-o-meter calibration is essential.
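The heuristic above can be illustrated with invented numbers (both the impact and significance values below are made up purely to show the trade-off):

```python
def call_to_action(relative_impact, significance):
    # U = [relative impact I can have on the problem] * [absolute significance]
    return relative_impact * significance

# Invented numbers: a friend's problem is far less significant than a
# distant disaster, but my relative impact on it is far larger.
helping_a_friend = call_to_action(relative_impact=0.5, significance=10)       # 5.0
distant_disaster = call_to_action(relative_impact=0.0001, significance=10_000)  # ~1.0

print(helping_a_friend > distant_disaster)  # the impact term dominates here
```

The comment's historical point drops out of the same formula: when `relative_impact` on distant problems is effectively zero, U ranks nearby problems first no matter how large `significance` gets; once modern tools raise that impact term, the ranking can flip.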
Regarding scope sensitivity and the oily bird test, one man's modus ponens is another's modus tollens. Maybe if you're willing to save one bird, you should be willing to donate to save many more birds. But maybe the reverse is true - you're not willing to save thousands and thousands of birds, so you shouldn't save one bird, either. You can shut up and multiply, but you can also shut up and divide.