
On Caring

99 Post author: So8res 15 October 2014 01:59AM

This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.

1

I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".

Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million Earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius… I would still just feel like that's a lot of Earths.

The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.

I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.

This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.

For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.

The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.

Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.

And this is a problem.

2

It's a common trope that courage isn't about being fearless, it's about being afraid but doing the right thing anyway. In the same sense, caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.

My internal care-o-meter was calibrated to deal with about a hundred and fifty people, and it simply can't express the amount of caring that I have for billions of sufferers. The internal care-o-meter just doesn't go up that high.

Humanity is playing for unimaginably high stakes. At the very least, there are billions of people suffering today. At the worst, there are quadrillions (or more) potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, the experience and art and beauty that is possible in the future, depend upon the present.

When you're faced with stakes like these, your internal caring heuristics — calibrated on numbers like "ten" or "twenty" — completely fail to grasp the gravity of the situation.

Saving a person's life feels great, and it would probably feel just about as good to save one life as it would feel to save the world. It surely wouldn't be many billion times more of a high to save the world, because your hardware can't express a feeling a billion times bigger than the feeling of saving a person's life. But even though the altruistic high from saving someone's life would be shockingly similar to the altruistic high from saving the world, always remember that behind those similar feelings there is a whole world of difference.

Our internal care-feelings are woefully inadequate for deciding how to act in a world with big problems.

3

There's a mental shift that happened to me when I first started internalizing scope insensitivity. It is a little difficult to articulate, so I'm going to start with a few stories.

Consider Alice, a software engineer at Amazon in Seattle. Once a month or so, those college students show up on street corners with clipboards, looking ever more disillusioned as they struggle to convince people to donate to Doctors Without Borders. Usually, Alice avoids eye contact and goes about her day, but this month they finally manage to corner her. They explain Doctors Without Borders, and she actually has to admit that it sounds like a pretty good cause. She ends up handing them $20 through a combination of guilt, social pressure, and altruism, and then rushes back to work. (Next month, when they show up again, she avoids eye contact.)

Now consider Bob, who has been given the Ice Bucket Challenge by a friend on Facebook. He feels too busy to do the Ice Bucket Challenge, and instead just donates $100 to ALSA.

Now consider Christine, who is in the college sorority ΑΔΠ. ΑΔΠ is engaged in a competition with ΠΒΦ (another sorority) to see who can raise the most money for the National Breast Cancer Foundation in a week. Christine has a competitive spirit and gets engaged in fund-raising, and gives a few hundred dollars herself over the course of the week (especially at times when ΑΔΠ is especially behind).

All three of these people are donating money to charitable organizations… and that's great. But notice that there's something similar in these three stories: these donations are largely motivated by a social context. Alice feels obligation and social pressure. Bob feels social pressure and maybe a bit of camaraderie. Christine feels camaraderie and competitiveness. These are all fine motivations, but notice that these motivations are related to the social setting, and only tangentially to the content of the charitable donation.

If you asked any of Alice, Bob, or Christine why they aren't donating all of their time and money to these causes that they apparently believe are worthwhile, they'd look at you funny and they'd probably think you were being rude (with good reason!). If you pressed, they might tell you that money is a little tight right now, or that they would donate more if they were a better person.

But the question would still feel kind of wrong. Giving all your money away is just not what you do with money. We can all say out loud that people who give all their possessions away are really great, but behind closed doors we all know that those people are crazy. (Good crazy, perhaps, but crazy all the same.)

This is a mindset that I inhabited for a while. There's an alternative mindset that can hit you like a freight train when you start internalizing scope insensitivity.

4

Consider Daniel, a college student shortly after the Deepwater Horizon BP oil spill. He encounters one of those college students with clipboards on a street corner, soliciting donations to the World Wildlife Fund. They're trying to save as many oiled birds as possible. Normally, Daniel would simply dismiss the charity as Not The Most Important Thing, or Not Worth His Time Right Now, or Somebody Else's Problem, but this time Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.

He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can. They simply don't have the resources to clean all the available birds. A pathetic young bird flops towards his feet, slick with oil, eyes barely able to open. He kneels down to pick it up and help it onto the table. One of the bird-cleaners informs him that they won't have time to get to that bird themselves, but he could pull on some gloves and could probably save the bird with three minutes of washing.


Daniel decides that he would spend three minutes of his time to save the bird, and that he would also be happy to pay at least $3 to have someone else spend a few minutes cleaning the bird. He introspects and finds that this is not just because he imagined a bird right in front of him: he feels that it is worth at least three minutes of his time (or $3) to save an oiled bird in some vague platonic sense.

And, because he's been thinking about scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of birds: the internal feeling of caring can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about de-oiling lots of birds, he shuts up and multiplies.

Thousands and thousands of birds were oiled by the BP spill alone. After shutting up and multiplying, Daniel realizes (with growing horror) that the amount he actually cares about oiled birds is lower bounded by two months of hard work and/or fifty thousand dollars. And that's not even counting wildlife threatened by other oil spills.
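
(A rough back-of-the-envelope sketch of that multiplication, for concreteness. Only the per-bird time and dollar figures come from the thought experiment above; the bird count and the work schedule are illustrative assumptions, not figures from the essay.)

    # Back-of-the-envelope version of Daniel's multiplication.
    # Only the per-bird time and dollar figures come from the thought
    # experiment; the bird count and work schedule are assumptions.

    birds_oiled = 20_000       # assumed order of magnitude for the BP spill
    minutes_per_bird = 3       # from the thought experiment
    dollars_per_bird = 3       # from the thought experiment

    total_hours = birds_oiled * minutes_per_bird / 60
    work_months = total_hours / (40 * 4.3)   # assuming ~40 h/week, ~4.3 weeks/month
    total_dollars = birds_oiled * dollars_per_bird

    print(f"~{total_hours:.0f} hours of washing, about {work_months:.1f} months of full-time work")
    print(f"~${total_dollars:,} to pay someone else instead")

    # With these assumptions: ~1000 hours (about 5.8 months) and $60,000,
    # comfortably above the "two months and/or fifty thousand dollars"
    # lower bound Daniel arrives at.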

And if he cares that much about de-oiling birds, then how much does he actually care about factory farming, never mind hunger, or poverty, or sickness? How much does he actually care about wars that ravage nations? About neglected, deprived children? About the future of humanity? He actually cares about these things to the tune of much more money than he has, and much more time than he has.

For the first time, Daniel sees a glimpse of how much he actually cares, and how poor a state the world is in.

This has the strange effect that Daniel's reasoning goes full-circle, and he realizes that he actually can't care about oiled birds to the tune of 3 minutes or $3: not because the birds aren't worth the time and money (and, in fact, he thinks that the economy produces things priced at $3 which are worth less than the bird's survival), but because he can't spend his time or money on saving the birds. The opportunity cost suddenly seems far too high: there is too much else to do! People are sick and starving and dying! The very future of our civilization is at stake!

Daniel doesn't wind up giving $50k to the WWF, and he also doesn't donate to ALSA or NBCF. But if you ask Daniel why he's not donating all his money, he won't look at you funny or think you're rude. He's left the place where you don't care far behind, and has realized that his mind was lying to him the whole time about the gravity of the real problems.

Now he realizes that he can't possibly do enough. After adjusting for his scope insensitivity (and the fact that his brain lies about the size of large numbers), even the "less important" causes like the WWF suddenly seem worthy of dedicating a life to. Wildlife destruction and ALS and breast cancer are suddenly all problems that he would move mountains to solve — except he's finally understood that there are just too many mountains, and ALS isn't the bottleneck, and AHHH HOW DID ALL THESE MOUNTAINS GET HERE?

In the original mindstate, the reason he didn't drop everything to work on ALS was because it just didn't seem… pressing enough. Or tractable enough. Or important enough. Kind of. These are sort of the reason, but the real reason is more that the concept of "dropping everything to address ALS" never even crossed his mind as a real possibility. The idea was too much of a break from the standard narrative. It wasn't his problem.

In the new mindstate, everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.

Alice and Bob and Christine usually aren't spending time solving all the world's problems because they forget to see them. If you remind them — put them in a social context where they remember how much they care (hopefully without guilt or pressure) — then they'll likely donate a little money.

By contrast, Daniel and others who have undergone the mental shift aren't spending time solving all the world's problems because there are just too many problems. (Daniel hopefully goes on to discover movements like effective altruism and starts contributing towards fixing the world's most pressing problems.)

5

I'm not trying to preach here about how to be a good person. You don't need to share my viewpoint to be a good person (obviously).

Rather, I'm trying to point at a shift in perspective. Many of us go through life understanding that we should care about people suffering far away from us, but failing to. I think that this attitude is tied, at least in part, to the fact that most of us implicitly trust our internal care-o-meters.

The "care feeling" isn't usually strong enough to compel us to frantically save everyone dying. So while we acknowledge that it would be virtuous to do more for the world, we think that we can't, because we weren't gifted with that virtuous extra-caring that prominent altruists must have.

But this is an error — prominent altruists aren't the people who have a larger care-o-meter, they're the people who have learned not to trust their care-o-meters.

Our care-o-meters are broken. They don't work on large numbers. Nobody has one capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.

You don't get to feel the appropriate amount of "care", in your body. Sorry — the world's problems are just too large, and your body is not built to respond appropriately to problems of this magnitude. But if you choose to do so, you can still act like the world's problems are as big as they are. You can stop trusting the internal feelings to guide your actions and switch over to manual control.

6

This, of course, leads us to the question of "what the hell do you do, then?"

And I don't really know yet. (Though I'll plug the Giving What We Can pledge, GiveWell, MIRI, and The Future of Humanity Institute as a good start).

I think that at least part of it comes from a certain sort of desperate perspective. It's not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world's 100th biggest problem if you could, but you can't, because there are 99 bigger problems you have to address first.

I'm not trying to guilt you into giving more money away — becoming a philanthropist is really really hard. (If you're already a philanthropist, then you have my acclaim and my affection.) First it requires you to have money, which is uncommon, and then it requires you to throw that money at distant invisible problems, which is not an easy sell to a human brain. Akrasia is a formidable enemy. And most importantly, guilt doesn't seem like a good long-term motivator: if you want to join the ranks of people saving the world, I would rather you join them proudly. There are many trials and tribulations ahead, and we'd do better to face them with our heads held high.

7

Courage isn't about being fearless, it's about being able to do the right thing even if you're afraid.

And similarly, addressing the major problems of our time isn't about feeling a strong compulsion to do so. It's about doing it anyway, even when internal compulsion utterly fails to capture the scope of the problems we face.

It's easy to look at especially virtuous people — Gandhi, Mother Theresa, Nelson Mandela — and conclude that they must have cared more than we do. But I don't think that's the case.

Nobody gets to comprehend the scope of these problems. The closest we can get is doing the multiplication: finding something we care about, putting a number on it, and multiplying. And then trusting the numbers more than we trust our feelings.

Because our feelings lie to us.

When you do the multiplication, you realize that addressing global poverty and building a brighter future deserve more resources than currently exist. There is not enough money, time, or effort in the world to do what we need to do.

There is only you, and me, and everyone else who is trying anyway.

8

You can't actually feel the weight of the world. The human mind is not capable of that feat.

But sometimes, you can catch a glimpse.

Comments (272)

Comment author: blacktrance 20 October 2014 12:14:57AM 12 points [-]

Regarding scope sensitivity and the oily bird test, one man's modus ponens is another's modus tollens. Maybe if you're willing to save one bird, you should be willing to donate to save many more birds. But maybe the reverse is true - you're not willing to save thousands and thousands of birds, so you shouldn't save one bird, either. You can shut up and multiply, but you can also shut up and divide.

Comment author: timujin 12 October 2014 08:27:53AM 12 points [-]

Did the oil bird mental exercise. Came to conclusion that I don't care at all about anyone else, and am only doing good things for altruistic high and social benefits. Sad.

Comment author: Capla 21 October 2014 01:10:13AM 7 points [-]

If you actually think it's sad (Do you?), then you have a higher-order set of values that wants you to want to care about others.

If you want to want to care, you can do things to change yourself so that you do care. Even more importantly, you can begin to act *as if* you care, because "caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway."

All I know is that I want to be the sort of person who cares. So, I act as that sort of person, and thereby become her.

Comment author: Philip_W 09 December 2014 03:39:32PM 1 point [-]

you can do things to change yourself so that you do care.

Would you care to give examples or explain what to look for?

Comment author: Capla 09 December 2014 05:07:40PM *  5 points [-]

The biggest thing is just to act like you are already the sort of person who does care. Go do the good work.

Find people who are better than you. Hang out with them. "You become like the 6 people you spend the most time with" and all that. (I remember reading the chapter on penetrating Azkaban in HP:MoR, and feeling how much I didn't care. I knew that there are places in the world where the suffering is as great as in that fictional place, but it didn't bother me; I would just go about my day and go to sleep, whereas the fictional Harry is deeply shaken by his experience. I felt, "I'm not Good [in the moral sense] enough" and then thought that if I'm not good enough, I need to find people who are, who will help me be better. I need to find my Hermiones.)

I'm trying to find the most Good people of my generation, but I realized long ago that I shouldn't be looking for Good people, so much as I should be looking for people who are actively seeking to be better than they are. (If you want to be as Good as you can be, please message me. Maybe we can help each other.)

My feelings of moral inadequacy compared to Harry's feelings towards Azkaban (fictional) aren't really fair. My brain isn't designed to be moved by abstract concepts. Harry (fictional) saw that suffering first hand and was changed by it; I only mentally multiply. I'm thinking that I need to put myself in situations where I can experience the awfulness of the world viscerally. People make fun of teenagers going to "help" build houses in the third world: it's pretty massively inefficient to ship untrained teenagers to Mexico to do manual labor (or only sort of do it), when their hourly output would be much higher if they just got a college degree and donated. Yet I know at least one person (someone who I respect, one of my "Hermiones") who went to build houses in Mexico for a month and was heavily impacted by it, and it spurred her to be of service more generally. (She told me that on the flight back to the States she was emotionally upset because, while she was homesick and tired of eating beans and rice for every meal (she's vegan), she knew that life would get in the way, and she would lose the perspective she had in Mexico. The test tomorrow has a way of seeming all-important, and she was afraid of losing that perspective of how much worse other people had it, and what the Truly important things are. She got a tattoo that reads "Gratitude" in Spanish, as a permanent and perpetual reminder.)

Maybe you need to go see squalor? I haven't, so I can't say. I have thought that I should choose someone concrete to help, perhaps on a weekly basis, so that when I'm considering buying something I don't need, my thought process isn't "If I buy this, that's 4 dollars less that I can give to charity", but instead, "If I buy this, Annie won't get that vaccine." I haven't implemented this yet, so I can't say how effective it will be. Social pressure might help: let me know if you want to try something like this with me.

Does that help?

Comment author: Lumifer 09 December 2014 06:30:00PM 2 points [-]

Maybe you need to go see squalor? I haven't, so I can't say.

I have seen squalor, and in my particular case it did not recalibrate my care-o-meter at all. YMMV, of course.

Comment author: TomStocker 14 May 2015 01:01:45PM 0 points [-]

Living in pain sent my care-o-meter from below average to full. Seeing squalor definitely did something. I think it probably depends on how you see it - did you talk to people as equals, or see them as different types of people you couldn't relate to / who didn't fit certain criteria? Being surrounded by suffering from a young age doesn't seem to make people care - it's being shocked by suffering after not having had much of it around that is occasionally very powerful - like the story about the Buddha growing up in the palace and then seeing sickness, death and age for the first time?

Comment author: RichardKennaway 12 October 2014 07:11:00PM 4 points [-]

Came to conclusion that I don't care at all about anyone else, and am only doing good things for altruistic high and social benefits.

What is the difference between an altruistic high and caring about other people? Isn't the former what the latter feels like?

Comment author: PeterisP 15 October 2014 04:07:25PM 5 points [-]

The difference is that there are many actions that help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.

The built-in care-o-meter of your body has known faults and biases, and it measures something that is often related to actually caring about other people (at least in a classic hunter-gatherer society model) but generally different from it.

Comment author: JoshuaMyer 19 October 2014 09:54:19PM 0 points [-]

I came to the conclusion that I needed more quantitative data about the ecosystem. Sure, birds covered in oil look sad, but would a massive loss of biodiversity on THIS beach affect the entire ecosystem? The real question I had in this thought experiment was "how should I prevent this from happening in the future?" Perhaps nationalizing oil drilling platforms would allow governments to better regulate the potentially hazardous practice. There is a game going on whereby some players are motivated by the profit incentive and others are motivated by genuine altruism, but it doesn't take place on the beach. I certainly never owned an oil rig, and couldn't really competently discuss the problems associated with actual large high-pressure systems. Does anyone here know if oil spills are an unavoidable consequence of the best long-term strategy for human development? That might be important to an informed decision on how much value to place on the cost of the accident, which would inform my decision about how much of my resources I should devote to cleaning the birds.

From another perspective, it's a lot easier to quantify the cost for some outcomes ... This makes it genuinely difficult to define genuinely altruistic strategies for entities experiencing scope insensitivity. And along that line, giving away money because of scope insensitivity IS amoral. It defers judgement to a poorly defined entity which might manage our funds well or deplorably. Founding a cooperative for the purpose of beach restoration seems like a more ethically sound goal, unless of course you have more information about the bird cleaners. The sad truth is that making the right choice often depends on information not readily available, and the lesson I take from this entire discussion is simply how important it is that humankind evolve more sophisticated ways of sharing large amounts of information efficiently, particularly where economic decisions are concerned.

Comment author: timujin 13 October 2014 06:26:50AM 0 points [-]

Because I wouldn't actually care if my actions actually help, as long as my brain thinks they do.

Comment author: RichardKennaway 13 October 2014 08:17:12AM 0 points [-]

Are you favouring wireheading then? (See hyporational's comment.) That is, finding it oppressively tedious that you can only get that feeling by actually going out and helping people, and wishing you could get it by a direct hit?

Comment author: Jiro 13 October 2014 02:34:50PM 3 points [-]

I think he wants to do things for which his brain whispers "this is altruistic" right now. It is true that wireheading would lead his brain to whisper that about everything. But from his current position, wireheading is not a benefit, because he values future events according to his current brain state, not his future brain state.

Comment author: timujin 15 October 2014 09:04:15AM 1 point [-]

No, just as I eat sweets for sweet pleasure, not for getting sugar into my body, but I still wouldn't wirehead into constantly feeling sweetness in my mouth.

Comment author: lmm 17 October 2014 09:03:59PM *  0 points [-]

I find this a confusing position. Please expand

Comment author: timujin 18 October 2014 06:42:43PM 8 points [-]

Funny thing. I started out expanding this, trying to explain it as thoroughly as possible, and, all of a sudden, it became confusing to me. I guess, it was not a well thought out or consistent position to begin with. Thank you for a random rationality lesson, but you are not getting this idea expanded, alas.

Comment author: Philip_W 09 December 2014 10:38:06AM 0 points [-]

Assuming his case is similar to mine: the altruism-sense favours wireheading - it just wants to be satisfied - while other moral intuitions say wireheading is wrong. When I imagine wireheading (like timujin imagines having a constant taste of sweetness in his mouth), I imagine still having that part of the brain which screams "THIS IS FAKE, YOU GOTTA WAKE UP, NEO". And that part wouldn't shut up unless I actually believed I was out (or it's shut off, naturally).

When modeling myself as sub-agents, then in my case at least the anti-wireheading and pro-altruism parts appear to be independent agents by default: "I want to help people/be a good person" and "I want it to actually be real" are separate urges. What the OP seems to be appealing to is a system which says "I want to actually help people" in one go - sympathy, perhaps, as opposed to satisfying your altruism self-image.

Comment author: hyporational 13 October 2014 05:39:23AM *  0 points [-]

What is the difference between an altruistic high and caring about other people? Isn't the former what the latter feels like?

If there's no difference we arrive at the general problem of wireheading. I suspect very few people who identify themselves as altruists would choose being wireheaded for altruistic high. What are the parameters that would keep them from doing so?

Comment author: RichardKennaway 13 October 2014 08:17:02AM 1 point [-]

If there's no difference we arrive at the general problem of wireheading.

Yes. Let me change my question. If (absent imaginary interventions with electrodes or drugs that don't currently exist) an altruistic high is, literally, what it feels like when you care about others and act to help them, then saying "I don't care about them, I just wanted the high" is like saying "I don't enjoy sex, I just do it for the pleasure", or "A stubbed toe doesn't hurt, it just gives me a jolt of pain." In short, reductionism gone wrong, angst at contemplating the physicality of mind.

Comment author: hyporational 13 October 2014 03:01:39PM *  0 points [-]

It seems to me you can care about having sex without having the pleasure as well as care about not stubbing your toe without the pain. Caring about helping other people without the altruistic high? No problem.

It's not clear to me where the physicality of mind or reductionism gone wrong enter the picture, not to mention angst. Oversimplification is aesthetics gone wrong.

ETA: I suppose it would be appropriately generous to assume that you meant altruistic high as one of the many mind states that caring feels like, but in many instances caring in the sense that I'm motivated to do something doesn't seem to feel like anything at all. Perhaps there's plenty of automation involved and only novel stimuli initiate noticeable perturbations. It would be an easy mistake to only count the instances where caring feels like something, which I think happened in timujin's case. It would also be a mistake to think you only actually care about something when it doesn't feel like anything.

Comment author: RichardKennaway 15 October 2014 08:14:58AM 1 point [-]

It seems to me you can care about having sex without having the pleasure as well as care about not stubbing your toe without the pain. Caring about helping other people without the altruistic high? No problem.

I was addressing timujin's original comment, where he professed to desiring the altruistic high while being indifferent to other people, which on the face of it is paradoxical. Perhaps, I speculate, noticing that the feeling is a thing distinct from what the feeling is about has led him to interpret this as discovering that he doesn't care about the latter.

Or, it also occurs to me, perhaps he is experiencing the physical feeling without the connection to action, as when people taking morphine report that they still feel the pain, but it no longer hurts.

Brains can go wrong in all sorts of ways.

Comment author: NancyLebovitz 07 October 2014 02:07:52PM 12 points [-]

It's easy to look at especially virtuous people — Gandhi, Mother Theresa, Nelson Mandela — and conclude that they must have cared more than we do. But I don't think that's the case.

Even they didn't try to take on all the problems in the world. They helped a subset of people that they cared about with particular fairly well-defined problems.

Comment author: [deleted] 07 October 2014 02:45:03PM 9 points [-]

Even they didn't try to take on all the problems in the world. They helped a subset of people that they cared about with particular fairly well-defined problems.

Yes, that is how adults help in real life. In science we chop off little sub-sub-problems we think we can address to do our part to address larger questions whose answers no one person will ever find alone, and thus end up doing enormous work on the shoulders of giants. It works roughly the same in activism.

Comment author: Swimmer963 07 October 2014 02:00:08PM 8 points [-]

Wow this post is pretty much exactly what I've been thinking about lately.

Saving a person's life feels great.

Yup. Been there. Still finding a way to use that ICU-nursing high as motivation for something more generalized than "omg take all the overtime shifts."

Also, I think that my brain already runs on something like virtue ethics, but that the particular thing I think is virtuous changes based on my beliefs about the world, and this is probably a decent way to do things for reasons other than visceral caring. (I mean, I do viscerally care about being virtuous...)

Comment author: VAuroch 08 October 2014 09:01:54AM *  18 points [-]

I accept all the argument for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I'm in Daniel's position up through chunk 4, and reach the state of mind where

everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.

and find it literally unbearable. All of a sudden, it's clear that to be a good person is to accept the weight of the world on your shoulders. This is where my path diverges; EA says "OK, then, that's what I'll do, as best I can"; from my perspective, it's swallowing the bullet. At this point, your modus ponens is my modus tollens; I can't deal with what the argument would require of me, so I reject the premise. I concluded that I am not a good person and won't be for the foreseeable future, and limited myself to the weight of my chosen community and narrowly-defined ingroup.

I don't think you're wrong to try to convert people to EA. It does bear remembering, though, that not everyone is equipped to deal with this outlook, and some people will find that trying to shut up and multiply is lastingly unpleasant, such that an altruistic outlook becomes significantly aversive.

Comment author: Kaj_Sotala 09 October 2014 09:00:30AM 13 points [-]

This is why I prefer to frame EA as something exciting, not burdensome.

Comment author: NancyLebovitz 15 October 2014 02:18:58PM 6 points [-]

Exciting vs. burdensome seems to be a matter of how you think about success and failure. If you think "we can actually make things better!", it's exciting. If you think "if you haven't succeeded immediately, it's all your fault", it's burdensome.

This just might have more general application.

Comment author: Capla 21 October 2014 01:17:50AM 1 point [-]

If I'm working at my capacity, I don't see how it's my fault for not having the world fixed immediately. I can't do any more than I can do and I don't see how I'm responsible for more than what my efforts could change.

Comment author: John_Maxwell_IV 09 October 2014 11:32:06PM 3 points [-]

Do we have any data on which EA pitches tend to be most effective?

Comment author: NancyLebovitz 08 October 2014 03:44:23PM 6 points [-]

I've seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.

Comment author: RichardKennaway 09 October 2014 09:07:05AM 5 points [-]

Once you've decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?

Peter Singer, to take one prominent example, argues that whether you do or not (and most people do), morally you cannot. To buy an expensive pair of shoes (he says) is morally equivalent to killing a child. Yvain has humorously suggested measuring sums of money in dead babies. At least, I think he was being humorous, but he might at the same time be deadly serious.

Comment author: Lumifer 09 October 2014 02:56:38PM 4 points [-]

To buy an expensive pair of shoes (he says) is morally equivalent to killing a child.

I always find it curious how people forget that equality is symmetrical and works in both directions.

So, killing a child is morally equivalent to buying an expensive pair of shoes? That's interesting...

Comment author: [deleted] 10 October 2014 04:02:22PM 7 points [-]

I always find it curious how people forget that equality is symmetrical and works in both directions.

See also http://xkcd.com/1035/, last panel.

So, killing a child is morally equivalent to buying an expensive pair of shoes? That's interesting...

One man's modus ponens... I don't lose much sleep when I hear that a child I had never heard of before was killed.

Comment author: RichardKennaway 09 October 2014 04:32:18PM 1 point [-]

No, except by interpreting the words "morally equivalent" in that sentence in a way that nobody does, including Peter Singer. Most people, including Peter Singer, think of a pair of good shoes (or perhaps the comparison was to an expensive suit, it doesn't matter) as something nice to have, and the death of a child as a tragedy. These two values are not being equated. Singer is drawing attention to the causal connection between spending your money on the first and not spending it on the second. This makes buying the shoes a very bad thing to do: its value is that of (a nice thing) - (a really good thing); saving the child has the value (a really good thing) - (a nice thing).

The only symmetry here is that of "equal and opposite".

Did anyone actually need that spelled out?

Comment author: Lumifer 09 October 2014 05:13:35PM 2 points [-]

These verbal contortions do not look convincing.

The claimed moral equivalence is between buying shoes and killing -- not saving -- a child. It's also claimed equivalence between actions, not between values.

Comment author: [deleted] 15 October 2014 10:50:30PM 2 points [-]

A lot of people around here see little difference between actively murdering someone and standing by while someone is killed while we could easily save them. This runs contrary to the general societal views that say it's much worse to kill someone by your own hand than to let them die without interfering. Or even if you interfere, but your interference is sufficiently removed from the actual death.

For instance, what do you think George Bush Sr's worst action was? A war? No; he enacted an embargo against Iraq that extended over a decade and restricted basic medical supplies from going into the country. The infant mortality rate jumped up to 25% during that period, and other people didn't fare much better. And yet few people would think an embargo makes Bush more evil than the killers at Columbine.

This is utterly bizarre on many levels, but I'm grateful too -- I can avoid thinking of myself as a bad person for not donating any appreciable amount of money to charity, when I could easily pay to cure a thousand people of malaria per year.

Comment author: gjm 15 October 2014 11:26:38PM 6 points [-]

When you ask how bad an action is, you can mean (at least) two different things.

  • How much harm does it do?
  • How strongly does it indicate that the person who did it is likely to do other bad things in future?

Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn't some specific person who's dying. So actually killing someone is "worse", if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there's no difference in harm done.

In some contexts this sort of character evaluation really is what you care about. If you want to know whether someone's going to be safe and enjoyable company if you have a drink with them, you probably do prefer someone who'd put in place an embargo that kills millions rather than someone who would shoot dozens of schoolchildren.

That's perfectly consistent with (1) saying that in terms of actual harm done spending money on yourself rather than giving it to effective charities is as bad as killing people, and (2) attempting to choose one's own actions on the basis of harm done rather than evidence of character.

Comment author: dthunt 09 October 2014 05:35:25PM 0 points [-]

Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive.

I'm somewhat persuaded by arguments that choices not made, which have consequences, like X preventably dying, can have moral costs.

Not INFINITELY EXPLODING costs, which is what you need in order to experience the full brunt of responsibility of "We are the last two people alive, and you're dying right in front of me, and I could help you, but I'm not going to." when deciding to buy shoes or not, when there are 7 billion of us, and you're actually dying over there, and someone closer to you is not helping you.

Comment author: tog 09 October 2014 07:21:43PM *  5 points [-]

Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive.

In case anyone else was curious about this, here's a quote:

Barbara Ann Radnofsky, a Houston lawyer and Democratic candidate for attorney general, says that a 22-word clause in a 2005 constitutional amendment designed to ban gay marriages erroneously endangers the legal status of all marriages in the state.

The amendment, approved by the Legislature and overwhelmingly ratified by voters, declares that “marriage in this state shall consist only of the union of one man and one woman.” But the troublemaking phrase, as Radnofsky sees it, is Subsection B, which declares:

“This state or a political subdivision of this state may not create or recognize any legal status identical or similar to marriage.”

Oops.

Comment author: Dentin 17 October 2014 04:12:56PM 0 points [-]

The biggest problem I have with 'dead baby' arguments is that I value them significantly below the value of a high functioning adult. Given the opportunity to save one or the other, I would pick the adult, and I don't find that babies have a whole lot of intrinsic value until they're properly programmed.

Comment author: NancyLebovitz 21 October 2014 03:08:49AM *  0 points [-]

If you don't take care of babies, you'll eventually run out of adults. If you don't have adults, the babies won't be taken care of.

I don't know what a balanced approach to the problem would look like.

Comment author: tog 09 October 2014 07:18:55PM 0 points [-]

NancyLebovitz:

I've seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.

RichardKennaway:

Once you've decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?

Richard's question is a good one, but even if there's no good answer it's a psychological fact that people can get convinced that they should redirect their existing donations to cost-effective charities but not that charity should crowd out other spending - and that this is an easier sell. So the framing of EA that Nancy describes has practical value.

Comment author: torekp 10 October 2014 02:05:00AM *  3 points [-]

Understanding the emotional pain of others, on a non-verbal level, can lead in at least two directions, which I've usually seen called "sympathy" and "personal distress" in the psych literature. Personal distress involves seeing the problem (primarily, or at least importantly) as one's own. Sympathy involves seeing it as that person's. Some people, including Albert Schweitzer, claim(ed) to be able to feel sympathy without significant personal distress, and as far as I can see that seems to be true. Being more like them strikes me as a worthwhile (sub)goal. (Until I get there, if ever - I feel your pain. Sorry, couldn't resist.)

Hey I just realized - if you can master that, and then apply the sympathy-without-personal-distress trick to yourself as well, that looks like it would achieve one of the aims of Buddhism.

Comment author: SaidAchmiz 13 October 2014 04:25:15PM 0 points [-]

apply the sympathy-without-personal-distress trick to yourself

If you do this, would not the result be that you do not feel distress from your own misfortunes? And if you don't feel distress, what, exactly, is there to sympathize with?

Wouldn't you just shrug and dismiss the misfortune as irrelevant?

Comment author: hyporational 13 October 2014 06:40:57PM 3 points [-]

If you could switch off pain at will would you consider the tissue damage caused by burning yourself irrelevant?

Comment author: SaidAchmiz 13 October 2014 10:25:54PM 2 points [-]

I would not. This is a fair point.

Follow-up question: are all things that we consider misfortunes similar to the "burn yourself" situation, in that there is some sort of "damage" that is part of what makes the misfortune bad, separately from and additionally to the distress/discomfort/pain involved?

Comment author: CCC 14 October 2014 07:32:55AM 2 points [-]

Consider a possible invention called a neuronic whip (taken from Asimov's Foundation series). The neuronic whip, when fired at someone, does no direct damage but triggers all of the "pain" nerves at a given intensity.

Assume that Jim is hit by a neuronic whip, briefly and at low intensity. There is no damage, but there is pain. Because there is pain, Jim would almost certainly consider this a misfortune, and would prefer that it had not happened; yet there is no damage.

So, considering this counterexample, I'd say that no, not every possible misfortune includes damage. Though I imagine that most do.

Comment author: Lumifer 14 October 2014 06:00:21PM 2 points [-]

Consider a possible invention called a neuronic whip (taken from Asimov's Foundation series).

No need for sci-fi.

Comment author: hyporational 14 October 2014 09:53:01AM *  0 points [-]

Much of what could be called damage in this context wouldn't necessarily happen within your body, you can take damage to your reputation for example.

You can certainly be deluded about receiving damage especially in the social game.

Comment author: CCC 14 October 2014 02:29:33PM 0 points [-]

That is true; but it's enough to create a single counterexample, so I can simply specify the neuronic whip being used under circumstances where there is no social damage (e.g. the neuronic whip was discharged accidentally, and no-one knew Jim was there to be hit by it).

Comment author: hyporational 14 October 2014 02:58:57PM 0 points [-]

Yes. I didn't mean to refute your idea in any way and quite liked it. Forgot to upvote it though. I merely wanted to add a real world example.

Comment author: torekp 13 October 2014 09:01:38PM 0 points [-]

Let's say you cut your finger while chopping vegetables. If you don't feel distress, you still feel the pain. But probably less pain: the CNS contains a lot of feedback loops affecting how pain is felt. For example, see this story from Scientific American. So sympathize with whatever relatively-attitude-independent problem remains, and act upon that. Even if there would be no pain and just tissue damage, as hyporational suggests, that could be sufficient for action.

Comment author: John_Maxwell_IV 09 October 2014 11:51:18PM *  1 point [-]

Here's a weird reframing. Think of it like playing a game like Tetris or Centipede. Yep, you are going to lose in the end, but that's not an issue. The idea is to score as many points as possible before that happens.

If you save someone's life on expectation, you save someone's life on expectation. This is valuable even if there are lots more people whose lives you could hypothetically save.

Comment author: Gunnar_Zarncke 08 October 2014 09:00:40PM 1 point [-]

and find it literally unbearable.

But you don't have to bear it alone. It's not as if one person has to care about everything (nor does each single person have to care for all).

Maybe the multiplication (in the example the care for a single bird multiplied by the number of birds) should be followed by a division by the number of persons available to do the caring (possibly adjusted by the expected amount of individual caring).

Comment author: Lumifer 09 October 2014 12:32:23AM -2 points [-]

But you don't have to bear it alone.

That's one way for people to become religious.

Comment author: Weedlayer 09 October 2014 08:14:55AM 0 points [-]

I'm not sure what point is being made here. Distributing burdens is a part of any group, why is religion exceptional here?

Comment author: Lumifer 09 October 2014 02:36:41PM 2 points [-]

Theory of mind, heh... :-)

The point is that if you actually believe in, say, Christianity (that is, you truly internally believe and not just go to church on Sundays so that neighbors don't look at you strangely), it's not your church community which shares your burden. It's Jesus who lifts this burden off your shoulders.

Comment author: Weedlayer 09 October 2014 03:49:49PM 1 point [-]

Ah, that's probably not what the parent meant then. What he was referring to was analogous to sharing your burden with the church community (or, in context, the effective altruism community).

Comment author: Lumifer 09 October 2014 03:51:55PM 1 point [-]

that's probably not what the parent meant then

Yes, of course. I pointed out another way through which you don't have to bear it alone.

Comment author: Weedlayer 09 October 2014 04:48:04PM 0 points [-]

Ah, I understand. Thanks for clearing up my confusion.

Comment author: AnthonyC 08 October 2014 04:18:53PM *  1 point [-]

I accept all the argument for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I'm in Daniel's position up through chunk 4.

Ditto, though I diverged differently. I said, "Ok, so the problems are greater than available resources, and in particular greater than resources I am ever likely to be able to access. So how can I leverage resources beyond my own?"

I ended up getting an engineering degree and working for a consulting firm advising big companies on what emerging technologies to use/develop/invest in. Ideal? Not even close. But it helps direct resources in the direction of efficiency and prosperity, in some small way. I have to shut down the part of my brain that tries to take on the weight of the world, or my broken internal care-o-meter gets stuck at "zero, despair, crying at every news story." But I also know that little by little, one by one, painfully slowly, the problems will get solved as long as we move in the right direction, and we can then direct the caring that we do have in a bit more concentrated way afterwards. And as much as it scares me to write this, in the far future, when there may be quadrillions of people? A few more years of suffering by a few billion people here and now won't add or subtract much from the total utility of human civilization.

Comment author: [deleted] 05 May 2015 08:24:34PM 0 points [-]

I concluded that I am not a good person and won't be for the foreseeable future

Super relevant slatestarcodex post: Nobody Is Perfect, Everything is Commensurable.

Comment author: VAuroch 10 May 2015 05:48:30AM 0 points [-]

Read that at the time and again now. Doesn't help. Setting threshold less than perfect still not possible; perfection would itself be insufficient. I recognize that this is a problem but it is an intractable one and looks to remain so for the foreseeable future.

Comment author: [deleted] 11 May 2015 04:20:04AM *  0 points [-]

But what about the quantitative way? :(

Edit: Forget that... I finally get it. Like, really get it. You said:

and find it literally unbearable. All of a sudden, it's clear that to be a good person is to accept the weight of the world on your shoulders

Oh, my gosh... I think that's why I gave up Christianity. I wish I could say I gave it up because I wanted to believe what's true, but that's probably not true. Honestly, I probably gave it up because having the power to impact someone else's eternity through outreach or prayer, and sometimes not using that power, was literally unbearable for me. I considered it selfish to do anything that promoted mere earthly happiness when the Bible implied that outreach and prayer might impact someone's eternal soul.

And now I think that, personally, being raised Christian might have been an incredible blessing. Otherwise, I might have shared your outlook. But after 22 years of believing in eternal souls, actions with finite effects don't seem nearly as important as they probably would had I not come from the perspective that people's lives on earth are just specks, just one-infinitieth of total existence.

Comment author: hyporational 16 October 2014 01:39:06AM *  6 points [-]

I see suffering the whole day in healthcare but I'm actually pretty much numbed to it. Nothing really gets to me, and if it did it could be quite crippling. Sometimes I watch sad videos or read dramatizations of real events to force myself to care for a while, to keep me from forgetting why I show up at work. Reading certain types of writings by rationalists helps too.

You shouldn't get more than glimpses of the weight of the world, or rather you shouldn't let them through the defences, to be able to function.

"Will the procedure hurt?" asked the patient. "Not if you don't sting yourself by accident!" answered the doctor with the needle.

Comment author: diegocaleiro 10 October 2014 06:13:37PM 5 points [-]

Cross commented from the EA forum

First of all. Thanks Nate. An engaging outlook on overcoming point and shoot morality.

You can stop trusting the internal feelings to guide your actions and switch over to manual control.

Moral Tribes, Joshua Greene's book, addresses the question of when to do this manual switch. Interested readers may want to check it out.

Some of us - where "us" here means people who are really trying - take your approach. They visualize the sinking ship, the hanging souls silently glaring at them in desperation, they shut up and multiply, and to the extent possible, they let go of the anchoring emotions that are sinking the ship.

They act.

This approach is invaluable, and I see it working for some of the heroes of our age (you, Geoff Anders, Bastien Stern, Brian Tomasik, Julian Savulescu), yet I don't think it's the only way to help a lot - and we need all the approaches we can get - so I'll describe the other one, currently a minority approach, best illustrated by Anders Sandberg.

Like those you address, some people really want to care, however, the emotional bias that is stopping them from doing so is not primarily scope insensitivity, but something akin to loss aversion, except it manifests as a distaste for negative motivation and an overwhelming drive for positive motivation. When facing a choice between

  • Join our team of Transhumanists who will improve the human condition
  • Help us transform the world into a place as happy as possible
  • Help us prevent catastrophe, hurry up, people are suffering
  • Join our cause, we will decrease risks that humanity will be extinct

they will always pick one of the top two, because they are framed positively. The bottom two may sound more pressing, but they mention negative, undesirable, uncomfortable forces. They are staged in a frame where we feel overpowered by nature. Nature is a force trying to change our state into a worse state, and you are asked to join the gatekeepers who will contain the destructive invasion that is to come.

The top two however, are not only more cheerful, they are set in a completely different frame: you are given a grandiose vision of a possible future, and told you can be part of the force that will sculpt it. What they tell you is we have the tools for you, join us, and with our amazing equipment, we will reshape the earth.

I am one of these people; Stephen Frey, João Fabiano, and Anders Sandberg are some other examples. David Pearce once attentively noticed this underlying characteristic, and jokingly gave this category the welcoming name of "Positive Utilitarian".

Some of us, who are driven by this cheerful positive idea, have found a way to continue our efforts on the right lane despite that strong inclination to go towards the riches instead of away from darkness.

We are driven by the awesomeness of it all.

Pretend for an instant that the problems of the world are shades, pitch-black shades. They are spread around everywhere. The world is mostly dark. You now find yourself in a world illuminated in exact proportion to the good things it has: all you see around you are faint glimpses of beauty and awesomeness here and there, candles of good intention, and the occasional lamps of concerted effort. What moves you is an exploratory urge. You want to see more, to feel more. Those dark areas are not helping you with that. Since they are problems, your job is to be inventive, to find solutions. You are told that once upon a time it was all dark, until your ancestors were able to ignite the first twigs into a bonfire. Sitting by the fire, you hear wise sages' stories of the dark age that lies behind us; Hans Rosling, Robert Wright, Jared Diamond and Steve Pinker show how all the gadgets, symbols and technologies we created gave light to all we see now. By now we have lamps of many kinds and shapes, but you know more can be found. With diligence, smarts and help, you know we can beam lasers and create floodlights, we can solve things at scale, we can cause the earth to shine. But you are not stopping there; you are ambitious. You want to harness the sun.

It so happens that there are a million billion billion suns out there, so we, too, shut up and multiply.

Why do we look at the world this way? Why do we feel energized by this metaphor but not by the prevention one? I don't know. As long as both teams continue in this lifelong quest together, and as long as both shut up and multiply, it doesn't matter. At the end of the day, we act alike. I just want to make sure that we get as many as possible, as strong as possible, and set the controls for the heart of the sun.

Comment author: tjohnson314 10 October 2014 11:12:30AM 5 points [-]

I'm sympathetic to the effective altruist movement, and when I do periodically donate, I try to do so as efficiently as possible. But I don't focus much effort on it. I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.

For example:

  • The best way for me to improve math and science education is to work on my own teaching ability.
  • The best way for me to improve the mental health of college students is to make time to support friends that struggle with depression and suicidal thoughts.
  • The best way for me to stop racism or sexism is to first learn to recognize and quash it in myself, and then to expose it when I encounter it around me.

Changing my own actions and attitudes is hard, but it's also the one area where I have the most control. And as I've worked on this for the past few years, I've managed to create a positive feedback loop by slowly increasing the size of my care-o-meter. Empathy is a useful habit that can be trained, just as much as rationality can be.

I realize that it's hard to get an accurate sense of the impact a donation can have for someone on the other side of the world. It's possible that I'm being led astray by my care-o-meter to focus on people near at hand. I do in principle care equally about people in other parts of the world, even if my care-o-meter hasn't figured that out yet. So if you'd like to prove to me that I can be more effective by focusing my efforts elsewhere, I'd be happy to listen. (I am a poor grad student, so donating large amounts of money isn't really feasible for me yet, although I do realize I still make far more than the world average.) For now, I'm doing the best that I can in the way that I know how.

To conclude, I wouldn't call myself an effective altruist, but I do count them as allies. And I wouldn't want to convert everyone to my perspective; as others have mentioned already, it's good to have a wide range of different approaches.

Comment author: Ixiel 10 October 2014 09:44:27PM 5 points [-]

I'm sympathetic to the effective altruist movement, and when I do periodically donate, I try to do so as efficiently as possible.

I would love to see a splinter group, Efficient Altruism. I have no desire to give as much as I can afford, but I feel VERY strongly about giving as efficiently as I can to the causes I support. When I read, I think from EA themselves, about the estimated differences in efficiency among African aid organizations, it changed my whole perspective on charity.

Comment author: Philip_W 09 December 2014 01:51:30PM 1 point [-]

(separated from the other comment, because they're basically independent threads).

I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.

This sounds unlikely. You say you're improving the education and mental health of on-the-order-of 100 students. Deworm the World and SCI improve school attendance by 25%, meaning you would have the same effect, as a first guess and to first order at least, by donating on-the-order-of $500/yr. And that's just one of the side-effects of ~600 people not feeling ill all the time. So if you primarily care about helping people live better lives, $50/yr to SCI ought to equal your stated current efforts.

However, that doesn't count flow-through effects. EA is rare enough that you might actually get a large portion of the credit for convincing someone to donate to a more effective charity, or even become an effective altruist: expected marginal utility isn't conserved across multiple agents (if you have five agents who can press a button, and all have to press their buttons to save one person's life, then each of them has the full choice of saving or failing to save someone, assuming they expect the others to press the button too, so each of them has the expected marginal utility of saving a life). Since it's probably more likely that you convince someone else to donate more effectively than that one of the dewormed people will be able to have a major impact because of their deworming, flow-through effects should be very strong for advocacy relative to direct donation.

To quantify: Americans give 1% of their incomes to poverty charities, so let's make that $0.5k/yr/student. Let's say that convincing one student to donate to SCI would get them to donate that much more effectively about 5 years sooner than otherwise (those willing would hopefully be roped in eventually regardless). Let's also say SCI is five times more effective than their current charities. That means you win $2k to SCI for every student you convince to alter their donation patterns.

You probably enjoy helping people directly (making you happy, which increases your productivity and credibility, and is also just nice), and helping them will earn you social credit which makes it more likely that you'll convince them, so you could mostly keep doing what you're doing, just adding the advocacy bit in the best way you see fit. Suppose you manage to convince 2.5% of each class; that means you get around $5k/year to SCI, or about 100 times more impact than what you're doing now, just by doing the same AND encouraging people to donate more effectively. That's an extra six thousand sick people, more than a third of them children and teens, that you would be curing every year.
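(For concreteness, here is a minimal sketch of that back-of-envelope arithmetic in Python. Every number in it is one of the rough guesses from this comment rather than measured data, and the variable names are just labels I've chosen, so treat the output as the same order-of-magnitude guess.)

    # Rough sketch of the advocacy estimate above; all inputs are guesses
    # from this comment thread, not measured data.
    donation_per_student_per_year = 500   # ~1% of income, in $/yr
    years_accelerated = 5                 # starts donating effectively 5 years sooner
    effectiveness_multiplier = 5          # SCI assumed 5x their current charities

    # Counterfactual gain per convinced student, expressed as dollars to SCI:
    # $500/yr redirected for 5 years, of which 4/5 of the value is newly effective.
    gain_per_convert = (donation_per_student_per_year * years_accelerated
                        * (effectiveness_multiplier - 1) / effectiveness_multiplier)

    students_per_year = 100               # order-of-magnitude class size
    conversion_rate = 0.025               # 2.5% of each class convinced

    gain_per_year = students_per_year * conversion_rate * gain_per_convert
    print(f"~${gain_per_convert:.0f} to SCI per convinced student")   # ~$2000
    print(f"~${gain_per_year:.0f}/year to SCI from advocacy")         # ~$5000

    # Compare with the ~$50/yr of direct donations estimated earlier in this
    # comment to match the current in-person efforts.
    print(f"~{gain_per_year / 50:.0f}x the estimated current impact")  # ~100x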

Note: this is a rough first guess. Better numbers and the addition of ignored or forgotten factors may influence the results by more than one order of magnitude. If you decide to consider this advice, check the results thoroughly and look for things I missed. 80000hours has a few pages on advocacy, if you're interested.

Comment author: tjohnson314 26 December 2014 06:27:28PM 0 points [-]

(Sorry, I didn't see this until now.)

I'll admit I don't really have data for this. But my intuitive guess is that students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them. At least for me, that's a large part of why I'm in the field that I chose.

It's possible that I'm being misled by the warm fuzzy feelings I get from helping someone face-to-face, which I don't get from sending money halfway across the world. But it seems like there's many things that matter in life that don't have a price tag.

Comment author: Philip_W 02 January 2015 04:19:50PM 3 points [-]

I'll admit I don't really have data for this. But my intuitive guess is that ...

Have you made efforts to research it? Either by trawling papers or by doing experiments yourself?

students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them.

Your objection had already been accounted for: $500 to SCI = around 150 extra people attending school for a year. I estimated the number of students who will have a relationship with their teacher as good as the average you provide at around 1:150.

But it seems like there's many things that matter in life that don't have a price tag.

That sounds deep, but is obviously false: would you condemn yourself to a year of torture to get one unit of the thing that allegedly doesn't have a price tag (for example, a single minute of conversation with a student where you feel a real connection)? Would you risk a one-in-a-million chance of getting punched on the arm in order to get the same unit? If the answers to these questions are [no] and [yes] respectively, as I would expect them to be, those are outer limits on the price range. Getting to the true value is just a matter of convergence.
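(As an illustration of what that "convergence" could look like, here is a minimal sketch in Python. It assumes, for simplicity, that the person can answer honestly whether they would make a given trade at a given price; the function names and the example figure are hypothetical, not anything from this thread.)

    # Illustrative only: narrowing in on an implicit "price tag" by bisection,
    # starting from outer bounds like the torture/punch answers above.
    def converge_on_price(lower, upper, would_trade_at, iterations=20):
        """Bisect between a price clearly too low and one clearly too high.

        would_trade_at(price) stands in for the person's honest answer to
        "would you give up `price` to get one unit of the thing?"
        """
        for _ in range(iterations):
            mid = (lower + upper) / 2
            if would_trade_at(mid):
                lower = mid   # still willing at this price: the value is at least mid
            else:
                upper = mid   # no longer willing: the value is below mid
        return lower, upper

    # Hypothetical example: suppose the person's implicit valuation is $120.
    hidden_valuation = 120.0
    print(converge_on_price(0.0, 1_000_000.0, lambda p: p <= hidden_valuation))
    # -> an interval narrowing in on ~120 after 20 halvings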

Perhaps more to the point, though, those people you would help halfway across the world are just as real, and their lives just as filled with "things that don't have a price tag", as people in your environment. For $3000, one family is not torn apart by a death from malaria. For $3, one more child attends grade school regularly for a year because they are no longer ill from parasitic stomach infections. These are not price tags; these are trades you can actually make. Make the trades, and you set a lower limit. Refuse them, and the maximum price tag you put on a child's relationship with their teacher is set, period.

It does seem very much like you're guided by your warm fuzzies.

Comment author: tjohnson314 05 January 2015 09:52:59PM 0 points [-]

Have you made efforts to research it?

This is based on my own experience, and on watching my friends progress through school. I believe that the majority of successful people find their life path because someone inspired them. I don't know where I could even look to find hard numbers on whether that's true or not, but I'd like to be that person for as many people as I can.

That sounds deep, but is obviously false... It does seem very much like you're guided by your warm fuzzies.

My emotional brain is still struggling to accept that, and I don't know why. I'll see if I can coax a coherent reason from it later. But my rational brain says that you're right and I was wrong. Thanks.

Comment author: Philip_W 09 December 2014 01:49:28PM 0 points [-]

Empathy is a useful habit that can be trained, just as much as rationality can be.

Could you explain how? My empathy is pretty weak and could use some boosting.

Comment author: tjohnson314 26 December 2014 06:46:56PM 0 points [-]

For me it works in two steps: 1) Notice something that someone would appreciate. 2) Do it for them.

As seems to often be the case with rationality techniques, the hard part is noticing. I'm a Christian, so I try to spend a few minutes praying for my friends each day. Besides the religious reasons, which may or may not matter to you, I believe it puts me in the right frame of mind to want to help others. A non-religious time of focused meditation might serve a similar purpose.

I've also worked on developing my listening skills. Friends frequently mention things that they like or dislike, and I make a special effort to remember them. I also occasionally write them down, although I try not to mention that too often. For most people, there's a stronger signaling effect if they think you just happened to remember what they liked.

Comment author: Philip_W 02 January 2015 04:35:27PM 0 points [-]

You seem to be talking about what I would call sympathy, rather than empathy. As I would use it, sympathy is caring about how others feel, and empathy is the ability to (emotionally) sense how others feel. The former is in fine enough state - I am an EA, after all - it's the latter that needs work. Your step (1) could be done via empathy or pattern recognition or plain listening and remembering as you say. So I'm sorry, but this doesn't really help.

Comment author: Capla 21 October 2014 01:35:36AM 0 points [-]

Empathy is a useful habit that can be trained, just as much as rationality can be.

This is key.

Comment author: kilobug 09 October 2014 12:27:04PM 12 points [-]

Interesting article; it sounds like a very good introduction to scope insensitivity.

Two points where I disagree:

  1. I don't think birds are a good example of it, at least not for me. I don't care much for individual birds. I definitely wouldn't spend $3, or any significant time, to save a single bird. I'm not a vegetarian; it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner. On the other hand, I do care about ecological disasters, massive bird deaths, damage to natural reserves, threats to a whole species... So a massive death of birds is something I'm ready to invest resources to prevent, but not the death of a single bird.

  2. I know it's quite taboo here, and most will disagree with me, but to me the answer to problems this big is not charity, even "efficient" charity (which seems a very good idea on paper, but I'm quite skeptical about its reliability), but structural change - politics. I can't fail to notice that two of the "especially virtuous people" you named, Gandhi and Mandela, were both active mostly in politics, not in charity. To quote another person often labeled an "especially virtuous person", Martin Luther King: "True compassion is more than flinging a coin to a beggar. It comes to see that an edifice which produces beggars needs restructuring."

Comment author: MugaSofer 10 October 2014 01:47:24PM 6 points [-]

I'm not a vegetarian, it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then going to eat a chicken at dinner.

This strikes me as backward reasoning - if your moral intuitions about large numbers of animals dying are broken, isn't it much more likely that you made a mistake about vegetarianism?

(Also, three dollars isn't that high a value to place on something. I can definitely believe you get more than $3 worth of utility from eating a chicken. Heck, the chicken probably cost a good bit more than $3.)

Comment author: dthunt 17 October 2014 06:08:22PM 2 points [-]

Hey, I just wanted to chime in here. I found the moral argument against eating animals compelling for years but lived fairly happily in conflict with my intuitions there. I was literally saying, "I find the moral argument for vegetarianism compelling" while eating a burger, and feeling only slightly awkward doing so.

It is in fact possible (possibly common) for people to 'reason backward' from behavior (eat meat) to values ("I don't mind large groups of animals dying"). I think that particular example CAN be consistent with your moral function (if you really don't care about non-human animals very much at all) - but by no means is that guaranteed.

Comment author: MugaSofer 18 October 2014 05:32:29PM 4 points [-]

That's a good point. Humans are disturbingly good at motivated reasoning and compartmentalization on occasion.

Comment author: AmagicalFishy 23 November 2014 08:12:14PM *  1 point [-]

It may be more accurate to say something along the lines of "I mind large numbers of animals dying for no good reason. Food is a good reason, and thus do not mind eating chicken. An oil spill is not a good reason."

Comment author: Vaniver 09 October 2014 03:22:25PM 4 points [-]

I don't think birds are a good example of it, at least not for me.

Birds are the classic example, both in the literature and (through the literature) here.

Comment author: CCC 09 October 2014 01:54:11PM 3 points [-]

I know it's quite taboo here, and most will disagree with me, but to me, the answer to how big the problems are is not charity, even "efficient" charity (which seems a very good idea on paper but I'm quite skeptical about the reliability of it), but more into structural changes - politics.

I very strongly agree with your point here, but would like to add that the problem of finding a political structure which properly maximises the happiness of the people living under it is a very difficult one, and missteps are easy.

Comment author: [deleted] 09 October 2014 08:52:12PM 4 points [-]

After shutting up and multiplying, Daniel realizes (with growing horror) that the amount he actually cares about oiled birds is lower bounded by two months of hard work and/or fifty thousand dollars.

Fifty thousand times the marginal utility of a dollar, which is probably much less than the utility difference between the status quo and having fifty thousand dollars less, unless Daniel is filthy rich.

Comment author: So8res 10 October 2014 06:59:17AM *  4 points [-]

Yeah it's actually a huge pain in the ass to try to value things given that people tend to be short on both time and money. (For example, an EA probably rates a dollar going towards de-oiling a bird as negative value due to the opportunity cost, even if they feel that de-oiling a bird has positive value in some "intrinsic" sense.)

I didn't really want to go into my thoughts on how you should try to evaluate "intrinsic" worth (or what that even means) in this post, both for reasons of time and complexity, but if you're looking for an easier way to do the evaluation yourself, consider queries such as "would I prefer that my society produce, on the margin, another bic lighter or another bird deoiling?". This analysis is biased in the opposite direction from "how much of my own money would I like to pay", and is definitely not a good metric alone, but it might point you in the right direction when it comes to finding various metrics and comparisons by which to probe your intrinsic sense of bird-worth.

Comment author: Gunnar_Zarncke 07 October 2014 09:03:24PM *  7 points [-]

I'm not sure what to make of it, but one could run the motivating example backwards:

this time Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.

He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can.

"He pictures himself helping the people and wading deep in all that sticky oil and imagines how long he'd endure that and quickly arrives at the conclusion that he doesn't care that much for the birds really. And would rather prefer to get away from that mess. His estimate how much it is worth for him to rescue 1000 birds is quite low."

What can we derive from this if we shut-up-and-calculate? If his value for rescuing 1000 birds is $10, then 1 million birds still come out at $10K. But it could be zero now, if not negative (he'd feel he should get money for saving the birds). Does that mean, if we extrapolate, that he should strive to eradicate all birds? Surely not.

It appears to mean that our care-o-meter plus system-2 multiplication gives meaningless answers.

Our empathy towards beings is to a large part dependent on socialization and context. Taking it out of its ancestral environment is bound to cause problems I fear individuals can't solve. But maybe societies can.

Comment author: So8res 10 October 2014 06:27:04AM *  2 points [-]

That sounds like a failure of the thought experiment to me. When I run the bird thought experiment, it's implicitly assumed that there is no transportation cost in/out of the thought experiment, and the negative aesthetic cost from imagining myself in the mess is filtered out. The goal is to generate a thought experiment that helps you identify the "intrinsic" value of something small (not really what I mean, but I'm short on time right now; I hope you can see what I'm pointing at), and obviously mine aren't going to work for everyone.

(As a matter of fact, my actual "bird death" thought experiment is different than the one described above, and my actual value is not $3, and my actual cost per minute is nowhere near $1, but I digress.)

If this particular thought experiment grates for you, you may consider other thought experiments, like considering whether you would prefer your society to produce an extra bic lighter or an extra bird-cleaning on the margin, and so on.

Comment author: Gunnar_Zarncke 10 October 2014 06:54:32AM 1 point [-]

That sounds like a failure of the thought experiment to me.

You didn't give details on how or how not to set up the thought experiment. I took it to mean 'your spontaneous valuation when imagining the situation' followed by an objective 'multiplication'. Now my reaction wasn't that of aversion, but I tried to think of possible reactions and what would follow from them.

The goal is to generate a thought experiment that helps you identify the "intrinsic" value of something small.

But the 'intrinsic' value appears to heavily depend on the setup of the thought experiment. And if humans value small things nonlinearly more than large/many things, one can hack the valuation by constraining the thought experiment to only small things.

Nothing wrong with mind hacks per se. I have read your productivity post. But I don't think they help in establishing 'intrinsic' value. For personal self-modification (motivation) they seem to work nicely.

Comment author: Weedlayer 09 October 2014 12:40:33PM *  3 points [-]

It's also worth mentioning that cleaning birds after an oil spill isn't always even helpful. Some birds, like gulls and penguins, do pretty well. Others, like loons, tend to do poorly. Here are some articles concerning cleaning oiled birds.

http://www.npr.org/templates/story/story.php?storyId=127749940

http://news.discovery.com/animals/experts-kill-dont-clean-oiled-birds.htm

And I know that the oiled birds issue was only an example, but I just wanted to point out that this issue, much like the "Food and clothing aid to Africa" examples you often see, isn't necessarily a good idea even ignoring opportunity cost.

Comment author: [deleted] 09 October 2014 06:20:47AM 3 points [-]

Many of us go through life understanding that we should care about people suffering far away from us, but failing to.

That is the thing that I never got. If I tell my brain to model a mind that cares, it comes up empty. I seem to literally be incapable of even imagining the thought process that would lead me to care for people I don't know.

If anybody knows how to fix that, please tell me.

Comment author: Lumifer 09 October 2014 02:52:44PM 3 points [-]

Why do you think it needs fixing?

Comment author: [deleted] 09 October 2014 04:18:57PM 2 points [-]

I think this might be holding me back. People talk about "support" from friends and family which I don't seem to have, most likely because I don't return that sentiment.

Comment author: Lumifer 09 October 2014 04:24:03PM *  3 points [-]

Holding you back from what?

Also, you said (emphasis mine) "incapable of even imagining the thought process that would lead me to care for people I don't know" -- you do know your friends and family, right?

Comment author: [deleted] 11 October 2014 08:39:29PM 1 point [-]

Excellent question. I think I'm on the wrong track and something else entirely might be going on in my brain. Thank you.

Comment author: Weedlayer 09 October 2014 08:49:05AM 3 points [-]

Obviously your mileage may vary, but I find it helps to imagine a stranger as someone else's family member or friend. If I think of how much I care about people close to me, and imagine that that stranger has people who care about them as much as I care about my brother, then I find it easier to do things to help that person.

I guess you could say I don't really care about them, but care about the feelings of caring other people have towards them.

If that doesn't work, this is how I originally thought of it. If a stranger passed by me on the street and collapsed, I would care about their well-being (I know this empirically). I know nothing about them; I only care about them due to proximity. It offends me rationally that my sense of caring is utterly dependent on something as stupid as proximity, so I simply create a rule that says "If I would care about this person if they were here, I have to act like I care if they are somewhere else". Thus, utilitarianism (or something like it).

It's worth noting that another, equally valid rule would be "If I wouldn't care about someone if they were far away, there's no reason to care about them when they happen to be right here". I don't like that rule as much, but it does resolve what I see as an inconsistency.

Comment author: [deleted] 09 October 2014 04:13:23PM 3 points [-]

Thank you. That seems like a good way of putting it. I seem to have problems thinking of all 7 billion people as individuals. I will try to think about people I see outside as having a life of their own even if I don't know about it. Maybe that helps.

Comment author: MugaSofer 10 October 2014 01:57:39PM 0 points [-]

I think this is the OP's point - there is no (human) mind capable of caring that much, because human brains aren't capable of modelling numbers that large properly. If you can't contain such a mind, you can't use your usual "imaginary person" modules to shift your brain into that "gear".

So - until you find a better way! - you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

Comment author: SaidAchmiz 13 October 2014 04:21:48PM 1 point [-]

you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

Why should I act this way?

Comment author: MugaSofer 18 October 2014 04:02:07PM *  0 points [-]

To better approximate a perfectly-rational Bayesian reasoner (with your values.)

Which, presumably, would be able to model the universe correctly complete with large numbers.

That's the theory, anyway. Y'know, the same way you'd switch in a Monty Hall problem even if you don't understand it intuitively.

Comment author: hyporational 09 October 2014 05:51:04PM 1 point [-]

What makes you care about caring?

Comment author: shminux 07 October 2014 06:50:49PM *  23 points [-]

I agree with others that the post is very nice and clear, as most of your posts are. Upvoted for that. I just want to provide a perspective not often voiced here. My mind does not work the way yours does and I do not think I am a worse person than you because of that. I am not sure how common my thought process is on this forum.

Going section by section:

  1. I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know. I cannot bring myself to care (and I don't really want to) about a random person half-way around the world, except in the non-scalable general sense that "it is sad that bad stuff happens, be it to 1 person or to 1 billion people". I care about the humanity surviving and thriving, in the abstract, but I do not feel the connection between the current suffering and future thriving. (Actually, it's worse than that. I am not sure whether humanity existing, in Yvain's words, in a 10m x 10m x 10m box of computronium with billions of sims is much different from actually colonizing the observable universe (or the multiverse, as the case might be). But that's a different story, unrelated to the main point.)

  2. No disagreement there: the stakes are high, though I would not say that a thriving community of 1000 is necessarily worse than a thriving community of 1 googolplex, as long as their probability of long-term survival and thriving is the same.

  3. I occasionally donate modest amounts to this cause or that, if I feel like it. I don't think I do what Alice, Bob or Christine did, and donate out of pressure or guilt.

  4. I spend (or used to spend) a lot of time helping out strangers online with their math and physics questions. I find it more satisfying than caring for oiled birds or stray dogs. Like Daniel, I see the mountain ridges of bad education all around, of which the students asking for help on IRC are just tiny pebbles. Unlike Daniel, I do not feel that I "can't possibly do enough". I help people when I feel like it and I don't pretend that I am a better person because of it, even if they thank me profusely after finally understanding how a free-body diagram works. I do wish someone more capable worked on improving the education system to work better than at 1% efficiency, and I have seen isolated cases of it, but I do not feel that it is my problem to deal with. Wrong skillset.

  5. I have read a fair amount of EA propaganda, and I still do not feel that I "should care about people suffering far away", sorry. (Not really sorry, no.) It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless. I am happy that other people care, in case I am in the situation where I need their help. I am also happy that some people give money to those who care, for the same reason. I might even chip in, if it hits close to home.

  6. I do not feel that I would be a better person if I donated more money or dedicated my life to solving one of the "biggest problems", as opposed to doing what I am good at, though I am happy that some people feel that way; humanity's strength is in its diversity.

  7. Again, one of the main strengths of humankind is its diversity, and the Bell-curve outliers like "Gandhi, Mother Theresa, Nelson Mandela" tend to have more effect than those of us within 1 standard deviation. Some people address "global poverty", others write poems, prove theorems, shoot the targets they are told to, or convince other people to do what they feel is right. No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.

  8. I don't feel the weight of the world. Because it does not weigh on me.

Note: having reread what I wrote, I suspect that some people might find it kind of Objectivist. I actually tried reading Atlas Shrugged and quit after 100 pages or so, getting extremely annoyed by the author belaboring an obvious and trivial point over and over. So I only have a vague idea what the movement is all about. And I have no interest in finding out more, given that people who find this kind of writing insightful are not ones I want to associate with.

Comment author: So8res 10 October 2014 06:51:20AM *  17 points [-]

I don't disagree, and I don't think you're a bad person, and my intent is not to guilt or pressure you. My intent is more to show some people that certain things that may feel impossible are not impossible. :-)

A few things, though:

No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.

This seems like a cop out to me. Given a bunch of people trying to help the world, it would be best for all of them to do the thing that they think most helps the world. Often, this will lead to diversity (not just because people have different ideas about what is good, but also because of diminishing marginal returns and saturation). Sometimes, it won't (e.g. after a syn bio proof of concept that kills 1/4 of the race I would hope that diversity in problem-selection would decrease). "It is best to diversify and hope" seems like a platitude that dodges the fun parts.

I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know.

I also have this feeling, in a sense. I interpret it very differently, and I am aware of the typical mind fallacy, but I also caution against the "you must be Fundamentally Different" fallacy. Part of the theme behind this post is "you can interpret the internal caring feelings differently if you want", and while I interpret my care-senses differently, I do empathize with this sentiment.

That's not to say that you should come around to my viewpoint, by any means. But if you (or others) would like to try, for one reason or another, consider the following points:

  1. Do you care only about the people who are currently close friends, or also the people who could be close friends? Is the value a property of the person, or a property of the fact that that person has been brought to your awareness?
  2. Would you care more about humans in a context where humanity is treated as the 'in-group'? For example, consider a situation where an alien race is at war with humans, and a roving band of alien brutes have captured a human family and are torturing them for fun. Does this boil your blood? Or do you not really care?
  3. I assume that you wouldn't push a friend in front of the trolley to save ten strangers. However, if you and a friend were in a room with ten strangers behind a veil of uncertainty, and were informed that the twelve of you were about to play in a trolley game, would you sign a contract which stated that (assuming unanimous agreement) the pusher agrees to push the pushee?

In my case, much of my decision to care about the rest of the world is due to an adjustment upwards of the importance of other people (after noticing that I tend to care significantly about people after I have gotten to know them very well, and deciding that people don't matter less just because I'm not yet close to them). There's also a significant portion of my caring that comes from caring about others because I would want others to care about me if the positions were reversed, and this seeming like the right action in a timeless sense.

Finally, much of my caring comes from treating all of humanity as my in-group (everyone is a close friend, I just don't know most of them yet; see also the expanding circle).

I mess with my brother sometimes, but anyone else who tries to mess with my brother has to go through me first. Similarly there is some sense in which I don't "care" about most of the nameless masses who are out of my sight (in that I don't have feelings for them), but there's a fashion in which I do care about them, in that anyone who fucks with humans fucks with me.

Disease, war, and death are all messing with my people, and while I may not be strong enough to do anything about it today, there will come a time.

Comment author: Jiro 16 October 2014 07:06:24PM 1 point [-]

Do you care only about the people who are currently close friends, or also the people who could be close friends?

There may be a group of people, such that it is possible for any one individual of the group to become my close friend, but where it is not possible for all the individuals to become my close friends simultaneously.

In that case, saying "any individual could become a close friend, so I should multiply 'caring for one friend' by the number of individuals in the group" is wrong. Instead, I should multiply 'caring for one friend' by the number of individuals in the group who can become my friends simultaneously, and not take into account the individuals in excess of that. In fact, even that may be too strong. It may be possible for one individual in the group to become my close friend only at the cost of reducing the closeness to my existing friends, in which case I should conclude that the total amount I care shouldn't increase at all.

Comment author: lackofcheese 17 October 2014 03:01:36PM *  0 points [-]

The point is that the fact that someone happens to be your close friend seems like the wrong reason to care about them.

Let's say, for example, that:
1. If X was my close friend, I would care about X
2. If Y was my close friend, I would care about Y
3. X and Y could not both be close friends of mine simultaneously.

Why should whether I care for X or care for Y depend on which one I happen to end up being close friends with? Rather, why shouldn't I just care about both X and Y regardless of whether they are my close friends or not?

Comment author: Lumifer 17 October 2014 03:24:33PM 1 point [-]

the fact that someone happens to be your close friend seems like the wrong reason to care about them

Why do you think so? It seems to me the fact that someone is my close friend is an excellent reason to care about her.

Comment author: Kaj_Sotala 08 October 2014 07:28:32AM 6 points [-]

I feel like I'm somewhere halfway between you and so8res. I appreciate you sharing this perspective as well.

Comment author: RichardKennaway 08 October 2014 12:18:06AM 5 points [-]

Thank you for posting that. My views and feelings about this topic are largely the same. (There goes any chance of my being accepted for a CFAR workshop. :))

On the question of thousands versus gigantic numbers of future people, what I would value is the amount of space they explore, physical and experiential, rather than numbers. A single planetful of humans is worth almost the same as a galaxy of them, if it consists of the same range of cultures and individuals, duplicated in vast numbers. The only greater value in a larger population is the more extreme range of random outliers it makes available.

Comment author: kalium 08 October 2014 09:19:53AM 9 points [-]

My view is similar to yours, but with the following addition:

I have actual obligations to my friends and family, and I care about them quite a bit. I also care to a lesser extent about the city and region that I live in. If I act as though I instead have overriding obligations to the third world, then I risk being unable to satisfy my more basic obligations. To me, if for instance I spend my surplus income on mosquito nets instead of saving it and then have some personal disaster that my friends and family help bail me out of (because they also have obligations to me), I've effectively stolen their money and spent it on something they wouldn't have chosen to spend it on. While I clearly have some leeway in these obligations and get to do some things other than save, charity falls into the same category as dinner out: I spend resources on it occasionally and enjoy or feel good about doing so, but it has to be kept strictly in check.

Comment author: pianoforte611 08 October 2014 11:38:17PM 3 points [-]

This is exactly how I feel. I would slightly amend 1 to "I care about family, friends, some other people I know, and some other people I don't know but have some other connection to". For example, I care about people who are where I was several years ago, and I'll offer them help if we cross paths - there are TDT reasons for this. Are they the "best" people for me to help on utilitarian grounds? No, and so what?

Comment author: [deleted] 08 October 2014 07:45:32PM *  5 points [-]

Thank you for stating your perspective and opinion so clearly and honestly. It is valuable. Now allow me to do the same, and follow with a question (driven by sincere curiosity):

I do not think I am a worse person than you because of that.

I think you are.

It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless.

You are heartless.

I care about the humanity surviving and thriving, in the abstract

Here's my question, and I hope you take the time to answer as honestly as you wrote your comment:

Why?

After all you've rejected to care about, why in the world would you care about something as abstract as "humanity surviving and thriving"? It's just an ape species, and there have already been billions of them. In addition, you clearly don't care about numbers of individuals or quality of life. And you know the heat death of the universe will kill them all off anyway, if they survive the next few centuries.

I don't mean to convince you otherwise, but it seems arbitrary - and surprisingly common - that someone who doesn't care about the suffering or lives of strangers would care about that one thing out of the blue.

Comment author: TheOtherDave 08 October 2014 09:01:02PM 11 points [-]

I can't speak for shminux, of course, but caring about humanity surviving and thriving while not caring about the suffering or lives of strangers doesn't seem at all arbitrary or puzzling to me.

I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?

It doesn't seem at all arbitrary that I should care about something that affects me greatly more than something that affects me less. Does it seem that way to you?

Comment author: [deleted] 09 October 2014 02:08:36AM 1 point [-]

I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?

Yes, rereading it, I think I misinterpreted response 2 as saying it doesn't matter whether a population of 1,000 people has a long future or a population of one googolplex [has an equally long future]. That is, that population scope doesn't matter, just durability and survival. I thought this defeated the usual Big Future argument.

But even so, his 5 turns it around: Practically all people in the Big Future will be strangers, and if it is only "nicer" if they don't suffer (translation: their wellbeing doesn't really matter), then in what way would the Big Future matter?

I care a lot about humanity's future, but primarily because of its impact on the total amount of positive and negative conscious experiences that it will cause.

Comment author: shminux 08 October 2014 09:57:08PM 6 points [-]

...Slow deep breath... Ignore inflammatory and judgmental comments... Exhale slowly... Resist the urge to downvote... OK, I'm good.

First, as usual, TheOtherDave has already put it better than I could.

Maybe to elaborate just a bit.

First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous 'apres nous le deluge' attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and it is predicated on the humanity in general having one.

Second, a good life and a bright future for the people I care about does not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happen to them. Other people, including you, care a lot. Good for them.

Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people's lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it's up to them. Live and let live and such.

Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.

Comment author: Weedlayer 09 October 2014 08:32:50AM 9 points [-]

I'm actually having difficulty understanding the sentiment "I get annoyed at those who think their morals are better than mine". I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn't everyone think their morals are better than other people's?

That's the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don't really care and certainly don't think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don't, then of course I think my morality is better. Morals differ from tastes in that people believe that it's not just different, but WRONG to not follow them. If you remove that element from morality, what's left? The sentiment "I have these morals, but other people's morals are equally valid" sounds good, all egalitarian and such, but it doesn't make any sense to me. People judge the value of things through their moral system, and saying "System B is as good as System A, based on System A" is borderline nonsensical.

Also, as an aside, I think you should avoid rhetorical statements like "call me heartless if you like" if you're going to get this upset when someone actually does.

Comment author: Lumifer 09 October 2014 02:51:42PM 2 points [-]

but doesn't everyone think their morals are better than other people?

I don't.

Comment author: hyporational 09 October 2014 05:55:52PM 1 point [-]

Would you make that a normative statement?

Comment author: Lumifer 09 October 2014 06:06:16PM *  2 points [-]

Well, kinda-sorta. I don't think the subject is amenable to black-and-white thinking.

I would consider people who think their personal morals are the very best there is to be deluded and dangerous. However I don't feel that people who think their morals are bad are to be admired and emulated either.

There is some similarity to how smart do you consider yourself to be. Thinking yourself smarter than everyone else is no good. Thinking yourself stupid isn't good either.

Comment author: hyporational 09 October 2014 06:17:53PM 5 points [-]

So would you say that moral systems that don't think they're better than other moral systems are better than other moral systems? What happens if you know to profess the former kind of a moral system and agree with the whole statement? :)

Comment author: Lumifer 09 October 2014 06:22:27PM 0 points [-]

So would you say that moral systems that don't think they're better than other moral systems are better than other moral systems?

In one particular aspect, yes. There are many aspects.

The barber shaves everyone who doesn't shave himself..? X-)

Comment author: Weedlayer 09 October 2014 03:44:22PM 0 points [-]

So if my morality tells me that murdering innocent people is good, then that's not worse than whatever your moral system is?

I know it's possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism), I just never figured anyone actually believed it.

Comment author: hyporational 09 October 2014 06:06:36PM 2 points [-]

It's not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems.

So if my morality tells me that murdering innocent people is good, then that's not worse than whatever your moral system is?

So while I wouldn't murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn't seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn't seem to provide any new information.

Comment author: Lumifer 09 October 2014 03:55:39PM 2 points [-]

You are confused between two very different statements:

(1) I don't think that my morals are (always, necessarily) better than other people's.

(2) I have no basis whatsoever for judging morality and/or behavior of other people.

Comment author: Weedlayer 09 October 2014 05:07:11PM 1 point [-]

What basis do you have for judging others' morality other than your own morality? And if you ARE using your own morality to judge their morality, aren't you really just checking for similarity to your own?

I mean, it's the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I'm evaluating others' beliefs I basically compare them with how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.

Which of course is similar to the argument people sometimes bring up about "moral progress", claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).

My question, though, is: how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?

Comment author: Lumifer 09 October 2014 05:29:35PM *  2 points [-]

if you ARE using your own morality to judge their morality, aren't you really just checking for similarity to your own?

No, I don't think so.

Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems.

When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you.

Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However, there's no reason to consider your own value system to be the very best there is, especially given that it's your conscious mind that makes such comparisons, while part of morality is submerged and usually unseen by the consciousness. Looking at an exact copy of your own morals, you will evaluate them as just fine, but not necessarily perfect.

Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.

Comment author: Weedlayer 09 October 2014 09:40:08PM *  2 points [-]

This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I'll make two points and see if they move the conversation forward:

1: "There's no reason to consider your own value system to be the very best there is"

This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren't the absolute best there is. The same logic holds true for morals. I know I'm making some mistakes, but I don't know where those mistakes are. On any individual issue, I think I'm right, and therefore logically if someone disagrees with me, I think they're wrong. This is what I mean by "thinking that one's own morals are the best". I know I might not be right on everything, but I think I'm right about every single issue, even the ones I might really be wrong about. After all, if I was wrong about something, and I was also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary. I have many beliefs I consider to be only approximations, which I consider to be only the best of any explanation I have heard so far. Not perfect, but "least wrong").

Which brings me to point 2.

2: "Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were."

I'm absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I've been equivocating between the two, that's why). I know I can't alter my moral beliefs on a whim, but that's because I have no reason to want to. Consider self-modifying to want to murder innocents. I can't do this, primarily because I don't want to, and CAN'T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn't get a million dollars?). I suppose modifying instrumental values to terminal values (which morals are) to enhance motivation is a possible reason, but that's an entirely different can of worms. If I wished I held certain moral beliefs, I already have them. After all, morality is just saying "You should do X". So wishing I had a different morality is like saying "I wish I thought I should do X". What does that mean?

Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.

In short, I'm with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there's a discontinuity either in my reading or your writing.

Comment author: pianoforte611 08 October 2014 11:55:01PM *  6 points [-]

It's interesting because people will often accuse a low-status out-group of "thinking they are better than everyone else"*. But I had never actually seen anyone claim that their ingroup is better than everyone else; the accusation was always made of straw... until I saw Hedonic Treader's comment.

I do sort of understand the attitude of the utilitarian EAs. If you really believe that everyone must value everyone else's life equally, then you'd be horrified by people's brazen lack of caring. It is quite literally like watching a serial killer casually talk about how many people they killed and finding it odd that other people are horrified. After all, each life you fail to save is essentially the same as a murder under utilitarianism.

*I've seen people make this accusation against nerds, atheists, fedora wearers, feminists, left-leaning persons, Christians, etc.

Comment author: gjm 09 October 2014 12:41:27PM *  8 points [-]

the accusation was always made of straw

I expect that's correct, but I'm not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:

  • A thinks her group is better than others.
  • A's thinking this is obvious enough for B to be able to discern it with some confidence.
  • A never explicitly says that her group is better than others.

and I think people who say (e.g.) that atheists think they're smarter than everyone else would claim that that's what's happening.

I repeat, I agree that these accusations are usually pretty strawy, but it's a slightly more complicated variety of straw than simply claiming that people have said things they haven't. More specifically, I think the usual situation is something like this:

  • A really does think that, to some extent and in some respects, her group is better than others.
  • But so does everyone else.
  • B imagines that he's discerned unusual or unreasonable opinions of this sort in A.
  • But really he hasn't; at most he's picked up on something that he could find anywhere if he chose to look.

[EDITED to add, for clarity:] By "But so does everyone else" I meant that (almost!) everyone thinks that (many of) the groups they belong to are (to some extent and in some respects) better than others. Most of us mostly wouldn't say so; most of us would mostly agree that these differences are statistical only and that there are respects in which our groups are worse too; but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that's partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).

Comment author: CCC 09 October 2014 01:51:19PM 1 point [-]

I do imagine that the first situation is more common, in general, than the second.

This is entirely because of the point:

  • But so does everyone else.

A group that everyone considers better than others must be a single group, and probably very small; this requirement therefore limits your second scenario to a very small pool of people, while I imagine that your first scenario is very common.

Comment author: gjm 09 October 2014 01:54:27PM 2 points [-]

Sorry, I wasn't clear enough. By "so does everyone else" I meant "everyone else considers the groups they belong to to be better, to some extent and in some respects, better than others".

Comment author: CCC 09 October 2014 06:17:58PM *  1 point [-]

Ah, that clarification certainly changes your post for the better. Thanks. In light of it, I do agree that the second scenario is common; but looking closely at it, I'm not sure that it's actually different to the first scenario. In both cases, A thinks her group is better; in both cases, B discerns that fact and calls excessive attention to it.

Comment author: [deleted] 11 October 2014 09:38:12AM 0 points [-]

but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that's partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).

Well, if I belong to the group of chocolate ice cream eaters, I do think that eating chocolate ice cream is better than eating vanilla ice cream -- by my standards; it doesn't follow that I also believe it's better by your standards or by objective standards (whatever they might be) and feel smug about it.

Comment author: gjm 11 October 2014 12:33:28PM 2 points [-]

Sure. Some things are near-universally understood to be subjective and personal. Preference in ice cream is one of them. Many others are less so, though; moral values, for instance. Some even less; opinions about apparently-factual matters such as whether there are any gods, for instance.

(Even food preferences -- a thing so notoriously subjective that the very word "taste" is used in other contexts to indicate something subjective and personal -- can in fact give people that same sort of sense of superiority. I think mostly for reasons tied up with social status.)

Comment author: gjm 08 October 2014 11:33:34PM 10 points [-]

inflammatory and judgmental comments

It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to "call [you] heartless", you should not then complain of someone else's "inflammatory and judgmental comments" when they take you up on the offer.

And it doesn't seem to me that Hedonic_Treader's response was particularly thoughtless or disrespectful.

(For what it's worth, I don't think your comments indicate that you're heartless.)

Comment author: Bugmaster 08 October 2014 11:20:18PM 2 points [-]

You are saying that shminux is "a worse person than you" and also "heartless", but I am not sure what these words mean. How do you measure which person is better as compared to another person ? If the answer is, "whoever cares about more people is better", then all you're saying is, "shminux cares about fewer people because he cares about fewer people". This is true, but tautologically so.

Comment author: Jiro 08 October 2014 10:40:10PM 2 points [-]

It would be nice if fewer people died and suffered, sure. But "nice" is all it is. Call me heartless. You are heartless.

Then every human being in existence is heartless.

Comment author: CBHacking 29 November 2014 01:21:12PM 0 points [-]

I disagree. There are degrees of caring, and appropriate responses to them. Admittedly, "nice" is a term with no specific meaning, but most of us can probably put it on a relative ranking with other positive terms, such as "non-zero benefit" or "decent" (which I, and probably most people, would rank below "nice"), and "excellent", "wonderful", "the best thing in the world" (in the hyperbolic "best thing I have in mind right now" sense), or "literally, after months of introspection, study, and multiplying, I find that this is the best thing which could possibly occur at this time"; I suspect most native English speakers would agree that those are stronger sentiments than "nice". I can certainly think of things that are more important than merely "nice" yet less important than a reduction in death and suffering.

For example, I would really like a Tesla car, with all the features. In the category of remotely-feasible things somebody could actually give me, I actually value that higher than there's any rational reason for. On the other hand, if somebody gave me the money for such a car, I wouldn't spend it on one... I don't actually need a car, in fact don't have a place for it, and there are much more valuable things I could do with that money. Donating it to some highly-effective charity, for example.

Leaving aside the fact that "every human being in existence" appears to require excluding a number of people who really are devoting their lives to bringing about reductions in suffering and death, there are lots of people who would respond to a cessation of some cause of suffering or death more positively than to simply think it "nice". Maybe not proportionately more positively - as the post says, our care-o-meters don't scale that far - but there would still be a major difference. I don't know how common, in actual numbers, that reaction is vs. the "It would be nice" reaction (not to mention other possible reactions), but it is absolutely a significant number of people even among those who aren't devoting their whole life towards that goal.

Comment author: Jiro 29 November 2014 06:37:20PM 0 points [-]

Pretty much every human being in existence who thinks that stopping death and suffering is a good thing still spends resources on themselves and their loved ones beyond the bare minimum needed for survival. They could spend some money to buy poor Africans malaria nets, but have something, which is not death or suffering, which they consider more important than spending the money to alleviate death and suffering.

In that sense, it's nice that death and suffering are alleviated, but that's all.

it is absolutely a significant number of people even among those who aren't devoting their whole life towards that goal

"Not devoting their whole life towards stopping death and suffering" equates to "thinks something else is more important than stopping death and suffering".

Comment author: CBHacking 01 December 2014 08:43:25AM *  0 points [-]

False dichotomy. You can have (many!) things which are more than merely "nice" yet less than the thing you spend all available resources on. To take a well-known public philanthropist as an example, are you seriously claiming that because he does not spend every cent he has eliminating malaria as fast as possible, Bill Gates' view on malaria eradication is that "it's nice that death and suffering are alleviated, but that's all"?

We should probably taboo the word "nice" here, since we seem likely to be operating on different definitions of it. To rephrase my second sentence of this post, then: You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.

Also, your final sentence is not logically consistent. To show that a particular goal is the most important thing to you, you only need to devote more resources (including time) to it than to any other particular goal. If you allocate 49% of your resources to ending world poverty, 48% to being a billionaire playboy, and 3% to personal/private uses that are not strictly required for either of those goals, that is probably not the most efficient possible way to allocate your resources, but there is nothing you value more than ending poverty (a major cause of suffering and death), even though it doesn't consume even a majority of your resources. Of course, this assumes that the value of your resources is fixed wherever you spend them; in the real world, the marginal value of your investments (especially in things like medicine) goes down the more resources you pump into them in a given time frame. A better use might be to invest a large chunk of your resources into things that generate more resources, while providing as much towards your anti-suffering goals as they can efficiently use at once.

Comment author: gjm 01 December 2014 12:39:49PM 3 points [-]

Let's be a bit more concrete here. If you devote approximately half your resources to ending poverty and half to being a billionaire playboy, that means something like this: you value saving 10000 Africans' lives less than you value having a second yacht. I'm sure that second yacht is fun to have, but I think it's reasonable to categorize something that you value less than 1/10000 of the increment from "one yacht" to "two yachts" as no more important than "nice".

This is of course not a problem unique to billionaire playboys, but it's maybe a more acute problem for them; a psychologically equivalent luxury for an ordinarily rich person might be a second house costing $1M, which corresponds to 1/100 as many African lives and likely brings a bigger gain in personal utility; one for an ordinarily not-so-rich person might be a second car costing $10k, another 100x fewer dead Africans and (at least for some -- e.g., two-income families living in the US where getting around without a car can be a biiiig pain) a considerable gain in personal utility. There's still something kinda indecent about valuing your second car more than a person's life, but at least to my mind it's substantially less indecent than valuing your second megayacht more than 10000 people's lives.

Suppose I have a net worth of $1M and you have a net worth of $10B. Each of us chooses to devote half our resources to ending poverty and half to having fun. That means that I think $500k of fun-having is worth the same as $500k of poverty-ending, and you think $5B of fun-having is worth the same as $5B of poverty-ending. But $5B of poverty-ending is about 10,000 times more poverty-ending than $500k of poverty-ending -- but $5B of fun-having is nowhere near 10,000 times more fun than $500k of fun-having. (I doubt it's even 10x more.) So in this situation it is reasonable to say that you value poverty-ending much less, relative to fun-having, than I do.
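
(A minimal back-of-the-envelope sketch of this comparison in Python. The net worths and the $10k-per-life figure come from the comment and its pedantic notes below; treating "fun" as logarithmic in dollars is purely my own illustrative assumption, not gjm's claim.)

    # Sketch of the half-to-fun, half-to-poverty comparison above.
    import math

    COST_PER_LIFE = 10_000  # assumed dollars to save one life (from the pedantic notes below)

    def poverty_ending(dollars):
        # Assume poverty-ending scales roughly linearly with money spent.
        return dollars / COST_PER_LIFE

    def fun_had(dollars):
        # Assume fun-having has strongly diminishing returns (illustrative choice).
        return math.log10(dollars)

    for net_worth in (1_000_000, 10_000_000_000):
        half = net_worth // 2
        print(f"net worth ${net_worth:>14,}: "
              f"~{poverty_ending(half):>9,.0f} lives of poverty-ending "
              f"vs {fun_had(half):.1f} 'log-dollars' of fun")

The poverty-ending column scales by a factor of 10,000 between the two rows, while the fun column barely changes; that is the asymmetry being pointed at.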

Pedantic notes: I'm supposing that your second yacht costs you $100M and that you can save one African's life for $10k; billionaires' yachts are often more expensive and the best estimates I've heard for saving poor people's lives are cheaper. Presumably if you focus on ending poverty rather than on e.g. preventing malaria then you think that's a more efficient way of helping the global poor, which makes your luxury trade off against more lives. I am using "saving lives" as a shorthand; presumably what you actually care about is something more like time-discounted aggregate QALYs. Your billionaire playboy's luxury purchase might be something other than a yacht. Offer void where prohibited by law. Slippery when wet.

And, for the avoidance of doubt, I strongly endorse devoting half your resources to ending poverty and half to being a billionaire playboy, if the alternative is putting it all into being a billionaire playboy. The good you can do that way is tremendous, and I'd take my hat off to you if I were wearing one. I just don't think it's right to describe that situation by saying that poverty is the most important thing to you.

Comment author: Jiro 01 December 2014 03:49:16PM 1 point [-]

Thank you, that's what I would have said.

Comment author: RichardKennaway 01 December 2014 12:24:57PM 0 points [-]

You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.

What about the argument from marginal effectiveness? I.e. unless the best thing for you to work on is so small that your contribution reduces its marginal effectiveness below that of the second-best thing, you should devote all of your resources to the best thing.

I don't myself act on the conclusion, but I also don't see a flaw in the argument.

Comment author: ShardPhoenix 10 October 2014 01:24:39PM *  3 points [-]

Personally I see EA* as kind of a dangerous delusion, basically people being talked into doing something stupid (in the sense that they're probably moving away from maximizing their own true utility function, to the extent that such a thing exists). When I hear about someone giving away 50% of their income when they're only middle class to begin with, I feel more pity than admiration.

* Meaning the extreme, "all human lives are equally valuable to me" version, rather than just a desire to not waste charity money.

Comment author: leplen 27 October 2014 04:44:18PM 1 point [-]

I don't understand this. Why should my utility function value me having a large income or having a large amount of money? What does that get me?

I don't have a good logical reason for why my life is a lot more valuable than anyone else's. I have a lot more information about how to effectively direct resources into improving my own life vs. improving the lives of others, but I can't come up with a good reason to have a dominantly large "Life of leplen" term in my utility function. Much of the data suggests that happiness/life quality isn't well correlated with income above a certain income range and that one of the primary purposes of large disposable incomes is status signalling. If I have cheaper ways of signalling high social status, why wouldn't I direct resources into preserving/improving the lives of people who get much better life quality/dollar returns than I do? It doesn't seem efficient to keep investing in myself for little to no return.

I wouldn't feel comfortable winning a 500 dollar door prize in a drawing where half the people in the room were subsistence farmers. I'd probably tear up my ticket and give someone else a shot to win. From my perspective, just because I won the lottery on birth location and/or abilities doesn't mean I'm entitled to hundreds of times as many resources as someone else who may be more deserving but less lucky.

With that being said, I certainly don't give anywhere near half of my income to charity and it's possible the values I actually live may be closer to what you describe than the situation I outline. I'm not sure, and not sure how it changes my argument.

Comment author: ShardPhoenix 28 October 2014 08:44:17AM *  0 points [-]

I don't understand this. Why should my utility function value me having a large income or having a large amount of money?

With that being said, I certainly don't give anywhere near half of my income to charity and it's possible the values I actually live may be closer to what you describe than the situation I outline. I'm not sure, and not sure how it changes my argument.

Sounds like you answered your own question!

(It's one thing to have some simplistic far-mode argument about how this or that doesn't matter, or how we should sacrifice ourselves for others; the near-mode nitty-gritty of the real world is another thing).

Comment author: [deleted] 07 October 2014 11:29:48AM 4 points [-]

Nice write-up. I'm one of those thoughtful creepy nerds who figured out about the scale thing years ago, and now just picks a fixed percentage of total income and donates it to fixed, utility-calculated causes once a year... and then ends up giving away bits of spending money for other things anyway, but that's warm-fuzzies.

So yeah. Roughly 10% (I actually divide between a few causes, trying to hit both Far Away problems where I can contribute a lot of utility but have little influence, and Nearby problems where I have more influence on specific outcomes) of income, around the end of the year or tax time, every year, in "JUST F-ING DO IT" mode.

At the worst, there are quadrillions (or more) potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, the experience and art and beauty that is possible in the future, depends upon the present.

This is the only thing I actually object to here. Any choice we make that influences the future at all could be said to reallocate probability between one set of future people and another set. There will only be one real future, though. While I vastly prefer for it to be a good one, I don't consider abortion to be murder, and so I don't feel any moral compulsion to maximize future people, or even to direct the future population towards a particular number. That would imply, in my view, that I'm already deciding the destinies of next year's people, let alone next aeon's, and that's already deeply immoral.

Comment author: TrE 07 October 2014 04:12:54PM 2 points [-]

We can safely reason that the typical human, even in the future, will choose existence over non-existence. We can also infer which environments they would like better, and so we can maximise our efforts to leave behind an earth (solar system, universe) that's worth living in, not an arid desert, nor a universe tiled in smiley faces.

While I agree that future people will never be concrete entities, only shadowy figures, and that we therefore don't get to decide on their literary or musical tastes, I think we should still try to make them exist in an environment worth living in and, if possible, get them to exist. In the worst case, they can still decide to exit this world. That's easier these days than it's ever been!

Additionally, I personally value a universe filled with humans higher than a universe filled with ■.

Comment author: 27chaos 08 October 2014 01:47:23AM *  2 points [-]

My own moral intuitions say that there is an optimal number X of human beings to live amongst (perhaps around Dunbar's number, though maybe not if society or anonymity are important), and that we should try to balance utilizing as much of the universe's energy as possible before heat death against maximizing the number of these ideal groups of size X. I think a universe totally filled with humans would not be very good; it seems somewhat redundant to me, since many of those humans would be extremely similar to each other yet use up precious energy. I also think that individuals might feel meaningless in such a large crowd, unable to make an impact or strive for eudaimonia when surrounded by others. We might avoid that outcome by modifying our values about originality or human purpose, but those are values of mine I strongly don't want to have changed.

Comment author: NancyLebovitz 08 October 2014 02:08:07AM 2 points [-]

Bioengineering might lead to humans who are much less similar to each other.

Comment author: 27chaos 09 October 2014 08:37:50PM 0 points [-]

Yeah. The problem I see with that is that if humans grow too far apart, we will thwart each other's values or not value each other. Difficult potential balance to maintain, though that doesn't necessarily mean it should be rejected as an option.

Comment author: NancyLebovitz 09 October 2014 10:56:33PM 1 point [-]

Bioengineering makes CEV a lot harder.

Comment author: AnthonyC 09 October 2014 01:16:58PM 0 points [-]

And any number of bioengineering, societal/cultural shifts, and transportation and wealth improvements could help increase our effective Dunbar's number.

Comment author: NancyLebovitz 09 October 2014 02:14:29PM 0 points [-]

That's something I've wondered about, and also what you could accomplish by having an organization of people with unusually high Dunbar's numbers.

Comment author: Decius 15 October 2014 07:32:26AM 0 points [-]

Or a breeding population selecting for higher Dunbar's numbers.

Or does that qualify as bioengineering?

Comment author: NancyLebovitz 15 October 2014 02:12:00PM 0 points [-]

I suppose it should count as bioengineering for purposes of this discussion.

Comment author: LawrenceC 14 October 2014 02:55:53PM 2 points [-]

Upvoted for clarity and relevance. You touched on the exact reason why many people I know can't/won't become EAs; even if they genuinely want to help the world, the scope of the problem is just too massive for them to care about accurately. So they go back to donating to the causes that scream the loudest, and turning a blind eye to the rest of the problems.

I used to be like Alice, Bob, and Christine, and donated to whatever charitable cause would pop up. Then I had a couple of Daniel moments, and resolved that whenever I felt pressured to donate to a good cause, I'd note how much I was going to donate and then donate to one of Givewell's top charities.

Comment author: Unnamed 08 October 2014 07:06:30PM 2 points [-]

Two possible responses that a person could have after recognizing that their care-o-meter is broken and deciding to pursue important causes anyways:

Option 1: Ignore their care-o-meter, treat its readings as nothing but noise, and rely on other tools instead.

Option 2: Don't naively trust their care-o-meter, and put effort into making it so that their care-o-meter will be engaged when it's appropriate, will be not-too-horribly calibrated, and will be useful as they pursue the projects that they've identified as important (despite its flaws).

Parts of this post seem to gesture towards option 2 (like the Daniel story, and section 8), while other parts seem to gesture towards option 1 (like the courage analogy, and section 5).

Comment author: So8res 10 October 2014 06:54:06AM 5 points [-]

I definitely don't suggest ignoring the care-o-meter entirely. Emotions are the compass.

Rather, I advocate not trusting the care-o-meter on big numbers, because it's not calibrated for big numbers. Use it on small things where it is calibrated, and then multiply yourself if you need to deal with big problems.

Comment author: mwengler 30 November 2014 10:31:55PM 3 points [-]

I wonder if, in some interesting way, the idea that the scope of what needs doing for other people is so massive as to preclude any rational response other than to work full time on it is related to the insight that voting doesn't matter. In both cases, the math seems to preclude bothering to do something which will be easy, but will help in the aggregate.

My dog recently tore both of her ACLs, and required two operations and a total of about 10 weeks of recovery. My vet suggested I had a choice as to whether to do the 2x $3100 operations on the knees. I realized that, with the amount of money I have, $6200 simply wasn't an important enough amount to make me consider killing my dog at the age of 7 because she couldn't walk. But I was also acutely aware of being goddamn glad that I had only two dogs I cared about, because I sure as hell wasn't interested in discovering the upper limit to how much I would spend before I would start killing off my dogs. Meanwhile, I can live with all the dogs in shelters that will be killed even though they can walk just fine, because they are not my dogs.

I don't want to care any more about the billions of poor people in the world than I already do. I am willing to "blame" their parents: those parents did know, or should have known, what they were dooming their children to, approximately when they decided to have them. If I spend my resources to help these poor people, they will be that much healthier, and will proceed to generate that many more poor people in the next generation tugging at the heartstrings or mind-strings of my children. What kind of a father would I be to dump that kind of problem in my kids' lap?

I don't consider it rational to let my moral sentiments run roughshod over my own self interest. I donate, essentially, when I can't help myself, when my sentiments are already involved. To me it seems irrational to spend one iota more effort or money on problems than my sentimental moral self already requires.

Comment author: TheOtherDave 30 November 2014 11:40:37PM 0 points [-]

"I don't consider it rational to let my moral sentiments run roughshod over my own self interest."

To be clear, do you consider the choice to repair your dog's knees an expression of what you're labelling "moral sentiments" here, or what you're labelling "self-interest"?

Comment author: mwengler 03 December 2014 04:06:14PM 2 points [-]

Spending $6200 to fix my 7-year-old dog's knees was primarily moral sentiments at work. I could get a healthy 1-year-old dog for a fraction of that price. My 7-year-old dog will very likely die within the next 3 or 4 years; larger dogs don't tend to live that long. So I haven't saved myself from experiencing the loss of her death, I've just put it off. The dog keeps me from doing all sorts of other things I'd like to do: I have to come home to check on her and feed her and so on, which precludes just going and doing social stuff after work when I want to.

It's important to keep in mind that we are not "homo economicus." We do not have a single utility function with a crank that can be turned to determine the optimum thing to do, and even if in some formal sense we did have such a thing, our reaction to it would not be a deep acceptance of its results.

What we do have is a mess and a mass of competing impulses. I want to do stuff after work. I want to "take care" of those in my charge. My urge to take care of those in my charge presumably arises in me because my humans before me who had less of that urge got competed out of the gene pool.

100,000 years ago, some wolves started hacking humans and, as part of that hack, got themselves to trigger the machinery humans have for taking care of their babies. Given that these wolves were also pretty good "kids," able to help with a variety of things, we hacked them back and made them even more to our liking by selectively killing the ones we didn't like and then selectively breeding the ones we did like. At this point, we love our babies more than our dogs, but our babies grow into teenagers. Our dogs always stay baby-like in their hacked relationship with us.

My wife took my human children and left me a few years ago, but she left behind the dogs she had bought. I'm not going to abandon them; the hack is strong in me. Don't get me wrong, I love them. That doesn't mean I am happy about it, or at least not consistently happy about it.

Comment author: TheOtherDave 03 December 2014 05:43:42PM 0 points [-]

(nods)
Thanks for clarifying.

Comment author: 27chaos 07 October 2014 10:54:27PM *  2 points [-]

I don't have the internal capacity to feel large numbers as deeply as I should, but I do have the capacity to feel that prioritizing my use of resources is important, which amounts to a similar thing. I don't have an internal value assigned for one million birds or for ten thousand, but I do have a value that says maximization is worth pursuing.

Because of this, and because I'm basically an ethical egoist, I disagree with your view that effective altruism requires ignoring our care-o-meters. I think it only requires their training and refinement, not complete disregard. Saying that we should ignore our actual values and focus on "more rational" values we could counterfactually have is disquieting to me, because it seems to involve an underlying nihilism of sorts. Values are orthogonal to rationality; I'm not sure why many people here understand that idea in some cases but ignore it in others. If we're going to get rid of values for not being sufficiently rational or consistent, we might as well delete them all.

Gunnar Zarncke makes a good point as well, one I think complements my argument. There's no standard with which to choose between helping all the birds and helping none, once you've thrown the care-o-meter away.

Comment author: AnthonyC 09 October 2014 01:22:20PM 2 points [-]

I understand what you mean by saying values and rationality are orthogonal. If I had a known, stable, consistent utility function you would be absolutely right.

But 1) my current (supposedly terminal) values are certainly not orthogonal to each other, and may be (in fact, probably are) mutually inconsistent some of the time. Also 2) There are situations where I may want to change, adopt, or delete some of my values in order to better achieve the ones I currently espouse (http://lesswrong.com/lw/jhs/dark_arts_of_rationality/).

Comment author: 27chaos 09 October 2014 08:46:13PM *  1 point [-]

I worry that such consistency isn't possible. If you have a preference for chocolate over vanilla given exposure to one set of persuasion techniques, and a preference for vanilla over chocolate given other persuasion techniques, it seems like you have no consistent preference. If all our values are sensitive to aspects of context such as this, then trying to enforce consistency could just delete everything. Alternatively, it could mean that CEV will ultimately worship Moloch rather than humans, valuing whatever leads to amassing as much power as possible. If inefficiency or irrationality is somehow important or assumed in human values, I want the values to stay and the rationality to go. Given all the weird results from the behavioral economics literature, and the poor optimization of the evolutionary processes from which our values emerged, such inconsistency seems probable.

Comment author: William_Quixote 07 October 2014 02:36:00PM 2 points [-]

I think this is a really good post and extremely clear. The idea of the broken care-o-meter is a very compelling metaphor. It might be worthwhile to try to put this somewhere higher-exposure, where people who have money and are not already familiar with the LW memeplex would see it.

Comment author: So8res 07 October 2014 04:02:20PM 3 points [-]

I'm open to circulating it elsewhere. Any ideas? I've crossposted it on the EA forum, but right now that seems like lower exposure than LW.

Comment author: John_Maxwell_IV 09 October 2014 11:52:23PM 1 point [-]

Submitting things to reddit/metafilter/etc. can work surprisingly well.

Comment author: So8res 10 October 2014 07:12:42AM *  1 point [-]

I'm slightly averse to submitting my own content on reddit, but you (John_Maxwell_IV, to combat the bystander effect, unless you decline) are encouraged to do so.

My preference would be for the Minding Our Way version over the EA forum version over the LW version.

Comment author: UriKatz 31 October 2014 05:48:48AM 1 point [-]

I think we need to consider another avenue in which our emotions are generated and affect our lives. An immediate, short-to-medium-term high is, in a way, the least valuable personal return we can expect from our actions. However, there is a more subtle yet long-lasting emotional effect, which is more strongly correlated with our belief system and our rationality. I refer to a feeling of purpose we can have on a daily basis, a feeling of maximizing personal potential, and even long-term happiness. This is created when we believe we are doing the right thing, when we know there is still more to be done, and continue to make an effort. A good example of this is the difference between falling in love and being in love for a lifetime. Another example is raising children.

Every few months I sit in front of my computer and punch in a bunch of numbers, which result in a donation to GiveWell. The immediate emotional impact of this is about on par with eating a mediocre sandwich. However, every day I remind myself that that day's work contributes to my ability to make bigger and bigger donations. Also, every so often I am hit with the realization that I, insignificant little me, have saved people's lives, and can save more. That perhaps my existence on this planet will do more good than harm. The contribution of this to my overall emotional well-being cannot be overstated.

I think we can redefine caring along these lines. Then we will see that we do care, not only in action, but also in feeling. Any emotion that actually matters is not a momentary peak or trough.

Comment author: snarles 16 October 2014 01:58:48PM *  1 point [-]

Daniel grew up as a poor kid, and one day he was overjoyed to find $20 on the sidewalk. Daniel could have worked hard to become a trader on Wall Street. Yet he decides to become a teacher instead, because of his positive experiences tutoring a few kids while in high school. But as a high school teacher, he will only teach a thousand kids in his career, while as a trader, he would have been able to make millions of dollars. If he multiplied his positive experience with one kid by a thousand, it still probably wouldn't compare with the joy of finding $20 on the sidewalk times a million.

Comment author: [deleted] 17 October 2014 11:29:57AM *  0 points [-]

Nice try, but even if my utility for oiled birds was as nonlinear as most people's utility for money is, the fact that there are many more oiled birds than I'm considering saving means that what you need to compare is (say) U(54,700 oiled birds), U(54,699 oiled birds), and U(53,699 oiled birds) -- and it'd be a very weird utility function indeed if the difference between the first and the second is much larger than one-thousandth the difference between the second and the third. And even if U did have such kinks, the fact that you don't know exactly how many oiled birds are there would smooth them away when computing EU(one fewer oiled bird) etc.
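
(A quick sketch of why a smooth concave utility over the total number of oiled birds cannot have the kind of kink being described; the square-root form is an arbitrary assumption for illustration, not anything from the comment.)

    # With any smooth concave (dis)utility over the total number of oiled birds,
    # the harm of one extra bird is almost exactly 1/1000 of the harm of a
    # thousand extra birds at these scales. The sqrt form is an arbitrary choice.
    import math

    def U(oiled_birds):
        return -math.sqrt(oiled_birds)

    one_more      = U(54_699) - U(54_700)   # harm of one additional oiled bird
    thousand_more = U(53_699) - U(54_699)   # harm of a thousand additional birds

    print(one_more)              # ~0.00214
    print(thousand_more / 1000)  # ~0.00215 -- essentially the same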

(IIRC EY said something similar in the sequences, using starving children rather than oiled birds as the example, but I can't seem to find it right now.)

Unless you also care about who is saving the birds -- but you aren't considering saving them with your own hands, you're considering giving money to save them, and money is fungible, so it'd be weird to care about who is giving the money.

Comment author: Jiro 17 October 2014 04:37:04PM *  0 points [-]

Nice try, but even if my utility for oiled birds was as nonlinear as most people's utility for money is, the fact that there are many more oiled birds than I'm considering saving means that what you need to compare is (say) U(54,700 oiled birds), U(54,699 oiled birds), and U(53,699 oiled birds)

Nonlinear in what?

Daniel's utility for dollars is nonlinear in the total number of dollars that he has, not in the total number of dollars in the world. Likewise, his utility for birds is nonlinear in the total number of birds that he has saved, not in the total number of birds that exist in the world.

(Actually, I'd expect it to have two components, one of which is nonlinear in the number of birds he has saved and another of which is nonlinear in the total number of birds in the world. However, the second factor would be negligibly small in most situations.)

Comment author: [deleted] 18 October 2014 07:49:21AM 0 points [-]

IOW he doesn't actually care about the birds, he cares about himself.

Comment author: Jiro 18 October 2014 09:23:49AM 0 points [-]

He has a utility function that is larger when more birds are saved. If this doesn't count as caring about the birds, your definition of "cares about the birds" is very arbitrary.

Comment author: [deleted] 19 October 2014 12:45:19PM 0 points [-]

He has a utility function that is larger when more birds are saved.

He has a utility function that is larger when he saves more birds; birds saved by other people don't count.

Comment author: Jiro 19 October 2014 03:33:59PM 0 points [-]

If it has two components, they do count, just not by much.

Comment author: Jiro 16 October 2014 05:35:09PM 0 points [-]

Because Daniel has been thinking of scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of dollars: the internal feeling of satisfaction with gaining money can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about making lots of money, he shuts up and multiplies the joy of finding $20 by a million....

Comment author: Lumifer 16 October 2014 05:55:15PM *  2 points [-]

he expects his brain to misreport how much he actually cares

Um, that's nonsense. His brain does not misreport how much he actually cares -- it's just that his brain thinks that it should care more. It's a conflict between "is" and "should", not a matter of misreporting "is".

he shuts up and multiplies the joy of finding $20 by a million....

After which he goes and robs a bank.

Comment author: Jiro 16 October 2014 06:39:16PM 1 point [-]

Um, that's nonsense.

You do realize that what I said is a restatement of one of the examples in the original article, except substituting "caring about money" for "caring about birds"? And snarles' post was a somewhat more indirect version of that as well? Being nonsense is the whole point.

Comment author: Lumifer 16 October 2014 06:45:49PM 1 point [-]

You do realize that what I said is a restatement of one of the examples in the original article

Yes, I do, and I think it's nonsense there as well. The care-o-meter is not broken; it's just that your brain would prefer you to care more about all these numbers. It's like preferring not to have a fever and saying the thermometer is broken because it shows too high a temperature.

Comment author: DanielLC 15 October 2014 09:20:59PM 1 point [-]

I know the name is just a coincidence, but I'm going to pretend that you wrote this about me.

Comment author: PeterisP 15 October 2014 04:01:23PM *  1 point [-]

An interesting followup that came to mind, regarding your example of an oiled bird deserving 3 minutes of care:

Let's assume that there are 150 million suffering people right now, which is a completely wrong random number but a somewhat reasonable order-of-magnitude assumption. A quick calculation estimates that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, then I've got a total of about 15 million care-minutes.

According to even the best possible care-o-meter that I could have, all the problems in the world cannot be totally worth more than 15 million care-minutes - simply because there aren't any more of them to allocate. And in a fair allocation, the average suffering person 'deserves' 0.1 care-minutes of my time, assuming that I don't leave anything at all for the oiled birds. This is a very different meaning of 'deserve' than the one used in the post - but I'm afraid that this is the more meaningful one.
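
(A minimal sketch of that budget; the 40 remaining years and 16 waking hours per day are assumed inputs of mine, chosen only to land near the comment's order-of-magnitude figure.)

    # Rough care-minute budget, per the estimate above.
    # Remaining lifespan and waking hours are assumed inputs, not the commenter's.
    SUFFERING_PEOPLE = 150_000_000
    REMAINING_YEARS  = 40
    WAKING_HOURS     = 16

    care_minutes = REMAINING_YEARS * 365 * WAKING_HOURS * 60
    print(f"total budget:  ~{care_minutes / 1e6:.0f} million care-minutes")
    print(f"per sufferer:  ~{care_minutes / SUFFERING_PEOPLE:.2f} care-minutes")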

Comment author: Decius 15 October 2014 07:27:02AM 1 point [-]

If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope insensitivity, what observation causes you to believe that you do care about everyone equally?

Serious question; I traverse the reasoning the other way, and since I don't care much about the aggregate six billion people I don't know, I divide and say that I don't care more than one six-billionth as much about the typical person that I don't know.

People that I do know, I do care about -- but I don't have to multiply to figure my total caring, I have to add.

Comment author: Wes_W 15 October 2014 08:32:01AM 2 points [-]

If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope insensitivity, what observation causes you to believe that you do care about everyone equally?

I can think of two categories of responses.

One is something like "I care by induction". Over the course of your life, you have ostensibly had multiple experiences of meeting new people, and ending up caring about them. You can reasonably predict that, if you meet more people, you will end up caring about them too. From there, it's not much of a leap to "I should just start caring about people before I meet them". After all, rational agents should not be able to predict changes in their own beliefs; you might as well update now.

The other is something like "The caring is much better calibrated than the not-caring". Let me use an analogy to physics. My everyday intuition says that clocks tick at the same rate for everybody, no matter how fast they move; my knowledge of relativity says clocks slow down significantly near c. The problem is that my intuition on the matter is baseless; I've never traveled at relativistic speeds. When my baseless intuition collides with rigorously-verified physics, I have to throw out my intuition.

I've also never had direct interaction with or made meaningful decisions about billions of people at a time, but I have lots of experience with individual people. "I don't care much about billions of people" is an almost totally unfounded wild guess, but "I care lots about individual people" has lots of solid evidence, so when they collide, the latter wins.

(Neither of these are ironclad, at least not as I've presented them, but hopefully I've managed to gesture in a useful direction.)

Comment author: Jiro 15 October 2014 03:55:10PM 3 points [-]

Your second category of response seems to say "my intuitions about considering a group of people, taken billions at a time, aren't reliable, but my intuitions about considering the same group of people, one at a time, are". You then conclude that you care because taking the billions of people one at a time implies that you care about them.

But it seems that I could apply the same argument a little differently--instead of applying it to how many people you consider at a time, apply it to the total size of the group. "my intuitions about how much I care about a group of billions are bad, even though my intuitions about how much I care about a small group are good." The second argument would, then, imply that it is wrong to use your intuitions about small groups to generalize to large groups--that is, the second argument refutes the first. Going from "I care about the people in my life" to "I would care about everyone if I met them" is as inappropriate as going from "I know what happens to clocks at slow speeds" to "I know what happens to clocks at near-light speeds".

Comment author: Decius 16 October 2014 04:44:34AM 0 points [-]

I'll go a more direct route:

The next time you are in a queue with strangers, imagine the two people behind you (that you haven't met before and don't expect to meet again and didn't really interact with much at all, but they are /concrete/). Put them on one track in the trolley problem, and one of the people that you know and care about on the other track.

If you prefer to save two strangers to one tribesman, you are different enough from me that we will have trouble talking about the subject, and you will probably find me to be a morally horrible person in hypothetical situations.

Comment author: Decius 16 October 2014 12:27:57AM *  0 points [-]

To address your first category: When I meet new people and interact with them, I do more than gain information- I perform transitive actions that move them out of the group "people I've never met" that I don't care about, and into the group of people that I do care about.

Addressing your second: I found that a very effective way to estimate my intuition would be to imagine a group of X people that I have never met (or specific strangers) on one minecart track, and a specific person that I know on the other. I care so little about small groups of strangers, compared to people that I know, that I find my intuition about billions is roughly proportional; the dominant factor in my caring about strangers is that some number of people who are strangers to me are important to people who are important to me, and therefore indirectly important to me.

Comment author: AmagicalFishy 23 November 2014 10:39:05PM *  1 point [-]

I second this question: Maybe I'm misunderstanding something, but part of me craves a set of axioms to justify the initial assumptions. That is: Person A cares about a small number of people who are close to them. Why does this equate to Person A having to care about everyone who isn't?

Comment author: lalaithion 23 November 2014 11:32:52PM 1 point [-]

For me, personally, I know that you could choose a person at random in the world, write a paragraph about them, and give it to me, and by doing that, I would care about them a lot more than before I had read that piece of paper, even though reading that paper hadn't changed anything about them. Similarly, becoming friends with someone doesn't usually change the person that much, but increases how much I care about them an awful lot.

Therefore, I look at all 7 billion people in the world, and even though I barely care about them, I know that it would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I haven't.

Maybe a better way of putting this is that I know that all of the people in the world are potential carees of mine, so I should act as though I already care about these people, in deference to possible future-me.

Comment author: AmagicalFishy 24 November 2014 05:30:36AM *  2 points [-]

For the most part, I follow—but there's something I'm missing. I think it lies somewhere in: "It would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn't."

Is the underlying "axiom" here that you wish to maximize the number of effects that come from the caring you give to people, because that's what an altruist does? Or that you wish to maximize your caring for people?

To contextualize the above question, here's a (nonsensical, but illustrative) parallel: I get cuts and scrapes when running through the woods. They make me feel alive; I like this momentary pain stimulus. It would be trivial for me to woods-run more and get more cuts and scrapes. Therefore I should just get cuts and scrapes.

I know it's silly, but let me explain: A person usually doesn't want to maximize their cuts and scrapes, even though cuts and scrapes might be appreciated at some point. Thus, the above scenario's conclusion seems silly. Similarly, I don't feel a necessity to maximize my caring—even though caring might be nice at some point. Caring about someone is a product of my knowing them, and I care about a person because I know them in a particular way (if I knew a person and thought they were scum, I would not care about them). The fact that I could know someone else, and thus hypothetically care about them, doesn't make me feel as if I should.

If, on the other hand, the axiom is true—then why bother considering your intuitive "care-o-meter" in the first place?

I think there's something fundamental I'm missing.

(Upon further thought, is there an agreed-upon intrinsic value to caring that my ignorance of some LW culture has led me to miss? This would also explain wanting to maximize caring.)

(Upon further-further thought, is it something like the following internal dialogue? "I care about people close to me. I also care about the fate of mankind. I know that the fate of mankind as a whole is far more important than the fate of the people close to me. Since I value internal consistency, in order for my caring-mechanism to be consistent, my care for the fate of mankind must be proportional to my care for the people close to me. Since my caring mechanism is incapable of actually computing such a proportionality, the next best thing is to be consciously aware of how much it should care if it were able, and act accordingly.")

Comment author: Decius 24 November 2014 11:59:59PM 0 points [-]

(Upon further-further thought, is it something like the following internal dialogue? "I care about people close to me. I also care about the fate of mankind. I know that the fate of mankind as a whole is far more important than the fate of the people close to me. Since I value internal consistency, in order for my caring-mechanism to be consistent, my care for the fate of mankind must be proportional to my care for the people close to me. Since my caring mechanism is incapable of actually computing such a proportionality, the next best thing is to be consciously aware of how much it should care if it were able, and act accordingly.")

I care about self-consistency, but being self-consistent is something that must happen naturally; I can't self-consistently say "This feeling is self-inconsistent, therefore I will change this feeling to be self-consistent".

Comment author: AmagicalFishy 25 November 2014 01:12:50AM 0 points [-]

... Oh.

Hm. In that case, I think I'm still missing something fundamental.

Comment author: Decius 28 November 2014 06:11:40AM 0 points [-]

I care about self-consistency because an inconsistent self is very strong evidence that I'm doing something wrong.

It's not very likely that if I take the minimum steps to make the evidence of the error go away, I will make the error go away.

The general case of "find a self-inconsistency, make the minimum change to remove it" is not error-correcting.

Comment author: lalaithion 25 November 2014 05:11:07PM 0 points [-]

I actually think that your internal dialogue was a pretty accurate representation of what I was failing to say. And as for self consistency having to be natural, I agree, but if you're aware that you're being inconsistent, you can still alter your actions to try and correct for that fact.

Comment author: Decius 24 November 2014 11:58:13PM 0 points [-]

I look at a box of 100 bullets, and I know that it would be trivial for me to be in mortal danger from any one of them, but the box is perfectly safe.

It is trivial-ish for me to meet a trivial number of people and start to care about them, but it is certainly nontrivial to encounter a nontrivial number of people.

Comment author: dthunt 09 October 2014 05:49:56PM 1 point [-]

I would like to subscribe to your newsletter!

I've been frustrated recently by people not realizing that they are arguing that if you divide responsibility up until it's a very small quantity, then it just goes away.

Comment author: [deleted] 30 November 2014 10:05:13AM 0 points [-]

Sorry I was rude; I just know how it is to stand in the rain and try to get someone to do something painless for the greater good, and have them turn away for whatever reason.

On another point, here's a case study of lesser proportions.

Suppose you generally want to fight social injustice, save Our Planet, uphold peace, defend women's rights etc. (as many do when they have just begun deciding what to do with themselves). A friend subscribes you to an NGO for nature conservation, and you think it might be a good place to start, since you don't have much money to donate, you are vaguely afraid of catching a disease from poor people, and it's safer to explain to your parents anyway. (I'm not saying it is morally right; I'm saying it is common.)

You, like Paris, are presented with a choice. You can give your all (money, career, way of living) to one out of three goals, each one (considered by many other people) noble in its own right, and be quietly damned for not picking either of the others.

Your criteria of choice are: Beauty, Harmony and Kinship (again, this is just empirical - I've seen many people start with these). Note that each of them gives you an equally strong feeling of being in the right.

Beauty means you would protect flowers and birds and aesthetically pleasing things. It is easy to devise a way to target certain species if you know something about the threats to them. Let us say this is the species-oriented approach. Harmony means you would urge people to 'live green', educate masses about the value of the Earth, maybe rail against nuclear power plants and other clearly dangerous projects. This will be the people-oriented approach. Kinship means you would fight for abused animals (pets, victims of scientific experiments, circus animals, large mammals going extinct from poaching, etc.) This will be the problem-oriented approach (I know, lousy naming).

However, efficient nature conservation turns out to be quite different from your visions. Your priors turn out to be biases.

Because protecting single species (Beauty) almost always falls short of the objective, since the major cause of species extinction (at least in terrestrial ecosystems) is habitat destruction. Curiously, many beginners find it easier to invest effort into saving individual lives but not into ensuring there is a place for the organisms to live, propagate and disperse. A life (or often, one season of it) is something tangible; a possibility is not. (And it is statistically hard to fight against land-developing companies and win more than a season's delay before the habitat in question is razed to the ground. Also, the danger to the activist is proportional to his impact. I should think it is so for human-oriented initiatives, too. It's one thing to raise funds for cancer treatment; it's another to investigate illegal trade in human organs.)

Because pursuing Harmony mostly gets you to discuss misconceptions about conservation (the deeper you dig, the wilder they get), and protesting against power plants rarely succeeds at all. Not to mention that this way doesn't begin to cover the more common (and tawdry) threats to biodiversity.

Because Kinship is not about nature, it's about virtuousness and kindness.

In the end, you either shrug and say, 'I've tried' or specialize in some branch of ecology. Very few people start with science, but they are more likely to continue working. 'It is the good thing to do' is not a strong enough motivation for most. It's simply not efficient.

Comment author: lackofcheese 18 October 2014 08:19:42AM *  0 points [-]

I think there's some good points to be made about the care-o-meter as a heuristic.

Basically, let's say that the utility associated with altruistic effort has a term something like this:
U = [relative amount of impact I can have on the problem] * [absolute significance of the problem]

To some extent, one's care-o-meter is a measurement of the latter term, i.e. the "scope" of the problem, and the issue of scope insensitivity demonstrates that it fails miserably in this regard. However, that isn't entirely an accurate criticism, because as a rough heuristic your care-o-meter isn't simply a measure of the second term; it also includes some aspects of the first term. Indeed, if one views the care-o-meter as a "call to action", then it would make much more sense for it to be a heuristic estimate of U than of absolute problem significance.

For example, if your care-o-meter says you care more about your friends than about people far away, or don't care much more about large disasters than smaller ones, then any combination of three things could be going on:
(1) I can't have as much relative impact on those problems.
(2) Those problems are simply less important.
(3) My care-o-meter is simply wrong.

I don't agree at all with (2), and I can see a lot of merit in the suggestion of (3). However, I think that for most people through most of human history, (1) has been relatively applicable. If you, personally, are only capable of helping other people one at a time, then it doesn't really matter whether that person is a single person who has been hurt, or one out of a million suffering due to a major disaster. Also, you are in a unique position to help your friends more so than other people, and thus it makes plenty of sense to spend effort on your friends more so than on random strangers.
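
(A toy illustration of how point (1) can make nearby problems dominate under the U = [relative impact] * [significance] heuristic above; every number here is invented purely for the sketch.)

    # Toy numbers showing how a tiny relative impact on a huge problem can yield
    # a smaller U than a large relative impact on a small, nearby problem.
    problems = {
        "help a friend through a crisis": (0.5,  1.0),        # (relative impact, significance)
        "one victim of a local accident": (0.3,  1.0),
        "a famine far away, pre-charity": (1e-9, 1_000_000.0),
    }

    for name, (impact, significance) in problems.items():
        print(f"{name}: U = {impact * significance:.4f}")

With modern giving opportunities the relative-impact term for distant problems is nowhere near as small as it once was, which is the point about calibration in the following paragraphs.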

Of course, it is nonetheless true that this kind of care-o-meter miscalibration has always been an issue. At the very least, there have always been people who have had much more power than others, and thus have been able to make larger impacts on larger problems.

More importantly, in modern times (1) is far less true than it used to be for a great many people. It is genuinely possible for many people in the world to have a significant impact on what you refer to as distant invisible problems, and thus good care-o-meter calibration is essential.

Comment author: spatiality 08 October 2014 03:58:56PM *  0 points [-]

Thank you for this write-up; I really like how its structure actually manages to present the evolution of an idea. Agreeing with more or less of the content, I often find myself posing the question of whether I - and seven billion others - could save the world with my own, our own, hands. (I am beginning to see utilons even in my work as an artist, but that belongs in a wholly different post.) This is a question for the ones like me, not earning much, and - without further and serious reclusion, reinvention and reorientation - not going to earn much, ever: Do I a) maximise and donate the small amounts I receive now, b) maximise my future income while minimising donations for now, spending on self-improvement instead, and donate some highly uncertain, possibly huge sum in the future, or c) use my resources to directly change something now? Let's not make it an overly complex discussion, so feel free to message me instead of commenting.

Concerning Mother Teresa and other saints, I think we all know somebody who was an especially vociferous denier of her sanctity. I think it helps if I model myself as an instinctively selfish creature, and then go on and use my selfish instincts to push myself in a good direction. (I did this - on a small scale - with my smoking problem and told myself: Ok, so you wanna smoke, hm?? So go on and smoke - when you have won the next competition. So here's what I do whenever I feel the urge: Oh, I wanna smoke; oh, I can't, so how do I optimise my chance of smoking? Oh, I should go and work on my project.) I think this technique - however dark-sided and dangerous it may be - can be used to propel myself towards even bigger goals.