This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.

1

I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".

Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million Earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius… I would still just feel like that's a lot of Earths.

The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.

I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.

This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.

For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.

The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.

Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.

And this is a problem.

2

It's a common trope that courage isn't about being fearless, it's about being afraid but doing the right thing anyway. In the same sense, caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.

My internal care-o-meter was calibrated to deal with about a hundred and fifty people, and it simply can't express the amount of caring that I have for billions of sufferers. The internal care-o-meter just doesn't go up that high.

Humanity is playing for unimaginably high stakes. At the very least, there are billions of people suffering today. At the worst, there are quadrillions (or more) potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, the experience and art and beauty that is possible in the future, depend upon the present.

When you're faced with stakes like these, your internal caring heuristics — calibrated on numbers like "ten" or "twenty" — completely fail to grasp the gravity of the situation.

Saving a person's life feels great, and it would probably feel just about as good to save one life as it would feel to save the world. It surely wouldn't be many billion times more of a high to save the world, because your hardware can't express a feeling a billion times bigger than the feeling of saving a person's life. But even though the altruistic high from saving someone's life would be shockingly similar to the altruistic high from saving the world, always remember that behind those similar feelings there is a whole world of difference.

Our internal care-feelings are woefully inadequate for deciding how to act in a world with big problems.

3

There's a mental shift that happened to me when I first started internalizing scope insensitivity. It is a little difficult to articulate, so I'm going to start with a few stories.

Consider Alice, a software engineer at Amazon in Seattle. Once a month or so, college students will show up on street corners with clipboards, looking ever more disillusioned as they struggle to convince people to donate to Doctors Without Borders. Usually, Alice avoids eye contact and goes about her day, but this month they finally manage to corner her. They explain Doctors Without Borders, and she actually has to admit that it sounds like a pretty good cause. She ends up handing them $20 through a combination of guilt, social pressure, and altruism, and then rushes back to work. (Next month, when they show up again, she avoids eye contact.)

Now consider Bob, who has been given the Ice Bucket Challenge by a friend on Facebook. He feels too busy to do the Ice Bucket Challenge, and instead just donates $100 to the ALS Association (ALSA).

Now consider Christine, who is in the college sorority ΑΔΠ. ΑΔΠ is engaged in a competition with ΠΒΦ (another sorority) to see who can raise the most money for the National Breast Cancer Foundation in a week. Christine has a competitive spirit and throws herself into the fundraising, giving a few hundred dollars herself over the course of the week (especially at times when ΑΔΠ is falling behind).

All three of these people are donating money to charitable organizations… and that's great. But notice that there's something similar in these three stories: these donations are largely motivated by a social context. Alice feels obligation and social pressure. Bob feels social pressure and maybe a bit of camaraderie. Christine feels camaraderie and competitiveness. These are all fine motivations, but notice that these motivations are related to the social setting, and only tangentially to the content of the charitable donation.

If you asked Alice or Bob or Christine why they aren't donating all of their time and money to these causes that they apparently believe are worthwhile, they'd look at you funny and probably think you were being rude (with good reason!). If you pressed, they might tell you that money is a little tight right now, or that they would donate more if they were a better person.

But the question would still feel kind of wrong. Giving all your money away is just not what you do with money. We can all say out loud that people who give all their possessions away are really great, but behind closed doors we all know that those people are crazy. (Good crazy, perhaps, but crazy all the same.)

This is a mindset that I inhabited for a while. There's an alternative mindset that can hit you like a freight train when you start internalizing scope insensitivity.

4

Consider Daniel, a college student shortly after the Deepwater Horizon BP oil spill. He encounters one of those college students with clipboards on a street corner, soliciting donations to the World Wildlife Fund. They're trying to save as many oiled birds as possible. Normally, Daniel would simply dismiss the charity as Not The Most Important Thing, or Not Worth His Time Right Now, or Somebody Else's Problem, but this time Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.

He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can. They simply don't have the resources to clean all the oiled birds. A pathetic young bird flops towards his feet, slick with oil, eyes barely able to open. He kneels down to pick it up and help it onto the table. One of the bird-cleaners informs him that they won't have time to get to that bird themselves, but he could pull on some gloves and could probably save the bird with three minutes of washing.


Daniel decides that he would spend three minutes of his time to save the bird, and that he would also be happy to pay at least $3 to have someone else spend a few minutes cleaning the bird. He introspects and finds that this is not just because he imagined a bird right in front of him: he feels that it is worth at least three minutes of his time (or $3) to save an oiled bird in some vague platonic sense.

And, because he's been thinking about scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of birds: the internal feeling of caring can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about de-oiling lots of birds, he shuts up and multiplies.

Thousands and thousands of birds were oiled by the BP spill alone. After shutting up and multiplying, Daniel realizes (with growing horror) that the amount he actually cares about oiled birds is lower-bounded by two months of hard work and/or fifty thousand dollars. And that's not even counting wildlife threatened by other oil spills.
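(For concreteness, here is a minimal back-of-the-envelope sketch of that multiplication. The $3 and three-minute figures come from the story above; the bird count of roughly 17,000 is an illustrative assumption chosen to match "thousands and thousands" and the totals quoted here, not a figure from the essay.)

```python
# A rough sketch of Daniel's "shut up and multiply" step.
# The per-bird valuation comes from the story above; the bird count is
# an assumed order of magnitude, used only for illustration.

dollars_per_bird = 3       # Daniel's valuation of saving one oiled bird
minutes_per_bird = 3       # time needed to wash one bird
oiled_birds = 17_000       # assumed scale ("thousands and thousands")

total_dollars = oiled_birds * dollars_per_bird      # about $51,000
total_hours = oiled_birds * minutes_per_bird / 60   # about 850 hours

# At a punishing ~100-hour week, ~850 hours is roughly two months of hard work.
print(f"~${total_dollars:,} or ~{total_hours:.0f} hours of washing")
```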

And if he cares that much about de-oiling birds, then how much does he actually care about factory farming, never mind hunger, or poverty, or sickness? How much does he actually care about wars that ravage nations? About neglected, deprived children? About the future of humanity? He actually cares about these things to the tune of much more money than he has, and much more time than he has.

For the first time, Daniel sees a glimpse of how much he actually cares, and how poor a state the world is in.

This has the strange effect that Daniel's reasoning goes full-circle, and he realizes that he actually can't care about oiled birds to the tune of 3 minutes or $3: not because the birds aren't worth the time and money (and, in fact, he thinks that the economy produces things priced at $3 which are worth less than the bird's survival), but because he can't spend his time or money on saving the birds. The opportunity cost suddenly seems far too high: there is too much else to do! People are sick and starving and dying! The very future of our civilization is at stake!

Daniel doesn't wind up giving $50k to the WWF, and he also doesn't donate to ALSA or NBCF. But if you ask Daniel why he's not donating all his money, he won't look at you funny or think you're rude. He's left the place where you don't care far behind, and has realized that his mind was lying to him the whole time about the gravity of the real problems.

Now he realizes that he can't possibly do enough. After adjusting for his scope insensitivity (and the fact that his brain lies about the size of large numbers), even the "less important" causes like the WWF suddenly seem worthy of dedicating a life to. Wildlife destruction and ALS and breast cancer are suddenly all problems that he would move mountains to solve — except he's finally understood that there are just too many mountains, and ALS isn't the bottleneck, and AHHH HOW DID ALL THESE MOUNTAINS GET HERE?

In the original mindset, the reason he didn't drop everything to work on ALS was that it just didn't seem… pressing enough. Or tractable enough. Or important enough. Kind of. These are sort of the reason, but the real reason is more that the concept of "dropping everything to address ALS" never even crossed his mind as a real possibility. The idea was too much of a break from the standard narrative. It wasn't his problem.

In the new mindset, everything is his problem. The only reason he's not dropping everything to work on ALS is that there are far too many things to do first.

Alice and Bob and Christine usually aren't spending time solving all the world's problems because they forget to see them. If you remind them — put them in a social context where they remember how much they care (hopefully without guilt or pressure) — then they'll likely donate a little money.

By contrast, Daniel and others who have undergone the mental shift aren't spending time solving all the world's problems because there are just too many problems. (Daniel hopefully goes on to discover movements like effective altruism and starts contributing towards fixing the world's most pressing problems.)

5

I'm not trying to preach here about how to be a good person. You don't need to share my viewpoint to be a good person (obviously).

Rather, I'm trying to point at a shift in perspective. Many of us go through life understanding that we should care about people suffering far away from us, but failing to. I think that this attitude is tied, at least in part, to the fact that most of us implicitly trust our internal care-o-meters.

The "care feeling" isn't usually strong enough to compel us to frantically save everyone dying. So while we acknowledge that it would be virtuous to do more for the world, we think that we can't, because we weren't gifted with that virtuous extra-caring that prominent altruists must have.

But this is an error — prominent altruists aren't the people who have a larger care-o-meter, they're the people who have learned not to trust their care-o-meters.

Our care-o-meters are broken. They don't work on large numbers. Nobody has one capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.

You don't get to feel the appropriate amount of "care" in your body. Sorry — the world's problems are just too large, and your body is not built to respond appropriately to problems of this magnitude. But if you choose to do so, you can still act like the world's problems are as big as they are. You can stop trusting your internal feelings to guide your actions and switch over to manual control.

6

This, of course, leads us to the question of "what the hell do you do, then?"

And I don't really know yet. (Though I'll plug the Giving What We Can pledge, GiveWell, MIRI, and the Future of Humanity Institute as good starting points.)

I think that at least part of it comes from a certain sort of desperate perspective. It's not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world's 100th biggest problem if you could, but you can't, because there are 99 bigger problems you have to address first.

I'm not trying to guilt you into giving more money away — becoming a philanthropist is really really hard. (If you're already a philanthropist, then you have my acclaim and my affection.) First it requires you to have money, which is uncommon, and then it requires you to throw that money at distant invisible problems, which is not an easy sell to a human brain. Akrasia is a formidable enemy. And most importantly, guilt doesn't seem like a good long-term motivator: if you want to join the ranks of people saving the world, I would rather you join them proudly. There are many trials and tribulations ahead, and we'd do better to face them with our heads held high.

7

Courage isn't about being fearless, it's about being able to do the right thing even if you're afraid.

And similarly, addressing the major problems of our time isn't about feeling a strong compulsion to do so. It's about doing it anyway, even when internal compulsion utterly fails to capture the scope of the problems we face.

It's easy to look at especially virtuous people — Gandhi, Mother Teresa, Nelson Mandela — and conclude that they must have cared more than we do. But I don't think that's the case.

Nobody gets to comprehend the scope of these problems. The closest we can get is doing the multiplication: finding something we care about, putting a number on it, and multiplying. And then trusting the numbers more than we trust our feelings.

Because our feelings lie to us.

When you do the multiplication, you realize that addressing global poverty and building a brighter future deserve more resources than currently exist. There is not enough money, time, or effort in the world to do what we need to do.

There is only you, and me, and everyone else who is trying anyway.

8

You can't actually feel the weight of the world. The human mind is not capable of that feat.

But sometimes, you can catch a glimpse.

Comments
Shmi:

I agree with others that the post is very nice and clear, as most of your posts are. Upvoted for that. I just want to provide a perspective not often voiced here. My mind does not work the way yours does and I do not think I am a worse person than you because of that. I am not sure how common my thought process is on this forum.

Going section by section:

  1. I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know. I cannot bring myself to care (and I don't really want to) about a random person half-way around the world, except in the non-scalable general sense that "it is sad that bad stuff happens, be it to 1 person or to 1 billion people". I care about the humanity surviving and thriving, in the abstract, but I do not feel the connection between the current suffering and future thriving. (Actually, it's worse than that. I am not sure whether humanity existing, in Yvain's words, in a 10m x 10m x 10m box of computronium with billions of sims is much different from actually colonizing the observable universe (or the multiverse, as the case might be). But that's a different story, unrelated to

…
So8res:

I don't disagree, and I don't think you're a bad person, and my intent is not to guilt or pressure you. My intent is more to show some people that certain things that may feel impossible are not impossible. :-)

A few things, though:

No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.

This seems like a cop out to me. Given a bunch of people trying to help the world, it would be best for all of them to do the thing that they think most helps the world. Often, this will lead to diversity (not just because people have different ideas about what is good, but also because of diminishing marginal returns and saturation). Sometimes, it won't (e.g. after a syn bio proof of concept that kills 1/4 of the race I would hope that diversity in problem-selection would decrease). "It is best to diversify and hope" seems like a platitude that dodges the fun parts.

I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know.

I also ha...

Jiro:
There may be a group of people, such that it is possible for any one individual of the group to become my close friend, but where it is not possible for all the individuals to become my close friends simultaneously. In that case, saying "any individual could become a close friend, so I should multiply 'caring for one friend' by the number of individuals in the group" is wrong. Instead, I should multiply "caring for one friend" by the number of individuals in the group who can become my friend simultaneously, and not take into account the individuals in excess of that. In fact, even that may be too strong. It may be possible for one individual in the group to become my close friend only at the cost of reducing the closeness to my existing friends, in which case I should conclude that the total amount I care shouldn't increase at all.
lackofcheese:
The point is that the fact that someone happens to be your close friend seems like the wrong reason to care about them. Let's say, for example, that:

  1. If X was my close friend, I would care about X.
  2. If Y was my close friend, I would care about Y.
  3. X and Y could not both be close friends of mine simultaneously.

Why should whether I care for X or care for Y depend on which one I happen to end up being close friends with? Rather, why shouldn't I just care about both X and Y regardless of whether they are my close friends or not?
Jiro:
Perhaps I have a limited amount of caring available and I am only able to care for a certain number of people. If I tried to care for both X and Y I would go over my limit and would have to reduce the amount of caring for other people to make up for it. In fact, "only X or Y could be my close friend, but not both" may be an effect of that. It's not "they're my close friend, and that's the reason to care about them", it's "they're under my caring limit, and that allows me to care about them". "Is my close friend" is just another way to express "this person happened, by chance, to be added while I was still under my limit". There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don't affect their merit as a person, such as living closer to you). Of course, this sounds bad because of platitudes we like to say but never really mean. We like to say that our friends are special. They aren't; if you had lived somewhere else or had different random experiences, you'd have had different close friends.
Vaniver:
I think I would state a similar claim in a very different way. Friends are allies; both of us have implicitly agreed to reserve resources for the use of the other person in the friendship. (Resources are often as simple as 'time devoted to a common activity' or 'emotional availability.') Potential friends and friends might be indistinguishable to an outside observer, but to me (or them) there's an obvious difference in that a friend can expect to ask me for something and get it, and a potential friend can't. (Friendships in this view don't have to be symmetric- there are people that I'd listen to them complain that I don't expect they'd listen to me complain, and the reverse exists as well.) I think that it's reasonable to call facts 'special' relative to counterfacts- yes, I would have had different college friends if I had gone to a different college, but I did actually go to the college I went to, and actually did make the friends I did there.
lackofcheese:
That's a solid point, and to a significant extent I agree. There are quite a lot of things that people can spend these kinds of resources on that are very effective at a small scale. This is an entirely sufficient basis to justify the idea of friends, or indeed "allies", which is a more accurate term in this context. A network of local interconnections of such friends/allies who devote time and effort to one another is quite simply a highly efficient way to improve overall human well-being. This also leads to a very simple, unbiased moral justification for devoting resources to your close friends; it's simply that you, more so than other people, are in a unique position to affect the well-being of your friends, and vice versa. That kind of argument is also an entirely sufficient basis for some amount of "selfishness"--ceteris paribus, you yourself are in a better position to improve your own well-being than anyone else is. However, this is not the same thing as "caring" in the sense So8res is using the term; I think he's using the term more in the sense of "value". For the above reasons, you can value your friends equally to anyone else while still devoting more time and effort to them. In general, you're going to be better able to help your close friends than you are a random stranger on the street.
lackofcheese:
The way you put it, it seems like you want to care for both X and Y but are unable to. However, if that's the case then So8res's point carries, because the core argument in the post translates to "if you think you ought to care about both X and Y but find yourself unable to, then you can still try to act the way that you would if you did, in fact, care about both X and Y".
Jiro:
"I want to care for an arbitrarily chosen person from the set of X and Y" is not "I want to care for X and Y". It's "I want to care for X or Y".
Lumifer:
Why do you think so? It seems to me the fact that someone is my close friend is an excellent reason to care about her.
lackofcheese:
I think it depends on what you mean by "care". If you mean "devote time and effort to", sure; I completely agree that it makes a lot of sense to do this for your friends, and you can't do that for everyone. If you mean "value as a human being and desire their well-being", then I think it's not justifiable to afford special privilege in this regard to close friends.
Lumifer:
By "care" I mean allocating a considerably higher value to this particular human compared to a random one. Yes, I understand you do, but why do you think so?
lackofcheese:
I don't think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself. If anything it should be a function of the qualities or the nature of that person, or perhaps even blanket equality. If I believe that my friends are more valuable, it should be because of the qualities that led to them being my friend rather than simply the fact that they are my friends. However, if that's so then there are many, many other people in the world who have similar qualities but are not my friends.
Jiro:
I assume you would pay your own mortgage. Would you mind paying my mortgage as well?
lackofcheese:
I can't pay everyone's mortgage, and nor can anyone else, so different people will need to pay for different mortgages. Which approach works better, me paying my mortgage and you paying yours, or me paying your mortgage and you paying mine?
Jiro:
If you care equally for two people, your money should go to the one with the greatest need. It is very unlikely that in a country with many mortgage-payers, the person with the greatest need is you. So you should be paying down people's mortgages until the mortgages of everyone in the world leave them no worse than you with respect to mortgages; only then should you pay anything to yourself. And even if it's impractical to distribute your money to all mortgage payers in the world, surely you could find a specific mortgage payer who is so bad off that paying the mortgage of just this one person satisfies a greater need than paying off your own. But you don't. And you can't. And everyone doesn't and can't, not just for mortgages, but for, say, food or malaria nets. You don't send all your income above survival level to third-worlders who need malaria nets (or whatever other intervention people need the most); you don't care for them and yourself equally.
lackofcheese:
Yes, if I really ought to value other human beings equally then it means I ought to devote a significant amount of time and/or money to altruistic causes, but is that really such an absurd conclusion? Perhaps I don't do those things, but that doesn't mean I can't and it doesn't mean I shouldn't.
Jiro:
You can say either:

  1. You ought to value other human beings equally, but you don't.
  2. You do value other human beings equally, and you ought to act in accordance with that valuation, but you don't.

You appear to be claiming 2 and denying 1. However, I don't see a significant difference between 1 and 2; 1 and 2 result in exactly the same actions by you and it ends up just being a matter of semantics.
lackofcheese:
I agree; I don't see a significant difference between thinking that I ought to value other human beings equally but failing to do so, and actually viewing them equally and not acting accordingly. If I accept either (1) or (2) it's still a moral failure, and it is one that I should act to correct. In either case, what matters is the actions that I ought to take as a result (i.e. effective altruism), and I think the implications are the same in both cases. That being said, I guess the methods that I would use to correct the problem would be different in either hypothetical. If it's (1) then there may be ways of thinking about it that would result in a better valuation of other people, or perhaps to correct for the inaccuracy of the care-o-meter as per the original post. If it's (2), then the issue is one of akrasia, and there are plenty of psychological tools or rationalist techniques that could help. Of course, (1) and (2) aren't the only possibilities here; there's at least two more that are important.
Jiro:
You seem to be agreeing by not really agreeing. What does it even mean to say "I value other people equally but I don't act on that"? Your actions imply a valuation, and in that implied valuation you clearly value yourself more than other people. It's like saying "I prefer chocolate over vanilla ice cream, but if you give me them I'll always pick the vanilla". Then you don't really prefer chocolate over vanilla, because that's what it means to prefer something.
lackofcheese:
My actions alone don't necessarily imply a valuation, or at least not one that makes any sense. There are a few different levels at which one can talk about what it means to value something, and revealed preference is not the only one that makes sense.
hyporational:
Is this basically another way of saying that you're not the king of your brain, or something else?
lackofcheese:
That's one way to put it, yes.
elharo:
As usual, the word "better" hides a lot of relevant detail. Better for whom? By what measure? Shockingly, in at least some cases by some measures, though, it works better for us if I pay your debt and you pay my debt, because it is possible for a third party to get much, much better terms on repayment than the original borrower. In many cases, debts can be sold for pennies on the dollar to anyone except the original borrower. See any of these articles
Lumifer:
Ah. It seems we have been talking about somewhat different things. You are talking about the worth of a human being. I'm talking about my personal perception of the value of a human being under the assumption that other people can and usually do have different perceptions of the same value. I try not to pass judgement of the worth of humans, but I am quite content with assigning my personal values to people based, in part, on "their proximity and/or relation to myself".
lackofcheese:
I'm not entirely sure what a "personal perception of the value of a human being" is, as distinct from the value or worth of a human being. Surely the latter is what the former is about? Granted, I guess you could simply be talking about their instrumental value to yourself (e.g. "they make me happy"), but I don't think that's really the main thrust of what "caring" is.
Lumifer:
The "worth of a human being" implies that there is one, correct, "objective" value for that human being. We may not be able to observe it directly so we just estimate it, with some unavoidable noise and errors, but theoretically the estimates will converge to the "true" value. The worth of a human being is a function with one argument: that human being. The "personal perception of the value of a human being" implies that there are multiple, different, "subjective" values for the same human being. There is no single underlying value to which the estimates converge. The personal perception of a value is a function with two arguments: who is evaluated and who does the evaluation.
lackofcheese:
So, either there is such a thing as the "objective" value and hence, implicitly, you should seek to approach that value, or there is not. I don't see any reason to believe in an objective worth of this kind, but I don't really think it matters that much. If there is no single underlying value, then the act of assigning your own personal values to people is still the same thing as "passing judgement on the worth of humans", because it's the only thing those words could refer to; you can't avoid the issue simply by calling it a subjective matter. In my view, regardless of whether the value in question is "subjective" or "objective", I don't think it should be determined by the mere circumstance of whether I happened to meet that person or not.
Lumifer:
So, for example, you believe that to a mother the value of her own child should be similar to that of a random person anywhere on Earth -- right? It's a "mere circumstance" that this particular human happens to be her child.
lackofcheese:
Probably not just any random person, because one can reasonably argue that children should be valued more highly than adults. However, I do think that the mother should hold other peoples' children as being of equal value to her own. That doesn't mean valuing her own children less, it means valuing everyone else's more. Sure, it's not very realistic to expect this of people, but that doesn't mean they shouldn't try.
hyporational:
One can reasonably argue the other way too. New children are easier to make than new adults. Since she has finite resources, is there a practical difference? It seems to me extreme altruism is so easily abused that it will inevitably wipe itself out in the evolution of moral systems.
lackofcheese:
True. However, regardless of the relative value of children and adults, it is clear that one ought to devote significantly more time and effort to children than to adults, because they are incapable of supporting themselves and are necessarily in need of help from the rest of society. Earlier I specifically drew a distinction between devoting time and effort and valuation; you don't have to value your own children more to devote yourself to them and not to other peoples' children. That said, there are some practical differences. First of all, it may be better not to have children if you could do more to help other peoples' children. Secondly, if you do have children and still have spare resources over and above what it takes to properly care for them, then you should consider where those spare resources could be spent most effectively. If an extreme altruist recognises that taking such an extreme position would lead overall to less altruism in the future, and thus worse overall consequences, surely the right thing to do is stand up to that abuse. Besides, what exactly do you mean by "extreme altruism"?
hyporational:
A good point. By abuse I wouldn't necessarily mean anything blatant though, just that selfish people are happy to receive resources from selfless people. Valuing people equally by default when their instrumental value isn't considered. I hope I didn't misunderstand you. That's about as extreme as it gets, but I suppose you could get even more extreme by valuing other people more highly than yourself.
lackofcheese:
Sure, and there isn't really anything wrong with that as long as the person receiving the resources really needs them. The term "altruism" is often used to refer to the latter, so the clarification is necessary; I definitely don't agree with that extreme. In any case, it may not be reasonable to expect people (or yourself) to hold to that valuation, or to act in complete recognition of what that valuation implies even if they do, but it seems like the right standard to aim for. If you are likely biased against valuing distant strangers as much as you ought to, then it makes sense to correct for it.
kalium:

My view is similar to yours, but with the following addition:

I have actual obligations to my friends and family, and I care about them quite a bit. I also care to a lesser extent about the city and region that I live in. If I act as though I instead have overriding obligations to the third world, then I risk being unable to satisfy my more basic obligations. To me, if for instance I spend my surplus income on mosquito nets instead of saving it and then have some personal disaster that my friends and family help bail me out of (because they also have obligations to me), I've effectively stolen their money and spent it on something they wouldn't have chosen to spend it on. While I clearly have some leeway in these obligations and get to do some things other than save, charity falls into the same category as dinner out: I spend resources on it occasionally and enjoy or feel good about doing so, but it has to be kept strictly in check.

I feel like I'm somewhere halfway between you and so8res. I appreciate you sharing this perspective as well.

Richard_Kennaway:
Thank you for posting that. My views and feelings about this topic are largely the same. (There goes any chance of my being accepted for a CFAR workshop. :)) On the question of thousands versus gigantic numbers of future people, what I would value is the amount of space they explore, physical and experiential, rather than numbers. A single planetful of humans is worth almost the same as a galaxy of them, if it consists of the same range of cultures and individuals, duplicated in vast numbers. The only greater value in a larger population is the more extreme range of random outliers it makes available.
[anonymous]:
Thank you for stating your perspective and opinion so clearly and honestly. It is valuable. Now allow me to do the same, and to follow with a question (driven by sincere curiosity): I think you are. You are heartless. Here's my question, and I hope you take the time to answer as honestly as you wrote your comment: Why? After everything you've declined to care about, why in the world would you care about something as abstract as "humanity surviving and thriving"? It's just an ape species, and there have already been billions of them. In addition, you clearly don't care about numbers of individuals or quality of life. And you know the heat death of the universe will kill them all off anyway, if they survive the next few centuries. I don't mean to convince you otherwise, but it seems arbitrary - and surprisingly common - that someone who doesn't care about the suffering or lives of strangers would care about that one thing out of the blue.

I can't speak for shminux, of course, but caring about humanity surviving and thriving while not caring about the suffering or lives of strangers doesn't seem at all arbitrary or puzzling to me.

I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?

It doesn't seem at all arbitrary that I should care about something that affects me greatly more than something that affects me less. Does it seem that way to you?

[anonymous]:
Yes, rereading it, I think I misinterpreted response 2 as saying it doesn't matter whether a population of 1,000 people has a long future or a population of one googolplex [has an equally long future]. That is, that population scope doesn't matter, just durability and survival. I thought this defeated the usual Big Future argument. But even so, his 5 turns it around: Practically all people in the Big Future will be strangers, and if it is only "nicer" if they don't suffer (translation: their wellbeing doesn't really matter), then in what way would the Big Future matter? I care a lot about humanity's future, but primarily because of its impact on the total amount of positive and negative conscious experiences that it will cause.
Shmi:

...Slow deep breath... Ignore inflammatory and judgmental comments... Exhale slowly... Resist the urge to downvote... OK, I'm good.

First, as usual, TheOtherDave has already put it better than I could.

Maybe to elaborate just a bit.

First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous 'apres nous le deluge' attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and it is predicated on the humanity in general having one.

Second, a good life and a bright future for the people I care about do not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happens to them. Other people, including you, care a lot. Good for them.

Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people's lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it's up to them. Live and let live and such.

Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.

I'm actually having difficulty understanding the sentiment "I get annoyed at those who think their morals are better than mine". I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn't everyone think their morals are better than other people's?

That's the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don't really care and certainly don't think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don't, then of course I think my morality is better. Morals differ from tastes in that people believe that it's not just different, but WRONG to not follow them. If you remove that element from morality, what's left? The sentiment "I have these morals, but other people's morals are equally valid" sounds good, all egalitarian and such, but it doesn't make any sense to me. People judge the value of things through their moral system, and saying "System B is as good as System A, based on System A" is borderline nonsensical.

Also, as an aside, I think you should avoid rhetorical statements like "call me heartless if you like" if you're going to get this upset when someone actually does.

Lumifer:
I don't.
hyporational:
Would you make that a normative statement?
Lumifer:
Well, kinda-sorta. I don't think the subject is amenable to black-and-white thinking. I would consider people who think their personal morals are the very best there is to be deluded and dangerous. However I don't feel that people who think their morals are bad are to be admired and emulated either. There is some similarity to how smart do you consider yourself to be. Thinking yourself smarter than everyone else is no good. Thinking yourself stupid isn't good either.
hyporational:
So would you say that moral systems that don't think they're better than other moral systems are better than other moral systems? What happens if you know to profess the former kind of a moral system and agree with the whole statement? :)
Lumifer:
In one particular aspect, yes. There are many aspects. The barber shaves everyone who doesn't shave himself..? X-)
Weedlayer:
So if my morality tells me that murdering innocent people is good, then that's not worse than whatever your moral system is? I know it's possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism), I just never figured anyone actually believed it.
Lumifer:
You are confused between two very different statements: (1) I don't think that my morals are (always, necessarily) better than other people's. (2) I have no basis whatsoever for judging morality and/or behavior of other people.
Weedlayer:
What basis do you have for judging others' morality other than your own morality? And if you ARE using your own morality to judge their morality, aren't you really just checking for similarity to your own? I mean, it's the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I'm evaluating others' beliefs I basically compare them with how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison. Which of course is similar to the argument people sometimes bring up about "moral progress", claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs). My question though is: how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?
Lumifer:
No, I don't think so. Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems. When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you. Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However there's no reason to consider your own value system to be the very best there is, especially given that it's your conscious mind that makes such comparisons, but part of morality is submerged and usually unseen by the consciousness. Looking at an exact copy of your own morals you will evaluate them as just fine, but not necessarily perfect. Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.
Weedlayer:
This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I'll make two points and see if they move the conversation forward: 1: "There's no reason to consider your own value system to be the very best there is" This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren't the absolute best there is. The same logic holds true for morals. I know I'm making some mistakes, but I don't know where those mistakes are. On any individual issue, I think I'm right, and therefore logically if someone disagrees with me, I think they're wrong. This is what I mean by "thinking that one's own morals are the best". I know I might not be right on everything, but I think I'm right about every single issue, even the ones I might really be wrong about. After all, if I was wrong about something, and I was also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary. I have many beliefs I consider to be only approximations, which I consider to be only the best of any explanation I have heard so far. Not prefect, but "least wrong"). Which brings me to point 2. 2: "Also don't forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were." I'm absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I've been equivocating between the two, that's why). I know I can't alter my moral beliefs on a whim, but that's because I have no reason to want to. Consider self-modifying to want to murder innocents. I can't do this, primarily because I don't want to, and CAN'T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn't get a million dollars?) I suppose modifying instrumental values to terminal values (which morals are) to enhance mot
Lumifer:
That's already an excellent start :-) Ah. It seems we approach morals from a bit different angles. To you morals is somewhat like physics -- it's a system of "hard" facts and, generally speaking, they are either correct or not. As you say, "On any individual issue, I think I'm right, and therefore logically if someone disagrees with me, I think they're wrong." To me morals is more like preferences -- a system of flexible way to evaluate choices. You can have multiple ways to do that and they don't have to be either correct or not. Consider a simple example: eating meat. I am a carnivore and think that eating meat is absolutely fine from the morality point of view. Let's take Alice who is an ideological vegetarian. She feels that eating meat is morally wrong. My moral position different from (in fact, diametrically opposed to) Alice's, but I'm not going to say that Alice's morals are wrong. They are just different and she has full right to have her own. That does not apply to everything, of course. There are "zones" where I'm fine with opposite morals and there are "zones" where I am not. But even when I would not accept a sufficiently different morality I would hesitate to call it wrong. It seems an inappropriate word to use when there is no external, objective yardstick one could apply. It probably would be better to say that there is a range of values/morals that I consider acceptable and there is a range which I do not. No, I don't think so. Morals are values, not desires. It's not particularly common to wish to hold different values (I think), but I don't see why this is impossible. For example, consider somebody who values worldly success, winning, being at the top. But he has a side which isn't too happy with this constant drive, the trampling of everything in the rush to be the first, the sacrifices it requires. That side of his would prefer him to value success less. In general, people sometimes wish to radically change themselves (religious (de)conve
Weedlayer:
You do realize she's implicitly calling you complicit in the perpetuation of the suffering and deaths of millions of animals right? I'm having difficulty understanding how you can NOT say that her morality is wrong. Her ACTIONS are clearly unobjectionable (Eating plants is certainly not worse than eating meat under the vast majority of ethical systems) but her MORALITY is quite controversial. I have a feeling like you accept this case because she is not doing anything that violates your own moral system, while you are doing something that violates hers. To use a (possibly hyperbolic and offensive) analogy, this is similar to a case where a murderer calls the morals of someone who doesn't accept murder as "just different", and something they have the full right to have. I don't think your example works. He values success, AND he values other things (family, companionship, ect.) I'm not sure why you're calling different values "Different sides" as though they are separate agents. We all have values that occasionally conflict. I value a long life, even biological immortality if possible (I know, what am I doing on lesswrong with a value like that? /sarcasm), but I wouldn't sacrifice 1000 lives a day to keep me alive atop a golden throne. This doesn't seem like a case of my "Don't murder" side wanting me to value immortality less, it's more a case of considering the expected utility of my actions and coming to a conclusion about what collateral damage I'm willing to accept. It's a straight calculation, no value readjustment required. As for your last point, I've never experienced such a radical change (I was raised religiously, but outside of weekly mass my family never seemed to take it very seriously and I can't remember caring too much about it). I actually don't know what makes other people adopt ideologies. For me, I'm a utilitarian because it seems like a logical way to formalize my empathy and altruistic desires, and to this day I have difficulty grokking deont
Lumifer:
I think the terms "acceptable" and "not acceptable" are much better here than right and wrong. If the positions were reversed, I might find Alice's morality unacceptable to me, but I still wouldn't call it wrong. No, I'm not talking about different values here. Having different conflicting values is entirely normal and commonplace. I am here implicitly accepting the multi-agent theory of mind and saying that a part of Bob's (let's call the guy Bob) personality would like to change his values. It might even be a dominant part of Bob's conscious personality, but it still is having difficulty controlling his drive to win. Or let's take a different example, with social pressure. Ali Ababwa emigrated from Backwardistan to the United States. His original morality was that women are... let's say inferior. However Ali went to school in the US, got educated and somewhat assimilated. He understands -- consciously -- that his attitude towards women is neither adequate nor appropriate and moreover, his job made it clear to him that he ain't in Backwardistan any more and noticeable sexism will get him fired. And yet his morals do not change just because he would prefer them to change. Maybe they will, eventually, but it will take time. Sure, but do you accept that other people have?
hyporational:
I think akrasia could also be an issue of being mistaken about your beliefs, all of which you're not conscious of at any given time.
hyporational:
It's not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems. So while I wouldn't murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn't seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn't seem to provide any new information.
Weedlayer:
There's no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn't it be possible to judge most morality on the basis of these common features, making an argument like "wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing"? I think this is basically the point EY makes about the "psychological unity of humankind". Of course, this dream goes out the window with UFAI and aliens. Let's hope we don't have to deal with those.
Decius:
Yes, it should. However, in the hypothetical case involved, the reason is not true; the hypothetical brain does not have the quality "Has empathy and values survival and survival is impaired by murder". We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and "Has a moral system that prohibits murder" is a quality that successfully creates offspring that typically have the quality "Has a moral system that prohibits murder". The different quality "Commits wanton murder" is less successful at creating offspring in modern society, because convicted murderers don't get to teach children that committing wanton murder is something to do.
A1987dM:
I think those similarities are much less strong that EY appears to suggests; see e.g. “Typical Mind and Politics”.
gjm:

inflammatory and judgmental comments

It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to "call [you] heartless", you should not then complain of someone else's "inflammatory and judgmental comments" when they take you up on the offer.

And it doesn't seem to me that Hedonic_Treader's response was particularly thoughtless or disrespectful.

(For what it's worth, I don't think your comments indicate that you're heartless.)

pianoforte611:
It's interesting because people will often accuse a low status out group of "thinking they are better than everyone else" *. But I had never actually seen anyone actually claim that their ingroup is better than everyone else, the accusation was always made of straw .... until I saw Hedonic Treader's comment. I do sort of understand the attitude of the utilitarian EA's. If you really believe that everyone must value everyone else's life equally, then you'd be horrified by people's brazen lack of caring. It is quite literally like watching a serial killer casually talk about how many people they killed and finding it odd that other people are horrified. After all, each life you fail to save is essentially the same as a murder under utilitarianism. *I've seen people make this accusation against nerds, atheists, fedora wearers, feminists, left leaning persons, Christians etc
[-]gjm130

the accusation was always made of straw

I expect that's correct, but I'm not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:

  • A thinks her group is better than others.
  • A's thinking this is obvious enough for B to be able to discern it with some confidence.
  • A never explicitly says that her group is better than others.

and I think people who say (e.g.) that atheists think they're smarter than everyone else would claim that that's what's happening.

I repeat, I agree that these accusations are usually pretty strawy, but it's a slightly more complicated variety of straw than simply claiming that people have said things they haven't. More specifically, I think the usual situation is something like this:

  • A really does think that, to some extent and in some respects, her group is better than others.
  • But so does everyone else.
  • B imagines that he's discerned unusual or unreasonable opinions of this sort in A.
  • But really he hasn't; at most he's picked up on something that he could find anywhere if he chose to look.

[EDITED to add, for clarity:] By "But so does everyone else" I meant that (almost!) ever...

1CCC
I do imagine that the first situation is more common, in general, than the second. This is entirely because of the point "But so does everyone else." A group that everyone considers better than others must be a single group, and probably very small; this requirement therefore limits your second scenario to a very small pool of people, while I imagine that your first scenario is very common.
3gjm
Sorry, I wasn't clear enough. By "so does everyone else" I meant "everyone else considers the groups they belong to to be, to some extent and in some respects, better than others".
1CCC
Ah, that clarification certainly changes your post for the better. Thanks. In light of it, I do agree that the second scenario is common; but looking closely at it, I'm not sure that it's actually different to the first scenario. In both cases, A thinks her group is better; in both cases, B discerns that fact and calls excessive attention to it.
0A1987dM
Well, if I belong to the group of chocolate ice cream eaters, I do think that eating chocolate ice cream is better than eating vanilla ice cream -- by my standards; it doesn't follow that I also believe it's better by your standards or by objective standards (whatever they might be) and feel smug about it.
3gjm
Sure. Some things are near-universally understood to be subjective and personal. Preference in ice cream is one of them. Many others are less so, though; moral values, for instance. Some even less; opinions about apparently-factual matters such as whether there are any gods, for instance. (Even food preferences -- a thing so notoriously subjective that the very word "taste" is used in other contexts to indicate something subjective and personal -- can in fact give people that same sort of sense of superiority. I think mostly for reasons tied up with social status.)
-3[anonymous]
Perhaps to avoid confusion, my comment wasn't intended as an in-group/out-group thing or even as a statement about my own relative status. "Better than" and "worse than" are very simple relative judgments. If A rapes 5 victims a week and B rapes 6, A is a better person than B. If X donates 1% of his income potential to good charities and Y donates 2%, X is a worse person than Y (all else equal). It's a rather simple statement of relative moral status.

Here's the problem: if we pretend - like some in the rationalist community do - that all behavior is morally equivalent and all morals are equal, then there is no social incentive to behave prosocially when possible. Social feedback matters, and moral judgments have their legitimate place in any on-topic discourse. Finally, caring about not caring is self-defeating: one cannot logically judge judgmentalism without being judgmental oneself.
3Lumifer
That's a strawman. I haven't seen anyone say anything like that. What some people do say is that there is no objective standard by which to judge various moralities (that doesn't make them equal, by the way).

Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.

Why is that?
1A1987dM
What do you mean by “morality”? Were the incentives the Heartstone wearer was facing when deciding whether to kill the kitten about morality, or not?
2Lumifer
By morality I mean a particular part of somebody's system of values. Roughly speaking, morality is the socially relevant part of the value system (though that's not a hard definition, but rather a pointer to the area where you should search for it).
0hyporational
It seems self-termination was the most altruistic way of ending the discussion. A tad over the top, I think.
0Jiro
One can judge "judgmentalism on set A" without being "judgmental on set A" (while, of course, still being judgmental on set B).
6Bugmaster
You are saying that shminux is "a worse person than you" and also "heartless", but I am not sure what these words mean. How do you measure which person is better as compared to another person ? If the answer is, "whoever cares about more people is better", then all you're saying is, "shminux cares about fewer people because he cares about fewer people". This is true, but tautologically so.
0roryokane
All morals are axioms, not theorems, and thus all moral claims are tautological. Whatever morals we choose, we are driven to choose them by the morals we already have – the ones we were born with and raised to have. We did not get our morals from an objective external source. So no matter what your morals, if you condemn someone else by them, your condemnation will be tautological.
6lackofcheese
I don't agree. Yes, at some level there are basic moral claims that behave like axioms, but many moral claims are much more like theorems than axioms. Derived moral claims also depend upon factual information about the real world, and thus they can be false if they are based on incorrect beliefs about reality.
2Jiro
Then every human being in existence is heartless.
0CBHacking
I disagree. There are degrees of caring, and appropriate responses to them. Admittedly, "nice" is a term with no specific meaning, but most of us can probably put it on a relative ranking with other positive terms, such as "non-zero benefit" or "decent" (which I, and probably most people, would rank below "nice") and "excellent", "wonderful", "the best thing in the world" (in the hyperbolic "best thing I have in mind right now" sense), or "literally, after months of introspection, study, and multiplying, I find that this is the best thing which could possibly occur at this time"; I suspect most native English speakers would agree that those are stronger sentiments than "nice".

I can certainly think of things that are more important than merely "nice" yet less important than a reduction in death and suffering. For example, I would really like a Tesla car, with all the features. In the category of remotely-feasible things somebody could actually give me, I actually value that higher than there's any rational reason for. On the other hand, if somebody gave me the money for such a car, I wouldn't spend it on one... I don't actually need a car, in fact don't have a place for it, and there are much more valuable things I could do with that money. Donating it to some highly effective charity, for example.

Leaving aside the fact that "every human being in existence" appears to require excluding a number of people who really are devoting their lives to bringing about reductions in suffering and death, there are lots of people who would respond to a cessation of some cause of suffering or death more positively than to simply think it "nice". Maybe not proportionately more positively - as the post says, our care-o-meters don't scale that far - but there would still be a major difference. I don't know how common, in actual numbers, that reaction is vs. the "It would be nice" reaction (not to mention other possible reactions), but it is absolutely a significant number of people e...
0Jiro
Pretty much every human being in existence who thinks that stopping death and suffering is a good thing still spends resources on themselves and their loved ones beyond the bare minimum needed for survival. They could spend some money to buy poor Africans malaria nets, but have something which is not death or suffering which they consider more important than spending the money to alleviate death and suffering. In that sense, it's nice that death and suffering are alleviated, but that's all. "Not devoting their whole life towards stopping death and suffering" equates to "thinks something else is more important than stopping death and suffering".
0CBHacking
False dichotomy. You can have (many!) things which are more than merely "nice" yet less than the thing you spend all available resources on. To take a well-known public philanthropist as an example, are you seriously claiming that because he does not spend every cent he has eliminating malaria as fast as possible, Bill Gates' view on malaria eradication is that "it's nice that death and suffering are alleviated, but that's all"?

We should probably taboo the word "nice" here, since we seem likely to be operating on different definitions of it. To rephrase my second sentence of this post, then: you can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.

Also, your final sentence is not logically consistent. To show that a particular goal is the most important thing to you, you only need to devote more resources (including time) to it than to any other particular goal. If you allocate 49% of your resources to ending world poverty, 48% to being a billionaire playboy, and 3% to personal/private uses that are not strictly required for either of those goals, that is probably not the most efficient possible manner to allocate your resources, but there is nothing you value more than ending poverty (a major cause of suffering and death) even though it doesn't even consume a majority of your resources.

Of course, this assumes that the value of your resources is fixed wherever you spend them; in the real world, the marginal value of your investments (especially in things like medicine) goes down the more resources you pump into them in a given time frame; a better use might be to invest a large chunk of your resources into things that generate more resources, while providing as much towards your anti-suffering goals as they can efficiently use at once.
6gjm
Let's be a bit more concrete here. If you devote approximately half your resources to ending poverty and half to being a billionaire playboy, that means something like this: you value saving 10000 Africans' lives less than you value having a second yacht. I'm sure that second yacht is fun to have, but I think it's reasonable to categorize something that you value less than 1/10000 of the increment from "one yacht" to "two yachts" as no more important than "nice".

This is of course not a problem unique to billionaire playboys, but it's maybe a more acute problem for them; a psychologically equivalent luxury for an ordinarily rich person might be a second house costing $1M, which corresponds to 1/100 as many African lives and likely brings a bigger gain in personal utility; one for an ordinarily not-so-rich person might be a second car costing $10k, another 100x fewer dead Africans and (at least for some -- e.g., two-income families living in the US where getting around without a car can be a biiiig pain) a considerable gain in personal utility. There's still something kinda indecent about valuing your second car more than a person's life, but at least to my mind it's substantially less indecent than valuing your second megayacht more than 10000 people's lives.

Suppose I have a net worth of $1M and you have a net worth of $10B. Each of us chooses to devote half our resources to ending poverty and half to having fun. That means that I think $500k of fun-having is worth the same as $500k of poverty-ending, and you think $5B of fun-having is worth the same as $5B of poverty-ending. But $5B of poverty-ending is about 10,000 times more poverty-ending than $500k of poverty-ending -- but $5B of fun-having is nowhere near 10,000 times more fun than $500k of fun-having. (I doubt it's even 10x more.) So in this situation it is reasonable to say that you value poverty-ending much less, relative to fun-having, than I do.

Pedantic notes: I'm supposing that your second yacht cos...
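A rough sketch of the ratio arithmetic in the comment above, under loudly stated assumptions: lives saved are taken to scale linearly at roughly $2,000 per life (the ballpark figure used further down this thread), and personal "fun" is given a purely illustrative logarithmic utility curve. Neither number comes from the comment itself.

```python
import math

# Assumptions (not from the comment above): ~$2,000 saves one life via a
# top charity (the ballpark used later in this thread), and the "fun" from
# personal spending follows a toy logarithmic curve.
COST_PER_LIFE = 2_000

def lives_saved(donated):
    return donated / COST_PER_LIFE

def fun_score(spent):
    # Purely illustrative diminishing-returns model of personal enjoyment.
    return math.log(spent)

for net_worth in (1_000_000, 10_000_000_000):
    half = net_worth / 2  # each donor splits 50/50 between fun and poverty-ending
    print(f"net worth ${net_worth:>14,.0f}: "
          f"~{lives_saved(half):>12,.0f} lives vs. fun score {fun_score(half):.1f}")

# The billionaire's half is ~10,000x more poverty-ending than the
# millionaire's half (2,500,000 vs. 250 lives), but the toy fun score only
# rises from ~13.1 to ~22.3 -- nowhere near 10,000x -- which is the
# asymmetry the comment is pointing at.
```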
1Jiro
Thank you, that's what I would have said.
0Richard_Kennaway
What about the argument from marginal effectiveness? I.e. unless the best thing for you to work on is so small that your contribution reduces its marginal effectiveness below that of the second-best thing, you should devote all of your resources to the best thing. I don't myself act on the conclusion, but I also don't see a flaw in the argument.
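A minimal sketch of that marginal-effectiveness argument, with entirely hypothetical numbers and charity names; the greedy "give the next chunk to whichever cause currently has the highest marginal value" rule is one standard reading of the argument, not something Richard specifies.

```python
# Toy illustration of the marginal-effectiveness argument above, with
# made-up numbers: value per dollar declines as a cause gets funded, and a
# greedy allocator gives each chunk of money to whichever cause currently
# offers the most value.

causes = {
    # name: (value per $ when unfunded, funding level at which it saturates)
    "best charity":        (1.0, 50_000_000),
    "second-best charity": (0.6, 50_000_000),
}
funded = {name: 0.0 for name in causes}

def marginal_value(name):
    base, saturation = causes[name]
    return base * max(0.0, 1 - funded[name] / saturation)

budget, chunk = 10_000, 100  # one donor's budget, allocated $100 at a time
for _ in range(budget // chunk):
    best = max(causes, key=marginal_value)
    funded[best] += chunk

print(funded)
# An individual budget is far too small to push the best cause's marginal
# value below the second-best's, so every chunk lands on the best cause --
# which is the conclusion of the argument above.
```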
4pianoforte611
This is exactly how I feel. I would slightly amend 1 to "I care about family, friends, some other people I know, and some other people I don't know but I have some other connection to". For example, I care about people who are where I was several years ago and I'll offer them help if we cross paths - there are TDT reasons for this. Are they the "best" people for me to help on utilitarian grounds? No, and so what?
1ShardPhoenix
Personally I see EA* as kind of a dangerous delusion, basically people being talked into doing something stupid (in the sense that they're probably moving away from maximizing their own true utility function, to the extent that such a thing exists). When I hear about someone giving away 50% of their income when they're only middle class to begin with, I feel more pity than admiration.

* Meaning the extreme, "all human lives are equally valuable to me" version, rather than just a desire to not waste charity money.
3leplen
I don't understand this. Why should my utility function value me having a large income or having a large amount of money? What does that get me? I don't have a good logical reason for why my life is a lot more valuable than anyone else's. I have a lot more information about how to effectively direct resources into improving my own life vs. improving the lives of others, but I can't come up with a good reason to have a dominantly large "Life of leplen" term in my utility function.

Much of the data suggests that happiness/life quality isn't well correlated with income above a certain income range and that one of the primary purposes of large disposable incomes is status signalling. If I have cheaper ways of signalling high social status, why wouldn't I direct resources into preserving/improving the lives of people who get much better life quality/dollar returns than I do? It doesn't seem efficient to keep investing in myself for little to no return.

I wouldn't feel comfortable winning a 500 dollar door prize in a drawing where half the people in the room were subsistence farmers. I'd probably tear up my ticket and give someone else a shot to win. From my perspective, just because I won the lottery on birth location and/or abilities doesn't mean I'm entitled to hundreds of times as many resources as someone else who may be more deserving but less lucky.

With that being said, I certainly don't give anywhere near half of my income to charity, and it's possible the values I actually live may be closer to what you describe than the situation I outline. I'm not sure, and not sure how it changes my argument.
0ShardPhoenix
Sounds like you answered your own question! (It's one thing to have some simplistic far-mode argument about how this or that doesn't matter, or how we should sacrifice ourselves for others, but the near-mode nitty-gritty of the real world is another thing.)

I accept all the arguments for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I'm in Daniel's position up through chunk 4, and reach the state of mind where

everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.

and find it literally unbearable. All of a sudden, it's clear that to be a good person is to accept the weight of the world on your shoulders. This is where my path diverges; EA says "OK, then, that's what I'll do, as best I can"; from my perspective, it's swallowing the bullet. At this point, your modus ponens is my modus tollens; I can't deal with what the argument would require of me, so I reject the premise. I concluded that I am not a good person and won't be for the foreseeable future, and limited myself to the weight of my chosen community and narrowly-defined ingroup.

I don't think you're wrong to try to convert people to EA. It does bear remembering, though, that not everyone is equipped to deal with this outlook, and some people will find that trying to shut up and multiply is lastingly unpleasant, such that an altruistic outlook becomes significantly aversive.

This is why I prefer to frame EA as something exciting, not burdensome.

Exciting vs. burdensome seems to be a matter of how you think about success and failure. If you think "we can actually make things better!", it's exciting. If you think "if you haven't succeeded immediately, it's all your fault", it's burdensome.

This just might have more general application.

1Capla
If I'm working at my capacity, I don't see how it's my fault for not having the world fixed immediately. I can't do any more than I can do and I don't see how I'm responsible for more than what my efforts could change.
0[anonymous]
From my perspective, it's "I have to think about all the problems in the world and care about them." That's burdensome. So instead I look vaguely around for 100% solutions to these problems, things where I don't actually need to think about people currently suffering (as I would in order to determine how effective incremental solutions are), things sufficiently nebulous and far-in-the-future that I don't have to worry about connecting them to people starving in distant lands.
3John_Maxwell
Do we have any data on which EA pitches tend to be most effective?
1VAuroch
I've read that. It's definitely been the best argument for convincing me to try EA that I've encountered. Not convincing, currently, but more convincing than anything else.
7NancyLebovitz
I've seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.
5Richard_Kennaway
Once you've decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money? Peter Singer, to take one prominent example, argues that whether you do or not (and most people do), morally you cannot. To buy an expensive pair of shoes (he says) is morally equivalent to killing a child. Yvain has humorously suggested measuring sums of money in dead babies. At least, I think he was being humorous, but he might at the same time be deadly serious.
4Lumifer
I always find it curious how people forget that equality is symmetrical and works in both directions. So, killing a child is morally equivalent to buying an expensive pair of shoes? That's interesting...
9A1987dM
See also http://xkcd.com/1035/, last panel. One man's modus ponens... I don't lose much sleep when I hear that a child I had never heard of before was killed.
1Richard_Kennaway
No, except by interpreting the words "morally equivalent" in that sentence in a way that nobody does, including Peter Singer. Most people, including Peter Singer, think of a pair of good shoes (or perhaps the comparison was to an expensive suit, it doesn't matter) as something nice to have, and the death of a child as a tragedy. These two values are not being equated. Singer is drawing attention to the causal connection between spending your money on the first and not spending it on the second. This makes buying the shoes a very bad thing to do: its value is that of (a nice thing) - (a really good thing); saving the child has the value (a really good thing) - (a nice thing). The only symmetry here is that of "equal and opposite". Did anyone actually need that spelled out?
3Lumifer
These verbal contortions do not look convincing. The claimed moral equivalence is between buying shoes and killing -- not saving -- a child. It's also claimed equivalence between actions, not between values.
5[anonymous]
A lot of people around here see little difference between actively murdering someone and standing by while someone is killed when we could easily save them. This runs contrary to the general societal views that say it's much worse to kill someone by your own hand than to let them die without interfering. Or even if you interfere, but your interference is sufficiently removed from the actual death.

For instance, what do you think George Bush Sr's worst action was? A war? No; he enacted an embargo against Iraq that extended over a decade and restricted basic medical supplies from going into the country. The infant mortality rate jumped up to 25% during that period, and other people didn't fare much better. And yet few people would think an embargo makes Bush more evil than the killers at Columbine.

This is utterly bizarre on many levels, but I'm grateful too -- I can avoid thinking of myself as a bad person for not donating any appreciable amount of money to charity, when I could easily pay to cure a thousand people of malaria per year.
[-]gjm160

When you ask how bad an action is, you can mean (at least) two different things.

  • How much harm does it do?
  • How strongly does it indicate that the person who did it is likely to do other bad things in future?

Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn't some specific person who's dying. So actually killing someone is "worse", if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there's no difference in harm done.

In some contexts this sort of character evaluation really is what you care about. If you want to know whether someone's going to be safe and enjoyable company if you have a drink with them, you probably do prefer someone who'd put in place an embargo that kills millions rather than someone who would shoot dozens of schoolchildren.

That's perfectly consistent with (1) saying that in terms of actual harm done spending money on yourself rather than giving it to effective charities is as bad as killing people, and (2) attempting to choose one's own actions on the basis of harm done rather than evidence of character.

0[anonymous]
But this recurses until all the leaf nodes are "how much harm does it do?" so it's exactly equivalent to how much harm we expect this person to inflict over the course of their lives. By the same token, it's easier to kill people far away and indirectly than up close and personal, so someone using indirect means and killing lots of people will continue to have an easy time killing more people indirectly. So this doesn't change the analysis that the embargo was ten thousand times worse than the school shooting.
2gjm
For an idealized consequentialist, yes. However, most of us find that our moral intuitions are not those of an idealized consequentialist. (They might be some sort of evolution-computed approximation to something slightly resembling idealized consequentialism.)

That depends on the opportunities the person in question has to engage in similar indirectly harmful behaviour. GHWB is no longer in a position to cause millions of deaths by putting embargoes in place, after all.

For the avoidance of doubt, I'm not saying any of this in order to deny (1) that the embargo was a more harmful action than the Columbine massacre, or (2) that the sort of consequentialism frequently advocated (or assumed) on LW leads to the conclusion that the embargo was a more harmful action than the Columbine massacre. (It isn't perfectly clear to me whether you think 1, or think 2-but-not-1 and are using this partly as an argument against full-on consequentialism.)

But if the question is "who is more evil, GHWB or the Columbine killers?", the answer depends on what you mean by "evil", and most people most of the time don't mean "causing harm"; they mean something they probably couldn't express in words but that probably ends up being close to "having personality traits that in our environment of evolutionary adaptedness correlate with being dangerous to be closely involved with" -- which would include, e.g., a tendency to respond to (real or imagined) slights with extreme violence, but probably wouldn't include a tendency to callousness when dealing with the lives of strangers thousands of miles away.
0dthunt
Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive. I'm somewhat persuaded by arguments that choices not made, which have consequences, like X preventably dying, can have moral costs. Not INFINITELY EXPLODING costs, which is what you need in order to experience the full brunt of responsibility of "We are the last two people alive, and you're dying right in front of me, and I could help you, but I'm not going to." when deciding to buy shoes or not, when there are 7 billion of us, and you're actually dying over there, and someone closer to you is not helping you.
9tog
In case anyone else was curious about this, here's a quote: Oops.
0pianoforte611
Under utilitarianism, every instance of buying an expensive pair of shoes is the same as killing a child, but not every case of killing a child is equivalent to buying an expensive pair of shoes.
0Lumifer
Are some cases of killing a child equivalent to buying expensive shoes?
0gjm
Those in which the way you kill the child is by spending money on luxuries rather than saving the child's life with it.
0Lumifer
Do elaborate. How exactly does that work? For example, I have some photographic equipment. When I bought, say, a camera, did I personally kill a child by doing this?
6gjm
(I have the impression that you're pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we're discussing. But I'm going to take what you say at face value anyway.)

The context here is the idea (stated forcefully by Peter Singer, but he's by no means the first) that you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things, and that spending money on luxuries is ipso facto choosing not to give it to effective charities. In which case: if you spent, say, $2000 on a camera (some cameras are much cheaper, some much more expensive) then that's comparable to the estimated cost of saving one life in Africa by donating to one of the most effective charities. In which case, by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died. (Not necessarily specifically a child. It may be more expensive to save children's lives, in which case it would need to be a more expensive camera.) Of course there isn't a specific child you have killed all by yourself personally, but no one suggested there is.

So, that was the original claim that Richard Kennaway described. Your objection to this wasn't to argue with the moral principles involved but to suggest that there's a symmetry problem: that "killing a child is morally equivalent to buying an expensive luxury" is less plausible than "buying an expensive luxury is morally equivalent to killing a child". Well, of course there is a genuine asymmetry there, because there are some quantifiers lurking behind those sentences. (Singer's claim is something like "for all expensive luxury purchases, there exists a morally equivalent case of killing a child"; your proposed reversal is something like "for all cases of killing a child, there exists a morally equivalent case of buy...
2Lumifer
Nope. I express my rhetorical contempt in, um, more obvious ways. It's not exactly that I don't understand, it's rather that I see multiple ways of proceeding and I don't know which one you have in mind (you, of course, do).

By the way, as a preface I should point out that we are not discussing "right" and "wrong" which, I feel, are anti-useful terms in this discussion. Morals are value systems and they are not coherent in humans. We're talking mostly about implications of certain moral positions and how they might or might not conflict with other values.

Yes, I accept that.

Not quite. I don't think you can make a causal chain there. You can make a probabilistic chain of expectations with a lot of uncertainty in it. Averages are not equal to specific actions -- for a hypothetical example, choosing a lifestyle which involves enough driving so that in 10 years you drive the average amount of miles per traffic fatality does not mean you kill someone every 10 years. However in this thread I didn't focus on that issue -- for the purposes of this argument I accepted the thesis and looked into its implications.

Correct. It's not an issue of plausibility. It's an issue of bringing to the forefront the connotations and value conflicts. Singer goes for shock value by putting an equals sign between what is commonly considered heinous and what's commonly considered normal. He does this to make the normal look (more) heinous, but you can reduce the gap from both directions -- making the heinous more normal works just as well.

I am not exactly proposing it, I am pointing out that the weaker form of this reversal (for some cases) logically follows from Singer's proposition, and if you don't think it does, I would like to know why it doesn't.

Well, to accept the Singer position means that you kill a child every time you spend the appropriate amount of money (and I don't see what "luxuries" have to do with it -- you kill children by failing to max out your credit car...
3gjm
Taking that position conveniently gets one out of having to see buying a TV as equivalent to letting a child die -- but I don't see how it's a coherent one. (Especially if, as seems to be the case, you agree with the Singerian position that you're as responsible for the consequences of your inactions as of your actions.)

Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1/5, so that on average 20 children die but no particular child will definitely die. (Perhaps what it does is to increase their chances of dying in some fashion, so that even the ones that do die can't be known to be the result of your action.) Which do you prefer? I say the first is clearly better, even though it might be more unpleasant to contemplate. On average, and the large majority of the time, it results in fewer deaths. In which case, taking an action (or inaction) that results in the second is surely no improvement on taking an action (or inaction) that results in the first.

Incidentally, I'm happy to bite the bullet on the driving example. Every mile I drive incurs some small but non-zero risk of killing someone, and what I am doing is trading off the danger to them (and to me) against the convenience of driving. As it happens, the risk is fairly small, and behind a Rawlsian veil of ignorance I'm content to choose a world in which people drive as much as I do rather than one in which there's much less driving, much more inconvenience, and fewer deaths on the road. (I'll add that I don't drive very much, and drive quite carefully.)

I think that when you come at it from that direction, what you're doing is making explicit how little most people care in practice about the suffering and death of strangers far away. Which is fair enough, but my impression is that most thoughtful people who encounter the Singerian argument have (precisely by being confronted with it) already seen tha...
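A quick numerical check of the hypothetical above, simulating the probabilistic option; the simulation itself is just an illustration, not part of the original comment.

```python
import random

# The two options from the hypothetical above.
certain_deaths = 10
expected_probabilistic_deaths = 100 * (1 / 5)   # = 20.0 on average

random.seed(0)

def probabilistic_deaths():
    # Each of 100 children dies independently with probability 1/5.
    return sum(random.random() < 0.2 for _ in range(100))

trials = [probabilistic_deaths() for _ in range(100_000)]
mean = sum(trials) / len(trials)
worse_than_certain = sum(t > certain_deaths for t in trials) / len(trials)

print(expected_probabilistic_deaths)  # 20.0
print(round(mean, 2))                 # ~20, matching the expectation
print(round(worse_than_certain, 3))   # ~0.99: the probabilistic option kills
                                      # more than 10 children the large
                                      # majority of the time, as the comment says
```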
2Lumifer
I said upfront that human morality is not coherent. However I think that the root issue here is whether you can do morality math. You're saying you can -- take the suffering of one person, multiply it by a thousand and you have a moral force that's a thousand times greater! And we can conveniently think of it as a number, abstracting away the details.

I'm saying morality math doesn't work, at least not by normal math rules. "A single death is a tragedy; a million deaths is a statistic" -- you may not like the sentiment, but it is a correct description of human morality.

Let me illustrate. First, a simple example of values/preferences math not working (note: it's not a seed of a new morality-math theory, it's just an example). Imagine yourself as an interior decorator and me as a client.

You: Welcome to Optimal Interior Decorating! How can I help you?
I: I would like to redecorate my flat and would like some help in picking a colour scheme.
You: Very well. What is your name?
I: Lumifer!
You: What is your quest?
I: To find out if strange women lyin' in ponds distributin' swords are a proper basis for a system of government!
You: What is your favourite colour?
I: Purple!
You: Excellent. We will paint everything in your flat purple.
I: Errr...
You: Please show me your preferred shade of purple so that we can paint everything in this particular colour and thus maximize your happiness.

And now back to the serious matters of death and dismemberment. You offered me a hypothetical: let me also suggest one for you. You're in a boat, somewhere offshore. Another boat comes by and it's skippered by Joker, relaxing from his tussles with Batman. He notices you and cries: "Hey! I've got an offer for you!" Joker's offer looks as follows. Some time ago he put a bomb with a timer under a children's orphanage. He can switch off the bomb with a radio signal, but if he doesn't, the bomb will go off (say, in a couple of hours) and many dozens of children will be killed...
0lackofcheese
Accounting for possible failure modes and the potential effects of those failure modes is a crucial part of any correctly done "morality math". Granted, people can't really be relied upon to actually do it right, and it may not be a good idea to "shut up and multiply" if you can expect to get it wrong... but then failing to shut up and multiply can also have significant consequences. The worst thing you can do with morality math is to only use it when it seems convenient to you, and ignore it otherwise. However, none of this talk of failure modes represents a solid counterargument to Singer's main point. I agree with you that there is no strict moral equivalence to killing a child, but I don't think it matters. The point still holds that by buying luxury goods you bear moral responsibility for failing to save children who you could (and should) have saved.
0A1987dM
Now that the funding gap of the AMF has closed, I'm not sure this is still the case.
3gjm
Yeah, I wondered about adding a note to that effect. But it seems unlikely to me that the AMF is that much more effective than everything else out there. Maybe it's $4000 now. Maybe it always was $4000. Or $1000. I don't think the exact numbers are very critical.
0Capla
Then tell me where I can most cheaply save a life.
0A1987dM
I don't know, and I wouldn't be surprised if there's no way to reliably do it with less than $5000.
0William_Quixote
Presumably if you stole a child's lunch money and bought a pair of shoes with it.
1tog
(Replying to NancyLebovitz and RichardKennaway:) Richard's question is a good one, but even if there's no good answer, it's a psychological fact that people can get convinced that they should redirect their existing donations to cost-effective charities but not that charity should crowd out other spending - and that this is an easier sell. So the framing of EA that Nancy describes has practical value.
0Dentin
The biggest problem I have with 'dead baby' arguments is that I value them significantly below the value of a high functioning adult. Given the opportunity to save one or the other, I would pick the adult, and I don't find that babies have a whole lot of intrinsic value until they're properly programmed.
0NancyLebovitz
If you don't take care of babies, you'll eventually run out of adults. If you don't have adults, the babies won't be taken care of. I don't know what a balanced approach to the problem would look like.
1VAuroch
I'm not sure why one would optimize one's charitable donations for QALYs/utilons if one's goal wasn't improving the world. If you care about acquiring warm fuzzies, and donating to marginally improve the world is a means toward that end, then EA doesn't seem to affect you much, except by potentially guilting you into no longer considering lesser causes virtuous in the sense that creates warm fuzzies for you.
3hyporational
For me the idea of EA just made those lesser causes not generate fuzzies anymore, no guilt involved. It's difficult to enjoy a delusion you're conscious of.
4torekp
Understanding the emotional pain of others, on a non-verbal level, can lead in at least two directions, which I've usually seen called "sympathy" and "personal distress" in the psych literature. Personal distress involves seeing the problem as (primarily, or at least importantly) one's own. Sympathy involves seeing it as that person's. Some people, including Albert Schweitzer, claim(ed) to be able to feel sympathy without significant personal distress, and as far as I can see that seems to be true. Being more like them strikes me as a worthwhile (sub)goal. (Until I get there, if ever - I feel your pain. Sorry, couldn't resist.)

Hey, I just realized - if you can master that, and then apply the sympathy-without-personal-distress trick to yourself as well, that looks like it would achieve one of the aims of Buddhism.
0Said Achmiz
If you do this, would not the result be that you do not feel distress from your own misfortunes? And if you don't feel distress, what, exactly, is there to sympathize with? Wouldn't you just shrug and dismiss the misfortune as irrelevant?
6hyporational
If you could switch off pain at will would you consider the tissue damage caused by burning yourself irrelevant?
3Said Achmiz
I would not. This is a fair point. Follow-up question: are all things that we consider misfortunes similar to the "burn yourself" situation, in that there is some sort of "damage" that is part of what makes the misfortune bad, separately from and additionally to the distress/discomfort/pain involved?
3CCC
Consider a possible invention called a neuronic whip (taken from Asimov's Foundation series). The neuronic whip, when fired at someone, does no direct damage but triggers all of the "pain" nerves at a given intensity. Assume that Jim is hit by a neuronic whip, briefly and at low intensity. There is no damage, but there is pain. Because there is pain, Jim would almost certainly consider this a misfortune, and would prefer that it had not happened; yet there is no damage. So, considering this counterexample, I'd say that no, not every possible misfortune includes damage. Though I imagine that most do.
1Lumifer
No need for sci-fi.
0hyporational
Much of what could be called damage in this context wouldn't necessarily happen within your body, you can take damage to your reputation for example. You can certainly be deluded about receiving damage especially in the social game.
0CCC
That is true; but it's enough to create a single counterexample, so I can simply specify the neuronic whip being used under circumstances where there is no social damage (e.g. the neuronic whip was discharged accidentally, and no-one knew Jim was there to be hit by it).
0hyporational
Yes. I didn't mean to refute your idea in any way and quite liked it. Forgot to upvote it though. I merely wanted to add a real world example.
0torekp
Let's say you cut your finger while chopping vegetables. If you don't feel distress, you still feel the pain. But probably less pain: the CNS contains a lot of feedback loops affecting how pain is felt. For example, see this story from Scientific American. So sympathize with whatever relatively-attitude-independent problem remains, and act upon that. Even if there would be no pain and just tissue damage, as hyporational suggests, that could be sufficient for action.
-2VAuroch
Huh, that sounds like the sympathy/empathy split, except I think reversed; empathy is feeling pain from others' distress vs. sympathy is understanding others' pain as it reflects your own distress. Specifically mitigating 'feeling pain from others' distress' as applied to a broad sphere of 'others' has been a significant part of my turn away from an altruistic outlook; this wasn't hard, since human brains naturally discount distant people and I already preferred getting news through text, which keeps distant people's distress viscerally distant.
2Gunnar_Zarncke
But you don't have to bear it alone. It's not as if one person has to care about everything (nor does each single person have to care for all). Maybe the multiplication (in the example, the care for a single bird multiplied by the number of birds) should be followed by a division by the number of persons available to do the caring (possibly adjusted by the expected amount of individual caring).
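A minimal sketch of that multiply-then-divide idea, using the $3-per-bird figure mentioned elsewhere in the thread and hypothetical counts for the birds affected and the people available to share the caring.

```python
# Made-up numbers: the $3-per-bird figure mentioned elsewhere in the thread,
# plus hypothetical counts of affected birds and of people willing to help.
care_per_bird = 3            # dollars one person would spend on one oiled bird
birds_affected = 200_000     # hypothetical scale of the disaster
people_who_care = 50_000     # hypothetical number of people sharing the burden

total_care_needed = care_per_bird * birds_affected        # the "multiply" step
share_per_person = total_care_needed / people_who_care    # the "divide" step

print(f"total: ${total_care_needed:,}  your share: ${share_per_person:,.2f}")
# total: $600,000  your share: $12.00 -- the multiplied number is huge,
# but no single person is being asked to carry it alone.
```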
2VAuroch
Intellectually, I know that you are right; I can take on some of the weight while sharing it. Intuitively, though, I have impossibly high standards, for myself and for everything else. For anyone I take responsibility for caring for, I have the strong intuition that if I was really trying, all their problems would be fixed, and that they have persisting problems means that I am inherently inadequate. This is false. I know it is false. Nonetheless, even at the mild scales I do permit myself to care about, it causes me significant emotional distress, and for the sake of my sanity I can't let it expand to a wider sphere, at least not until I am a) more emotionally durable and b) more demonstrably competent. Or in short, blur out the details and this is me:
0AnthonyC
Also, I forget which post (or maybe HPMOR chapter) I got this from, but... it is not useful to assign fault to a part of the system you cannot change, and dividing by the size of the pre-existing altruist (let alone EA) community still leaves things feeling pretty huge.
2dthunt
Having a keen sense for problems that exist, and wanting to demolish them and fix the place from which they spring is not an instinct to quash. That it causes you emotional distress IS a problem, insofar as you have the ability to perceive and want to fix the problems in absence of the distress. You can test that by finding something you viscerally do not care for and seeing how well your problem-finder works on it; if it's working fine, the emotional reaction is not helpful, and fixing it will make you feel better, and it won't come at the cost of smashing your instincts to fix the world.
1dthunt
It's Harry talking about Blame, chapter 90. (It's not very spoily, but I don't know how the spoiler syntax works and failed after trying for a few minutes.) I don't think I understand what you wrote there, AnthonyC; world-scale problems are hard, not immutable.
0Jiro
"A part of the system that you cannot change" is a vague term (and it's a vague term in the HPMOR quote as well). We think we know what it means, but then you can ask questions like "if there are ten things wrong with the system and you can change only one, but you get to pick which one, which ones count as a part of the system that you can't change?" Besides, I would say that the idea is just wrong. It is useful to assign fault to a part of the system that you cannot change, because you need to assign the proper amount of fault as well as just assigning fault, and assigning fault to the part that you can't change affects the amounts that you assign to the parts that you can change.
1John_Maxwell
Here's a weird reframing. Think of it like playing a game like Tetris or Centipede. Yep, you are going to lose in the end, but that's not an issue. The idea is to score as many points as possible before that happens. If you save someone's life on expectation, you save someone's life on expectation. This is valuable even if there are lots more people whose lives you could hypothetically save.
1AnthonyC
Ditto, though I diverged differently. I said, "Ok, so the problems are greater than available resources, and in particular greater than resources I am ever likely to be able to access. So how can I leverage resources beyond my own?" I ended up getting an engineering degree and working for a consulting firm advising big companies what emerging technologies to use/develop/invest in. Ideal? Not even close. But it helps direct resources in the direction of efficiency and prosperity, in some small way. I have to shut down the part of my brain that tries to take on the weight of the world, or my broken internal care-o-meter gets stuck at "zero, despair, crying at every news story." But I also know that little by little, one by one, painfully slowly, the problems will get solved as long as we move in the right direction, and we can then direct the caring that we do have in a bit more concentrated way afterwards. And as much as it scares me to write this, in the far future, when there may be quadrillions of people? A few more years of suffering by a few billion people here, now won't add or subtract much from the total utility of human civilization.
0[anonymous]
Super relevant slatestarcodex post: Nobody Is Perfect, Everything is Commensurable.
0VAuroch
Read that at the time and again now. Doesn't help. Setting threshold less than perfect still not possible; perfection would itself be insufficient. I recognize that this is a problem but it is an intractable one and looks to remain so for the foreseeable future.
1[anonymous]
But what about the quantitative way? :(

Edit: Forget that... I finally get it. Like, really get it. You said:

Oh, my gosh... I think that's why I gave up Christianity. I wish I could say I gave it up because I wanted to believe what's true, but that's probably not true. Honestly, I probably gave it up because having the power to impact someone else's eternity through outreach or prayer, and sometimes not using that power, was literally unbearable for me. I considered it selfish to do anything that promoted mere earthly happiness when the Bible implied that outreach and prayer might impact someone's eternal soul.

And now I think that, personally, being raised Christian might have been an incredible blessing. Otherwise, I might have shared your outlook. But after 22 years of believing in eternal souls, actions with finite effects don't seem nearly as important as they probably would had I not come from the perspective that people's lives on earth are just specks, just one-infinitieth of total existence.

Interesting article; it sounds like a very good introduction to scope insensitivity.

Two points where I disagree :

  1. I don't think birds are a good example of it, at least not for me. I don't care much for individual birds. I definitely wouldn't spend $3, nor any significant time, to save a single bird. I'm not a vegetarian; it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner. On the other hand, I do care about ecological disasters, massive bird deaths, damage to natural reserves, threats to a whole species, ... So a massive death of birds is something I'm ready to invest resources to prevent, but not the death of a single bird.

  2. I know it's quite taboo here, and most will disagree with me, but to me, the answer to problems this big is not charity, even "efficient" charity (which seems a very good idea on paper, though I'm quite skeptical about its reliability), but structural change - politics. I can't fail to notice that two of the "especially virtuous people" you named, Gandhi and Mandela, were both active mostly in politics, not in charity. To quote another one often labeled "especially virtuous", Martin Luther King: "True compassion is more than flinging a coin to a beggar. It comes to see that an edifice which produces beggars needs restructuring."

9MugaSofer
This strikes me as backward reasoning - if your moral intuitions about large numbers of animals dying are broken, isn't it much more likely that you made a mistake about vegetarianism? (Also, three dollars isn't that high a value to place on something. I can definitely believe you get more than $3 worth of utility from eating a chicken. Heck, the chicken probably cost a good bit more than $3.)
2AmagicalFishy
It may be more accurate to say something along the lines of "I mind large numbers of animals dying for no good reason. Food is a good reason, and thus do not mind eating chicken. An oil spill is not a good reason."
2dthunt
Hey, I just wanted to chime in here. I found the moral argument against eating animals compelling for years but lived fairly happily in conflict with my intuitions there. I was literally saying, "I find the moral argument for vegetarianism compelling" while eating a burger, and feeling only slightly awkward doing so. It is in fact possible (possibly common) for people to 'reason backward' from behavior (eat meat) to values ("I don't mind large groups of animals dying"). I think that particular example CAN be consistent with your moral function (if you really don't care about non-human animals very much at all) - but by no means is that guaranteed.
7MugaSofer
That's a good point. Humans are disturbingly good at motivated reasoning and compartmentalization on occasion.
8Vaniver
Birds are the classic example, both in the literature and (through the literature) here.
5CCC
I very strongly agree with your point here, but would like to add that the problem of finding a political structure which properly maximises the happiness of the people living under it is a very difficult one, and missteps are easy.

Regarding scope sensitivity and the oily bird test, one man's modus ponens is another's modus tollens. Maybe if you're willing to save one bird, you should be willing to donate to save many more birds. But maybe the reverse is true - you're not willing to save thousands and thousands of birds, so you shouldn't save one bird, either. You can shut up and multiply, but you can also shut up and divide.

Did the oil bird mental exercise. Came to conclusion that I don't care at all about anyone else, and am only doing good things for altruistic high and social benefits. Sad.

9Capla
If you actually think it's sad (do you?), then you have a higher-order set of values that wants you to want to care about others. If you want to want to care, you can do things to change yourself so that you do care. Even more importantly, you can begin to act as if you care, because "caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway." All I know is that I want to be the sort of person who cares. So, I act as that sort of person, and thereby become her.
1Philip_W
Would you care to give examples or explain what to look for?
7Capla
The biggest thing is just to act like you are already the sort of person who does care. Go do the good work. Find people who are better than you. Hang out with them. "You become like the 6 people you spend the most time with" and all that.

(I remember reading the chapter on penetrating Azkaban in HP:MoR, and feeling how much I didn't care. I knew that there are places in the world where the suffering is as great as in that fictional place, but that it didn't bother me; I would just go about my day and go to sleep, whereas the fictional Harry is deeply shaken by his experience. I felt, "I'm not Good [in the moral sense] enough" and then thought that if I'm not good enough, I need to find people who are, who will help me be better. I need to find my Hermiones.)

I'm trying to find the most Good people of my generation, but I realized long ago that I shouldn't be looking for Good people, so much as I should be looking for people who are actively seeking to be better than they are. (If you want to be as Good as you can be, please message me. Maybe we can help each other.)

My feelings of moral inadequacy compared to Harry's feelings towards Azkaban (fictional) aren't really fair. My brain isn't designed to be moved by abstract concepts. Harry (fictional) saw that suffering first hand and was changed by it; I only mentally multiply. I'm thinking that I need to put myself in situations where I can experience the awfulness of the world viscerally.

People make fun of teenagers going to "help" build houses in the third world: it's pretty massively inefficient to ship untrained teenagers to Mexico to do manual labor (or only sort of do it), when their hourly output would be much higher if they just got a college degree and donated. Yet I know at least one person (someone who I respect, one of my "Hermiones") who went to build houses in Mexico for a month and was heavily impacted by it, and it spurred her to be of service more generally. (She told me that on the flight back to the st...
1Lumifer
I have seen squalor, and in my particular case it did not recalibrate my care-o-meter at all. YMMV, of course.
0TomStocker
Living in pain sent my care-o-meter from below average to full. Seeing squalor definitely did something. I think it probably depends how you see it - did you talk to people as equals, or see them as different types of people you couldn't relate to / who didn't fit certain criteria? Being surrounded by suffering from a young age doesn't seem to make people care - it's being shocked by suffering after not having had much of it around that is occasionally very powerful - like the story about the Buddha growing up in the palace and then seeing sickness, death and age for the first time?
6Richard_Kennaway
What is the difference between an altruistic high and caring about other people? Isn't the former what the latter feels like?

The difference is that there are many actions that help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.

The built-in care-o-meter of your body has known faults and biases, and it measures something often related (at least in classic hunter-gatherer society model) but generally different from actually caring about other people.

0JoshuaMyer
I came to the conclusion that I needed more quantitative data about the ecosystem. Sure, birds covered in oil look sad, but would a massive loss of biodiversity on THIS beach affect the entire ecosystem? The real question I had in this thought experiment was "how should I prevent this from happening in the future?" Perhaps nationalizing oil drilling platforms would allow governments to better regulate the potentially hazardous practice. There is a game going on whereby some players are motivated by the profit incentive and others are motivated by genuine altruism, but it doesn't take place on the beach. I certainly never owned an oil rig, and couldn't really competently discuss the problems associated with actual large high-pressure systems.

Does anyone here know if oil spills are an unavoidable consequence of the best long-term strategy for human development? That might be important to an informed decision on how much value to place on the cost of the accident, which would inform my decision about how much of my resources I should devote to cleaning the birds. From another perspective, it's a lot easier to quantify the cost for some outcomes ...

This makes it genuinely difficult to define genuinely altruistic strategies for entities experiencing scope insensitivity. And along that line, giving away money because of scope insensitivity IS amoral. It defers judgement to a poorly defined entity which might manage our funds well or deplorably. Founding a cooperative for the purpose of beach restoration seems like a more ethically sound goal, unless of course you have more information about the bird cleaners.

The sad truth is that making the right choice often depends on information not readily available, and the lesson I take from this entire discussion is simply how important it is that humankind evolve more sophisticated ways of sharing large amounts of information efficiently, particularly where economic decisions are concerned.
0timujin
Because I wouldn't actually care if my actions actually help, as long as my brain thinks they do.
0Richard_Kennaway
Are you favouring wireheading then? (See hyporational's comment.) That is, finding it oppressively tedious that you can only get that feeling by actually going out and helping people, and wishing you could get it by a direct hit?
4Jiro
I think he wants to do things for which his brain whispers "this is altruistic" right now. It is true that wireheading would lead his brain to whisper that about everything. But from his current position, wireheading is not a benefit, because he values future events according to his current brain state, not his future brain state.
2timujin
No, just as I eat sweets for sweet pleasure, not for getting sugar into my body, but I still wouldn't wirehead into constantly feeling sweetness in my mouth.
1lmm
I find this a confusing position. Please expand

Funny thing. I started out expanding this, trying to explain it as thoroughly as possible, and, all of a sudden, it became confusing to me. I guess it was not a well-thought-out or consistent position to begin with. Thank you for a random rationality lesson, but you are not getting this idea expanded, alas.

0Philip_W
Assuming his case is similar to mine: the altruism-sense favours wireheading - it just wants to be satisfied - while other moral intuitions say wireheading is wrong. When I imagine wireheading (like timujin imagines having a constant taste of sweetness in his mouth), I imagine still having that part of the brain which screams "THIS IS FAKE, YOU GOTTA WAKE UP, NEO". And that part wouldn't shut up unless I actually believed I was out (or it's shut off, naturally). When modeling myself as sub-agents, then in my case at least the anti-wireheading and pro-altruism parts appear to be independent agents by default: "I want to help people/be a good person" and "I want it to actually be real" are separate urges. What the OP seems to be appealing to is a system which says "I want to actually help people" in one go - sympathy, perhaps, as opposed to satisfying your altruism self-image.
0hyporational
If there's no difference we arrive at the general problem of wireheading. I suspect very few people who identify themselves as altruists would choose being wireheaded for altruistic high. What are the parameters that would keep them from doing so?
1Richard_Kennaway
Yes. Let me change my question. If (absent imaginary interventions with electrodes or drugs that don't currently exist) an altruistic high is, literally, what it feels like when you care about others and act to help them, then saying "I don't care about them, I just wanted the high" is like saying "I don't enjoy sex, I just do it for the pleasure", or "A stubbed toe doesn't hurt, it just gives me a jolt of pain." In short, reductionism gone wrong, angst at contemplating the physicality of mind.
1hyporational
It seems to me you can care about having sex without having the pleasure as well as care about not stubbing your toe without the pain. Caring about helping other people without the altruistic high? No problem. It's not clear to me where the physicality of mind or reductionism gone wrong enter the picture, not to mention angst. Oversimplification is aesthetics gone wrong. ETA: I suppose it would be appropriately generous to assume that you meant altruistic high as one of the many mind states that caring feels like, but in many instances caring in the sense that I'm motivated to do something doesn't seem to feel like anything at all. Perhaps there's plenty of automation involved and only novel stimuli initiate noticeable perturbations. It would be an easy mistake to only count the instances where caring feels like something, which I think happened in timujin's case. It would also be a mistake to think you only actually care about something when it doesn't feel like anything.
2Richard_Kennaway
I was addressing timujin's original comment, where he professed to desiring the altruistic high while being indifferent to other people, which on the face of it is paradoxical. Perhaps, I speculate, noticing that the feeling is a thing distinct from what the feeling is about has led him to interpret this as discovering that he doesn't care about the latter. Or, it also occurs to me, perhaps he is experiencing the physical feeling without the connection to action, as when people taking morphine report that they still feel the pain, but it no longer hurts. Brains can go wrong in all sorts of ways.

It's easy to look at especially virtuous people — Gandhi, Mother Teresa, Nelson Mandela — and conclude that they must have cared more than we do. But I don't think that's the case.

Even they didn't try to take on all the problems in the world. They helped a subset of people that they cared about with particular fairly well-defined problems.

[-][anonymous]130

Even they didn't try to take on all the problems in the world. They helped a subset of people that they cared about with particular fairly well-defined problems.

Yes, that is how adults help in real life. In science, we chop off little sub-sub-problems we think we can solve in order to do our part on larger questions whose answers no one person will ever find alone, and thus end up doing enormous work on the shoulders of giants. It works roughly the same in activism.

I see suffering the whole day in healthcare but I'm actually pretty much numbed to it. Nothing really gets to me, and if it did it could be quite crippling. Sometimes I watch sad videos or read dramatizations of real events to force myself to care for a while, to keep me from forgetting why I show up at work. Reading certain types of writings by rationalists helps too.

To be able to function, you shouldn't get more than glimpses of the weight of the world; or rather, you shouldn't let more than glimpses through the defences.

"Will the procedure hurt?" asked the patient. "Not if you don't sting yourself by accident!" answered the doctor with the needle.

I'm not sure what to make of it, but one could run the motivating example backwards:

This time, Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.

He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can.

"He pictures himself helping the people and wading deep in all that sticky oil and imagines how long he'd endure that and quickly arrives at the conclusion that he doesn't care that much for the birds really. And would rather prefer to get away from that mess. His estimate how much it is worth for him to rescue 1000 birds is quite low."

What can we derive from this if we shut up and calculate? If his value for rescuing 1,000 birds is $10, then 1 million birds still come out at $10,000. But the value could now be zero, if not negative (he'd feel he should be paid for saving the birds). Does that mean, if we extrapolate, that he should strive to eradicate all birds? Surely not.

It appears to mean that our care-o-meter plus System-2 multiplication gives meaningless answers.
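(A minimal sketch of the extrapolation being criticized here, with the comment's illustrative numbers; the point is that a linear shut-up-and-multiply inherits whatever the fragile spot valuation happens to be, including zero or negative values.)

```python
# Naive shut-up-and-multiply: scale a spot valuation of 1,000 birds linearly.
def extrapolated_value(value_per_1000_birds, n_birds):
    return value_per_1000_birds * (n_birds / 1000)

print(extrapolated_value(10.0, 1_000_000))   # $10 spot value -> $10,000 for a million birds
print(extrapolated_value(0.0, 1_000_000))    # aversion-dominated spot value -> $0
print(extrapolated_value(-5.0, 1_000_000))   # negative spot value -> -$5,000, the absurd conclusion
```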

Our empathy towards other beings depends to a large part on socialization and context. Taking it out of its ancestral environment is bound to cause problems that I fear individuals can't solve. But maybe societies can.

4So8res
That sounds like a failure of the thought experiment to me. When I run the bird thought experiment, it's implicitly assumed that there is no transportation cost into or out of the thought experiment, and the negative aesthetic cost from imagining myself in the mess is filtered out. The goal is to generate a thought experiment that helps you identify the "intrinsic" value of something small (not really what I mean, but I'm short on time right now, I hope you can see what I'm pointing at), and obviously mine aren't going to work for everyone. (As a matter of fact, my actual "bird death" thought experiment is different from the one described above, and my actual value is not $3, and my actual cost per minute is nowhere near $1, but I digress.) If this particular thought experiment grates for you, you may consider other thought experiments, like considering whether you would prefer your society to produce an extra Bic lighter or an extra bird-cleaning on the margin, and so on.
1Gunnar_Zarncke
You didn't give details on how or how not to set up the thought experiment. I took it to mean 'your spontaneous valuation when imagining the situation' followed by an objective 'multiplication'. Now, my reaction wasn't one of aversion, but I tried to think of possible reactions and what would follow from them. Nothing wrong with mind hacks per se. I have read your productivity post. But I don't think they help in establishing 'intrinsic' value. For personal self-modification (motivation), it seems to work nicely.

Wow, this post is pretty much exactly what I've been thinking about lately.

Saving a person's life feels great.

Yup. Been there. Still finding a way to use that ICU-nursing high as motivation for something more generalized than "omg take all the overtime shifts."

Also, I think that my brain already runs on something like virtue ethics, but that the particular thing I think is virtuous changes based on my beliefs about the world, and this is probably a decent way to do things for reasons other than visceral caring. (I mean, I do viscerally care about being virtuous...)

Cross-commented from the EA forum.

First of all, thanks Nate. An engaging outlook on overcoming point-and-shoot morality.

You can stop trusting the internal feelings to guide your actions and switch over to manual control.

Moral Tribes, Joshua Greene's book, addresses the question of when to do this manual switch. Interested readers may want to check it out.

Some of us - where "us" here means people who are really trying - take your approach. They visualize the sinking ship, the hanging souls silently glaring at them in desperation, they shut up... (read more)

I'm sympathetic to the effective altruist movement, and when I do periodically donate, I try to do so as efficiently as possible. But I don't focus much effort on it. I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.

For example:

  • The best way for me to improve math and science education is to work on my own teaching ability.
  • The best way for me to improve the mental health of college students is to make time to support friends that struggle with depression a
... (read more)
6Ixiel
I would love to see a splinter group, Efficient Altruism. I have no desire to give as much as I can afford, but I feel VERY strongly about giving as efficiently to the causes I support as I can. When I read (I think from EA themselves) the estimated difference in efficiency between African aid organizations, it changed my whole perspective on charity.
1Philip_W
(separated from the other comment, because they're basically independent threads). This sounds unlikely. You say you're improving the education and mental health of on-the-order-of 100 students. Deworm the World and SCI improve school attendance by 25%, meaning you would have the same effect, as a first guess and to first order at least, by donating on-the-order-of $500/yr. And that's just one of the side-effects of ~600 people not feeling ill all the time. So if you primarily care about helping people live better lives, $50/yr to SCI ought to equal your stated current efforts. However, that doesn't count flow-through effects. EA is rare enough that you might actually get a large portion of the credit for convincing someone to donate to a more effective charity, or even become an effective altruist: expected marginal utility isn't conserved across multiple agents (if you have five agents who can press a button, and all have to press their buttons to save one person's life, each of them has the full choice of saving or failing to save someone, assuming they expect the others to press the button too, so each of them has the expected marginal utility of saving a life). Since it's probably more likely that you convince someone else to donate more effectively than that one of the dewormed people will be able to have a major impact because of their deworming, flow-through effects should be very strong for advocacy relative to direct donation. To quantify: Americans give 1% of their incomes to poverty charities, so let's make that $0.5k/yr/student. Let's say that convincing one student to donate to SCI would get them to donate that much more effectively about 5 years sooner than otherwise (those willing would hopefully be roped in eventually regardless). Let's also say SCI is five times more effective than their current charities. That means you win $2k to SCI for every student you convince to alter their donation patterns. You probably enjoy helping people directly (
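(A rough reconstruction of the "$2k to SCI per student" back-of-the-envelope above, using only the comment's stated assumptions; the framing in "SCI-equivalent dollars" is mine.)

```python
# The comment's assumptions: $0.5k/yr already given to charities ~1/5 as effective as SCI,
# and convincing a student moves that giving to SCI about 5 years sooner.
current_giving_per_year = 500        # $/yr per student
effectiveness_ratio = 5              # SCI assumed ~5x more effective than their current charities
years_sooner = 5

current_sci_equivalent = current_giving_per_year / effectiveness_ratio   # $100/yr of SCI-equivalent impact
gain_per_year = current_giving_per_year - current_sci_equivalent         # $400/yr of extra impact
print(gain_per_year * years_sooner)  # 2000.0 -> roughly "$2k to SCI" per student convinced
```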
0tjohnson314
(Sorry, I didn't see this until now.) I'll admit I don't really have data for this. But my intuitive guess is that students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them. At least for me, that's a large part of why I'm in the field that I chose. It's possible that I'm being misled by the warm fuzzy feelings I get from helping someone face-to-face, which I don't get from sending money halfway across the world. But it seems like there are many things that matter in life that don't have a price tag.
4Philip_W
Have you made efforts to research it? Either by trawling papers or by doing experiments yourself? Your objection had already been accounted for: $500 to SCI = around 150 extra people attending school for a year. I estimated the number of students who will have a relationship with their teacher as good as the average you provide at around 1 in 150. That sounds deep, but it is obviously false: would you condemn yourself to a year of torture so that you get one unit of the thing that allegedly doesn't have a price tag (for example, a single minute of a conversation with a student where you feel a real connection)? Would you risk a one-in-a-million chance of getting punched on the arm in order to get the same unit? If the answer to these questions is [no] and [yes] respectively, as I would expect them to be, those are outer limits on the price range. Getting to the true value is just a matter of convergence. Perhaps more to the point, though, those people you would help halfway across the world are just as real, and their lives just as filled with "things that don't have a price tag", as people in your environment. For $3000, one family is not torn apart by a death from malaria. For $3, one more child attends grade school regularly for a year because they are no longer ill from parasitic stomach infections. These are not price tags; these are trades you can actually make. Make the trades, and you set a lower limit. Refuse them, and the maximum price tag you put on a child's relationship with their teacher is set, period. It does seem very much like you're guided by your warm fuzzies.
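(A sketch of the "convergence" procedure gestured at above: bound the allegedly priceless thing between a trade you would clearly accept and one you clearly wouldn't, then bisect. The `would_trade` oracle and the starting bounds are hypothetical stand-ins for the reader's actual answers.)

```python
# Hypothetical sketch: converge on an implicit price by repeatedly asking "would I pay this much?"
def converge_on_price(would_trade, low=0.0, high=1_000_000.0, iterations=30):
    for _ in range(iterations):
        mid = (low + high) / 2
        if would_trade(mid):
            low = mid    # still willing to pay: the implicit value is at least this much
        else:
            high = mid   # not willing: the implicit value is below this
    return (low + high) / 2

# Made-up threshold of $750 standing in for someone's honest answers:
print(round(converge_on_price(lambda price: price <= 750), 2))   # ~750.0
```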
0tjohnson314
This is based on my own experience, and on watching my friends progress through school. I believe that the majority of successful people find their life path because someone inspired them. I don't know where I could even look to find hard numbers on whether that's true or not, but I'd like to be that person for as many people as I can. My emotional brain is still struggling to accept that, and I don't know why. I'll see if I can coax a coherent reason from it later. But my rational brain says that you're right and I was wrong. Thanks.
0Philip_W
Could you explain how? My empathy is pretty weak and could use some boosting.
0tjohnson314
For me it works in two steps: 1) Notice something that someone would appreciate. 2) Do it for them. As seems to often be the case with rationality techniques, the hard part is noticing. I'm a Christian, so I try to spend a few minutes praying for my friends each day. Besides the religious reasons, which may or may not matter to you, I believe it puts me in the right frame of mind to want to help others. A non-religious time of focused meditation might serve a similar purpose. I've also worked on developing my listening skills. Friends frequently mention things that they like or dislike, and I make a special effort to remember them. I also occasionally write them down, although I try not to mention that too often. For most people, there's a stronger signaling effect if they think you just happened to remember what they liked.
0Philip_W
You seem to be talking about what I would call sympathy, rather than empathy. As I would use it, sympathy is caring about how others feel, and empathy is the ability to (emotionally) sense how others feel. The former is in fine enough state - I am an EA, after all - it's the latter that needs work. Your step (1) could be done via empathy or pattern recognition or plain listening and remembering as you say. So I'm sorry, but this doesn't really help.
0Capla
This is key.

It's also worth mentioning that cleaning birds after an oil spill isn't always even helpful. Some birds, like gulls and penguins, do pretty well. Others, like loons, tend to do poorly. Here are some articles concerning cleaning oiled birds.

http://www.npr.org/templates/story/story.php?storyId=127749940

http://news.discovery.com/animals/experts-kill-dont-clean-oiled-birds.htm

And I know that the oiled birds issue was only an example, but I just wanted to point out that this issue, much like the "Food and clothing aid to Africa" examples you often... (read more)

I wonder if, in some interesting way, the idea that the scope of what needs doing for other people is so massive as to preclude any rational response other than working on it full time is related to the insight that voting doesn't matter. In both cases, the math seems to preclude bothering to do something which would be easy but would help in the aggregate.

My dog recently tore both of her ACLs, and required two operations and a total of about 10 weeks of recovery. My vet suggested I had a choice as to whether to do the two $3,100 operations on her knees. I realize... (read more)

0TheOtherDave
"I don't consider it rational to let my moral sentiments run roughshod over my own self interest." To be clear, do you consider the choice to repair your dog's knees an expression of what you're labelling "moral sentiments" here, or what you're labelling "self-interest"?
3mwengler
Spending $6200 to fix my 7-year-old dog's knees was primarily moral sentiments at work. I could get a healthy 1-year-old dog for a fraction of that price. My 7-year-old dog will very likely die within the next 3 or 4 years; larger dogs don't tend to live that long. So I haven't saved myself from experiencing the loss of her death, I've just put it off. The dog keeps me from doing all sorts of other things I'd like to do; I have to come home to check on her and feed her and so on, precluding just going on and doing social stuff after work when I want to. It's important to keep in mind that we are not "homo economicus." We do not have a single utility function with a crank that can be turned to determine the optimum thing to do, and even if in some formal sense we did have such a thing, our reaction to it would not be a deep acceptance of its results. What we do have is a mess and a mass of competing impulses. I want to do stuff after work. I want to "take care" of those in my charge. My urge to take care of those in my charge presumably arises in me because the humans before me who had less of that urge got competed out of the gene pool. 100,000 years ago, some wolves started hacking humans, and as part of that hack got themselves to trigger the machinery humans have for taking care of their babies. Because these wolves were also pretty good "kids," able to help with a variety of things, we hacked them back and made them even more to our liking by selectively killing the ones we didn't like, and then selectively breeding the ones we did like. At this point we love our babies more than our dogs, but our babies grow into teenagers, while our dogs always stay baby-like in their hacked relationship with us. My wife took my human children and left me a few years ago, but she left the dogs she had bought. I'm not going to abandon them; the hack is strong in me. Don't get me wrong, I love them. That doesn't mean I am happy about it, or at least not consiste
0TheOtherDave
(nods) Thanks for clarifying.

After shutting up and multiplying, Daniel realizes (with growing horror) that the amount he actually cares about oiled birds is lower-bounded by two months of hard work and/or fifty thousand dollars.

Fifty thousand times the marginal utility of a dollar, which is probably much less than the utility difference between the status quo and having fifty thousand dollars less unless Daniel is filthy rich.

4So8res
Yeah, it's actually a huge pain in the ass to try to value things given that people tend to be short on both time and money. (For example, an EA probably rates a dollar going towards de-oiling a bird as negative value due to the opportunity cost, even if they feel that de-oiling a bird has positive value in some "intrinsic" sense.) I didn't really want to go into my thoughts on how you should try to evaluate "intrinsic" worth (or what that even means) in this post, both for reasons of time and complexity, but if you're looking for an easier way to do the evaluation yourself, consider queries such as "would I prefer that my society produce, on the margin, another Bic lighter or another bird de-oiling?". This analysis is biased in the opposite direction from "how much of my own money would I like to pay", and is definitely not a good metric alone, but it might point you in the right direction when it comes to finding various metrics and comparisons by which to probe your intrinsic sense of bird-worth.

I don't have the internal capacity to feel large numbers as deeply as I should, but I do have the capacity to feel that prioritizing my use of resources is important, which amounts to a similar thing. I don't have an internal value assigned for one million birds or for ten thousand, but I do have a value that says maximization is worth pursuing.

Because of this, and because I'm basically an ethical egoist, I disagree with your view that effective altruism requires ignoring our care-o-meters. I think it only requires their training and refinement, not comple... (read more)

4AnthonyC
I understand what you mean by saying values and rationality are orthogonal. If I had a known, stable, consistent utility function, you would be absolutely right. But 1) my current (supposedly terminal) values are certainly not orthogonal to each other, and may be (in fact, probably are) mutually inconsistent some of the time. Also, 2) there are situations where I may want to change, adopt, or delete some of my values in order to better achieve the ones I currently espouse (http://lesswrong.com/lw/jhs/dark_arts_of_rationality/).
227chaos
I worry that such consistency isn't possible. If you have a preference for chocolate over vanilla given exposure to one set of persuasion techniques, and a preference for vanilla over chocolate given other persuasion techniques, it seems like you have no consistent preference. If all our values are sensitive to aspects of context such as this, then trying to enforce consistency could just delete everything. Alternatively, it could mean that CEV will ultimately worship Moloch rather than humans, valuing whatever leads to amassing as much power as possible. If inefficiency or irrationality is somehow important or assumed in human values, I want the values to stay and the rationality to go. Given all the weird results from the behavioral economics literature, and the poor optimization of the evolutionary processes from which our values emerged, such inconsistency seems probable.

I think this is a really good post and extremely clear. The idea of the broken care-o-meter is a very compelling metaphor. It might be worthwhile to try to put this somewhere with higher exposure, where people who have money and are not already familiar with the LW memeplex would see it.

4So8res
I'm open to circulating it elsewhere. Any ideas? I've crossposted it on the EA forum, but right now that seems like lower exposure than LW.
2therufs
No ideas here, but maybe ping David, Jeff or Julia?
1John_Maxwell
Submitting things to reddit/metafilter/etc. can work surprisingly well.
1So8res
I'm slightly averse to submitting my own content on reddit, but you (John_Maxwell_IV, to combat the bystander effect, unless you decline) are encouraged to do so. My preference would be for the Minding Our Way version over the EA forum version over the LW version.
[-][anonymous]40

Nice write-up. I'm one of those thoughtful creepy nerds who figured out about the scale thing years ago, and now just picks a fixed percentage of total income and donates it to fixed, utility-calculated causes once a year... and then ends up giving away bits of spending money for other things anyway, but that's warm-fuzzies.

So yeah. Roughly 10% (I actually divide between a few causes, trying to hit both Far Away problems where I can contribute a lot of utility but have little influence, and Nearby problems where I have more influence on specific outcomes... (read more)

3TrE
We can safely reason that the typical human, even in the future, will choose existence over non-existence. We can also infer which environments they would like better, and so we can maximise our efforts to leave behind an earth (solar system, universe) that's worth living in: neither an arid desert nor a universe tiled in smiley faces. While I agree that, since future people are shadowy figures rather than concrete entities, we don't get to decide on their literary or musical tastes, I think we should still try to make them exist in an environment worth living in, and, if possible, get them to exist. In the worst case, they can still decide to exit this world; it's easier these days than it's ever been! Additionally, I personally value a universe filled with humans higher than a universe filled with ■.
227chaos
My own moral intuitions say that there is an optimal number X of human beings to live amongst (perhaps around Dunbar's number, though maybe not if society or anonymity are important), and that we should try to balance utilizing as much of the universe's energy as possible before heat death against maximizing these ideal groups of size X. I think a universe totally filled with humans would not be very good; it seems somewhat redundant to me, since many of those humans would be extremely similar to each other but use up precious energy. I also think that individuals might feel meaningless in such a large crowd, unable to make an impact or strive for eudaimonia when surrounded by others. We might avoid that outcome by modifying our values about originality or human purpose, but those are values of mine I strongly don't want to have changed.
2NancyLebovitz
Bioengineering might lead to humans who are much less similar to each other.
027chaos
Yeah. The problem I see with that is that if humans grow too far apart, we will thwart each other's values or not value each other. Difficult potential balance to maintain, though that doesn't necessarily mean it should be rejected as an option.
2NancyLebovitz
Bioengineering makes CEV a lot harder.
0AnthonyC
And any number of bioengineering advances, societal/cultural shifts, and transportation and wealth improvements could help increase our effective Dunbar's number.
0NancyLebovitz
That's something I've wondered about, and also what you could accomplish by having an organization of people with unusually high Dunbar's numbers.
0Decius
Or a breeding population selecting for higher Dunbar's numbers. Or does that qualify as bioengineering?
0NancyLebovitz
I suppose it should count as bioengineering for purposes of this discussion.

Thank you for writing this. I was stuck on 3, and found the answer to a question I asked myself the other day.

[-][anonymous]30

Many of us go through life understanding that we should care about people suffering far away from us, but failing to.

That is the thing that I never got. If I tell my brain to model a mind that cares, it comes up empty. I seem to literally be incapable of even imagining the thought process that would lead me to care for people I don't know.

If anybody knows how to fix that, please tell me.

4Lumifer
Why do you think it needs fixing?
3[anonymous]
I think this might be holding me back. People talk about "support" from friends and family, which I don't seem to have, most likely because I don't return that sentiment.
4Lumifer
Holding you back from what? Also, you said (emphasis mine) "incapable of even imagining the thought process that would lead me to care for people I don't know" -- you do know your friends and family, right?
1[anonymous]
Excellent question. I think I'm on the wrong track and something else entirely might be going on in my brain. Thank you.
3Weedlayer
Obviously your mileage may vary, but I find it helps to imagine a stranger as someone else's family member or friend. If I think of how much I care about people close to me, and imagine that that stranger has people who care about them as much as I care about my brother, then I find it easier to do things to help that person. I guess you could say I don't really care about them, but care about the feelings of caring other people have towards them. If that doesn't work, this is how I originally thought of it. If a stranger passed by me on the street and collapsed, I would care about their well-being (I know this empirically). I know nothing about them; I only care about them due to proximity. It offends me rationally that my sense of caring is utterly dependent on something as stupid as proximity, so I simply create a rule that says "If I would care about this person if they were here, I have to act like I care if they are somewhere else". Thus, utilitarianism (or something like it). It's worth noting that another, equally valid rule would be "If I wouldn't care about someone if they were far away, there's no reason to care about them when they happen to be right here". I don't like that rule as much, but it does resolve what I see as an inconsistency.
4[anonymous]
Thank you. That seems like a good way of putting it. I seem to have problems thinking of all 7 billion people as individuals. I will try to think about people I see outside as having a life of their own even if I don't know about it. Maybe that helps.
2MugaSofer
I think this is the OP's point - there is no (human) mind capable of that much caring, because human brains aren't capable of modelling numbers that large properly. If you can't contain such a mind, you can't use your usual "imaginary person" modules to shift your brain into that "gear". So - until you find a better way! - you have to sort of act as if your brain were screaming that loudly, even when your brain doesn't have a voice that loud.
1Said Achmiz
Why should I act this way?
1MugaSofer
To better approximate a perfectly-rational Bayesian reasoner (with your values), which, presumably, would be able to model the universe correctly, complete with large numbers. That's the theory, anyway. Y'know, the same way you'd switch in a Monty Hall problem even if you don't understand it intuitively.
1hyporational
What makes you care about caring?

Two possible responses that a person could have after recognizing that their care-o-meter is broken and deciding to pursue important causes anyway:

Option 1: Ignore their care-o-meter, treat its readings as nothing but noise, and rely on other tools instead.

Option 2: Don't naively trust their care-o-meter, and put effort into making it so that their care-o-meter will be engaged when it's appropriate, will be not-too-horribly calibrated, and will be useful as they pursue the projects that they've identified as important (despite its flaws).

Parts of this pos... (read more)

9So8res
I definitely don't suggest ignoring the care-o-meter entirely. Emotions are the compass. Rather, I advocate not trusting the care-o-meter on big numbers, because it's not calibrated for big numbers. Use it on small things where it is calibrated, and then multiply yourself if you need to deal with big problems.

I think we need to consider another avenue by which our emotions are generated and affect our lives. An immediate, short-to-medium-term high is, in a way, the least valuable personal return we can expect from our actions. However, there is a more subtle yet longer-lasting emotional effect, which is more strongly correlated with our belief system and our rationality. I refer to a feeling of purpose we can have on a daily basis, a feeling of maximizing personal potential, and even long-term happiness. This is created when we believe we are doing the right thin... (read more)

Daniel grew up as a poor kid, and one day he was overjoyed to find $20 on the sidewalk. Daniel could have worked hard to become a trader on Wall Street. Yet he decides to become a teacher instead, because of his positive experiences tutoring a few kids while in high school. But as a high school teacher, he will only teach a thousand kids in his career, while as a trader, he would have been able to make millions of dollars. If he multiplied his positive experience with one kid by a thousand, it still probably wouldn't compare with the joy of finding $20 on the sidewalk times a million.

0A1987dM
Nice try, but even if my utility for oiled birds was as nonlinear as most people's utility for money is, the fact that there are many more oiled birds than I'm considering saving means that what you need to compare is (say) U(54,700 oiled birds), U(54,699 oiled birds), and U(53,699 oiled birds) -- and it'd be a very weird utility function indeed if the difference between the first and the second is much larger than one-thousandth the difference between the second and the third. And even if U did have such kinks, the fact that you don't know exactly how many oiled birds are there would smooth them away when computing EU(one fewer oiled bird) etc. (IIRC EY said something similar in the sequences, using starving children rather than oiled birds as the example, but I can't seem to find it right now.) Unless you also care about who is saving the birds -- but you aren't considering saving them with your own hands, you're considering giving money to save them, and money is fungible, so it'd be weird to care about who is giving the money.
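(A quick numerical check of this claim, using an arbitrary, strongly concave disutility over oiled birds (a square root) rather than anything the commenter specified; even then, the one-bird difference comes out within about half a percent of one-thousandth of the thousand-bird difference.)

```python
import math

# Illustrative concave utility over the number of oiled birds (my choice, not the commenter's).
def U(oiled_birds):
    return -math.sqrt(oiled_birds)

one_bird_gain = U(54_699) - U(54_700)        # saving 1 bird out of 54,700
thousand_bird_gain = U(53_699) - U(54_699)   # saving 1,000 birds
print(one_bird_gain / (thousand_bird_gain / 1000))   # ~0.9955, i.e. no dramatic "kink"
```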
0Jiro
Nonlinear in what? Daniel's utility for dollars is nonlinear in the total number of dollars that he has, not in the total number of dollars in the world. Likewise, his utility for birds is nonlinear in the total number of birds that he has saved, not in the total number of birds that exist in the world. (Actually, I'd expect it to have two components, one of which is nonlinear in the number of birds he has saved and another of which is nonlinear in the total number of birds in the world. However, the second factor would be negligibly small in most situations.)
0A1987dM
IOW he doesn't actually care about the birds, he cares about himself.
0Jiro
He has a utility function that is larger when more birds are saved. If this doesn't count as caring about the birds, your definition of "cares about the birds" is very arbitrary.
0A1987dM
He has a utility function that is larger when he saves more birds; birds saved by other people don't count.
0Jiro
If it has two components, they do count, just not by much.
0Jiro
Because Daniel has been thinking of scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of dollars: the internal feeling of satisfaction with gaining money can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about making lots of money, he shuts up and multiplies the joy of finding $20 by a million....
1Lumifer
Um, that's nonsense. His brain does not misreport how much he actually cares -- it's just that his brain thinks that it should care more. It's a conflict between "is" and "should", not a matter of misreporting "is". After which he goes and robs a bank.
2Jiro
You do realize that what I said is a restatement of one of the examples in the original article, except substituting "caring about money" for "caring about birds"? And snarles' post was a somewhat more indirect version of that as well? Being nonsense is the whole point.
0Lumifer
Yes, I do, and I think it's nonsense there as well. The care-o-meter is not broken; it's just that your brain would prefer you to care more about all these numbers. It's like preferring not to have a fever and saying the thermometer is broken because it shows too high a temperature.

I know the name is just a coincidence, but I'm going to pretend that you wrote this about me.

An interesting followup to your example of an oiled bird deserving 3 minutes of care that came to mind:

Let's assume that there are 150 million suffering people right now, which is a completely wrong random number but a somewhat reasonable order-of-magnitude assumption. A quick calculation estimates that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, then I've got a total of about 15 million care-minutes.

According to even the best possible care-o-meter that I could have, all the problems in th... (read more)
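(A minimal reconstruction of the care-minute arithmetic above; the ~43 remaining years and 16 waking hours per day are my assumptions, chosen to reproduce the comment's ~15 million figure.)

```python
# Budgeting waking minutes over a remaining lifetime against 150 million sufferers.
remaining_years = 43
waking_minutes_per_day = 16 * 60
care_minutes = remaining_years * 365 * waking_minutes_per_day
suffering_people = 150_000_000

print(f"{care_minutes:,} care-minutes in total")                                  # ~15,067,200
print(f"{care_minutes / suffering_people * 60:.1f} seconds of care per person")   # ~6.0 seconds
```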

Upvoted for clarity and relevance. You touched on the exact reason why many people I know can't/won't become EAs; even if they genuinely want to help the world, the scope of the problem is just too massive for them to care about accurately. So they go back to donating to the causes that scream the loudest, and turning a blind eye to the rest of the problems.

I used to be like Alice, Bob, and Christine, and donated to whatever charitable cause would pop up. Then I had a couple of Daniel moments, and resolved that whenever I felt pressured to donate to a good cause, I'd note how much I was going to donate and then donate to one of Givewell's top charities.

Thank you for this explanation. It helps me understand a little bit more of why so many people I know simply feel overwhelmed and give up. Personally, as I am not in a position to donate money, I work to tackle one specific problem set that I think will help open things up, and leave the solutions to the other problems to others.
ShiraDest

If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope sensitivity, what observation causes you to believe that you do care about everyone equally?

Serious question; I traverse the reasoning the other way, and since I don't care much about the aggregate six billion people I don't know, I divide and say that I don't care more than one six-billionth as much about the typical person that I don't know.

People that I do know, I do care about - but I don't have to multiply to figure my total caring, I have to add.

4Wes_W
I can think of two categories of responses. One is something like "I care by induction". Over the course of your life, you have ostensibly had multiple experiences of meeting new people, and ending up caring about them. You can reasonably predict that, if you meet more people, you will end up caring about them too. From there, it's not much of a leap to "I should just start caring about people before I meet them". After all, rational agents should not be able to predict changes in their own beliefs; you might as well update now. The other is something like "The caring is much better calibrated than the not-caring". Let me use an analogy to physics. My everyday intuition says that clocks tick at the same rate for everybody, no matter how fast they move; my knowledge of relativity says clocks slow down significantly near c. The problem is that my intuition on the matter is baseless; I've never traveled at relativistic speeds. When my baseless intuition collides with rigorously-verified physics, I have to throw out my intuition. I've also never had direct interaction with or made meaningful decisions about billions of people at a time, but I have lots of experience with individual people. "I don't care much about billions of people" is an almost totally unfounded wild guess, but "I care lots about individual people" has lots of solid evidence, so when they collide, the latter wins. (Neither of these are ironclad, at least not as I've presented them, but hopefully I've managed to gesture in a useful direction.)
4Jiro
Your second category of response seems to say "my intuitions about considering a group of people, taken billions at a time, aren't reliable, but my intuitions about considering the same group of people, one at a time, are". You then conclude that you care because taking the billions of people one at a time implies that you care about them. But it seems that I could apply the same argument a little differently--instead of applying it to how many people you consider at a time, apply it to the total size of the group. "my intuitions about how much I care about a group of billions are bad, even though my intuitions about how much I care about a small group are good." The second argument would, then, imply that it is wrong to use your intuitions about small groups to generalize to large groups--that is, the second argument refutes the first. Going from "I care about the people in my life" to "I would care about everyone if I met them" is as inappropriate as going from "I know what happens to clocks at slow speeds" to "I know what happens to clocks at near-light speeds".
0Decius
I'll go a more direct route: The next time you are in a queue with strangers, imagine the two people behind you (that you haven't met before and don't expect to meet again and didn't really interact with much at all, but they are /concrete/). Put them on one track in the trolley problem, and one of the people that you know and care about on the other track. If you prefer to save two strangers to one tribesman, you are different enough from me that we will have trouble talking about the subject, and you will probably find me to be a morally horrible person in hypothetical situations.
0Decius
To address your first category: When I meet new people and interact with them, I do more than gain information- I perform transitive actions that move them out of the group "people I've never met" that I don't care about, and into the group of people that I do care about. Addressing your second: I found that a very effective way to estimate my intuition would be to imagine a group of X people that I have never met (or specific strangers) on one minecart track, and a specific person that I know on the other. I care so little about small groups of strangers, compared to people that I know, that I find my intuition about billions is roughly proportional; the dominant factor in my caring about strangers is that some number of people who are strangers to me are important to people who are important to me, and therefore indirectly important to me.
2AmagicalFishy
I second this question: Maybe I'm misunderstanding something, but part of me craves a set of axioms to justify the initial assumptions. That is: Person A cares about a small number of people who are close to them. Why does this equate to Person A having to care about everyone who isn't?
1lalaithion
For me, personally, I know that you could choose a person at random in the world, write a paragraph about them, and give it to me, and by doing that, I would care about them a lot more than before I had read that piece of paper, even though reading that paper hadn't changed anything about them. Similarly, becoming friends with someone doesn't usually change the person that much, but increases how much I care about them an awful lot. Therefore, I look at all 7 billion people in the world, and even though I barely care about them, I know that it would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I haven't. Maybe a better way of putting this is that I know that all of the people in the world are potential carees of mine, so I should act as though I already care about these people, in deference to possible future-me.
3AmagicalFishy
For the most part, I follow—but there's something I'm missing. I think it lies somewhere in: "It would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn't." Is the underlying "axiom" here that you wish to maximize the number of effects that come from the caring you give to people, because that's what an altruist does? Or that you wish to maximize your caring for people? To contextualize the above question, here's a (nonsensical, but illustrative) parallel: I get cuts and scrapes when running through the woods. They make me feel alive; I like this momentary pain stimulus. It would be trivial for me to woods-run more and get more cuts and scrapes. Therefore I should just get cuts and scrapes. I know it's silly, but let me explain: a person usually doesn't want to maximize their cuts and scrapes, even though cuts and scrapes might be appreciated at some point. Thus, the above scenario's conclusion seems silly. Similarly, I don't feel a necessity to maximize my caring—even though caring might be nice at some point. Caring about someone is a product of my knowing them, and I care about a person because I know them in a particular way (if I knew a person and thought they were scum, I would not care about them). The fact that I could know someone else, and thus hypothetically care about them, doesn't make me feel as if I should. If, on the other hand, the axiom is true—then why bother considering your intuitive "care-o-meter" in the first place? I think there's something fundamental I'm missing. (Upon further thought, is there an agreed-upon intrinsic value to caring that my ignorance of some LW culture has led me to miss? This would also explain wanting to maximize caring.) (Upon further-further thought, is it something like the following internal dialogue? "I care about people close to me. I also care about the fate of mankind. I know that the fate of m
0Decius
I care about self-consistency, but being self-consistent is something that must happen naturally; I can't self-consistently say "This feeling is self-inconsistent, therefore I will change this feeling to be self-consistent"
0AmagicalFishy
... Oh. Hm. In that case, I think I'm still missing something fundamental.
2Decius
I care about self-consistency because an inconsistent self is very strong evidence that I'm doing something wrong. It's not very likely that if I take the minimum steps to make the evidence of the error go away, I will make the error go away. The general case of "find a self-inconsistency, make the minimum change to remove it" is not error-correcting.
0lalaithion
I actually think that your internal dialogue was a pretty accurate representation of what I was failing to say. And as for self consistency having to be natural, I agree, but if you're aware that you're being inconsistent, you can still alter your actions to try and correct for that fact.
0Decius
I look at a box of 100 bullets, and I know that it would be trivial for me to be in mortal danger from any one of them, but the box is perfectly safe. It is trivial-ish for me to meet a trivial number of people and start to care about them, but it is certainly nontrivial to encounter a nontrivial number of people.

I would like to subscribe to your newsletter!

I've been frustrated recently by people not realizing that they are arguing that if you divide responsibility up until it's a very small quantity, then it just goes away.

[-][anonymous]10

Attempting to process this post in light of being on my anti-anxiety medication is weird.

There are specific parts in your post where I thought 'If I was having these thoughts, it would probably be a sign I had not yet taken my pill today.' and I get the distinct feeling I would read this entirely differently when not on medication.

It's kind of like 'I notice I'm confused' except... In this case I know why I'm confused and I know that this particular kind of confusion is probably better than the alternative (Being a sleep deprived mess from constant worry) ... (read more)

This post is amazing, So8res! (My team and I stumbled upon it in our search for the all-time greatest articles on improving oneself and making a difference. Just in case you’re interested, you can see our selection at One Daily Nugget. We’ve featured this article in today’s issue.)

Here’s one question that we discussed, would love to get your take: You recommend that one starts with something one cares about, quantifies it, multiplies, and then trusts the result more than one’s intuition.

I love this approach. But how can we be sure that the first element i... (read more)

[-][anonymous]00

Sorry I was rude; I just know how it is to stand in the rain and try to get someone to do something painless for the greater good, and have them turn away for whatever reason.

On another point, here's a case study of lesser proportions.

Suppose you generally want to fight social injustice, save Our Planet, uphold peace, defend women's rights, etc. (as many do when they just begin deciding what to do with themselves). A friend subscribes you to an NGO for nature conservation, and you think it might be a good place to start, since you don't have much money to donat... (read more)

I think there's some good points to be made about the care-o-meter as a heuristic.

Basically, let's say that the utility associated with altruistic effort has a term something like this:
U = [relative amount of impact I can have on the problem] * [absolute significance of the problem]

To some extent, one's care-o-meter is a measurement of the latter term, i.e. the "scope" of the problem, and the issue of scope insensitivity demonstrates that it fails miserably in this regard. However, that isn't an entirely accurate criticism, because as a rough heu... (read more)
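(A toy sketch of the heuristic above, with made-up numbers for two hypothetical causes; it shows how a scope-insensitive, roughly logarithmic read of "absolute significance" can flip the ranking that the face-value term would give.)

```python
import math

# Hypothetical causes: (name, relative impact I can have, true scale of the problem).
causes = [
    ("local problem",  0.10,   1_000),        # large leverage on a small problem
    ("global problem", 0.0001, 100_000_000),  # tiny leverage on a vastly larger problem
]

for name, impact, scale in causes:
    linear_U = impact * scale              # significance taken at face value
    felt_U = impact * math.log10(scale)    # care-o-meter-style compressed significance
    print(f"{name}: linear U = {linear_U:,.1f}, felt U = {felt_U:.4f}")
# Face value ranks the global problem higher (10,000 vs 100);
# the compressed reading ranks the local problem higher (0.3 vs 0.0008).
```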

Thank you for this write-up; I really like how its structure actually manages to present the evolution of an idea. Agreeing with more or less of the content, I often find myself posing the question of whether I - and seven billion others - could save the world with our own hands. (I am beginning to see utilons even in my work as an artist, but that belongs in a wholly different post.) This is a question for the ones like me, not earning much, and - without further and serious reclusion, reinvention and reorientation - not going to earn much, ever: Do ... (read more)
