How much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist?
A very, very large portion.
When I was a child, I read a tract published by Inter-Varsity Press called "The Salvation of Zachary Baumkletterer". It's a story about a Christian who tries to actually live according to Christian virtues. Eventually, he concludes that he can't; in a world in which so many people are starving and suffering, he can't justify spending even the bare minimum of food and money on himself that would be necessary to keep him alive.
It troubled me for years, even after I gave up religion. It's stressful living in America when you realize that every time you get your hair cut, or go to a movie, or drink a Starbucks latte, you're killing someone. (It's even more stressful now that I can actually afford to do these things regularly.)
You can rationalize that allowing yourself little luxuries will enable you to do enough additional good to make up for the lives you could have saved. (Unlikely; the best you can do is buy yourself "offsets"; but you'd usually save more lives with more self-denial...
Phil,
It's not actually that hard to make a commitment to give away a large fraction of your income. I've done it, my wife has done it, several of my friends have done it, etc. Even for yourself, the benefits of peace of mind and lack of cognitive dissonance will be worth the price, and by my calculations you can make the benefits for others at least 10,000 times as big as the costs for yourself. The trick is to do some big thinking and decision making about how to live very rarely (say once a year), then limit your salary through regular giving. That way you don't have to agonise at the hairdresser's, etc.; you just live within your reduced means. Check out my site on this, http://www.givingwhatwecan.org -- if you haven't already.
I was pondering that article about Zachary Baumkletterer again.
Summary: Zachary Baumkletterer is that guy who had so much empathy for the starving people in the world, and felt so guilty about being so much more fortunate than them, that he voluntarily lowered himself to their standard of living and donated the rest of his income and possessions to charity. (Which charity? That's critically important!) Unfortunately, that meant that he was starving himself to death.
One way to resolve this situation would have been for Zachary's boss to give him a budget specifically for food, explaining that he must use all of it on food, and must not give any of it away, etc. This budget qualifies as a business expense, since it directly affects Zachary's productivity. Or if the boss really can't afford to give him any raise at all, then he could allocate part of Zachary's current salary for a food budget.
Another option would have been for his boss to threaten to fire him if he refuses to eat enough to stay healthy and productive.
Another option would be for the people who know Zach to invite him to talk and eat with them. He would have had a hard time refusing an opportunity to talk with other ...
Nick Tarleton said it well, but to try it another way: Depending on how you phrase things, both to yourself and others, the situation can appear to be as bleak as you describe it, or alternatively rather good indeed. If you were to phrase it as being stuck with a brain built for chasing deer across the savanna and caring about the few dozen members of your tribe, being able to try to gain money (because it's the most effective means to whatever your ends are) and investing some appreciable fraction of it in the cause with highest expected future payoff, despite being abstract or far in the future, starts to sound fairly impressive -- especially given what most people spend their time and money on.
If Starbucks lattes (or more obviously living above the subsistence level) makes it more likely for me to maintain my strategy of earning money to try to protect the things I value, my indulgences are very plausibly worth keeping. Yes, if I had another psychology I could skip that and help much more, but I don't, so I likely can't. What I can do short-term is to see what seems to happen on the margin. Can I sustain donating 1% more? Can I get by without a fancy car? House? Phone? Conversely, does eating out regularly boost my motivations enough to be worth it? Aim for the best outcome, given the state of the board you're playing on.
But ultimately, the only way I find to cope is not caring.
It's important to distinguish between emotions and decision theory. You can (try to) be perfectly altruistic in calculated decisions, while not caring on an emotional level. Better, you can care in more positive ways: feel good when you help, but don't feel guilty for not helping, or feel painfully strong empathy for the suffering, except to the extent that doing so actually motivates you sustainably. You aren't obligated to feel any emotion that doesn't win.
You aren't obligated to feel any emotion that doesn't win.
There is a flipside to this that I would like to point out: you're allowed to feel any emotion that does help you to win.
Keep in mind you evolved to be vicious, selfish, and short-sighted. You may as well feel guilty for not figuring out Friendly AI yet, and ignore the fact that we weren't designed to be good at math.
You can lament part of what you are and try to change it or minimize the negative effects, but much of the 'blame' is on evolution.
I just put $2 in a vending machine; we can't even optimize selfish goals very well.
:) Sorry.
In 2006, Craigslist's CEO Jim Buckmaster said that if enough users told them to "raise revenue and plow it into charity" that they would consider doing it. (source: http://blogs.zdnet.com/BTL/?p=4082 ) They really do listen to their users and the reason there is no advertising on Craigslist is that no one is asking for it.
A single banner ad on Craigslist would raise at least one billion dollars for charity over five years. They could put a large "X" next to the ad, allowing you to permanently close it. There seems to be little objection to this idea. The optional banner is harmless, and a billion dollars could be enough to dramatically improve the lives of millions, save very real people from lifetimes of torture or slavery, or make a serious impact in the causes we take seriously around here. As a moral calculus, the decision is a no-brainer. So we just need a critical mass of Craigslist users telling Jim that we need a banner ad on Craigslist. Per a somewhat recent email to Craig, they are still receptive to this idea if the users suggest it.
The numbers involved are a little insane. Fifty thousand people should count as critical mass, which means each person could effectively cause $20,000 to be generated out of nowhere and donated to charity. My mistake last time was doing it as a Facebook group rather than a Facebook fan page, where the more useful viral functions have moved. This time I would also drop the money on advertising to get an easy initial critical mass.
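For concreteness, here is a quick sketch of the arithmetic behind those figures (the one-billion-dollar revenue estimate and the 50,000-person critical mass are this thread's assumptions, not verified numbers):

    # Back-of-envelope sketch; both inputs are assumptions from this thread,
    # not verified Craigslist data.
    total_revenue = 1_000_000_000   # assumed banner-ad revenue over five years ($)
    years = 5
    critical_mass = 50_000          # assumed number of users who must ask

    revenue_per_year = total_revenue / years              # $200,000,000 per year
    leverage_per_person = total_revenue / critical_mass   # $20,000 per participant

    print(f"${revenue_per_year:,.0f} per year; ${leverage_per_person:,.0f} per person")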
Initially voted down because I was sure it was going to be stupid, but this is the first crazy idea I've ever heard for generating a billion dollars out of nothing that could actually work. I mean, ever. You win some kind of award.
"Craigslist users matter. 100 million lives could be saved by a billion dollars. That's ONE banner ad on CL for five years - for charity. Craig'll do it if we ask for it. We just need to ask."
194 characters.
Hotlink the word "ask" to a page with a larger pitch that's one more click away from the place they need to type. The $20k-per-person thing goes in the larger pitch.
I think the people doing it would have to actually be regular CL people. Maybe see about checking first with CL forums in big metro areas... make a meetup out of it maybe? I don't personally think it's money "out of nothing" though. Even if people don't feel the "epistemic pain", I suspect ads do impose a dust-speck-style cost on their viewers.
But those numbers are grossly wrong. $10 per life saved isn't true of any easy-to-explain method.
If you believe you are obligated to help others, you will instinctively come up with justifications why helping others means doing what you wanted to anyway, instead of selling off all your earthly possessions to feed the starving.
Of course, you can mitigate this by, y'know, actually trying.
You can only maximize one variable, and if you're maximizing altruism, you're not maximizing truth.
This is only necessarily the case if you're on the Pareto frontier, which no human is. There are reasons to think that there are sometimes better ways to optimize X than trying to optimize X. (I agree that an altruist and a truthseeker would and (by their preferences) should do different things, but it's not as simple as you make it sound.)
I never understood how this morality worked. The problem I see with this view is that you are double-counting the value of money.
The $10 doesn't leave the system, and everyone who touches it has just killed a whole slew of people by sending it somewhere other than aid. Why are you carrying the moral burden?
Even if you did send it to aid, you can blame them for charging $10 for their work instead of $9. (Or whatever company is selling the rice, nets, stoves, filters, bottles, condoms.)
You could even blame the person receiving the aid for using the aid instead of giving it to someone less fortunate. Or using less of it. Or selling it for $11 and putting the extra money back into aid.
Somewhere in here something goes horribly wrong and it gets ridiculous. Where did I misstep?
EDIT: I really don't want to give the impression that you shouldn't give money or help people less fortunate than yourself. I think these are great things. I just don't understand the jump from "I bought a latte" to "I killed people."
When your employer pays you $10, it's not as simple as him having $10 and giving it to you. You, in part, created that $10 out of nothing.
Otherwise, what would be the point of hiring you in the first place?
You speak a little as if Eliezer is literally a quadrillion times more concerned about the future of humanity than he is about a single sick child he meets on a train. This would be absolutely impossible for a human being. Though recognising the error of scope insensitivity will and should change the extent of your emotional reaction some, "shut up and multiply" can't sensibly be an injunction to actually scale them to match. We can't feel these numbers, but we can and should think them.
I haven't read the other comments here and I know this post is >10yrs old, but…
For me, (what I'll now call) effective-altruism-like values are mostly second-order, in the sense that a lot of my revealed behavior shows that a lot of the time I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed rational side of my brain wins out and I do something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don't really detect in myself a symmetrical second-order want to NOT want to help strangers. So that's one thing that "Shut up and multiply" has over "shut up and divide," at least for me.
That said, I realize now that I'm often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor's occasional desire to help strangers and suggest they generalize it, but I don't symmetrically appeal to their clearer and more common disinterest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that's a more complicated conversation.
I (unfortunately) keep misreading "Shut up and multiply" to instead say "Shut up and procreate", to significant humorous (at least to me) effect.
Personally, I think the correct thing to do is to recognize that a simple abstraction like "number of people involved" isn't the only thing that is relevant to deciding whether a course of action is appropriate.
Note that the behavioral consequences of "shut up and multiply" and "shut up and divide" are largely the same in this particular case... both argue that one should ignore Amanda's situation because she's only one person, and based on raw numbers she (as well as you, and I, and pretty much every individual person on the planet) doesn't really matter relative to the rest of the world in aggregate.
The big behavioral consequence of the two paths (multiplication versus division) seems to be the distinction between taking one's personal selfishness (say, the objective fact that you'd cry more if a fingernail were ripped off than if you heard of the death of 1000 strangers on the far side of the planet) to mean that you really would or should choose to preserve your fingernail over the people, if the choice was somehow actually presented to you in reality. That is, the theories have different consequences only in one's behavioral orientation towards "...
I divide likewise.
In fact, I'm more disgusted by people who care about small groups but not large groups than I am by the mass suffering of large groups itself.
Shut up? Maybe not. Divide? Yes: divide labor.
We don't all care about exactly the same things; we may have, as some philosopher has doubtless put it, different "moral tastes". But these tastes probably vary continuously, and there's bound to be enough overlap to make effective cooperation possible.
There probably isn't anybody else here who cares about the Knox case to quite the same extent that I do; but there are a fair number who care about it enough to have had a discussion about it. And I expect that even those (such as yourself) who don't ca...
I don't know what kind of person this makes me but I was interested in the Knox case because it was a fun puzzle that involved sex, drugs, murder, conspiracy, abuse of power, and Satanic orgies.
More on what you're actually asking, once I process it.
I think I'm with Wei in his analysis - resolving the inconsistency from the top down, not from the bottom up.
I accept that our feelings of empathy and compassion are something evolution came up with in order to make us function decently in small groups. I accept that this empathy works only for small groups, and cannot scale to groups that are too large for everyone to keep track of each other. Maintaining cohesion and functionality in larger groups requires formal mechanisms such as hierarchy and money, and empathy is at best of marginal value, or at wors...
It's also interesting to see how karma on this site falls steadily with honesty...
People downvote views that are ill-defined, poorly thought out, impolite, morally repugnant or just dumb. The fact that someone might hold such views honestly is basically irrelevant.
The only way failing to save lives can be equated with killing people is by subscribing to pure utilitarianism. But by that philosophy, contraception is also equivalent to killing people: the end result is that fewer people are alive than in the counterfactual case where you had children. The counterargument that contraception is not immoral because you aren't obliged to have children is fine, but it also applies to the other case: you aren't obliged to give your money away either. In other words, we don't actually subscribe to pure utilitarianism, so we s...
The problem with the Amanda Knox case is not scale, but distance. The farther away something is, the less influence I have over it, and the less influence it can have over me. The Knox case is far away in every sense - it's in a different country, a different time (the court case is already over), and a different language. It's like watching a year-old YouTube video of an already-fired policeman from a town I've never heard of abusing his power - lots of people do it, but it's just getting riled up for no reason.
On the other hand, the point of the original L...
This is a form of cognitive dissonance, where you notice your actions and your values are incongruent, and the resulting discomfort motivates you to reduce the gap between them. You can change your actions and leave your values the same, leave your actions the same and change your values, or somewhere in between.
Other people much, much prefer you change your actions - this is because your values are the guilt-free way of manipulating you. If I want Albert to make a paperclip, and I know Albert also wants to make a paperclip, then I can motivate Albert by ...
I wonder if this is resolvable by biting the bullet that we don't care equally about all humans. "Shut up and multiply" should more properly be called "shut up and sum".
You can't just divide the sum equally either - you have to realize that your preferences are different for different segments of the population.
The title of this post jumped out at me. From a comment of mine, long ago:
Maximize happiness in the individual ... I say, "in the individual", in strong opposition to dust specks. I remain puzzled by why the "shut up and multiply" maxim would not be accompanied by "shut up and divide". (That is, 3^^^3 specks / 3^^^3 individuals = no pain.) I remain open to good arguments to the contrary - I haven't read one yet.
EDIT: That last sentence is no longer true. I regard this comment by Eliezer as the best argument I've seen, a...
I've had both factors: a diminished caring for individual cases, and an increased caring for humanity. Some sort of mixed dividing/multiplying going on here...
WRT not caring about the Knox case, I think the reason I don't care is that I don't have the cognitive faculties to care about everything I feel like I should care about.
That being the case, I don't see anything wrong with using things that I do care about as filters to bring things to the fore for me to care about.
For example, I care about various home-brew energy solutions for poor people in developing countries because those things are related to other things that I already care about and am interested in.
The Knox case, as far as I can tell, barely brushes things that I'm already interested in. It may be a very big injustice, but there's lots of those.
There is no general reason to be so concerned with your emotional consistency as to want to modify your emotions. You and Eliezer might simply be abnormally concerned with being consistent, or being perceived as such.
What ethical principles can we use to decide between "Shut Up and Multiply" and "Shut Up and Divide"?
Why do we have to decide between them? Long before I ever heard of "Shut Up and Multiply," I used a test that produced the same results, but worked equally well for "Shut Up and Divide." My general statement was, "Be consistent." I would put things in the appropriate context and make sure to apply similar value functions regardless of size or scope - or, perhaps to phrase it better, making sure my consist...
During a recent discussion with komponisto about why my fellow LWers are so interested in the Amanda Knox case, his answers made me realize that I had been asking the wrong question. After all, feeling interest or even outrage after seeing a possible case of injustice seems quite natural, so perhaps a better question to ask is why am I so uninterested in the case.
Reflecting upon that, it appears that I've been doing something like Eliezer's "Shut Up and Multiply", except in reverse. Both of us noticed the obvious craziness of scope insensitivity and tried to make our emotions work more rationally. But whereas he decided to multiply his concern for individual human beings by the population size to an enormous concern for humanity as a whole, I did the opposite. I noticed that my concern for humanity is limited, and therefore decided that it's crazy to care much about random individuals that I happen to come across. (Although I probably haven't consciously thought about it in this way until now.)
The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought. It can't be the case that both of these ways to change how our emotions work are the right thing to do, but the apparent symmetry between them seems hard to break.
What ethical principles can we use to decide between "Shut Up and Multiply" and "Shut Up and Divide"? Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
And an interesting meta-question arises here as well: how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist? And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they've become an essential part of us?