One of the major problems I have with classical "greatest good for the greatest number" utilitarianism, the kind most people think of when they hear the word, is that people act as if its prescriptions were still rules handed to them from on high. When given the trolley problem, for example, people think you should save the five rather than the one for "shut up and calculate" reasons, and that they are just supposed to count all humans exactly the same because those are "the rules".
I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea. The only way to get moral weights is from your personal preferences. Do you find that you assign more moral weight to friends and family than to complete strangers? That's perfectly fine. If someone else says they assign all humans equal weight, well, that's their decision. But when people start telling you that your weights are assigned wrong, then that's a sign that they still think morality comes from some outside source.
Morality is (or, at least, should be) just the calculus of maximizing personal utility. That we consider strangers to have moral weight is just a happy accident of social psychology and evolution.
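One rough way to write that down (just a sketch, with made-up notation: u_i is agent i's welfare, and w_i is the weight your own preferences happen to assign to agent i):

\[
U_{\text{you}} \;=\; \sum_{i} w_i \, u_i , \qquad w_i \ge 0
\]

Classical "count everyone the same" utilitarianism is then just the special case where w_i = 1 for every human; caring more about friends and family just means picking larger w_i for them, and nothing outside your own preferences fixes those numbers.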
From what little I know about EA, it tends to mix together two issues: "Whom to care about?" and "How best to care about those you care about?" This is probably due in part to the word "care" having multiple meanings in English, but certainly not entirely.
However, I have seen little discussion of the earlier part of the spectrum (friends and family vs. strangers), and it seems to be the one on which our intuitions agree most reliably - which is why I think it deserves more of our attention (and having clear ideas about it might help with the rest).
I think, like you point out, this gets into near / far issues. How I behave around my family is tied into a lot of near mode things, and how I direct my charitable dollars is tied into a lot of far mode things. It's easier to talk far mode in an abstract way (Is it better to donate to ease German suffering or Somali suffering?) than it is to talk near mode in an abstract way (What is the optimal period for calling your mother?).
This was a big debate in ancient China: the Confucians considered it normal to have “care with distinctions” (愛有差等), whereas Mozi preached “universal love” (兼愛) in opposition, claiming that care with distinctions was a source of conflict and injustice.
The Spring and Autumn period definitely seems relevant, and I think someone could get a lot of interesting posts out of it.
Yep, I've been reading a fair amount about it recently; I had considered first making a "prequel" post talking about that period and about how studying ancient China can be fairly interesting, in that it shows us a pretty alien society that still had similar debates.
I had heard from various sources that Confucius said it was normal to care more about some people than others, and it took me a bit of work to dig up what that notion was called exactly.
How does "caring for your friend’s and family" fit in a consequentialist/utilitarian framework ?
If you have a desert-adjusted moral system, especially if combined with risk aversion, then it might make sense to care for friends and family more than others.
You want to spend your “caring units” on those who deserve them; you know enough about your friends and family to determine that they deserve caring units, and you are willing to accept a lower expected return on your caring units to reduce the risk of giving to a stranger who doesn’t deserve them.
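To make the risk-aversion part concrete (all numbers invented purely for illustration), you could score an uncertain gift of a caring unit by its expected payoff minus a penalty proportional to its variance:

\[
V \;=\; \mathrm{E}[v] \;-\; \lambda \,\mathrm{Var}[v]
\]

A family member whose desert you already know yields v = 1 for sure, so V = 1. A stranger who is deserving with probability 0.5, and who would yield 2.4 if deserving (bigger impact) and 0 otherwise, has E[v] = 1.2 and Var[v] = 0.25 × 2.4² = 1.44; with λ = 0.5 that gives V = 1.2 - 0.72 = 0.48. The risk-averse giver picks the family member even though the stranger offers the higher expected return.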
Now to debate myself…
What about that unbearable cousin? A family member, but not deserving of your caring units.
Also, babies. If an infant family member and a poor Third World infant both have unknown levels of desert, shouldn’t you give to the poor Third World infant, assuming this will have a greater impact?
The impression I get when reading posts like these is that people should read up on the morality of self-care. If I'm not "allowed" to care for my friends/family/self, not only would my quality of life decrease, it would decrease in such a way as to make it harder and less efficient for me to actively care about (e.g. donate to) people I don't know.
But is caring for yourself and your friends and family an instrumental value that helps you stay sane so that you can help others more efficiently, or is it a terminal value? It sure feels like a terminal value, and your "morality of self-care" sounds like a roundabout way of explaining why people care so much about it by making it instrumental.
I don't know. I also don't know if terminal values for utility maximizers and terminal values for fallible human beings perfectly line up, even if humans might strive to be perfectly selfless utility maximizers.
What I do know is that for a lot of people, the amount of utility they can produce in practice goes up when they have friends and family they can care about. If you forbid people from self-care, you create a net decrease of utility in the world.
I think ultimately, we should care about the well-being of all humans equally - but that doesn't necessarily mean making the same amount of effort to help one kid in Africa and your brother. What if, for example, the institution of family is crucial for the well-being of humans, and not putting your close ones first in the short run would undermine that institution?
If that were the real reason you would treat your brother better than one kid in Africa, then you would be willing to sacrifice a good relationship with your brother in exchange for saving two good brother-relationships between poor kids in Africa.
I agree you could evaluate impersonally how much good the institution of the family (and other similar things, like marriages, promises, friendship, nation-states, etc.) creates, and thus how "good" our natural inclinations to help our family are (on the plus side: sustains the family, an efficient form of organization and child-rearing; on the down side: can cause nepotism). But we humans aren't moved by that kind of abstract consideration nearly as much as we are by a desire to care for our family.
we should care about the well-being of all humans equally - but that doesn't necessarily mean making the same amount of effort to help one kid in Africa and your brother.
We have the moral imperative to have the same care for them, but not to act in accordance with equal care? This is a common meme, if rarely spelled out so clearly. A "morality" that consists of moral imperatives to have the "proper feelings" instead of the "proper doings" isn't much of a morality.
I don’t really know, and because of that, I don’t know whether or not I count as a consequentialist
Consequentialism just means the rightness of behaviour is determined by its result. (The World's Most Reliable Encyclopaedia™ confirms this.) So you can be a partial (as in not impartial) consequentialist, a consequentialist who thinks good results for kith & kin are better than good results for distant strangers.
As for utilitarianism, it depends on which definition of utilitarianism one chooses. Partiality is compatible with what I call utilityfunctionarianism (and with additively-separable-utility-function-arianism), but contradicts egalitarian utility maximization.
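To spell out that contrast with some throwaway notation (u_i being person i's utility, w_i a weight), the three readings correspond to successively narrower classes of utility functions:

\[
\underbrace{U = f(u_1,\dots,u_n)}_{\text{any utility function}}
\;\supset\;
\underbrace{U = \textstyle\sum_i w_i \, u_i}_{\text{additively separable}}
\;\supset\;
\underbrace{U = \textstyle\sum_i u_i}_{\text{egalitarian}}
\]

Each form is a special case of the one to its left; partiality toward kith & kin is fine under the first two (just use unequal weights), but the egalitarian form on the right fixes every weight at 1 and so rules it out.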
Some moral questions I’ve seen discussed here:
Yet I spend time and money on my children and parents that might be “better” spent elsewhere under many moral systems. And if I cared as much about my parents and children as I do about random strangers, many people would see me as somewhat of a monster.
In other words, “commonsense moral judgements” find it normal to care differently about different groups, in roughly decreasing order:
In consequentialist / utilitarian discussions, a recurring discussion is “who counts as agents worthy of moral concern” (humans? sentient beings? intelligent beings? those who feel pain? how about unborn beings?), which covers the later part of the spectrum. However, I have seen little discussion of the earlier part of the spectrum (friends and family vs. strangers), and it seems to be the one on which our intuitions agree most reliably - which is why I think it deserves more of our attention (and having clear ideas about it might help with the rest).
Let’s consider two rough categories of decisions:
Impartial utilitarianism and consequentialism (like the question at the head of this post) make sense for impersonal decisions (including when an individual is acting in a role that requires impartiality - a ruler, a hiring manager, a judge), but clash with our usual intuitions for personal decisions. Is this because under those moral systems we should apply the same impartial standards to our personal decisions, or because those systems are only meant for discussing impersonal decisions, and personal decisions require additional standards?
I don’t really know, and because of that, I don’t know whether or not I count as a consequentialist (not that I mind much apart from confusion during the yearly survey; not knowing my values would be a problem, but not knowing which label I should stick on them? eh, who cares).
I also have similar ambivalence about Effective Altruism:
Scott’s “give ten percent” seems like a good compromise on the first point.
So what do you think? How does "caring for your friends and family" fit into a consequentialist/utilitarian framework?
Other places this has been discussed:
Other related points: