I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea.
Suppose I get my weights from outside of me, and you get your weights from outside of you. Then it's possible that we could coordinate and get them from the same source, and then agree and cooperate.
Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.
I think ultimately, we should care about the well-being of all humans equally - but that doesn't necessarily mean making the same amount of effort to help one kid in Africa and your brother. What if, for example, the institution of family is crucial for the well-being of humans, and not putting your close ones first in the short run would undermine that institution?
If that were the real reason you treat your brother better than one kid in Africa, then you would be willing to sacrifice a good relationship with your brother in exchange for saving two good brother-relationships between poor kids in Africa.
I agree you could evaluate impersonally how much good the institution of the family (and other similar things, like marriages, promises, friendship, nation-states, etc.) creates, and thus how "good" our natural inclinations to help our family are (on the plus side: it sustains the family, an efficient form of organization and child-rearing; on the down side: it can cause nepotism). But we humans aren't moved by those kinds of abstract considerations nearly as much as we are by a desire to care for our family.