To me, the main objection to morality being all-demanding is that there are too many groups who want to enlist you into their all-demanding moralities, and most such groups have been wrong. Basically having too much regard for morality received from others is a security flaw in human nature, and proselytizing faiths are the exploit of that flaw. For that reason I think everyone needs to have a little bit of selfish amorality, being able to tell anyone "don't preach at me".
The objection to this objection is "why stop there?" If we can ignore moral edicts because they're too demanding (or because we can find competing moral demands from other groups), why can't we just ignore all of them?
everyone needs to have a little bit of selfish amorality
Everyone SHOULD have, or everyone DOES have and we have to deal with it, even though it makes us less moral as a group? And since we're talking about morality, HOW MUCH selfish amorality should each agent have?
If we can ignore moral edicts because they’re too demanding (or because we can find competing moral demands from other groups), why can’t we just ignore all of them?
Because if you ignore all of them, people won't give you nice things. You've got to give something to get something. But if you tell me that I ought to give much more than I get, otherwise I'm evil, then I'm liable to say "don't preach at me" and shut the door.
That sounds a lot more like trade than morality to me. I'm personally pretty far down the anti-realism road, but I think there are different heuristic modules at play.
To me, trade and morality overlap more than they differ. For example, if someone did me good, even if they didn't put any conditions on it, I instinctively want to do them good in return, more than I want to do good to other people (who might need it more).
Also, the more sensible objection to demandingness is that it alienates or exhausts people. This can lead not to moderation, but to a wholesale rejection of the ideology.
Ask for a 10%/year donation to effective charities, and a fair number of people can do it or strive for it. Ask for everyone to live at the level of refugees in order to maximize their donations, and few are going to listen to your ideas on what constitutes an effective charity.
Some comments, in no particular order:
Analogizing ethical to empirical claims here only makes sense under a “naive moral realist” view of ethics. Otherwise, the notion that “morality can be any old way” (by analogy to “reality can be any old way”) is, at the least, not obvious, and possibly just stops making sense.
“I have some terrible disease that will cause me to lose my legs if untreated, and the only available treatment for it is very expensive. But if losing my legs were very bad, that would imply that I should pay for this treatment. This is evidence that losing my legs isn’t so bad.”
There is, of course, a missing (but implied) step in this reasoning, which is something like “if something very bad is going to happen to me unless I pay for treatment, then I should pay for treatment”. This seems obvious, but if you omit it, then the reasoning no longer works, because this is the crucial connecting link between the ‘is’ of “losing my legs is very bad” and the ‘ought’ of “I should pay for this treatment”. What makes it important to notice this is that such connecting links “screen off” the ‘is’ part of the chain of reasoning from evidence about the ‘ought’ part. In this case, the corrected reasoning goes something like this:
“I have some terrible disease that will cause me to lose my legs if untreated, and the only available treatment for it is very expensive. But if something very bad is going to happen to me unless I pay for treatment, then I should pay for treatment. Therefore, if losing my legs were very bad, that would imply that I should pay for this treatment. This is evidence that it is not the case that ‘if something very bad is going to happen to me unless I pay for treatment, then I should pay for treatment’. (It cannot be evidence that losing my legs isn’t so bad, because that’s an ‘is’ claim, screened off from evidence that bears on ‘ought’ claims by the ‘connecting link’, which is the earliest part of the chain of reasoning that can be brought into doubt by such evidence.)”
Now the reasoning is not obviously suspect. (Perhaps it is still suspect, but if so, it is only in a more subtle way.) Indeed (and unfortunately), many people in this country find themselves reasoning in just this way all the time—and, given the circumstances, I find it difficult to blame them.
As I see it, demandingness objections are best understood as some combination of two reactions to a moral claim:
If you demand so much, many/most/almost-all people simply won’t do it, and a moral rule that (almost) no one follows is vacuous. (This response implicitly takes it as axiomatic that the purpose of ethics is to guide action, and if an ethical rule does not in fact work to guide anyone’s actions, then it may as well not exist.)
“Should implies can”. You demand more than people can do. This is a fairly straightforward intuition that ethical rules that call for impossible actions cannot be “right”. It can be extended/generalized to something like “the rightness of an ethical rule varies (in part) proportionally to how successfully it can be followed”.
In short, a “demandingness objection” to a moral claim is a response that says “people can’t and/or won’t follow the ethical rule you posit [and therefore the rule is invalid and the claim is false]”.
This idea does not seem to take into account second-order desires / values / etc. One perspective on ethical theories is that they are ways to systematize our values, including our higher-order values, so as to make it easier / more effective to satisfy them. On this view, I might “really care” about something, and thus “really want to” accomplish or gain that something, but on the other hand, I might also care about other things, and care about what sorts of things I care about, and whether I’m a caring person, etc., etc. A moral claim that places upon me an unlimited demand of some sort thus wouldn’t be a mere extension of caring about the things I supposedly (morally) value; it might well oppose some competing or higher-order value of mine.
Demandingness can apply to objective claims as well. “There is a storm coming that will destroy everything.” The fact that stuff exists now makes such a storm less likely: not impossible, but certainly less likely than a storm of that severity would otherwise be. Existence can’t be so hard as to preclude life; we really did catch a break. The anthropic principle seems directly analogous to what you are getting at.
Many people believe ethics is situated. Just as knowledge is known by someone, moral acts are done by someone and towards some end. The fact that, on the margin today, you can pay a charity to help save lives does not change the need at its base level. It is kind of weird to care as much about people you will never meet as your friends and family or even your mere neighbors. Current mainstream philosophy may agree with you but the public at large remains unconvinced.
Budgetary demands for all available resources really are different in kind from budgetary demands merely to be included for consideration at all. Something that can claim all available resources closes off every other value from being traded against it. If human lives are the only thing that matters, better not donate to the EFF. Is it unethical to care about an open internet? Being underfunded is better than not being funded at all. Once something garners no resources, whether time or capital, the capacity dies outright.
Good consequentialism may want to consider how things have actually played out historically. Prosperity has not come from making man a more generous animal. On the contrary, modern technology seems to require a market for toys and novelty. Cell phones for jet-setting egotists in the ’80s have done more to liberate today’s global south than any amount of charity dollars. It is even arguable that if an agrarian society were to maximize alms for all, it might indefinitely forestall an industrial revolution, immiserating the generations to come for the sake of present comforts.
A full discussion of the many reasons not to grab the nearest, very demanding idea that currently seems compelling to you, and to make extreme sacrifices for its sake, is beyond the scope of this post.
I would like to read such a post. I've never seen a philosopher make a serious effort to argue against totalitarian ethics.
My own response is to ignore anything that sounds like Insanity Wolf, but that isn't an argument.
This is a useful exploration of this form of objection, and is a start toward the underlying question: what is a moral claim? I think to a non-realist, who thinks moral frameworks are heuristics for influencing other agents' (and components of our own) behavior, this is not much of an issue. "Too much to demand" is exactly that: more than the other agents are likely willing to comply with.
It's trickier for other views of morality. If you think there IS a truth to the matter beyond social perception and individual judgement, then you need some way to balance the competing desires of agents: either accept that most agents are immoral on many margins, or hold that the moral action is a fairly complex calculation of values and costs, perhaps made more complicated by the different moral weights of the various dimensions of costs and values.
(Cross-posted from Hands and Cities)
People sometimes object to moral claims on the grounds that their implications would be too demanding. But analogous objections make little sense in empirical and prudential contexts. I find this contrast instructive. Some ways of understanding moral obligation suggest relevant differences from empiricism and prudence. But the more we see moral life as continuous with caring about stuff in general, the less it makes sense to expect “can’t-be-too-demanding” guarantees.
I. Demandingness objections
I’ll understand a “demandingness objection” as an objection of the following form: “Claim X implies that some very demanding course of action is called for. That implication is implausible, precisely because it is so demanding. So this counts as evidence against X.”
I’ll construe “demanding,” here, as something like “very costly.” More precise definitions might shed more light.
I’m most familiar with demandingness objections in the context of various moral claims. For example, demandingness is often offered as an objection to utilitarianism — a view which implies, naively, that you’re obligated to devote all of your time and resources to helping others, up to the point where you are improving the lives of others less than you are reducing the quality of your own life (I think that various non-consequentialist views actually have similar implications). But it comes up in other contexts as well. For example, people sometimes argue that we should discount the moral importance of what happens to future generations of humans, because not doing so implies, naively, that we should sacrifice a lot now in order to make the future better.
Below, I discuss a few ways of making sense of these objections: I think they often draw on important heuristics, and should be taken seriously in various ways. And I think they may make sense on some conceptions of moral obligation. But I also worry that they can obscure the basic contingency of how what matters to us can be arranged in the world.
II. Applying the objection to empirical and prudential claims
Considered abstractly, demandingness objections of the form listed above seem quite strange. Consider, for example, how such objections sound in the context of empirical claims: “This map implies that I made a wrong turn many miles back, and that I need to turn the car around. But turning around would be very costly. So the map is probably wrong.”
The empirical world, it is generally acknowledged, doesn’t come with any “won’t-be-demanding” guarantee. It can just be, well, any old way. (Or at least, we recognize this in a cool hour. Whether our epistemology always reflects this fact is a different question; and when it seems like it doesn’t, the heuristics involved aren’t necessarily unreasonable.)
I think we implicitly recognize something similar in the context of non-moral claims about what is valuable or worth doing, even if we hold empirical facts fixed. Consider: “If true love were extremely awesome, that would imply that it’s worth an exhausting climb to reach my true love on the mountaintop. But that climb is too demanding. So true love must not be so awesome.”
Indeed, I think the strangeness of demandingness objections in the context of evaluative claims like “true love is extremely awesome” is a fairly direct consequence of their strangeness in the context of empirical claims. If the empirical world can be “any old way,” then it is an empirical question what sorts of mountains you might need to climb, in order to reach your true love. But the value of true love is (basically) independent of the topography of the world’s mountain ranges, and the locations of the lovers. Your true love is just as beautiful and kind, whether he/she is on top of Everest, or just down the block. So to a first approximation, you shouldn’t change your estimate of the value of being with him/her, upon learning of his/her location. God didn’t set the value of true love, then set the mountain ranges, then adjust the value of true love to make sure that it wouldn’t be worth climbing any mountains to get.
Put more generally: if the value of a thing is independent of its cost, you shouldn’t lower your estimates of the value, upon learning that you live in a world where you have to pay high costs to get it. If a restaurant is worth driving 100 miles to get to, it stays that way, whether it turns out to be 1, 10, or 50 miles away from your house. You might think harder about how good the food really is, before setting out on a longer drive. But the food doesn’t get worse depending on how far away you happen to live.
III. Making mistakes vivid
I find it helpful, in this context, to try to make vivid to myself what kind of mistake I would be making if the empirical and evaluative situation really were the intuitively “demanding” way in the examples above, but I refused to accept this: if I refused, say, to turn the car around, or to drop out of college, or to get the treatment, or to climb the mountain.
Thus, for example, if I keep driving after having made a wrong turn, because the implications of having made a wrong turn are too “demanding,” I’m not doing myself any favors. I’m not “getting away with it” — defecting on the real world, with all the costs it “demands” I pay, but successfully cocooning myself in some superior fantasy world. Rather, I’m just continuing to drive in the direction opposite to where I want to go. I’m just moving further and further away from where I want to be, because I don’t want to look at where I am.
Similarly, if I would become a billionaire if I pursued that start-up idea, and this would in fact be well worth dropping out of college and braving the disapproval of my parents for, there’s nothing clever about pretending that it wouldn’t be, or that the idea wouldn’t work: I’ll just be paying a billion dollars for a college degree and more comfortable conversations at Christmas. If, upon really understanding what it would be like to lose my legs, I’d prefer to pay for the treatment, there’s no sense in not paying: I’ll just be left legless, with more money, but less of what I want overall. If true love is worth an exhausting climb, all I’m doing, by refusing to climb, is turning away from something I care about deeply, for the sake of something I care about less.
I think that naively applying “demandingness” objections to morality can risk mistakes of this kind.
IV. Is morality different?
Demandingness objections are much more common in the context of moral claims — in particular, claims about what actions are morally obligatory, as opposed to merely “good,” “admirable,” “supererogatory,” etc — than in the context of empirical or prudential claims. Why might this be? Does the moral world come with some “won’t-be-demanding” guarantee, that the empirical and prudential worlds do not?
It’s worth noting that some very common-sensical moral claims — including non-consequentialist ones — are quite demanding. Thus, for example, we tend to think that if you’re driving to the hospital to save yourself from imminent death, but getting there in time requires running someone over on the way, morality prohibits you from running them over: you have to let yourself die instead (I discuss this in more depth here). Indeed, proponents of demanding moral claims sometimes try to argue for the plausibility of their view by appealing to our willingness to accept extreme moral demands in extreme circumstances, like war or emergency (see Sterri (2020) for some discussion of understanding these types of demands within a framework of “informal insurance”; and see also Shulman (2012) for discussion of ways helping out in emergencies can be in an agent’s self-interest).
In this sense, I don’t think “can morality be demanding” is really the question: we tend to be quite open to the possibility that it can, in certain special (and hopefully, unlikely) circumstances. Rather, what prompts the strongest “demandingness objection” is the idea that moral demands can or do have a certain kind of totalizing and omnipresent quality. It’s one thing, one might think, to save a child from drowning in a pond; one thing, even, to donate to save the life of a child in the developing world; but it is quite another if this must become the only thing one is doing, all day, every day, up to whatever point stops being best for the children involved — if your “obligations” eclipse all other aspects of your life, and all your other interests and concerns are given space almost entirely for instrumental reasons. The idea that that’s obligatory seems, to many, quite counterintuitive. And certainly, it fits poorly with our actual patterns of social reproach.
Perhaps, then, morality comes with some sort of “won’t-be-demanding-in-normal-circumstances/in-a-totalizing-type-of-way” guarantee? If so, note that proponents of demanding moral theories can still argue that circumstances aren’t relevantly “normal” — for example, that global poverty and the rest of the world’s problems constitute a type of “ongoing emergency,” which familiar standards of conduct aren’t equipped to handle. Or, to put a similar point in somewhat different terms, a naive guarantee of this form seems strangely insensitive to what the “normal circumstances” of the actual world really are. Surely, one might think, the extent of morality’s demands should depend at least in part on what sorts of opportunities for impact are in play. Surely it matters, that is, whether every ~$3000 donation saves one life, or ten, or ten thousand, or ten million. But if, at ten thousand lives, one starts tolerating various types of “demandingness,” we might start to wonder about one life, too.
Still, though, I think that certain conceptions of moral obligation may well make room for certain “totalizing demandingness limitations” of this form. For example, if we think of morality as a set of norms and heuristics we use to govern our communal life together in mutually beneficial and/or interpersonally justifiable ways (or which we would agree to use, from some veil-of-ignorance-like epistemic perspective), violation of which we all agree to respond to with various degrees of social sanction (regardless of our personal concerns), it may well make sense to argue that norms that demand too much, and which will be predictably violated on too widespread a scale, either aren’t practically viable, or wouldn’t meet relevant standards of “mutually beneficial” or “interpersonally justifiable.” Put another way: if we think of moral obligations as akin to taxes, imposed by our communal life, it may well make sense to argue that the taxes can’t be that high. Otherwise, not enough people will pay them; and not enough people would elect, or agree to be governed by, a government that imposed them (though also, from a suitably veil-of-ignorance type perspective, maybe everyone would so agree: cf. Harsanyi (1955)).
Indeed, utilitarians who think very demanding behavior obligatory still tend to want to exempt violations of the relevant obligations from the sorts of psychological and social implications with respect to e.g. guilt, blame, punishment, reputational damage, etc. that standard sorts of moral-norm-violation involve (this is related, I think, to the sense in which standard notions of “obligation” don’t have an obviously comfortable home in consequentialist ethics in general — see section VI of my previous post for discussion). In this sense, it may be better to think of such claims as suggesting that demanding behavior is obligatory*, to be distinguished from the type of obligatory-without-the-asterisk it is to e.g. actually rescue a child drowning in the pond in front of you. Plausibly, it is obligation-without-the-asterisk that our moral intuitions are most attuned to, and from which most resistance to demandingness stems. And to the extent we’re adding asterisks in front of the “obligations” to which demandingness applies, this might suggest that we’ve changed the topic.
V. Care
So overall, I think some conceptions of morality may make expecting some type of “demandingness constraint” more reasonable in the context of moral claims than in the context of empirical or prudential ones. In particular, I think conceptions of morality as some sort of empirically-informed, compromise arrangement between agents with different values might support such constraints (though whether these conceptions will fit with our other intuitive moral commitments is a further question).
That said, the more you think of a given morally-flavored endeavor (for example, helping others, or making the world a better place) as rooted in and continuous with caring about the goals of that endeavor (as opposed to as something that constrains your pursuit of what you care about, and forces you to “sacrifice” something you care about more for something that you care about less), the less, I think, it makes sense to expect that endeavor to obey some sort of “can’t-be-too-demanding” constraint; and the more similar demandingness objections, with respect to that endeavor, start to sound to the empirical and prudential versions above — at least to my ear.
That is, just as the empirical contingency of the world might arrange what’s in your narrow self-interest in any old way, so too may it arrange what you would care about on reflection in any old way (I am assuming, here, that your caring about something doesn’t automatically class it under your “self-interest”). Just as, if you really understood what was at stake, you might want to climb the mountain, or to get the treatment, so too, if you really understood what was at stake, you might want to donate, or to put a lot of energy into helping with some cause — even when doing so involves significant trade-offs with other things you value. It would’ve been preferable if the world were arranged such that fewer of such trade-offs were required — just as it would be preferable if the treatment were less expensive, or the mountain less rugged. But high prices can be worth paying — and it’s an empirical question what’s on sale, and at what price.
I find the notion of something being “worth it” helpful in this context. The treatment is expensive, but worth it; the climb is exhausting, but worth it. And to the extent that something is “worth it,” I find it useful to remember and visualize the perspective from which it would be seen as such — a perspective that holds vividly and accurately in mind what is at stake on both sides of the scale; which feels, rightly, the weight of each; and which chooses with clarity. (Though I also think it’s important not to prejudge which choices are “worth it” and which are not, and then to seek imaginative confirmation of this judgment).
Indeed, the vibe of “demandingness” can seem ill-suited to such a perspective. Experiencing something as “demanding” suggests a kind of internal tension, maybe even an experience of being “coerced” or “forced.” One “has” to do something, even if one does not want to. But if something is really worth it, from the perspective of what you care about, then a sufficiently informed and understanding perspective would result, I think, in a kind of wholeheartedness about it — a kind of clarity and unity of purpose that “demandingness” does not connote.
And when one chooses not to engage in some “demanding” behavior, I think it’s at least worth exploring whether one can make that choice wholeheartedly, as well.
VI. Reasons to be wary of demandingness
In general, I think that the abstract arguments for very “demanding” forms of moral behavior are very strong (even for non-consequentialists). But I think that the fuzzier, harder-to-articulate heuristics, evidence, and forms of wisdom encoded in our hesitations about such behavior are important sources of information as well. And I worry that for a certain type of person, the abstract arguments — including arguments of the type I’ve gestured at here — will function (especially in a context of certain kinds of guilt and self-hatred) as bulldozers, or as tools for internal coercion, or as disruptions to a way of relating to these issues that was functional and good, even if not theoretically articulated.
Indeed, my sense is that especially in some activism/social impact-oriented communities, people’s psychological relationship with the omnipresence of the possibility of “doing more” is often somewhat delicate. Even for people who have made their peace with what they feel they can do, the peace can be somewhat uneasy: one senses, sometimes, an internal struggle that has been, at times, painful, resulting in a possibly-somewhat-fragile equilibrium that a person might reasonably desire to protect.
Intense and complicated psychological relationships to this kind of stuff make sense. The idea that one could be doing wrong, or failing to protect and promote what you care about most, in high-stakes ways and/or on a widespread scale, is a painful and scary and powerful one — one that can draw on deep fears about the world (including the social world) and about oneself. And the vision of the empirical world at stake is often horrifying in itself, regardless of its implications for our actions. This isn’t about true loves on mountaintops. This is about people dying, needlessly, all around the world; about horrific suffering in places we don’t see, and places we do; about risks that threaten the permanent destruction of everything good and worthy about our civilization; and much else. It’s not a thought experiment.
A full discussion of the many reasons not to grab the nearest, very demanding idea that currently seems compelling to you, and to make extreme sacrifices for its sake, is beyond the scope of this post. The easiest reasons to articulate, I think, are (a) instrumental considerations related to things like “burnout” and “being a healthy strong flourishing person is good in a whole lot of ways”, and (b) uncertainty of the many kinds that should be salient in the context of extreme/socially unusual/irreversible/very costly actions, aimed at sometimes-poorly-understood outcomes and mechanisms, and which you yourself feel a lot of internal resistance to performing. But I think there are a variety of other considerations as well — related, for example, to the complexity and multi-faceted-ness of what we care about; and to the sense in which something’s seeming “totalizing” warrants a lot of caution with respect to it.
The abstract point I want to make is that there are no guarantees — including guarantees about “non-demandingness” — about the way in which what we would care about on reflection might be at stake in our actions. But I think we should be wary about throwing around the weight of the world’s pain, and the stakes of our responses to it, too casually, and especially not as an intellectual bludgeon, or on a moralistic high horse — whether in relationship to others, or to ourselves. And ultimately, the most pressing questions aren’t abstract, or about the circumstances under which we will have “fulfilled our obligations” or “done enough.” They’re not about what the stakes could be in principle. They’re about what the stakes actually are.