Consequentialists see morality through consequence-colored lenses. I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.
Consequentialism1 is built around a group of variations on the following basic assumption:
- The rightness of something depends on what happens subsequently.
It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article. "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act2 consequentialism". I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints. All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple". But the bottom line is: to get a consequentialist theory, the basis of your judgment must be something that happens after the act you judge.
To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.
Deontology judges an act by things that do not happen after it. This leaves facts about times prior to the act, and the time of the act itself, to determine whether the act is right or wrong. These facts may include, but are not limited to, the following (a schematic sketch follows the list):
- The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
- The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
- Historical facts (e.g. having made a promise, sworn a vow)
- Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
- Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
- The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)
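For readers who think better in code, here is a minimal sketch of the shape of that distinction, in Python; every type and field name below is invented for illustration and belongs to no actual theory. The point is only that a consequentialist verdict function takes post-act input, while a deontological one does not.

```python
from dataclasses import dataclass
from typing import Callable

Outcome = str  # stand-in for a post-act state of affairs

@dataclass
class PreActFacts:
    """Facts fixed no later than the act itself (per the list above)."""
    epistemic_state: str   # what the agent believes, or should believe
    act_type: str          # murder, theft, lying, promise-keeping, ...
    history: str           # promises made, vows sworn
    affected_parties: str  # rights, preferences, relationship to the agent
    intention: str         # well-meaning, malicious, deliberate, accidental

# A consequentialist theory judges something that happens subsequently:
ConsequentialistTheory = Callable[[Outcome], bool]

# A deontological theory never looks past the act itself:
DeontologicalTheory = Callable[[PreActFacts], bool]
```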
Individual deontological theories will have different profiles, just like different consequentialist theories. And some of the theories you can generate using the criteria above overlap with some consequentialist theories3. The ultimate "overlap", of course, is the "consequentialist doppelganger", which applies the following transformation to some non-consequentialist theory X:
1. What would the world look like if I followed theory X?
2. You ought to act in such a way as to bring about the result of step 1.
And this cobbled-together theory will be extensionally equivalent to X: that is, it will tell you "yes" to the same acts and "no" to the same acts as X.
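A toy sketch of the doppelganger construction, with an invented world model and an invented theory X, may make the extensional equivalence vivid:

```python
# Toy sketch of the doppelganger transformation. `theory_x` is any
# non-consequentialist theory, modeled as a predicate on acts;
# `outcome_of` is an invented world model mapping acts to outcomes.

def doppelganger(theory_x, outcome_of, possible_acts):
    # Step 1: what would the world look like if I followed theory X?
    x_worlds = {outcome_of(a) for a in possible_acts if theory_x(a)}
    # Step 2: you ought to act so as to bring about such a world.
    return lambda act: outcome_of(act) in x_worlds

# Toy theory X: lying is forbidden, whatever it leads to.
acts = ["tell_truth", "lie"]
theory_x = lambda act: act != "lie"
outcome_of = lambda act: "world_after_" + act

judge = doppelganger(theory_x, outcome_of, acts)
# Extensional equivalence: the same "yes"es and "no"s as theory X itself
# (assuming, as here, that distinct acts have distinct outcomes).
assert all(judge(a) == theory_x(a) for a in acts)
```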
But extensional definitions are terribly unsatisfactory. Suppose4 that as a matter of biological fact, every vertebrate is also a renate and vice versa (that all and only creatures with spines have kidneys). You can then extensionally define "renate" as "has a spinal column", because only creatures with spinal columns are in fact renates, and no creatures with spinal columns are in fact non-renates. The two terms will tell you "yes" to the same creatures and "no" to the same creatures.
But what "renate" means intensionally has to do with kidneys, not spines. To try to capture renate-hood with vertebrate-hood is to miss the point of renate-hood in favor of being able to interpret everything in terms of a pet spine-related theory. To try to capture a non-consequentialism with a doppelganger commits the same sin. A rabbit is not a renate because it has a spine, and an act is not deontologically permitted because it brings about a particular consequence.
If a deontologist says "lying is wrong", and you mentally add something that sounds like "because my utility function has a term in it for the people around believing accurate things. Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them. And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie. But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet."5... you, my friend, have missed the point. The deontologist wasn't thinking any of those things. The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to"6.
But the deontologist is not thinking anything with the terms "utility function", and probably isn't thinking of extreme cases unless otherwise specified, and might not care whether anybody will believe the words of the hypothetical lie or not, and might hold to the prohibition against lying though the world burn around them for want of a fib. And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory. (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again. And then you can put it back under consequentialism again: "and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.") The voices' instruction "happened" before the prospective act of lying. The explosion at the North Pole is a subsequent potential event. The promise to the reindeer is in the past. The vengeful haunting comes up later.
A confusion crops up when one considers forms of deontology where the agent's epistemic state - real7 or ideal8 - is a factor. It may start to look like the moral agent is in fact acting to achieve some post-action state of affairs, rather than in response to a pre-action something that has moral weight. It may even look like that to the agent. Per footnote 3, I'm ignoring expected utility "consequentialist" theories; however, in actual practice, the closest one can come to implementing an actual utility consequentialism is to deal with expected utility, because we cannot perfectly predict the effects of our actions.
The difference is subtle, and how it gets implemented depends on one's epistemological views. Loosely, however: Suppose a deontologist judges some act X (to be performed by another agent) to be wrong because she predicts undesirable consequence Y. The consequentialist sitting next to her judges X to be wrong, too, because he also predicts Y if the agent performs the act. His assessment stops with "Y will happen if the agent performs X, and Y is axiologically bad." (The evaluation of Y as axiologically bad might be more complicated, but this is all that goes into evaluating X qua X.) Her assessment, on the other hand, is more complicated, and can branch in a few places. Does the agent know that X will lead to Y? If so, the wrongness of X might hinge on the agent's intention to bring about Y, or on an obligation the agent has from some other source to try to avoid Y, which is shirked by performing X in knowledge of its consequences. If not, then another option is that the agent should (for other, also deontic, reasons) know that X will bring about Y: this ignorance is itself culpable, and it makes the agent responsible for the ill effects of acts performed under that specter of ill-informedness.
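Schematically, and with invented placeholder fields rather than anyone's real theory, the two assessments look like this:

```python
from dataclasses import dataclass

# Placeholder model of the scenario above: an agent, an act X, and a
# predicted bad consequence Y. Field names are invented for illustration.
@dataclass
class Agent:
    knows_x_leads_to_y: bool      # does the agent know X brings about Y?
    intends_y: bool               # is bringing about Y the point?
    shirks_duty_to_avoid_y: bool  # is a separate duty to avoid Y shirked?
    culpably_ignorant: bool       # should the agent have known about Y?

def consequentialist_verdict(y_will_happen: bool, y_is_bad: bool) -> bool:
    # Flat assessment: X is wrong iff it brings about Y and Y is bad.
    return y_will_happen and y_is_bad

def deontologist_verdict(agent: Agent) -> bool:
    # Branching assessment, following the paragraph above.
    if agent.knows_x_leads_to_y:
        return agent.intends_y or agent.shirks_duty_to_avoid_y
    # Culpable ignorance makes the agent responsible for ill effects of
    # acts performed under that specter of ill-informedness.
    return agent.culpably_ignorant

# An agent who didn't know, but should have, is still culpable:
assert deontologist_verdict(Agent(False, False, False, True))
```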
1Having taken a course on weird forms of consequentialism, I now compulsively caveat anything I have to say about consequentialisms in general. I apologize. In practice, "consequentialism" is the sort of word that one has to learn by familiarity rather than definition, because any definition will tend to leave out something that most people think is a consequentialism. "Utilitarianism" is a type of consequentialism that talks about utility (variously defined) instead of some other sort of consequence.
2Because it makes it dreadfully hard to write readably about consequentialism if I don't assume I'm only talking about act consequentialisms, I will only talk about act consequentialisms. Transforming my explanations into rule consequentialisms or world consequentialisms or whatever other non-act consequentialisms you like is left as an exercise for the reader. I also know that preferentism is more popular than hedonism around here, but hedonism is easier to quantify for ready reference, so if called for I will make hedonic rather than preferentist references.
3Most notable in the overlap department is expected utility "consequentialism", which says that maximizing expected utility is not only the best you can in fact do, but also what you absolutely ought to do. Depending on how one cashes this out and who one asks, this may overlap so far as to not be a real form of consequentialism at all. I will be ignoring expected utility consequentialisms for this reason.
4I say "suppose", but in fact the supposition may be actually true; Wikipedia is unclear.
5This is not intended to be a real model of anyone's consequentialist caveats. But basically, if you interpret the deontologist's statement "lying is wrong" to have something to do with what happens after one tells a lie, you've got it wrong.
6As far as I know, no one seriously endorses "schizophrenic deontology". I introduce it as a caricature of deontology that I can play with freely without having to worry about accurately representing someone's real views. Please do not take it to be representative of deontic theories in general.
7Real epistemic state means the beliefs that the agent actually has and can in fact act on.
8Ideal epistemic state (for my purposes) means the beliefs that the agent would have and act on if (s)he'd demonstrated appropriate epistemic virtues, whether (s)he actually has or not.
I feel like I've summarized this somewhere before, but I can't find it, so here it is again (it is not finished; I know there are issues left to deal with):
Persons (which includes but may not be limited to paradigmatic adult humans) have rights, which it is wrong to violate. For example, one I'm pretty sure we've got is the right not to be killed. This means that any person who kills another person commits a wrong act, with the following exceptions: 1) a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong; 2) someone who has committed a contextually relevant wrong act, in so doing, forfeits eir contextually relevant rights. I don't yet have a full account of "contextual relevance", but basically what that's there for is to make sure that if somebody is trying to kill me, this might permit me to kill him, but would not grant me license to break into his house and steal his television.
However, even once a right has been waived or forfeited or (via non-personhood) not had in the first place, a secondary principle can kick in to offer some measure of moral protection. I'm calling it "the principle of needless destruction", but I'm probably going to re-name it later because "destruction" isn't quite what I'm trying to capture. Basically, it means you shouldn't go around "destroying" stuff without an adequate reason. Protecting a non-waived, non-forfeited right is always an adequate reason, but apart from that I don't have a full explanation; how good the reason has to be depends on how severe the act it justifies is. ("I was bored" might be an adequate reason to pluck and shred a blade of grass, but not to set a tree on fire, for instance.) This principle has the effect, among others, of ruling out revenge/retribution/punishment for their own sakes, although deterrence and preventing recurrence of wrong acts are still valid reasons to punish or exact revenge/retribution.
In cases where rights conflict, and there's no alternative that doesn't violate at least one, I privilege the null action. (I considered instead denying ought-implies-can, but decided that this committed me to the existence of moral luck, and that wasn't okay.) "The null action" is the one where you don't do anything. This is because I uphold the doing-allowing distinction very firmly. Letting something happen might be bad, but it is never as bad as doing the same something, and is virtually never as bad as performing even a much more minor (but still bad) act.
I hold agents responsible for their culpable ignorance and anything they should have known not to do, as though they knew they shouldn't have done it. Non-culpable ignorance and its results are exculpatory. Culpability of ignorance is determined by the exercise of epistemic virtues like being attentive to evidence, etc. (Epistemologically, I'm an externalist; this is just for ethical purposes.) Ignorance of any kind that prevents something bad from happening is not exculpatory - this is the case of the would-be murderer who doesn't know his gun is unloaded. No out for him. I've been saying "acts", but in point of fact, I hold agents responsible for intentions, not completed acts per se. This lets my morality work even if solipsism is true, or we are brains in vats, or an agent fails to do bad things through sheer incompetence, or what have you.
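A rough sketch of this structure, with invented names; "contextual relevance" is deliberately left unmodeled, per the caveat above:

```python
from dataclasses import dataclass, field

# Rough sketch of the comment's structure. All names are invented, and
# the "contextual relevance" of forfeiture is deliberately not modeled.
@dataclass
class Person:
    waived: set = field(default_factory=set)     # exception 1: waiver
    forfeited: set = field(default_factory=set)  # exception 2: forfeiture

def wrongs(patient: Person, right: str) -> bool:
    """Infringing `right` wrongs the patient unless ey waived or forfeited it."""
    return right not in (patient.waived | patient.forfeited)

def choose(options: dict) -> str:
    """`options` maps each act's name to whether it violates any non-waived,
    non-forfeited right. When every alternative violates at least one,
    the null action (doing nothing) is privileged."""
    permissible = [act for act, violates in options.items() if not violates]
    return permissible[0] if permissible else "null action"

# Uncoerced assisted suicide is not a wronging (the right was waived):
patient = Person(waived={"not_be_killed"})
assert not wrongs(patient, "not_be_killed")

# When rights conflict and there's no clean alternative, do nothing:
assert choose({"save_one_by_killing_another": True}) == "null action"
```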
I find myself largely in agreement with most of this, despite being a consequentialist (and an egoist!).