Consequentialists see morality through consequence-colored lenses. I attempt to prise apart morality and consequences, to help consequentialists understand what deontologists are talking about.
Consequentialism[1] is built around a group of variations on the following basic assumption:
- The rightness of something depends on what happens subsequently.
It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article. "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act[2] consequentialism". I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints. All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple". But the bottom line is: to get a consequentialist theory, something that happens after the act you judge is the basis of your judgment.
To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.
Deontology judges an act by things that do not happen after it. This leaves facts about times prior to the act, and the time of the act itself, to determine whether the act is right or wrong. These may include, but are not limited to (a toy sketch contrasting the two kinds of input follows this list):
- The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
- The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
- Historical facts (e.g. having made a promise, sworn a vow)
- Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
- Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
- The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)
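For readers who think more easily in code, here is a minimal sketch of the contrast. Every name and predicate below is a hypothetical stand-in, not anyone's actual theory: the point is only that the consequentialist verdict is a function of what comes after the act, while the deontological verdict consumes only facts fixed at or before the moment of action.

```python
from dataclasses import dataclass

@dataclass
class Act:
    kind: str                      # reference class: "lying", "theft", ...
    intention: str                 # e.g. "malicious" or "well-meaning"
    broke_promise: bool            # a historical fact about the agent
    agent_believes_harmful: bool   # the agent's epistemic state

def consequentialist_verdict(outcome_utility: float) -> bool:
    # Only something that happens after the act enters the judgment.
    return outcome_utility >= 0

def deontological_verdict(act: Act) -> bool:
    # Only facts fixed at or before the act enter the judgment;
    # no outcome variable appears anywhere in this function.
    if act.kind == "lying":
        return False
    if act.broke_promise:
        return False
    if act.intention == "malicious" and act.agent_believes_harmful:
        return False
    return True
```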
Individual deontological theories will have different profiles, just like different consequentialist theories. And some of the theories you can generate using the criteria above have overlap with some consequentialist theories[3]. The ultimate "overlap", of course, is the "consequentialist doppelganger", which applies the following transformation to some non-consequentialist theory X:
1. What would the world look like if I followed theory X?
2. You ought to act in such a way as to bring about the result of step 1.
And this cobbled-together theory will be extensionally equivalent to X: that is, it will tell you "yes" to the same acts and "no" to the same acts as X.
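The transformation is mechanical enough to write down. Here is a toy sketch, assuming (crudely, and only for the sketch) that each act deterministically produces a distinguishable world and that we have a `simulate` oracle for this; neither assumption is a claim about any real theory:

```python
from typing import Callable, Iterable

# Toy model: acts and worlds are strings, and `simulate` is an assumed
# oracle that deterministically maps each act to the world it produces.
def doppelganger(theory_x: Callable[[str], bool],
                 simulate: Callable[[str], str],
                 options: Iterable[str]) -> Callable[[str], bool]:
    # Step 1: what would the world look like if I followed theory X?
    x_worlds = {simulate(a) for a in options if theory_x(a)}
    # Step 2: you ought to act so as to bring about such a world.
    return lambda act: simulate(act) in x_worlds

# Demo with a hypothetical X that simply forbids lying:
theory_x = lambda act: act != "lie"
simulate = lambda act: "world-after-" + act   # distinct world per act
judge = doppelganger(theory_x, simulate, ["lie", "tell-truth", "stay-silent"])
assert all(judge(a) == theory_x(a) for a in ["lie", "tell-truth", "stay-silent"])
```

The verdicts match act for act, but the doppelganger's stated reason is always a subsequent world-state, which is exactly the problem the next paragraphs describe.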
But extensional definitions are terribly unsatisfactory. Suppose[4] that as a matter of biological fact, every vertebrate is also a renate and vice versa (that all and only creatures with spines have kidneys). You can then extensionally define "renate" as "has a spinal column", because only creatures with spinal columns are in fact renates, and no creatures with spinal columns are in fact non-renates. The two terms will tell you "yes" to the same creatures and "no" to the same creatures.
But what "renate" means intensionally has to do with kidneys, not spines. To try to capture renate-hood with vertebrate-hood is to miss the point of renate-hood in favor of being able to interpret everything in terms of a pet spine-related theory. To try to capture a non-consequentialism with a doppelganger commits the same sin. A rabbit is not a renate because it has a spine, and an act is not deontologically permitted because it brings about a particular consequence.
If a deontologist says "lying is wrong", and you mentally add something that sounds like "because my utility function has a term in it for the people around believing accurate things. Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them. And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie. But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet."[5]... you, my friend, have missed the point. The deontologist wasn't thinking any of those things. The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to"[6].
But the deontologist is not thinking anything with the terms "utility function", and probably isn't thinking of extreme cases unless otherwise specified, and might not care whether anybody will believe the words of the hypothetical lie or not, and might hold to the prohibition against lying though the world burn around them for want of a fib. And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory. (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again. And then you can put it back under consequentialism again: "and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.") The voices' instruction "happened" before the prospective act of lying. The explosion at the North Pole is a subsequent potential event. The promise to the reindeer is in the past. The vengeful haunting comes up later.
A confusion crops up when one considers forms of deontology where the agent's epistemic state - real[7] or ideal[8] - is a factor. It may start to look like the moral agent is in fact acting to achieve some post-action state of affairs, rather than in response to a pre-action something that has moral weight. It may even look like that to the agent. Per footnote 3, I'm ignoring expected utility "consequentialist" theories; however, in actual practice, the closest one can come to implementing an actual utility consequentialism is to deal with expected utility, because we cannot perfectly predict the effects of our actions.
The difference is subtle, and how it gets implemented depends on one's epistemological views. Loosely, however: Suppose a deontologist judges some act X (to be performed by another agent) to be wrong because she predicts undesirable consequence Y. The consequentialist sitting next to her judges X to be wrong, too, because he also predicts Y if the agent performs the act. His assessment stops with "Y will happen if the agent performs X, and Y is axiologically bad." (The evaluation of Y as axiologically bad might be more complicated, but this is all that goes into evaluating X qua X.) Her assessment, on the other hand, is more complicated, and can branch in a few places. Does the agent know that X will lead to Y? If so, the wrongness of X might hinge on the agent's intention to bring about Y, or on an obligation (from some other source) to try to avoid Y, which the agent shirks by performing X in knowledge of its consequences. If not, then another option is that the agent should (for other, also deontic, reasons) know that X will bring about Y: the ignorance of this fact itself renders the agent culpable, which makes the agent responsible for ill effects of acts performed under that specter of ill-informedness.
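Schematically, and in the same hypothetical idiom as the earlier sketches (these predicates are stand-ins, not anyone's actual theory), the two assessments consume different inputs and branch differently:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    knows_X_leads_to_Y: bool    # actual or ideal epistemic state
    intends_Y: bool
    has_duty_to_avoid_Y: bool   # an obligation from some other source
    should_have_known: bool     # culpable ignorance

def consequentialist_says_wrong(Y_is_axiologically_bad: bool) -> bool:
    # The assessment stops here: Y will happen if the agent performs
    # X, and Y is bad.
    return Y_is_axiologically_bad

def deontologist_says_wrong(agent: AgentState) -> bool:
    # Note the inputs: Y's badness never appears directly; only facts
    # about the agent at or before the time of action do.
    if agent.knows_X_leads_to_Y:
        return agent.intends_Y or agent.has_duty_to_avoid_Y
    # Otherwise: the agent should have known, and that failure itself
    # grounds responsibility for X's ill effects.
    return agent.should_have_known
```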
[1] Having taken a course on weird forms of consequentialism, I now compulsively caveat anything I have to say about consequentialisms in general. I apologize. In practice, "consequentialism" is the sort of word that one has to learn by familiarity rather than definition, because any definition will tend to leave out something that most people think is a consequentialism. "Utilitarianism" is a type of consequentialism that talks about utility (variously defined) instead of some other sort of consequence.
[2] Because it makes it dreadfully hard to write readably about consequentialism if I don't assume I'm only talking about act consequentialisms, I will only talk about act consequentialisms. Transforming my explanations into rule consequentialisms or world consequentialisms or whatever other non-act consequentialisms you like is left as an exercise to the reader. I also know that preferentism is more popular than hedonism around here, but hedonism is easier to quantify for ready reference, so if called for I will make hedonic rather than preferentist references.
[3] Most notable in the overlap department is expected utility "consequentialism", which says that maximizing expected utility is not only the best you can in fact do, but also what you absolutely ought to do. Depending on how one cashes this out and whom one asks, this may overlap so far as to not be a real form of consequentialism at all. I will be ignoring expected utility consequentialisms for this reason.
[4] I say "suppose", but the supposition may actually be true; Wikipedia is unclear.
[5] This is not intended to be a real model of anyone's consequentialist caveats. But basically, if you interpret the deontologist's statement "lying is wrong" to have something to do with what happens after one tells a lie, you've got it wrong.
[6] As far as I know, no one seriously endorses "schizophrenic deontology". I introduce it as a caricature of deontology that I can play with freely without having to worry about accurately representing someone's real views. Please do not take it to be representative of deontic theories in general.
[7] "Real epistemic state" means the beliefs that the agent actually has and can in fact act on.
[8] "Ideal epistemic state" (for my purposes) means the beliefs that the agent would have and act on if (s)he had demonstrated appropriate epistemic virtues, whether or not (s)he actually has.
Do you think it is likely that the emotional core of your claim was captured by the statement that "everything I'm reading here seems to closely resemble something that I had to grow out of... making it really hard for me to take it seriously"?
And then assuming this question finds some measure of ground.... how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?
I haven't read into your writings super extensively, but from what I read you have quite a lot of practice doing something like "soul dowsing" to find emotional reactions. Then you trace them back to especially vivid "formative memories" which can then be rationally reprocessed using other techniques - the general goal being to allow clearer thinking about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences. (I'm sure there's a huge amount more, but this is my gloss of what's relevant to your post.)
I've never put your specific suggestions along these lines into practice (for various reasons having mostly to do with opportunity costs), but the potential long-term upside seems high, and your post just seemed like a gorgeous opportunity to explore some of the longer-term consequences of your suggested practices.
That's an interesting question. I don't think an ideal-belief-reality conflict is involved, though, as an IBRC motivates someone to try to convince the "wrong" others of their error, and I didn't feel any particular motivation...