Consequentialists see morality through consequence-colored lenses. I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.
Consequentialism1 is built around a group of variations on the following basic assumption:
- The rightness of something depends on what happens subsequently.
It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article. "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act2 consequentialism". I could even mention less frequently contested features, like the fact that this type of consequentialism has no temporal priority feature or side constraints. All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple". But the bottom line is: to get a consequentialist theory, something that happens after the act you judge must be the basis of your judgment.
To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.
Deontology judges an act by things that do not happen after it. That leaves facts about the time prior to the act, and the time of the act itself, to determine whether it is right or wrong. These may include, but are not limited to:
- The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
- The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
- Historical facts (e.g. having made a promise, sworn a vow)
- Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
- Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
- The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)
Individual deontological theories have different profiles, just as individual consequentialist theories do. And some of the theories you can generate using the criteria above overlap with some consequentialist theories3. The ultimate "overlap", of course, is the "consequentialist doppelganger", which applies the following transformation to some non-consequentialist theory X:
1. What would the world look like if I followed theory X?
2. You ought to act in such a way as to bring about the result of step 1.
And this cobbled-together theory will be extensionally equivalent to X: that is, it will tell you "yes" to the same acts and "no" to the same acts as X.
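The transformation and its extensional equivalence can be sketched as a toy program. Everything here is invented for illustration (the "no lying" theory, the dictionary representation of acts); the point is only that the doppelganger's equivalence to X holds by construction, which is exactly why it is "cobbled-together":

```python
# Toy sketch of the "consequentialist doppelganger" transformation.
# A theory is modeled as a function from an act to a verdict
# (True = permitted, False = forbidden). The deontic theory below
# ("lying is forbidden") is a made-up stand-in for theory X.

def deontic_theory(act):
    # Judges by the act's reference class, not by its results.
    return act["kind"] != "lie"

def doppelganger(theory):
    # Step 1: ask what the world would look like if the agent
    # followed theory X, i.e. performed only the acts X permits.
    # Step 2: say you ought to act so as to bring about that result.
    def consequentialized(act):
        world_if_followed = theory(act)  # "the result of step 1"
        return world_if_followed
    return consequentialized

double = doppelganger(deontic_theory)

acts = [{"kind": "lie"}, {"kind": "promise-keeping"}, {"kind": "truth-telling"}]
# Extensional equivalence: the same "yes"es and "no"s on every act --
# trivially, because the doppelganger just launders X's verdicts
# through talk of outcomes.
assert all(deontic_theory(a) == double(a) for a in acts)
```

The assertion can never fail, no matter what theory is plugged in. That vacuity is the point of the next paragraph: extensional equivalence tells you nothing about what the theory means.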
But extensional definitions are terribly unsatisfactory. Suppose4 that as a matter of biological fact, every vertebrate is also a renate and vice versa (that all and only creatures with spines have kidneys). You can then extensionally define "renate" as "has a spinal column", because only creatures with spinal columns are in fact renates, and no creatures with spinal columns are in fact non-renates. The two terms will tell you "yes" to the same creatures and "no" to the same creatures.
But what "renate" means intensionally has to do with kidneys, not spines. To try to capture renate-hood with vertebrate-hood is to miss the point of renate-hood in favor of being able to interpret everything in terms of a pet spine-related theory. To try to capture a non-consequentialism with a doppelganger commits the same sin. A rabbit is not a renate because it has a spine, and an act is not deontologically permitted because it brings about a particular consequence.
If a deontologist says "lying is wrong", and you mentally add something that sounds like "because my utility function has a term in it for the people around believing accurate things. Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them. And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie. But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet."5... you, my friend, have missed the point. The deontologist wasn't thinking any of those things. The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to"6.
But the deontologist is not thinking anything with the terms "utility function", and probably isn't thinking of extreme cases unless otherwise specified, and might not care whether anybody will believe the words of the hypothetical lie or not, and might hold to the prohibition against lying though the world burn around them for want of a fib. And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory. (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again. And then you can put it back under consequentialism again: "and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.") The voices' instruction "happened" before the prospective act of lying. The explosion at the North Pole is a subsequent potential event. The promise to the reindeer is in the past. The vengeful haunting comes up later.
A confusion crops up when one considers forms of deontology where the agent's epistemic state - real7 or ideal8 - is a factor. It may start to look like the moral agent is in fact acting to achieve some post-action state of affairs, rather than in response to a pre-action something that has moral weight. It may even look like that to the agent. Per footnote 3, I'm ignoring expected utility "consequentialist" theories; however, in actual practice, the closest one can come to implementing an actual utility consequentialism is to deal with expected utility, because we cannot perfectly predict the effects of our actions.
The difference is subtle, and how it gets implemented depends on one's epistemological views. Loosely, however: Suppose a deontologist judges some act X (to be performed by another agent) to be wrong because she predicts undesirable consequence Y. The consequentialist sitting next to her judges X to be wrong, too, because he also predicts Y if the agent performs the act. His assessment stops with "Y will happen if the agent performs X, and Y is axiologically bad." (The evaluation of Y as axiologically bad might be more complicated, but this is all that goes into evaluating X qua X.) Her assessment, on the other hand, is more complicated, and can branch in a few places. Does the agent know that X will lead to Y? If so, the wrongness of X might hinge on the agent's intention to bring about Y, or on an obligation the agent has from another source to try to avoid Y, which is shirked by performing X in knowledge of its consequences. If not, then another option is that the agent should (for other, also deontic, reasons) know that X will bring about Y: the ignorance of this fact itself renders the agent culpable, which makes the agent responsible for ill effects of acts performed under that specter of ill-informedness.
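The two assessments just described can be caricatured as toy functions. All the names and boolean inputs below are invented for illustration; no real deontological theory reduces to four flags, but the branching structure mirrors the paragraph above:

```python
# Hedged sketch: the consequentialist's assessment terminates in the
# predicted outcome; the deontologist's runs through the agent's
# epistemic and volitional relation to that outcome.

def consequentialist_verdict(predicts_bad_outcome):
    # The assessment stops with "Y will happen, and Y is bad."
    return "wrong" if predicts_bad_outcome else "not wrong"

def deontologist_verdict(agent_knows, intends_bad,
                         has_duty_to_avoid, should_have_known):
    if agent_knows:
        # Wrongness may hinge on intending Y, or on shirking a
        # separate obligation to avoid Y.
        return "wrong" if (intends_bad or has_duty_to_avoid) else "not wrong"
    # Culpable ignorance: the agent should have known X leads to Y.
    return "wrong" if should_have_known else "not wrong"
```

Note that the predicted consequence Y never appears as an input to the deontologist's function except through the agent's knowledge, intention, and obligations, which is the subtle difference the paragraph is after.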
1Having taken a course on weird forms of consequentialism, I now compulsively caveat anything I have to say about consequentialisms in general. I apologize. In practice, "consequentialism" is the sort of word that one has to learn by familiarity rather than definition, because any definition will tend to leave out something that most people think is a consequentialism. "Utilitarianism" is a type of consequentialism that talks about utility (variously defined) instead of some other sort of consequence.
2Because it makes it dreadfully hard to write readably about consequentialism if I don't assume I'm only talking about act consequentialisms, I will only talk about act consequentialisms. Transforming my explanations into rule consequentialisms or world consequentialisms or whatever other non-act consequentialisms you like is left as an exercise to the reader. I also know that preferentism is more popular than hedonism around here, but hedonism is easier to quantify for ready reference, so if called for I will make hedonic rather than preferentist references.
3Most notable in the overlap department is expected utility "consequentialism", which says that maximizing expected utility is not only the best you can in fact do, but also what you absolutely ought to do. Depending on how one cashes this out and whom one asks, this may overlap so far as to not be a real form of consequentialism at all. I will be ignoring expected utility consequentialisms for this reason.
4I say "suppose", but in fact the supposition may be actually true; Wikipedia is unclear.
5This is not intended to be a real model of anyone's consequentialist caveats. But basically, if you interpret the deontologist's statement "lying is wrong" to have something to do with what happens after one tells a lie, you've got it wrong.
6As far as I know, no one seriously endorses "schizophrenic deontology". I introduce it as a caricature of deontology that I can play with freely without having to worry about accurately representing someone's real views. Please do not take it to be representative of deontic theories in general.
7Real epistemic state means the beliefs that the agent actually has and can in fact act on.
8Ideal epistemic state (for my purposes) means the beliefs that the agent would have and act on if (s)he'd demonstrated appropriate epistemic virtues, whether (s)he actually has or not.
[split from parent comment due to length]
Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I'll work through it "live" right now.
I am frustrated at being unable to find common ground with what seems like abstract thoughts taken to the point of magical and circular thinking... and it seems the emotional memory is arguing theism and other subjects with my mother at a relatively young age... she would tie me in knots, not with clever rhetoric, but with sheer insanity -- logical rudeness writ large.
But I couldn't just come out and say that to her... not just because of the power differential, but also because I had no handy list of biases and fallacies to point to, and she had no attention span for any logically-built-up arguments.
Huh. No wonder I feel frustrated trying to understand deontology... I get the same, "I can't even understand this craziness well enough to be able to say it's wrong" feeling.
Okay, so what abilities did I lose to learned helplessness in this context? I learned that there was nothing I could say or do about logical craziness... which would certainly explain why I started and deleted my deontology comment multiple times before finally posting it... and didn't really try to achieve any common ground during it... I just took a victim posture and said deontology was nonsense. I also waited until I could "safely" say it in the context of someone else's comment, rather than directly addressing the post's author -- either to seek the truth or argue a clear position.
So, what do I want to replace that feeling of helplessness with? Would I rather be curious, so that I find out more about someone's apparently circular reasoning before dismissing it or fighting with it? How about compassionate, so I try to help the person find the flaw in their reasoning, if they're actually interested in the first place? What about amusement, so that I'm merely entertained and move on?
Just questioning these possibilities and bringing them into mind is already modifying the emotional response, since I've now had an (imagined) sensory experience of what it would be like to have those different emotions and behaviors in the circumstance. I can also see that I don't need to understand or persuade in such a circumstance, which feels like a relief. I can see that I didn't need to argue with my mother and frustrate myself; I could have just let her be who she was, and gone about my business.
So, this is a good time for a test. How do I feel about arguing theism with my mother? No big deal. How about deontology? Not a big deal either, but then it wasn't earlier, either, which is why I couldn't use it as a test directly. So the real test is the thought of "having to explain practical things to people hopelessly stuck in impractical thinking", which was reliably causing me to wrinkle my brow, hunch slightly, and sigh in frustration.
Now, instead of that, I get a mixed feeling of compassion/patience, felt lightly in the chest area... but there's still a hint of the old feeling, like a component is still there.
Ah... I see, I've dealt with only one need axis: connection/bonding, but not status/significance. A portion of the frustration was not being able to connect, and that portion I've resolved, but the other part was frustration with a status differential: the person making the argument is succeeding in lowering my status if I can't address their (nonsensical) argument.
Ugh. I hate status entanglements. I can't fix the brain's need for status, only remove specific entries from the "status threats" table. So let's see if we can take this one out.
I'm noticing that other memories of kids teasing or insulting me in school are coming up in connection with this -- the same fundamental circumstance of being in a conversation with no good answers, silence included. No matter what I do, I will lose face.
Ouch. This is a tough one. The rookie mistake here would be to think I have to be able to come up with better comebacks or something... that is, that I have to solve the problem in the outside world, in order to change my feelings. But if I instead change my feelings first on the inside, then my behavior will change to match.
So, what do I want to feel? Amused? Confident? As with other forms of learned helplessness, I am best off if I can feel the outcome emotions in advance of the outside world conforming to my preference. (That is, if I already feel the self-esteem I want from the interaction, before the interaction takes place, it is more likely that I will act in a way that results in a favorable interaction.)
So how would I feel if those kids were praising, instead of teasing or insulting? I would feel honored by the attention...
Boom! The memory just changed, popping into a new interpretation: the kids teasing and insulting me were giving me positive attention. This new interpretation drives a different feeling about it... along with a change to my feelings about certain discussions that have taken place on LW. ;-) Neither seems like a threat any more.
Similarly, thinking about being criticized in other contexts doesn't seem like a threat... I strangely feel genuinely honored that somebody took the time to tell me how they feel, even if I don't agree with it. Wow. Weird. ;-) (But then, as I'm constantly telling people, if your change doesn't surprise you in some way, you probably didn't really change anything.)
The change also sent me reeling for a moment, as suddenly the sense of loneliness and "outsider"-ness I had as a child begins to feel downright stupid and unnecessary in retrospect.
Wow. Deep stuff. Did not expect anything of this depth from your suggestion, JenniferRM. I think I will take the rest of my processing offline, as it's been increasingly difficult to type about this while doing it... trying to explain the extra context/purpose stuff has been kind of distracting anyway, while I was in the middle of doing things.
Whew. Anyway, I hope that was helpfully illustrative, nonetheless.
Thanks for the response.
That was way more than I was hoping to get back and went in really interesting directions - the corrections about the way the "reprocessing" works and the limits of reprocessing were helpful. The detail about the way vivid memories can no longer be accessed through the same "index" and become more like stories was totally unexpected and fascinating.
Also, that was very impressive in terms of just... raw emotional openness, I guess. I don't know about other readers, but it stirred up my emotions just reading about...