This might be unfair to deontologists, but I keep getting the feeling that deontology is a kind of "beginner's ethics". In other words, deontology is the kind of ethical system you get once you build it entirely around ethical injunctions, which is perfectly reasonable if you don't have the computing power to calculate the probable consequences of your actions with a very high degree of confidence. So you resort to what are basically cached rules that seem to work most of the time, and elevate those to axioms instead of treating them as heuristics.
And before I'm accused of missing the difference between consequentialism and deontology: no, I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the (either developmental-psychological or evolutionary) reason why people end up adopting deontology.
I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the (either developmental-psychological or evolutionary) reason why people end up adopting deontology.
Indeed, I get the impression from the article that a deontologist is someone who makes moral choices based on whether they will feel bad about violating a moral injunction, or good for following it... and then either ignorantly or indignantly denies this is the case, treating the feeling as evidence of a moral judgment's truth, rather than as simply a cached response to prior experience.
Frankly, a big part of the work I do to help people is teaching them to shut off the compelling feelings attached to the explicit and implicit injunctions they picked up in childhood, so I'm definitely inclined to view deontology (at least as described by the article) as a hopelessly naive and tragically confused point of view, well below the sanity waterline... like any other belief in non-physical entities, rooted in mystery worship.
I also seem to recall that previous psychology research showed that that sort of thinking was something people ...
Do you think it is likely that the emotional core of your claim was captured by the statement that "everything I'm reading here seems to closely resemble something that I had to grow out of... making it really hard for me to take it seriously"?
And then, assuming this question finds some measure of ground... how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?
I haven't read into your writings super extensively, but from what I read you have quite a lot of practice doing something like "soul dowsing" to find emotional reactions. Then you trace them back to especially vivid "formative memories" which can then be rationally reprocessed using other techniques - the general goal being to allow clearer thinking about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences. (I'm sure there's a huge amount more, but this is my gloss that's r...
how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?
That's an interesting question. I don't think an ideal-belief-reality conflict is involved, though, as an IBRC motivates someone to try to convince the "wrong" others of their error, and I didn't feel any particular motivation to convince deontologists that they're wrong! I included the disclaimer because I'm honestly frustrated by my inability to grok the concept of deontological morality except in terms of a feeling-driven injunctions model. (Had I been under the influence of an IBRC, I'd have been motivated to express greater certainty, as has happened occasionally in the past.)
So, if there's any emotional reaction taking place, I'd have to say it was frustration with an inability to understand something... and the intensity level was pretty low.
In contrast, I've had discussions here last year where I definitely felt an inclination to convince pe...
[split from parent comment due to length]
Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I'll work through it "live" right now.
I am frustrated at being unable to find common ground with what seems like abstract thoughts taken to the point of magical and circular thinking... and it seems the emotional memory is arguing theism and other subjects with my mother at a relatively young age... she would tie me in knots, not with clever rhetoric, but with sheer insanity -- logical rudeness writ large.
But I couldn't just come out and say that to her... not just because of the power differential, but also because I had no handy list of biases and fallacies to point to, and she had no attention span for any logically-built-up arguments.
Huh. No wonder I feel frustrated trying to understand deontology... I get the same, "I can't even understand this craziness well enough to be able to say it's wrong" feeling.
Okay, so what abilities did I lose to learned helplessness in this context? I learned that there was nothing I could say or do about logical craziness... which would certainly explain why I...
This comment has done more than anything else you've written to convince me that you aren't generally talking nonsense.
I have a hard time imagining the courage it would take for me to make similar emotional disclosures in a place like this if they were my own.
Not as much as you might think. Bear in mind that by the time anybody reads anything I've written about something like that, it's no longer the least bit emotional for me -- it has become an interesting anecdote about something "once upon a time".
If it was still emotional for me after I made the changes, I would have more trouble sharing it, here or even with my subscribers. In fact, the reason I cut off the post where I did was because there was some stuff I wasn't yet "done" with and wanted to work on some more.
Likewise, it's a lot easier to admit to your failures and shortcomings if you are acutely aware that 1) "you" aren't really responsible, and 2) you can change. It's easier to face the truth of what you did wrong, if you know that your reaction will be different in the future. It takes out the "feeling of being a bad person" part of the equation.
That's if you do it consciously, which I wasn't suggesting. My suggestion was that this would be a mainly unconscious process, similar to the process of picking up any other deeply-rooted preference during childhood / young age.
Sometimes I believe that
Of course, everyone uses "good" to label all three, but the difference is what is fundamental. Cf. Richard Chappell.
My issue with deontology-as-fundamental is that, whenever someone feels compelled to defend a deontological principle, they invariably end up making a consequentialist argument.
E.g. "Of course lying is wrong, because if lying were the general habit, communication would be impossible" or variants thereof.
The trouble, it seems to me, is that consequentialist moralities are easier to ground in human preferences (current and extrapolated) than are deontological ones, which seem to beg for a Framework of Objective Value to justify them. This is borne out by the fact that it is extremely difficult to think of a basic deontological rule which the vast majority of people (or the vast majority of educated people, etc.) would uphold unconditionally in every hypothetical.
If someone is going to argue that their deontological system should be adopted on the basis of its probable consequences, fine, that's perfectly valid. But in that case, as in the story of Churchill, we've already established what they are, we're just haggling over the price.
"Counterfactuals." Fourth thing on the bulleted list, straight outta Kant.
Any talk about consequences has to involve some counterfactual. Saying "outcome Y was a consequence of act X" is an assertion about the counterfactual worlds in which X isn't chosen, as well as those where it is. So if you construct your counterfactuals using something other than causal decision theory, and you choose an act (now) based on its consequences (in the past), is that another overlap between consequentialism and deontology?
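To make the counterfactual reading concrete, here is a minimal sketch; the toy world model and the event names in it are my own illustrative assumptions, not anything from the comment:

```python
# Minimal sketch of the counterfactual reading of "consequence": Y counts
# as a consequence of act X only if Y occurs in the world where X is
# chosen and not in the counterfactual world where it isn't.

def simulate(act):
    """Toy world model mapping an act (or None, i.e. refraining) to events."""
    events = {"warned_friend": {"friend_avoids_scam"}, None: set()}
    return events.get(act, set())

def is_consequence(outcome, act):
    return outcome in simulate(act) and outcome not in simulate(None)

print(is_consequence("friend_avoids_scam", "warned_friend"))  # True
```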
Everyone doing X is not even a remotely likely consequence of me doing X.
AAAAAAAAAAAAH
*ahem* Excuse me.
I meant: Wow, have I ever failed at my objective here! Does anyone want me to keep trying, or should I give up and just sob quietly in a corner for a while?
There is a "magical causal connection" between one's individual actions and the actions of the rest of the world.
Other people will observe you acting and make reasonable inferences on the basis of their observation. Depending on your scientific leanings, it's plausible to suppose that these inferences have been so necessary to human survival that we may have evolutionary optimizations that make moral reasoning more effective than general reasoning.
For example, if they see you "get away with" an act they will infer that if they repeat your action they will also avoid reprisal (especially if you and they are in similar social reference classes). If they see you act proudly and in the open they will infer that you've already done the relevant social calculations to determine that no one will object and apply sanctions. If they see you defend the act with words, they will assume that they can cite you as an authority and you'll support them in a factional debate in order not to look like a hypocrite... and so on ad nauseam.
There are various reasons people might deny that they function as role models in society. Perhaps they are hermits? Or perhaps they are no...
Very insightful comment (and the same for your follow-up). I don't have much to add except to shamelessly link a comment I found on Slashdot that it reminded me of. (I had also posted it here.) For those who don't want to click the link, here goes:
...I also disagree that our society is based on mutual trust. Volumes and volumes of laws backed up by lawyers, police, and jails show otherwise.
That's called selection/observation bias. You're looking at only one side of the coin.
I've lived in countries where there's a lot less trust than here. The notion of returning an opened product to a store and getting a full refund is based on trust (yes, there's a profit incentive, and some people do screw the retailers [and the retailers their customers -- SB], but the system works overall). In some countries I've been to, this would be unfeasible: Almost everyone will try to exploit such a retailer.
When a storm knocks out the electricity and the traffic lights stop working, I've always seen everyone obeying the rules. I doubt it's because they're worried about cops. It's about trust that the other drivers will do likewise. Simply unworkable in other places I've lived in.
I've had neighbors who
Err... I suspect our priors on this subject are very different.
From my perspective you seem to be quibbling over an unintended technical meaning of the word "everyone" while not tracking consequences clearly. I don't understand how you think littering is a coherent example of how people's actions do not affect the rest of the world via social signaling. In my mind, littering is the third most common example of a "signal crime" after window breaking and graffiti.
The only way your comments are intelligible to me is that you are enmeshed in a social context where people regularly free ride on community goods or even outright ruin them... and they may even be proud to do so as a sign of their "rationality"?!? These circumstances might provide background evidence that supports what you seem to be saying - hence the inference.
If my inference about your circumstances is correct, you might try to influence your RL community, as an experiment, and if that fails, an alternative would be to leave and find a better one. However, if you are in such a context, and no one around you is particularly influenced by your opinions or actions, and you can't get out of the ...
As someone who is on the fence between noncognitivism and deontic/virtue ethics, I seem to be witnessing a kind of incommensurability of ethical theories going on in this thread. It is almost like Alicorn is trying to show us the rabbit, but all we are seeing is the duck and talking about the "rabbit" as if it is some kind of bad metaphor for a duck.
On Less Wrong, consequentialism isn't just another ethical theory that you can swap in and out of our web of belief. It seems to be something much more central and interwoven. This might be due to the fact that some disciplines like economics implicitly assume some kind of vague utilitarianism and so we let certain ethical theories become more central to our web of belief than is warranted.
I predict that Alicorn would have similar problems trying to get people on Less Wrong to understand Aristotelian physics, since it is really closer to common sense biology than Einsteinian physics (which I am guessing is very central to our web of belief).
Deontology relies on things other than what happens after the act to judge that act. This leaves facts about times prior to the act, and the time during it, to determine whether the act is right or wrong.
I'm not convinced that this 'backward-looking vs. forward-looking' contrast really cuts to the heart of the distinction. Note that consequentialists may accept an 'holistic' axiology according to which whether some future event is good or bad depends on what has previously happened. (For a simple example, retributivists may hold that it's positively good when those who are guilty of heinous crimes suffer. But then in order to tell whether we should relieve Bob's suffering, we need to look backwards in time to see whether he's a mass-murderer.) It strikes me as misleading to characterize this as involving a form of "overlap" with deontological theories. It's purely consequentialist in form; it merely has a more complex axiology than (say) hedonism.
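A minimal sketch of such a holistic axiology, using the retributivist example just given (the event names and value numbers are my own illustrative assumptions):

```python
# Sketch of a "holistic" axiology: the goodness of the same future event
# depends on what has previously happened. The form is purely
# consequentialist; only the axiology is more complex than hedonism.

def value(event, history):
    if event == "bob_suffers":
        # Retributivist term: the suffering of the guilty counts as good.
        return 5 if "bob_committed_mass_murder" in history else -5
    return 0

print(value("bob_suffers", {"bob_committed_mass_murder"}))  # 5
print(value("bob_suffers", set()))                          # -5
```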
The distinction may be better characterised in terms of the relative priority of 'the right' and 'the good'. Consequentialists take goodness (i.e. desirability, or what you ought to want) as fundamental, and thus have a tel...
The deontologist wasn't thinking any of those things. The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to". But the deontologist is not thinking anything with the terms "utility function" [...]
Right, but what about Dutch book-type arguments? Even if I agree that lying is wrong and not because of its likely consequences, I still have to make decisions under uncertainty. The reason for trying to bludgeon everything into being a utility function is not that "the rightness of something depends on what happens subsequently." It's that, well, we have these theorems that say that all coherent decisionmaking processes have to satisfy these-and-such constraints on pain of being value-pumped. Anything you might say about rights or virtues is fine qua moral justification, but qua decision process, it either has to be eaten by decision theory or it loses.
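To make the value-pump point concrete, here is a minimal sketch (the goods, fee, and cyclic preferences are illustrative assumptions of mine, not anything from the theorems themselves):

```python
# Minimal money-pump sketch: an agent whose preferences are cyclic
# (prefers B to A, C to B, and A to C) will pay a small fee for each
# "upgrade" and end up holding what it started with, strictly poorer.

# (current, offered) pairs where the agent strictly prefers "offered".
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts_trade(current, offered):
    """The agent trades whenever it strictly prefers the offered good."""
    return (current, offered) in prefers

money = 100
holding = "A"
FEE = 1  # price charged for each "upgrade"

# A savvy counterparty offers the cycle B, C, A over and over; every
# single offer looks like a strict improvement, so the agent accepts.
for offered in ["B", "C", "A"] * 3:
    if accepts_trade(holding, offered):
        holding, money = offered, money - FEE

print(holding, money)  # "A" 91: started with A, now 9 poorer
```

Each trade looks like a strict improvement locally, yet the sequence is a pure loss; that is the sense in which a decision process that cannot be represented by a coherent utility function "loses".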
The problem with unbreakable rules is that you're only allowed to have one. Suppose I have a moral duty to tell the truth no matter what and a moral duty to protect the innocent no matter what. Then what do I do if I find myself in a situation where the only way I can protect the innocent is by lying?
More generally, real life finds us in situations where we are forced to make tradeoffs, and furthermore, real life is continuous in a way that is not well-captured by qualitative rules. What if I think I have a 98% chance of protecting the innocent by lying?---or a 51% chance, or a 40% chance? What if I think a statement is 60% probable but I assert it confidently; is that a "lie"? &c., &c.
"Lying is wrong because I swore an oath to be honest" or "Lying is wrong because people have a right to the truth" may be good summaries of more-or-less what you're trying to do and why, but they're far too brittle to be your actual decision process. Real life has implementation details, and the implementation details are not made out of English sentences.
The problem with unbreakable rules is that you're only allowed to have one.
I second the question. Is there a standard reply in deontology? The standard reply of a consequentialist, of course, is the utility function.
Is there a standard reply in deontology? The standard reply of a consequentialist, of course, is the utility function.
I don't know whether there is a standard reply in deontology, but the appropriate reply is a function that plays the same role as the consequentialist's utility function.
Obviously, the 'deontological decision function' sacrifices the unbreakable criteria. This is appropriate when making a fair comparison between consequentialist and deontological decisions: the utility function likewise sacrifices absolute reliance on any one particular desideratum in order to accommodate all the others.
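As a sketch of what such a 'deontological decision function' might look like once the unbreakable criteria are given up (the rules, weights, and scenario below are my own illustrative assumptions):

```python
# Sketch of a weighted "deontological decision function": once rules are
# not unbreakable, each duty gets a weight, and an act is scored by the
# duties it violates.

RULE_WEIGHTS = {
    "do_not_lie": 10,
    "protect_the_innocent": 100,
}

def deontic_score(violated_rules):
    """Higher is better; zero means no duty is broken."""
    return -sum(RULE_WEIGHTS[rule] for rule in violated_rules)

def choose(options):
    """Pick the act whose violated duties weigh the least."""
    return max(options, key=lambda act: deontic_score(options[act]))

# The classic conflict: the only way to protect the innocent is to lie.
options = {
    "lie_to_the_murderer": ["do_not_lie"],
    "tell_the_truth": ["protect_the_innocent"],
}
print(choose(options))  # -> "lie_to_the_murderer"
```

Structurally this is just a utility function over rule compliance, which is the point of the comparison: once trade-offs between duties are allowed, the two formalisms converge.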
For the sake of completeness, I'll enumerate what seem to be the only possible approaches that actually allow having multiple unbreakable rules.
1) Only allow unbreakable rules that never contradict each other. This ...
The problem with unbreakable rules is that you're only allowed to have one.
"Allowed"?
It is quite common for moral systems found in the field to have multiple unbreakable rules and for subscribers to be faced with the bad moral luck of having to break one of them. The moral system probably has a preference on the choice, but it still condemns the act and the person.
+10 karma for you!
I have a bit of a negative reaction to deontology, but upon consideration the argument would be equally applicable to consequentialism: the prescriptions and proscriptions of a deontological morality are necessarily arbitrary, and likewise the desideratum and disdesideratum (what is the proper antonym? Edit: komponisto suggests "evitandum", which seems excellent) of a consequentialist morality are necessarily arbitrary.
...which makes me wonder if the all-atheists-are-nihilists meme is founded in deontological intuitions.
desideratum...(what is the proper antonym?)
"Evitandum"?
Sounds even better in the plural: "The evitanda of the theory..."
I can perfectly understand the idea that lying is fundamentally bad, not just because of its consequences. What I have trouble with is how that doesn't imply that something else can be bad because it leads to other people lying.
The only way I can understand it is that deontology is fundamentally egoist. It's not hedonist; you worry about things besides your well-being. But you only worry about things in terms of yourself. You don't care if the world descends into sin so long as you are the moral victor. You're not willing to murder one Austrian to save him from murdering six million Jews.
Am I missing something?
That seems like a fairly useless part of consequentialist theory. In particular, when retrospecting about one's previous actions, a consequentialist should give more weight to the argument "yes, he turned out to become Hitler, but I didn't know that, and the prior probability of the person who took my parking space being Hitler is so low I would not have been justified in stabbing him for that reason" than "oh no, I've failed to stab Hitler". It's just a more productive thing to do, given that the next person who takes the consequentialist's parking space is probably not Stalin.
Real-life morality is tricky. But when playing a video game, I am a points consequentialist: I believe that the right thing to do in the video game is that which maximizes the amount of points I get at the end.
Suppose one of my options is randomly chosen to lead to losing the game. I analyze the options and choose the one that has the lowest probability of being chosen. Turns out, I was unlucky and lost the game. Does that make my choice any less the right one? I don't believe that it does.
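A worked version of that scenario, with my own illustrative numbers for the three options and their loss probabilities: the right play is the option least likely to have been designated the loser, and it stays the right play even on the runs where it loses.

```python
import random

# One of three options is secretly designated the loser, drawn with
# known, unequal probabilities. The right play minimizes the chance of
# having picked the losing option, and it still loses sometimes, without
# that making it the wrong choice on any given run.

LOSS_PROBS = {"A": 0.5, "B": 0.3, "C": 0.2}

def best_choice():
    """Minimize the probability that your option is the losing one."""
    return min(LOSS_PROBS, key=LOSS_PROBS.get)

def play(rng):
    losing = rng.choices(list(LOSS_PROBS), weights=LOSS_PROBS.values())[0]
    return best_choice() != losing  # True means we won this round

rng = random.Random(0)
wins = sum(play(rng) for _ in range(10_000))
print(best_choice(), wins / 10_000)  # "C" wins about 80% of the time
```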
Deontology treats morality as terminal. Consequentialism treats morality as instrumental.
Is this a fair understanding of deontology? Or is this looking at deontology through a consequentialist lens?
I feel like I've summarized it somewhere, but can't find it, so here it is again (it is not finished, I know there are issues left to deal with):
Persons (which includes but may not be limited to paradigmatic adult humans) have rights, which it is wrong to violate. For example, one I'm pretty sure we've got is the right not to be killed. This means that any person who kills another person commits a wrong act, with the following exceptions: 1) a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong; 2) someone who has committed a contextually relevant wrong act, in so doing, forfeits eir contextually relevant rights. I don't yet have a full account of "contextual relevance", but basically what that's there for is to make sure that if somebody is trying to kill me, this might permit me to kill him, but would not grant me license to break into his house and steal his television.
However, even once a right has been waived or forfeited or (via non-personhood) not had in the first place, a secondary principle can kick in to offer some measure of moral protection. I'm calling it "the principle of needless ...
I said:
maximizing preference satisfaction rarely involves violating anyone's rights and mostly jibes with human intuitions.
Those two examples are contrived to demonstrate the differences between utilitarianism and other theories. They hardly represent typical moral judgments.
You can then extensionally define "renate" as "has a spinal column"
But what "renate" means intensionally has to do with kidneys, not spines.
I don't think this has been covered here yet, so for those not familiar with these two terms: inferring something extensionally means you infer it from the set to which an object belongs. Inferring something intensionally means you infer it from the actual properties of the object.
Wikipedia formulates these as
...An extensional definition of a concept or term form
It seems to me that this addresses two very different purposes for moral judgments in one breath.
When trying to draw a moral judgment on the act of another, what they knew at the time and their intentions will play a big role. But this is because I'm generally building a predictive model of whether or not they're going to do good in the future. By contrast, when I'm trying to assess my own future actions, I don't see what need concern me except whether act A or act B brings about more good.
For those curious about what kind of case can be made for deontology vs. consequentialism:
Deontological arguments (apart from helping with "running on corrupted hardware") are useful for the compression of moral values. It's much easier to check your one-line deontology, than to run a complicated utility function through your best estimate of the future world.
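To illustrate the compression point (everything below, the toy world model and the utility numbers, is my own assumption): the cached rule is a constant-time check, while the consequentialist evaluation has to sample a model of the future.

```python
import random

def deontic_check(act):
    """One cached rule: constant-time, no model of the future needed."""
    return act != "murder"

def simulate_future(act, rng):
    """Toy world model: returns the utility of one sampled future."""
    base = {"murder": -100, "lie": -5, "help": 10}[act]
    return base + rng.gauss(0, 3)  # noisy, model-dependent outcome

def consequentialist_check(act, samples=1000, seed=0):
    """Estimate expected utility by sampling futures: far more work,
    and only as good as the world model it runs on."""
    rng = random.Random(seed)
    return sum(simulate_future(act, rng) for _ in range(samples)) / samples > 0

print(deontic_check("murder"))           # False, immediately
print(consequentialist_check("murder"))  # False, after 1000 simulations
```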
A simple "do not murder" works better, for most people, than a complex utilitarian balancing of consequences and outcomes. And most deontological aguments are not rigid; they shade towards consequentialism when the consequences get too huge:
extensional definitions are terribly unsatisfactory
True enough, but it's worth noting that what we have here (between a deontological theory and its 'consequentialized' doppelganger) is necessary co-extension. Less chordates and renates, more triangularity and trilaterality. And it's philosophically controversial whether there can be distinct but necessarily co-extensive properties. (I think there can be; but I just thought this was worth flagging.)
Typo: "prise apart" not "prize apart".
EDIT: another typo: "tl;dr" at the start of the post. Please consider getting rid of this habit. Your writing is, as a rule, improved by moving your main point to the top, and this reader appreciates your doing that; the cutesy Internetism is a needless distraction.
Signaling might not be necessary, as your summary normally serves as a "hook" to draw readers into the body of the article.
That said, you could italicize or bold (my preference) the summary, or set it off from the body with a horizontal rule.
I think your definition of consequentialism (and deontology) is too broad because it makes some contractarian theories consequentialist. In "Equality," Nagel argues that the rightness of an act is determined by the acceptability of its consequences for those to whom they are most unacceptable. This is similar to Rawls's view that inequalities are morally permissible if they result in a net-benefit to the most disadvantaged members of society. These views are definitely deontological (and self-labeled as such), and since consequentialism and deont...
If I understand you, you're claiming that the "justification" for a deontological principle need not be phrased in terms of consequences, and consequentialists fail to acknowledge this. But can't it always be re-phrased this way?
I prefer to inhabit worlds where I don't lie [deontological]. Telling a lie causes the world to contain a lying version of myself [definition of "cause"]. Therefore, lying is wrong [consequentialist interpretation of preference violation].
This transformation throws away the original justification, but from a...
Deontologists are common. Someday, you may need to convince a deontologist on some matter where their deontology affects their thinking. If you are ignorant about an important factor in how their mind works, you will be less able to bring their mind to a state that you desire.
I find this answer strange. There are lots of Christians, but we don't do posts on Christian theology in case we might find it useful to understand the mind of a Christian in order to convince them to do something.
Come on, why did Alicorn write a post on deontology without giving any explanation why we should learn about it? What am I missing here? If she (or anyone else) thinks that we should put some weight into deontology in our moral beliefs, why not just come out and say that?
Well, apart from the fact that it looked like people wanted me to write it, I'm personally irritated by the background assumption of consequentialism on this site, especially since it usually seems to come from incomprehension more than anything else. People phrasing things more neutrally, or at least knowing exactly what it is they're discarding, would be nice for me.
I very much appreciated reading this article.
As a general comment, I think that this forum falls a bit too much into groupthink. Certain things are assumed to be correct that have not been well argued. A presumption that utilitarianism of some sort or another is the only even vaguely rational ethical stance is definitely one of them.
Not that groupthink is unusual on the internet, or worse here than elsewhere! Au contraire. But it's always great to see less of it, and to see it challenged where it shows up.
Thanks again for this, Mr. Corn.
Please, call me Ali. Ms. Corn is my mother.
...No, seriously, folks, it's a word, abbreviating it doesn't make sense. "Alicorn".
...And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory. (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again. And then you can put it back under consequentialism again: "and
How about a post on understanding consequentialism for us deontologists? :-)
Wikipedia defines deontological ethics as an "approach to ethics that judges the morality of an action based on the action's adherence to a rule or rules."
This definition implies that the scientific method is a deontological ethic. It's called the "scientific method", after all. Not the "scientific result."
The scientific method is rule based. Therefore, if there is not a significant overlap between the consequentialist and deontologist approaches, then...
Consequentialists see morality through consequence-colored lenses. I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.
Consequentialism[1] is built around a group of variations on the following basic assumption:
It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article. "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act[2] consequentialism". I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints. All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple". But the bottom line is, to get a consequentialist theory, something that happens after the act you judge is the basis of your judgment.
To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.
Deontology relies on things other than what happens after the act to judge that act. This leaves facts about times prior to the act, and the time during it, to determine whether the act is right or wrong. This may include, but is not limited to:
Individual deontological theories will have different profiles, just like different consequentialist theories. And some of the theories you can generate using the criteria above have overlap with some consequentialist theories[3]. The ultimate "overlap", of course, is the "consequentialist doppelganger", which applies the following transformation to some non-consequentialist theory X:
And this cobbled-together theory will be extensionally equivalent to X: that is, it will tell you "yes" to the same acts and "no" to the same acts as X.
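Since the quoted transformation itself isn't reproduced above, here is a minimal sketch of one standard way such a doppelganger can be built (my reconstruction; the stand-in theory and the acts are illustrative): treat compliance with X itself as the sole valued "consequence" and maximize it.

```python
# Consequentialist doppelganger of a non-consequentialist theory X:
# the only valued "consequence" is that the agent ended up complying
# with X, and the doppelganger "maximizes" that.

ALL_ACTS = ["lie", "tell_truth", "stay_silent"]

def x_permits(act):
    """Stand-in for an arbitrary non-consequentialist theory X."""
    return act != "lie"

def doppelganger_utility(act):
    # The only valued outcome is X-compliance itself.
    return 1 if x_permits(act) else 0

def doppelganger_permits(act):
    return doppelganger_utility(act) == max(map(doppelganger_utility, ALL_ACTS))

# Extensionally equivalent: identical verdicts on every act...
assert all(x_permits(a) == doppelganger_permits(a) for a in ALL_ACTS)
# ...but intensionally different: one consults X directly, the other
# "maximizes" a gerrymandered measure of outcomes.
```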
But extensional definitions are terribly unsatisfactory. Suppose[4] that as a matter of biological fact, every vertebrate is also a renate and vice versa (that all and only creatures with spines have kidneys). You can then extensionally define "renate" as "has a spinal column", because only creatures with spinal columns are in fact renates, and no creatures with spinal columns are in fact non-renates. The two terms will tell you "yes" to the same creatures and "no" to the same creatures.
But what "renate" means intensionally has to do with kidneys, not spines. To try to capture renate-hood with vertebrate-hood is to miss the point of renate-hood in favor of being able to interpret everything in terms of a pet spine-related theory. To try to capture a non-consequentialism with a doppelganger commits the same sin. A rabbit is not a renate because it has a spine, and an act is not deontologically permitted because it brings about a particular consequence.
If a deontologist says "lying is wrong", and you mentally add something that sounds like "because my utility function has a term in it for the people around believing accurate things. Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them. And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie. But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet."[5]... you, my friend, have missed the point. The deontologist wasn't thinking any of those things. The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to"[6].
But the deontologist is not thinking anything with the terms "utility function", and probably isn't thinking of extreme cases unless otherwise specified, and might not care whether anybody will believe the words of the hypothetical lie or not, and might hold to the prohibition against lying though the world burn around them for want of a fib. And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory. (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again. And then you can put it back under consequentialism again: "and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.") The voices' instruction "happened" before the prospective act of lying. The explosion at the North Pole is a subsequent potential event. The promise to the reindeer is in the past. The vengeful haunting comes up later.
A confusion crops up when one considers forms of deontology where the agent's epistemic state - real[7] or ideal[8] - is a factor. It may start to look like the moral agent is in fact acting to achieve some post-action state of affairs, rather than in response to a pre-action something that has moral weight. It may even look like that to the agent. Per footnote 3, I'm ignoring expected utility "consequentialist" theories; however, in actual practice, the closest one can come to implementing an actual utility consequentialism is to deal with expected utility, because we cannot perfectly predict the effects of our actions.
The difference is subtle, and how it gets implemented depends on one's epistemological views. Loosely, however: Suppose a deontologist judges some act X (to be performed by another agent) to be wrong because she predicts undesirable consequence Y. The consequentialist sitting next to her judges X to be wrong, too, because he also predicts Y if the agent performs the act. His assessment stops with "Y will happen if the agent performs X, and Y is axiologically bad." (The evaluation of Y as axiologically bad might be more complicated, but this is all that goes into evaluating X qua X.) Her assessment, on the other hand, is more complicated, and can branch in a few places. Does the agent know that X will lead to Y? If so, the wrongness of X might hinge on the agent's intention to bring about Y, or an obligation from another source on the agent's part to try to avoid Y which is shirked by performing X in knowledge of its consequences. If not, then another option is that the agent should (for other, also deontic reasons) know that X will bring about Y: the ignorance of this fact itself renders the agent culpable, which makes the agent responsible for ill effects of acts performed under that specter of ill-informedness.
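A minimal sketch of that branching contrast; the predicate names and inputs are my own illustrative assumptions, not the post's formulation:

```python
# The consequentialist's assessment stops at the predicted outcome and
# its badness; the deontologist's branches on the agent's epistemic state.

def consequentialist_wrong(predicts_Y, Y_is_bad):
    """Assessment stops with the predicted outcome."""
    return predicts_Y and Y_is_bad

def deontologist_wrong(predicts_Y, agent_knows_X_leads_to_Y,
                       intends_Y, shirks_duty_to_avoid_Y,
                       should_have_known):
    """Assessment branches on what the agent knows or should know."""
    if not predicts_Y:
        return False
    if agent_knows_X_leads_to_Y:
        # Wrongness hinges on intention, or on a shirked obligation to
        # avoid Y despite knowing the consequences.
        return intends_Y or shirks_duty_to_avoid_Y
    # Culpable ignorance: the agent should (deontically) have known.
    return should_have_known

# Same predicted outcome, potentially different verdicts:
print(consequentialist_wrong(True, True))                    # True
print(deontologist_wrong(True, False, False, False, False))  # False
```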
[1] Having taken a course on weird forms of consequentialism, I now compulsively caveat anything I have to say about consequentialisms in general. I apologize. In practice, "consequentialism" is the sort of word that one has to learn by familiarity rather than definition, because any definition will tend to leave out something that most people think is a consequentialism. "Utilitarianism" is a type of consequentialism that talks about utility (variously defined) instead of some other sort of consequence.
[2] Because it makes it dreadfully hard to write readably about consequentialism if I don't assume I'm only talking about act consequentialisms, I will only talk about act consequentialisms. Transforming my explanations into rule consequentialisms or world consequentialisms or whatever other non-act consequentialisms you like is left as an exercise to the reader. I also know that preferentism is more popular than hedonism around here, but hedonism is easier to quantify for ready reference, so if called for I will make hedonic rather than preferentist references.
[3] Most notable in the overlap department is expected utility "consequentialism", which says that maximizing expected utility is not only the best you can in fact do, but also what you absolutely ought to do. Depending on how one cashes this out and who one asks, this may overlap so far as to not be a real form of consequentialism at all. I will be ignoring expected utility consequentialisms for this reason.
4I say "suppose", but in fact the supposition may be actually true; Wikipedia is unclear.
[5] This is not intended to be a real model of anyone's consequentialist caveats. But basically, if you interpret the deontologist's statement "lying is wrong" to have something to do with what happens after one tells a lie, you've got it wrong.
[6] As far as I know, no one seriously endorses "schizophrenic deontology". I introduce it as a caricature of deontology that I can play with freely without having to worry about accurately representing someone's real views. Please do not take it to be representative of deontic theories in general.
[7] Real epistemic state means the beliefs that the agent actually has and can in fact act on.
[8] Ideal epistemic state (for my purposes) means the beliefs that the agent would have and act on if (s)he'd demonstrated appropriate epistemic virtues, whether (s)he actually has or not.