orthonormal comments on Deontology for Consequentialists - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (247)
My issue with deontology-as-fundamental is that, whenever someone feels compelled to defend a deontological principle, they invariably end up making a consequentialist argument.
E.g. "Of course lying is wrong, because if lying were the general habit, communication would be impossible" or variants thereof.
The trouble, it seems to me, is that consequentialist moralities are easier to ground in human preferences (current and extrapolated) than are deontological ones, which seem to beg for a Framework of Objective Value to justify them. This is borne out by the fact that it is extremely difficult to think of a basic deontological rule which the vast majority of people (or the vast majority of educated people, etc.) would uphold unconditionally in every hypothetical.
If someone is going to argue that their deontological system should be adopted on the basis of its probable consequences, fine, that's perfectly valid. But in that case, as in the story of Churchill, we've already established what they are, we're just haggling over the price.
AFAICT this is true for any ethical principle, consequentialist ones included. I'm skeptical that there are unconditional principles.
Dude. "Counterfactuals." Fourth thing on the bulleted list, straight outta Kant.
I take exception to your anthropocentric morality!
And if we lived on the Planet of the Sociopaths, what then? Ethics leap out a window and go splat?
See here for what this is like.
Any talk about consequences has to involve some counterfactual. Saying "outcome Y was a consequence of act X" is an assertion about the counterfactual worlds in which X isn't chosen, as well as those where it is. So if you construct your counterfactuals using something other than causal decision theory, and you choose an act (now) based on its consequences (in the past), is that another overlap between consequentialism and deontology?
I can't parse your comment well enough to reply intelligently.
What I think pengvado is getting at is that the concept of "consequence" is derived from the concept of "causal relation", which itself appears to require a precise notion of "counterfactual".
I read Newcomb's paradox as a counter-example to the idea that causality must operate forward in time. Essentially, one-boxing is choosing an act in the present based on its consequences in the past. This smells a bit like a Kantian counterfactual to me, but I haven't read Kant.
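To make the one-boxing reading concrete, here is a toy expected-value calculation. It assumes the standard Newcomb payoffs ($1,000,000 in the opaque box iff one-boxing was predicted, $1,000 always in the transparent box) and an illustrative predictor accuracy; it's a sketch of why treating the prediction as correlated with your choice favors one-boxing, not a full decision theory.

```python
def expected_payoff(one_box: bool, p: float) -> float:
    """Expected winnings, treating the predictor's forecast as correlated
    with the actual choice, for a predictor with accuracy p."""
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled the box.
        return p * 1_000_000
    # With probability (1 - p) the predictor wrongly expected one-boxing,
    # so the opaque box is full anyway; the transparent $1,000 is guaranteed.
    return (1 - p) * 1_000_000 + 1_000

# Even a modestly accurate predictor makes one-boxing come out ahead.
p = 0.9
print(round(expected_payoff(True, p)))   # 900000
print(round(expected_payoff(False, p)))  # 101000
```

The causal-decision-theory complaint, of course, is that at the moment of choice the boxes are already filled, so this correlation "shouldn't count" — which is exactly the disagreement over how to construct the counterfactual.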
There are many accounts of causation; some of them work in terms of counterfactuals and some don't. (I don't have many details; I've never taken a class on causation.) There is considerable disagreement about the extent to which causation must operate forward in time, especially in things like discussions of free will.
Don't. It's a miserable pastime.
I'm pretty satisfied with Pearl's formulation of causality, it seems to capture everything of interest about the phenomenon. An account of causality that involves free will sounds downright unsalvageable, but I'd be interested in pointers to any halfway decent criticism of Pearl's approach.
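The core of Pearl's formulation — that intervening is not the same as conditioning — can be shown in a toy model. Here a common cause Z drives both X and Y, and X has no causal effect on Y; the numbers and the model are illustrative, not from Pearl's book:

```python
# Toy causal model:  Z -> X,  Z -> Y  (X does not cause Y).
def joint():
    """Observational distribution: Z is a fair coin; X and Y both copy it."""
    for z in (0, 1):
        yield (z, z, z), 0.5  # (Z, X, Y) each with probability 0.5

def p_y1_given_x_observed(x):
    """P(Y=1 | X=x): condition on having *seen* X=x."""
    rows = [(v, pr) for v, pr in joint() if v[1] == x]
    return sum(pr for v, pr in rows if v[2] == 1) / sum(pr for v, pr in rows)

def p_y1_given_do_x(x):
    """P(Y=1 | do(X=x)): force X by fiat, severing the Z -> X edge.
    Y still equals Z, and the intervention doesn't touch Z."""
    return sum(pr for v, pr in joint() if v[0] == 1)

print(p_y1_given_x_observed(1))  # 1.0: seeing X=1 reveals Z=1
print(p_y1_given_do_x(1))        # 0.5: forcing X=1 says nothing about Z
```

Observation is evidence about the hidden cause; intervention is a surgery on the graph. Free will never has to enter into it.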
Thanks for affirming my suspicions regarding Kant.
I wouldn't characterize Kant this way. He isn't thinking about a possible world in which the maxim is universalized; whether a maxim can or cannot be universalized has to do with the form of the maxim, nothing else. It might be the case that he sneaks in some counterfactual thinking, but it isn't his intention to make his ethics rely on it. It wouldn't be a priori otherwise.
No two people can agree on how to characterize Kant, but it is a legitimate interpretation that I have heard advanced by a PhD-having philosopher that you can think about that formulation of the CI as referring to a possible world where the maxim is followed like a natural law.
This is what Kant seems to do in practice whenever he illustrates normative application of the CI. But his notion of a priori does appear to preclude this. Then again, Kant also managed to develop Newtonian physics a priori, so maybe he just knew something we don't. <sarcasm/>
What has never stopped bewildering me is the question of why anyone should consider such a possible world relevant to their individual decision-making. I know Kant has some... tangled, Kantian argument regarding this, but does anyone who isn't a die-hard Kantian have any sensible reason on hand for considering the counterfactual "What if everyone did the same"?
Everyone doing X is not even a remotely likely consequence of me doing X. Maybe this is to beg the question of consequences mattering in the first place. But I suppose I have no idea what use deontology is if it doesn't boil down to consequentialism at some level... or, particularly, I have no idea what use it is if it makes appeals to impossibly unlikely consequences like "Everyone lying all the time," instead of likely ones.
I thought of one possible reason that would make deontology "justifiable" in consequentialist terms. Those classic "my decision has negligible effect by itself, but if everyone made the same decision, it would be good/bad" situations, like "should I bother voting" or "is it okay if I shoplift". If everyone were a consequentialist, each might individually decide that the effect of their action is negligible, and thus end up not voting or deciding that shoplifting was okay, with disastrous effects for society. In contrast, if more people were deontologists, they'd do the right thing even if their individual decision probably didn't change anything.
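The voting case can be put in toy numbers (all figures illustrative, not empirical). The naive act-level calculation multiplies a shared benefit by a roughly 1/n chance of being pivotal and compares it against the private cost of voting; the universalized calculation asks what each person gets if everyone follows the rule:

```python
def marginal_case_for_voting(n, shared_benefit, cost):
    """Naive act-consequentialist value of one extra vote:
    the chance any single vote is pivotal scales roughly as 1/n."""
    return shared_benefit / n - cost

def universalized_case_for_voting(n, shared_benefit, cost):
    """Per-person value if everyone follows the rule 'vote'."""
    return shared_benefit - cost

n, benefit, cost = 10_000_000, 1_000_000.0, 5.0
print(marginal_case_for_voting(n, benefit, cost))       # negative: stay home
print(universalized_case_for_voting(n, benefit, cost))  # large: vote
```

The two calculations give opposite answers, which is exactly the gap a rule-following disposition papers over.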
Kant's point is not that "everyone doing X" matters, it's that ethical injunctions should be indexically invariant, i.e. "universal". If an ethical injunction is affected by where in the world you are, then it's arguably no ethical injunction at all.
Wei_Dai and EY have done some good work in reformulating decision theory to account for these indexical considerations, and the resulting theories (UDT and TDT) have some intuitively appealing features, such as cooperating in the one-shot PD under some circumstances. Start with this post.
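One way to see the "cooperate in the one-shot PD under some circumstances" result is the twin case: if your opponent's choice is guaranteed to mirror yours (a copy running the same decision procedure), only the diagonal outcomes are reachable. A minimal sketch with the standard payoff ordering T > R > P > S (the specific numbers are illustrative):

```python
# Standard one-shot Prisoner's Dilemma payoffs: T > R > P > S.
R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment

def payoff(me, other):
    """My payoff given my move and the opponent's move ('C' or 'D')."""
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]

# Against an independent opponent, defection dominates move-by-move:
assert payoff("D", "C") > payoff("C", "C")
assert payoff("D", "D") > payoff("C", "D")

# Against a copy whose choice necessarily mirrors yours, only (C,C) and
# (D,D) are possible outcomes, and cooperating wins:
print(payoff("C", "C"), payoff("D", "D"))  # 3 1
```

The indexical-invariance point is that "me" and "my copy" aren't distinguishable positions the injunction can depend on — which is roughly where the Kantian flavor of UDT/TDT comes from.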
I'm (obviously) no Kant scholar, but I wonder if there is any possible way to flesh out a consistent and satisfactory set of such context-invariant ethical injunctions.
For example, he infamously suggests not lying to a murderer who asks where your friend is, even if you reasonably expect him to go murder your friend, because lying is wrong. Okay -- even if we don't follow our consequentialist intuitions and treat that as a reductio ad absurdum for his whole system -- that's your 'not lying' principle satisfied. But what about your 'not betraying your friends' principle? How many principles have we got in the first place, and how can we weigh them against one another?
Actually, Kant only defended the duty not to lie out of philanthropic concerns. But if the person inquired of was actually a friend, then one might reasonably argue that you have a positive duty not to reveal his location to the murderer, since to do otherwise would be inconsistent with the implied contract between you and your friend.
To be fair, you might also have a duty to make sure that your friend is not murdered, and this might create an ethical dilemma. But ethical dilemmas are not unique to deontology.
ETA: It has also been argued that Kant's reasoning in this case was flawed since the murderer engages in a violation of a perfect duty, so the maxim of "not lying to a known murderer" is not really universalizable. But the above reasoning would go through if you replaced the murderer with someone else whom you wished to keep away from your friend out of philanthropic concerns.
This just isn't true. Lying is one of the examples used to explain the universalization maxim. It is forbidden in all contexts. Can't right now, but I'll come back with cites.
Actually I'm going to save you the effort and provide the cite myself:
Specifically, in the Metaphysics of Morals, Kant states that "not suffer[ing our] rights to be trampled underfoot by others with impunity" is a perfect duty of virtue.
Huh! Okay, good to know. ... So not-lying-out-of-philanthropic-concerns isn't a mere context-based variation?
AAAAAAAAAAAAH
*ahem* Excuse me.
I meant: Wow, have I ever failed at my objective here! Does anyone want me to keep trying, or should I give up and just sob quietly in a corner for a while?
Sorry. But then I said:
And added,
?
Yeah, if you have no idea what "use" deontology is unless it's secretly just tarted-up consequentialism, I have failed.
Huh? To be fair, I don't think you were setting out to make the case for deontology here. All I am saying about its "use" is that I don't see any appeal. I think you gave a pretty good description of what deontologists are thinking; the North Pole - reindeer - haunting paragraph was handily illustrative.
Anyway, I think Kant may be to blame for employing arguments that consider "what would happen if others performed similar acts more frequently than they actually do". People say similar things all the time -- "What if everyone did that?" -- as though there were a sort of magical causal linkage between one's individual actions and the actions of the rest of the world.
There is a "magical causal connection" between one's individual actions and the actions of the rest of the world.
Other people will observe you acting and make reasonable inferences on the basis of their observation. Depending on your scientific leanings, it's plausible to suppose that these inferences have been so necessary to human survival that we may have evolutionary optimizations that make moral reasoning more effective than general reasoning.
For example, if they see you "get away with" an act they will infer that if they repeat your action they will also avoid reprisal (especially if you and they are in similar social reference classes). If they see you act proudly and in the open they will infer that you've already done the relevant social calculations to determine that no one will object and apply sanctions. If they see you defend the act with words, they will assume that they can cite you as an authority and you'll support them in a factional debate in order not to look like a hypocrite... and so on ad nauseam.
There are various reasons people might deny that they function as role models in society. Perhaps they are hermits? Or perhaps they are not paying attention to how social processes actually happen? Or it may also be the case that they are momentarily confabulating excuses because they've been caught with blood on their hands?
Not that I'm a big deontologist, but I think deontologists say things that are interesting, worthwhile, and seem unlikely to be noticed from other theoretical perspectives. Several apologists for deontology who I've known from a distance (mostly in speech and debate contexts) were super big brains.
Their pitch, to get people into the relevant deliberative framework, frequently involved an epistemic argument at the beginning. Basically they pointed out that it was silly to make moral judgments with instantaneous behavioral consequences based on things you can't see or measure or know in the present. There is more to it than that (like there are nice ways to update and calculate deontic moral theories based on morality estimates, subsequent acts, and independent "retrospective moral feelings" about how things turned out) but we're just in the comment section, and I'd rather not have my fourth post in this community spend a lot of time articulating the upsides of a moral theory that I don't "fully endorse" :-)
I wasn't trying to make the case for deontology, no - just trying to clear up the worst of the misapprehensions about it. Which is that it's not just consequentialism in Kantian clothing, it's a whole other thing that you can't properly understand without getting rid of some consequentialist baggage.
There does not have to be a causal linkage between one's individual actions and those of the rest of the world. (Note: my ethics don't include a counterfactual component, so I'm representing a generalized picture of others' views here.) It's simply not about what your actions will cause! A counterfactual telling you that your action is un-universalizable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if the world is about to end and your act will have no consequences at all beyond being the act it is. It can be informative even if you'd never have dreamed of performing the act were it a common act type (in fact, especially then!). The counterfactual is a place to stop. It is, if justificatory at all, inherently justificatory.
Erm. I agree with the PhD-having philosopher that you can think about the formulation that way. But my PhD-having philosophers are pretty clear that even if Kant ends up implicitly relying on this it can't be what he is really trying to argue since it obviously precludes a priori knowledge of the CI. And if you can't know it a priori then Kant's entire edifice falls apart.
And below, Breakfast is wondering why one should consider possible worlds relevant to decision making and says "I know Kant has some... tangled, Kantian argument regarding this". But of course Kant has no such argument! Because that isn't his argument. The argument for the CI stems from Kant's conception of freedom (that it is self-governance and that the only self-governance we could have a priori comes from the form of self-governance itself). The argument fails, I think, but it has nothing to do with counterfactuals. So when you say "Counterfactuals, straight out of Kant", it seems a lot of people who haven't read Kant are going to be misled.
I know you're just using Kant illustratively, but maybe qualify it as "some formulations of Kant"?
For us hybridists, it is the function of consequentialism to justify rules, and the function of rules to justify sanctions.
That seems to lead to a logical cycle. What is the function of sanctions? To modify the behavior of other agents. Why do we want to modify the behavior of other agents? Because we find some actions undesirable. Why do we find them undesirable? Because of their consequences, or because they violate established rules...
Not all cycles are bad.