Take some form of consequentialism, precompute a set of actions which cover 90% of the common situations, call them rules, and you get a deontology (like the ten commandments). This works fine until you run into the 10% not covered by the shortcuts, or until the world changes significantly enough that what used to be 90% becomes more like 50 or even 20.
Possible consequentialist response: our instincts are inconsistent. That is, our instinctive preferences are intransitive, violate independence of irrelevant alternatives, and fail pretty much any "nice" property you might ask for. So trying to ground one's ethics entirely in moral instinct is doomed to failure.
There's a good analogy here to behavioral economics vs. utility maximization theory. For much the same reason that people who accept gambles based on their intuitions become money pumps (see: the entire field of behavioral econom...
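The money-pump point can be made concrete with a small sketch (my own illustration, not from the thread; the items and fee are hypothetical). An agent whose preferences cycle A > B > C > A will pay a small fee for each "upgrade", so a trader can walk it around the cycle indefinitely:

```python
# A tiny money-pump sketch: an agent with intransitive preferences
# (A > B, B > C, C > A) pays a small fee for each trade it prefers,
# so a trader can cycle it forever.

# upgrade[x] is the item the agent prefers over x, i.e. what it will
# pay the fee to swap x for.
upgrade = {"B": "A", "C": "B", "A": "C"}

def pump(item, money, fee, rounds):
    """Trade with the agent `rounds` times, charging `fee` per swap."""
    for _ in range(rounds):
        item = upgrade[item]   # the agent happily accepts each trade...
        money -= fee           # ...and pays for the privilege
    return item, money

item, money = pump("A", 10.0, 1.0, 6)
# after six swaps the agent holds the same item it started with,
# but is 6 units poorer
```

The agent considers each individual trade an improvement, yet the sequence as a whole strictly loses it money, which is exactly the sense in which intuition-driven gambling behavior is exploitable.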
I have a rant on this subject that I've been meaning to write.
Deontology, Consequentialism, and Virtue ethics are not opposed; they just apply in different contexts, and people who argue about them are working from different assumptions. Basically:
Consequence:Agents :: Deontology:People :: Virtue:Humans
To the extent that you are an agent, you are concerned with the consequences of your actions, because you exist to have an effect on the actual world. A good agent does not make a good person, because a good agent is an unsympathetic sociopath, and not even sentient.
To the extent that...
I dispute the claim that the default human view is deontological. People tend to prefer applying simple, universal rules to small-scale individual interactions. However, they are willing to make exceptions when the consequences are grave (few agree with Kant that it's wrong to lie to try to save a life). Further, they are generally in favor of deciding large-scale issues of public policy on the basis of something more like calculation of consequences. That's exactly what a sensible consequentialist will do. Due to biases and limited informa...
Could it be possible that some people's intuitions are more deontological or more consequentialist than others'? While trying to answer this, I think I noticed an intuition that being good should make good things happen, and shouldn't make bad things happen. Looking back on the way I thought as a teenager, I think I must have been under that assumption then (when I hadn't heard this sort of ethics discussed explicitly). I'm not sure about further back than that, though, so I don't know that I didn't just hear a lot of consequentialist arguments and get used ...
While it's possible to express consequentialism in a deontological-sounding form, I don't think this would yield a central example of what people mean by deontological ethics — because part of what is meant by that is a contrast with consequentialism.
I take central deontology to entail something of the form, "There exist some moral duties that are independent of the consequences of the actions that they require or forbid." Or, equivalently, "Some things can be morally required even if they produce no benefit, and/or some things can be morally for...
Deontology is not in general incompatible. You could have a deontology that says: "God says: do exactly what Eliezer Yudkowsky thinks is correct." But most people's deontologies do not work that way.
Our instincts being reminiscent of deontology is very much not the same thing as deontology being true.
As far as I understand Eliezer's metaethics, I would say that it is compatible with deontology. It even presupposes it a little bit, since the psychological unity of mankind can be seen as a very general set of deontologies.
I would thus agree that deontology is what human moral instincts are based on.
Under my further elaboration on said metaethics, that is the view of morality as common computations + local patches, deontology and consequentialism are not really opposing theories. In the evolution of a species, morality would be formed as common computations tha...
I agree; on my reading, the metaethics in the Metaethics sequence are compatible with deontology as well as consequentialism.
You can read Eliezer defending some kind of utilitarianism here. Note that, as is stressed in that post, on Eliezer's view, morality doesn't proceed from intuitions only. Deliberation and reflection are also important.
I suspect the real reason why a lot of people around here like consequentialism is that (despite their claims to the contrary) they alieve that ideas should have a Platonic mathematical backing, and the VNM theorem provides just such a backing for consequentialism.
1- Which is by definition not deontological.
No! When we are explicitly talking about emulating one ethical system in another a successful conversion is not a tautological failure just because it succeeds.
2- A fairly common deontological rule is "Don't murder an innocent, no matter how great the benefit."
Take the following scenario:
This is not a counter-example. It doesn't even seem to be an especially difficult scenario. I'm confused.
-A has the choice to kill 1 innocent to stop B killing 2 innocents, when B's own motive is to prevent the death of 4 innocents. B has no idea about A, for simplicity's sake.
Ok. So when A is replaced with ConsequentialistA, ConsequentialistA will have a utility function which happens to systematically rank world-histories in which ConsequentialistA executes the decision "intentionally kill innocent" at time T as lower than all world-histories in which ConsequentialistA does not execute that decision (but which are identical up until time T).
Your conversion would have "Killing innocents intentionally" as an evil, and thus A would be obliged to kill the innocent.
No, that would be a silly conversion. If A is a deontological agent that adheres to the rule "never kill innocents intentionally", then ConsequentialistA will always rate world histories descending from this decision point in which it kills innocents to be lower than those in which it doesn't. It doesn't kill B.
I get the impression that you are assuming ConsequentialistA to be trying to rank world-histories as if the decision of B matters. It doesn't. In fact, the only aspects of the world histories that ConsequentialistA cares about at all are which decision ConsequentialistA makes at one time and with what information it has available. Decisions are something that occur within physics and so when evaluating world histories according to some utility function a VNM-consequentialist takes into account that detail. In this case it takes into account no other detail and even among such details those later in time are rated infinitesimal in significance compared to earlier decisions.
You have no doubt noticed that the utility function alluded to above seems contrived to the point of utter ridiculousness. This is true. This is also inevitable. From the perspective of a typical consequentialist ethic we should expect typical deontological value system to be utterly insane to the point of being outright evil. A pure and naive consequentialist when encountering his first deontologist may well say "What the F@#%? Are you telling me that of all the things that ever exist or occur in the whole universe across all of space and time the only consequence that matters to you is what your decision is in this instant? Are you for real? Is your creator trolling me?". We're just considering that viewpoint in the form of the utility function it would take to make it happen.
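For concreteness, the contrived utility function described here might be sketched like this (a minimal Python illustration under my own assumptions about how world-histories are represented; the event encoding and names are hypothetical):

```python
# Sketch of "deontology emulated as consequentialism": a utility function
# over world-histories that cares *only* about whether agent A ever
# executes "kill innocent", ignoring every other consequence.
# A world-history is a list of (time, agent, action) events.

def utility(history):
    # Any history in which A intentionally kills an innocent is ranked
    # strictly below any history in which it does not.
    a_kills = any(agent == "A" and action == "kill innocent"
                  for _, agent, action in history)
    return 0 if a_kills else 1

# The scenario from the thread: A can kill 1 innocent to stop B
# killing 2 innocents.
refrain = [(1, "A", "refrain"),
           (2, "B", "kill innocent"),
           (2, "B", "kill innocent")]
intervene = [(1, "A", "kill innocent")]

best = max([refrain, intervene], key=utility)
# the emulated agent refrains, matching the deontological rule,
# even though more innocents die in that history
```

A fuller version would implement the lexicographic ranking described above, with decisions later in time infinitesimally significant compared to earlier ones; the binary version is enough to show why the emulated agent refuses to kill regardless of B's body count.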
Alright- conceded.
My apologies if this doesn't deserve a Discussion post, but if this hasn't been addressed anywhere then it's clearly an important issue.
There have been many defences of consequentialism against deontology, including quite a few on this site. What I haven't seen, however, is any demonstration of how deontology is incompatible with the ideas in Eliezer's Metaethics sequence; as far as I can tell, a deontologist could agree with just about everything in the Sequences.
Said deontologist would argue that, to the extent a universal human morality can exist through generalised moral instincts, said instincts tend to be deontological (as supported by scientific studies: a study of the trolley dilemma vs. the 'fat man' variant showed that people would divert the trolley but not push the fat man). This would be their argument against the consequentialist, who they could accuse of wanting a consequentialist system and ignoring the moral instincts at the basis of their own speculations.
I'm not completely sure about this, but if I have indeed misunderstood, it seems an important enough misunderstanding to deserve clearing up.