Deontology and Virtue Ethics are reducible to counting non-obvious consequences of your actions. If you choose to lie, people are more likely to disbelieve you -- so there's a reason to follow a "no lying" rule that a naive consequentialist misses.
I don't believe this is a reduction. A deontologist will not lie even when he has built up an immense base of trust and would "win" a whole lot from the lie. He just won't do it, because to him it's completely unethical.
Furthermore, the consequentialist might reason the other way around. A deontological non-liar might decide that he can use Exact Words or You Didn't Ask to engage in some necessary deception. A long-term consequentialist will note that doing so earns you a reputation as a Manipulative Bastard -- which segues right into "Virtue Ethics as Timeless Decision Theory", or ethics under repeated games.
(By the way, what is actually the distinction between timeless decision theory and a decision model under which all scenarios are treated as repeated even before they happen the first time?)
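The repeated-games point can be made concrete with a toy iterated Prisoner's Dilemma. This is only an illustrative sketch, not decision theory proper: the payoff matrix is the textbook one, but the tit-for-tat opponent and the round count are my own assumptions. A naive one-shot reasoner defects, while an agent who treats every encounter as already part of a repeated game cooperates:

```python
# Standard Prisoner's Dilemma payoffs for (my_move, their_move);
# higher is better for me. "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def one_shot_best_response(their_move):
    # Against any fixed opponent move, defection strictly dominates.
    return max("CD", key=lambda m: PAYOFF[(m, their_move)])

def repeated_value(my_move, rounds=10):
    # Hypothetical opponent plays tit-for-tat: cooperate first,
    # then copy my previous move on every later round.
    total, their_move = 0, "C"
    for _ in range(rounds):
        total += PAYOFF[(my_move, their_move)]
        their_move = my_move  # tit-for-tat: they copy me next round
    return total

print(one_shot_best_response("C"))               # prints "D"
print(repeated_value("C"), repeated_value("D"))  # prints "30 14"
```

Under repetition, the consistent cooperator ends up ahead of the defector, which is the sense in which "treat everything as repeated" recovers rule-like honesty from consequence-counting.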
EDIT: What I actually do, myself, is sometimes lie using the Moral Equivalent of the Truth: a lie designed not to poison other people's decision-making. Lying outright about having an errand to run rather than admitting I slept in (insert other minor vices here...) is more or less OK, but using Exact Words about making a contract and becoming a magical girl... is evil.
(Yes, that was a Madoka Magica reference.)
EDIT EDIT: Which definitely does seem consequentialist in the limit, but it includes consequentialist reasoning about how my actions affect other people's decision-making, which then brings in Timelessness and virtue-reasoning.
A deontologist will not lie even when he has built up an immense base of trust and would "win" a whole lot from the lie.
If you build a deontologist out of whole cloth with non-contradicting rules, sure. But an actual human using deontological thinking is reducible to consequentialism plus large penalties for rule-breaking. I mean, at some point the deontologist has to choose between two kinds of rule-breaking (say, between "always tell the truth" and "do not kill people, nor through inaction allow people to die"), and the way to ...
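That reduction can be sketched as a toy utility calculation. Everything here is hypothetical for illustration: the action names, the outcome values, and the penalty size. With a large enough penalty the agent behaves deontologically in ordinary cases; but when every available option breaks some rule, it is forced to trade rules off against each other after all:

```python
# Toy sketch of "consequentialism plus large penalties for rule-breaking".
RULE_PENALTY = 1000  # large enough to dominate ordinary outcome values

def utility(outcome_value, rules_broken):
    return outcome_value - RULE_PENALTY * len(rules_broken)

def choose(actions):
    # Each action is (name, outcome_value, set of rules it breaks).
    return max(actions, key=lambda a: utility(a[1], a[2]))[0]

# Ordinary case: a profitable lie still loses to honesty.
print(choose([("lie for gain", 50, {"no lying"}),
              ("tell the truth", 0, set())]))
# prints "tell the truth"

# Conflict case: every option breaks some rule, so the penalties cancel
# and the agent falls back on comparing consequences.
print(choose([("lie to save a life", 900, {"no lying"}),
              ("stay silent, someone dies", 0, {"prevent deaths"})]))
# prints "lie to save a life"
```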
On the recent LessWrong/CFAR Census Survey, I hit a question asking which moral philosophy -- consequentialism, deontology, or virtue ethics -- I actually follow.
To my own surprise, I couldn't come up with a clear answer. I certainly don't apply any one of these frameworks consistently across every decision I make in my life, and yet I consider myself at least mediocre on the scale of moral living, if not actually Neutral Good. So what is it I'm actually doing, and how can I behave more ethically and rationally?
Well, analyzing my own cognitive algorithms, I do think I can place these various codes of ethics in relation to each other. Looked at behavioristically/algorithmically, they vary along three dimensions: how much predictive power I have, how well I know my own values, and what it is I'm actually trying to affect.
Consequentialism is the ethical algorithm I consider useful in situations of greatest predictive power and greatest knowledge of my own values. It is, so to speak, the ethical-algorithmic ideal. In such situations, the only drawback is that naive consequentialism fails to consider the consequences for the person acting (i.e., me). Once I make that more virtue-ethical adjustment, consequentialism offers a complete ideal for ethical action over a complete spectrum of moral values, affecting both the universe and myself (though of course I'm part of the universe).
However, in almost all real situations, I don't have perfect predictive knowledge -- not of the "external" universe, and not of my own values. In these situations I can, however, use my incomplete and uncertain knowledge to find acceptable heuristics that I can expect to behave roughly monotonically: follow those rules, and my actions will generally have positive effects. This kind of thinking quickly yields recognizable moral commandments like "You will not murder" or "You will not charge interest above this-or-that amount on loans". Yes, of course we can come up with corner-case exceptions to those rules, and we can also elaborate on the rules logically to arrive at more detailed rules covering more circumstances. But by the time we've fully elaborated the basic commandments into a complete, obsessive-compulsively detailed legal code (oh hello, Talmud), we've already covered most of the major general cases of moral action.

We can now state a criterion for how and when to transition from one level of ethical code to the one below it: our deontological heuristics should be detailed enough to handle any case where we lack the information (about consequences and about our own values) to resort to consequentialism.
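That transition criterion can be sketched as a toy dispatcher. The 0-to-1 scales for predictive power and value-knowledge, and the threshold itself, are hypothetical stand-ins for judgments no one can actually quantify:

```python
# Toy sketch of the level-transition criterion: compute consequences
# directly only when both predictive power and knowledge of one's own
# values are high enough; otherwise fall back on deontological rules.
def ethical_mode(predictive_power, value_knowledge, threshold=0.8):
    if predictive_power >= threshold and value_knowledge >= threshold:
        return "consequentialism"  # enough information to weigh outcomes
    return "deontological rules"   # fall back on vetted heuristics

print(ethical_mode(0.95, 0.9))  # prints "consequentialism"
print(ethical_mode(0.3, 0.9))   # prints "deontological rules"
```

The point of the sketch is only that the fallback is triggered by missing information, not by any change in underlying values.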
At first thought, virtue ethics seems like an even higher-level heuristic than deontological ethics. The problem is that, unlike deontological and consequentialist ethics, it doesn't output courses of action; it outputs short- and long-term states of mind or character that can be considered virtuous. So it isn't a higher-level heuristic at all, but a seemingly entirely different form of ethics. I do think we can integrate it, however: virtue ethics just consists of a set of moral values over one's own character. "What kind of person do I think is a good person?" might, by default, be a tautological question under strict consequentialism or deontology. But once we take account of the imperfect nature of real people (we are part of the universe, after all), we can see that virtue ethics serves as a convenient guide to heuristics for becoming the sort of person who can be relied upon to take right actions when moral issues present themselves. Rather than simply saying, "Do the right thing no matter what" (an instruction that simply won't drive real human beings to actually do the right thing), virtue ethics encourages us to cultivate virtues: moral cognitive biases towards at least a deontological notion of right action.
We might also be able to separate virtue ethics into heuristics over our own character and actual values over our own character. These two should then converge given perfect information: if I knew myself utterly, my heuristics for my own character would exactly match my values over my own character.
This is my first attempt at actually blogging on rationality subjects, so I hope I'm not rehashing something already covered over and over in places like the Sequences, to which I certainly can't attest full knowledge.