An ideally moral agent would be a consequentialist (though I won't say "utilitarian," for fear of endorsing the conclusions of the Mere Addition Paradox). However, actual humans have very limited powers of imagination, very limited knowledge of ourselves, and very little power of prediction. We can't be perfect consequentialists, because we're terrible at imagining and predicting the consequences of our actions -- or even at predicting how we will actually feel about those consequences when they arrive.
We thus employ any number of other things as admissible heuristics. Virtue ethics is used to encourage the rote-learning architecture of our lower brains to learn behaviors that usually have good consequences, making those behaviors easier to generate on demand (as Qiaochu_Yuan said). Deontological rules approximate our beliefs about which actions usually and predictably have good consequences.
When our heuristics break down, we often have enough context, detailed facts, and knowledge of which of our many actual concerns are relevant to reason through the real consequentialist issues directly.
Disclaimer: I am not a philosopher, so this post will likely seem amateurish to the subject matter experts.
LW is big on consequentialism, utilitarianism, and other quantifiable ethics one could potentially program into a computer to make it provably friendly. However, I posit that most of us intuitively use virtue ethics, not deontology or consequentialism. In other words, when judging someone's actions we intuitively weigh the person's motivations more heavily than the rules they followed or the consequences of those actions. We may reevaluate our judgment later, based on laws and/or actual or expected usefulness, but the initial impulse remains, even when overridden. To quote Casimir de Montrond, "Mistrust first impulses; they are nearly always good" (the quote is usually misattributed to Talleyrand).
Some examples:
I am not sure how to classify religious fanaticism (or other bigotry), but it seems to require a heavy dose of virtue ethics (feeling righteous), in addition to following the (deontological) tenets of one's chosen belief, with some consequentialism ("for the greater good") mixed in.
When I introspect on my own moral decisions (whether to tell the truth, to cheat on a test, or to drive over the speed limit), I can usually find a grain of virtue ethics inside. It may be followed or overridden, sometimes habitually, but it is always there. Can you find one in yours?