To me, the main reason behind deontology and similar non-consequentialist moral theories is to work around human biases: our inability to implement consequentialism fully (because of overconfidence, hyperbolic discounting, stress, emotional weight, ...).
Having a way to encode a deontological moral theory into a utility function (and apply consequentialism to it later on) is a nice thing, but not really useful when the point of deontology is that it (arguably) works better than raw consequentialism on the faulty hardware and buggy software we run on. If we could perform consequentialism safely, we wouldn't need deontology.
So I stand by my current stance: I use consequentialism when cold-blooded and thinking abstractly, to devise and refine ethical rules ("deontology"), but when directly concerned by something or in the heat of events, I use the deontological rules decided beforehand, unless I have a very, very strong consequentialist reason not to. I don't trust myself to wield raw consequentialism, and the failure mode of poorly implemented consequentialism is usually worse than that of poorly implemented deontology (provided you can at least revise the deontological code afterwards; I'm not speaking of a Bible-like code that can't change even over the course of centuries).
This was demonstrated, in a certain limited way, in Peterson (2009). See also Lowry & Peterson (2011).
The Peterson result provides an "asymmetry argument" in favor of consequentialism.
Another argument in favor of consequentialism has to do with the causes of different types of moral judgments: see "Are Deontological Moral Judgments Rationalizations?"
Update: see Carl's criticism.