This was demonstrated, in a certain limited way, in Peterson (2009). See also Lowry & Peterson (2011).
The Peterson result provides an "asymmetry argument" in favor of consequentialism:
Consequentialists can account for phenomena that are usually thought of in nonconsequentialist terms, such as rights, duties, and virtues, whereas the opposite is false of nonconsequentialist theories. Rights, duty or virtue-based theories cannot account for the fundamental moral importance of consequences. Because of this asymmetry, it seems it would be preferable to become a consequentialist – indeed, it would be virtually impossible not to be a consequentialist.
Another argument in favor of consequentialism has to do with the causes of different types of moral judgments: see Are Deontological Moral Judgments Rationalizations?
Update: see Carl's criticism.
I can represent a rigid prohibition against lying using time-relative lexicographic preferences or hyperreals, e.g. "doing an act that I now (at t1) believe has too high a probability of being a lie has infinite and overriding disutility, but I can do this infallibly (defining the high disutility act to enable this), and after taking that into account I can then optimize for my own happiness or the welfare of others, etc."
All well and good for t1, but then I need a new utility function for the next moment, t2, one that places infinite weight on lying at t2 (where the t1 utility function did not). The indexical description of the utility function hides the fact that we need a different ranking of consequences for almost every moment and situation. I can't have a stable "Kantian utility function" that ranks world-histories and is consistent over time.
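The lexicographic representation described above can be sketched in code. This is an illustrative toy, not anything from Peterson or Brown: the `Outcome` fields and function names are my own invented labels. The point is that the lying term gets strict priority over the welfare term, so no finite welfare gain can offset any increase in expected lying.

```python
from typing import NamedTuple

class Outcome(NamedTuple):
    """A candidate outcome as evaluated at a fixed time t1.
    expected_lies counts (probability-weighted) lies *at t1* only."""
    expected_lies: float
    welfare: float

def lex_better(a: Outcome, b: Outcome) -> bool:
    """Lexicographic preference: a is preferred to b iff it involves
    strictly less expected lying, or ties on lying and has more welfare.
    This mimics 'infinite disutility' for lying without using infinities."""
    if a.expected_lies != b.expected_lies:
        return a.expected_lies < b.expected_lies
    return a.welfare > b.welfare

# Any reduction in expected lying trumps any welfare gain:
print(lex_better(Outcome(expected_lies=0.0, welfare=-100.0),
                 Outcome(expected_lies=0.1, welfare=1000.0)))   # True
# With lying tied, ordinary welfare comparison takes over:
print(lex_better(Outcome(expected_lies=0.0, welfare=5.0),
                 Outcome(expected_lies=0.0, welfare=1.0)))      # True
```

The time-indexing problem shows up in the docstring: `expected_lies` only counts lies at t1, so the t2 agent needs a different `Outcome` type (one counting lies at t2) and hence a different ranking, which is the instability complained of above.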
There are also problems with defining acts and epistemic procedures such that one can be 100% certain one is not violating the deontological rules (otherwise their infinite disutility overrides all lesser consequences).
Also see Brown, Consequentialize This.