I have previously been saying things like "consequentialism is obviously correct". But this morning it occurred to me that this was gibberish.
I maintain that, for any consequentialist goal, you can construct a set of deontological rules which will achieve approximately the same outcome. The more fidelity you require, the more rules you'll have to make (so of course it's only isomorphic in the limit).
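Here's a toy Python sketch of that compilation, just to make the fidelity-vs-rule-count tradeoff concrete. The utility function, states, and actions are all invented for the example; nothing here is drawn from any actual ethical theory.

```python
# Toy model: "compile" a consequentialist goal into deontological rules.
# More rules -> closer agreement with the consequentialist (isomorphic in
# the limit where every state gets its own rule).

def utility(state, action):
    """The consequentialist's objective: how good is this action in this state?"""
    return -abs(state - action)

def best_action(state, actions):
    """What a full consequentialist calculation would pick."""
    return max(actions, key=lambda a: utility(state, a))

def compile_rules(states, actions, n_rules):
    """Build a rulebook {state: prescribed action} by reading answers off
    the utility function for the first n_rules states."""
    return {s: best_action(s, actions) for s in states[:n_rules]}

def rule_follower(rules, state, default=0):
    """A deontologist: look the situation up in the rulebook; never compute utilities."""
    return rules.get(state, default)

states = list(range(100))
actions = list(range(100))
for n in (5, 50, 100):
    rules = compile_rules(states, actions, n)
    matches = sum(rule_follower(rules, s) == best_action(s, actions) for s in states)
    print(f"{n:3d} rules: agrees with the consequentialist in {matches}/100 states")
```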
Similarly, for any given deontological system, one can construct a set of virtues that will cause the same behavior (e.g., "don't murder" becomes "it is virtuous to be the sort of person who doesn't murder").
The opposite is also true. Given a virtue ethics system, one can construct deontological rules which will cause the same things to happen. And given deontological rules, it's easy to get a consequentialist system by predicting what the rules will cause to happen and then calling that your desired outcome.
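The rules-to-consequentialism direction is almost a one-liner in the same toy spirit: score an outcome by whether a rule-follower would have produced it, then maximize that score. Again, the rule and action names below are purely illustrative.

```python
# Given a rulebook, manufacture a consequentialist objective that simply
# rewards rule-conforming outcomes. An agent maximizing this utility
# behaves exactly like the original rule-follower.
def induced_utility(rules, state, action, unspecified=None):
    """Score 1 for exactly the outcome a rule-follower would produce, 0 otherwise."""
    return 1 if rules.get(state, unspecified) == action else 0

rules = {"see_drowning_child": "rescue"}
print(induced_utility(rules, "see_drowning_child", "rescue"))   # 1
print(induced_utility(rules, "see_drowning_child", "walk_by"))  # 0
```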
Given that you can phrase your desired (outcome, virtues, rules) in any system, it's really silly to argue about which system is the "correct" one.
Instead, recognize that some ethical systems are better for some tasks. Want to compute actions given limited computation? Better use deontological rules or maybe virtue ethics. Want to plan a society that makes everyone "happy" for some value of "happy"? Better use consequentialist reasoning.
Last thought: none of the three frameworks actually gives any insight into morality. Deontology leaves the question of "what rules?", virtue ethics leaves the question of "what virtues?", and consequentialism leaves the question of "what outcome?". The hard part of ethics is answering those questions.
(ducks before accusations of misusing "isomorphic")
(Sorry for slow response. Super busy IRL.)
Not necessarily. I'm not saying it makes much sense, but it's possible to construct a utility function that values agent X not having performed action Y, yet is indifferent to agent Z performing the same action.
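A minimal sketch of such a utility function, with the agent and action names invented for the example:

```python
# A utility function that disvalues *agent X* performing action Y, while
# being indifferent to Z (or anyone else) performing the same action.
def u(history):
    """history: a list of (agent, action) events that have occurred."""
    # Only X's performance of Y lowers utility; Z doing Y is ignored.
    return 0.0 if ("X", "Y") in history else 1.0

print(u([("Z", "Y")]))              # 1.0 -- indifferent to Z doing Y
print(u([("X", "Y"), ("Z", "Y")]))  # 0.0 -- X having done Y is what's disvalued
```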
a) After reading Luke's link below, I'm still not certain whether what I've said about them being (approximately) isomorphic is correct... b) Assuming my isomorphism claim is true enough, I'd claim that the "meaning" carried by your preferred ethical framework is just framing.
That is, (a) imagine that there's a fixed moral landscape. (b) Imagine there are three transcriptions of it, one in each framework. (c) Imagine that agents all agree on the moral landscape but (d) in practice differ on the transcription they prefer. We can then pessimistically ascribe this difference to the agents preferring to make certain classes of moral problems difficult to think about (i.e., sweeping them under the rug).
I maintain that this is incorrect. The framework of virtue ethics could easily include the item "it is virtuous to be the sort of person who gets things done." And "Make things happen, or else" could be a deontological rule. (Just because most examples of these moral frameworks are lame doesn't mean the problem lies with the framework rather than the implementation.)