This was demonstrated, in a certain limited way, in Peterson (2009). See also Lowry & Peterson (2011).

The Peterson result provides an "asymmetry argument" in favor of consequentialism:

Consequentialists can account for phenomena that are usually thought of in nonconsequentialist terms, such as rights, duties, and virtues, whereas the opposite is false of nonconsequentialist theories. Rights, duty or virtue-based theories cannot account for the fundamental moral importance of consequences. Because of this asymmetry, it seems it would be preferable to become a consequentialist – indeed, it would be virtually impossible not to be a consequentialist.

Another argument in favor of consequentialism has to do with the causes of different types of moral judgments: see Are Deontological Moral Judgments Rationalizations?

Update: see Carl's criticism.

42 comments

To me, the main reason behind deontology and similar non-consequentialist moral theories is to work around human biases - our inability to implement consequentialism fully (because of overconfidence, hyperbolic discounting, stress, emotional weight, ...).

Having a way to encode a deontological moral theory into a utility function (and use consequentialism on it later on) is a nice thing, but not really useful when the point of deontology is that it (arguably) works better on the faulty hardware and buggy software we run on than raw consequentialism does. If we could perform consequentialism safely, we wouldn't need deontology.

So I stand by my current stance: I use consequentialism when cold-blooded and thinking abstractly, to devise and refine ethical rules ("deontology"), but when directly concerned by something or in the heat of events, I use the deontological rules decided beforehand, unless I have a very, very strong consequentialist reason not to. I don't trust myself to wield raw consequentialism, and the failure mode of poorly implemented consequentialism is usually worse than that of poorly implemented deontology (at least when you can revise the deontological code afterwards - I'm not speaking of a bible-like code that can't change even over the course of centuries).

That's fine as a heuristic for choosing the morally best action, but that's not really what the papers are talking about. They're talking about whether the underlying moral theory is consequentialist or deontological, not whether the heuristics for finding moral outcomes fall into either category.

Never mind whether it works in practice...does it work in theory?

I've long had the intuition that consequentialist ethics and deontological ethics were equivalent (in the sense described here). To someone who has read the papers (finals, etc.): why can't you represent an arbitrary consequentialist moral theory within a deontological moral theory? Many deontological theories are context-dependent - e.g., if context A holds, don't lie, but if context B holds, lie. Suppose we have some utility function U that we want to deontologicalize. It seems fairly trivial to set context A as "the set of situations where lying does not maximize utility according to U" and context B as "the set of situations where lying does maximize utility according to U." Indeed, if you look at a utility-maximizer in the right light, she looks like she's following a categorical imperative: "Always and everywhere choose the action which maximizes utility." What's wrong with this?
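
A minimal sketch of the "deontologicalization" I have in mind, in Python; the contexts, actions, and toy utility function are illustrative placeholders, not anything from the papers:

```python
# The context-dependent "rule" below simply reproduces the utility-maximizer's
# choice in every situation, which is the point of the construction.

def deontological_rule(context, actions, U):
    """The categorical imperative, restated: always pick the act that maximizes U."""
    return max(actions, key=lambda a: U(context, a))

# Toy utility function: lying maximizes utility in context B but not in A.
def U(context, action):
    return {"A": {"lie": 0, "truth": 1},
            "B": {"lie": 1, "truth": 0}}[context][action]

for ctx in ("A", "B"):
    print(ctx, "->", deontological_rule(ctx, ["lie", "truth"], U))
# A -> truth
# B -> lie
```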

Consequentialism (usually) has a slightly richer vocabulary than just "This is the right act": there's usually a notion of degree. That is, rather than having an ordinal ranking of actions, you get a cardinal ranking. So action A could be twice as good as action B. The translation you've proposed collapses this. I'm not sure how big a problem that is, though.
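
A tiny sketch of the information loss, with made-up numbers: a cardinal ranking can say action A is twice as good as B, while the proposed translation only keeps "the right act" versus everything else.

```python
cardinal = {"A": 10.0, "B": 5.0, "C": 1.0}   # A is twice as good as B

best = max(cardinal, key=cardinal.get)
deontological = {act: ("right" if act == best else "wrong") for act in cardinal}
print(deontological)  # {'A': 'right', 'B': 'wrong', 'C': 'wrong'}
# B and C are now indistinguishable, though B was five times as good as C.
```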

Yeah, that thought crossed my mind after I posted my comment. This may be what the authors are talking about. I envisioned the problem a little differently though - consequentialism does seem to only need an ordinal ranking, while deontological theories just need to put actions in categories "good" and "bad" depending on context - at least based on my understanding of the terms.

Actually, neither of the papers that Luke linked to seem to discuss whether consequentialist theories can be represented deontologically. They seem more interested in the reverse question.

Did you mean to say that consequentialism needs a cardinal ranking, rather than an ordinal one? A two-category ranking is certainly an ordinal one!

No, I didn't, but I should have said that consequentialism typically has a higher resolution - i.e., more categories if it's an ordinal ranking - so you're still losing information by making it deontological.

For some reason I've never understood, consequentialist philosophers also often/usually collapse that cardinal ranking into the right (usually just one) action and all the other wrong actions; see this. Presumably they wouldn't worry too much about this problem.

There may be an equivalence relation, or a transitive relation, between consequentialism, deontology and virtue ethics, but either way, they are basically one axis, and subjectivity -> objectivity is another.

One day I might understand why the issue of ethical subjectivity versus objectivity is so regularly ignored on Less Wrong.

I can represent a rigid prohibition against lying using time-relative lexicographic preferences or hyperreals, e.g. "doing an act that I now (at t1) believe has too high a probability of being a lie has infinite and overriding disutility, but I can do this infallibly (defining the high disutility act to enable this), and after taking that into account I can then optimize for my own happiness or the welfare of others, etc."
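
A minimal sketch of this lexicographic construction, using Python tuples (compared element by element) in place of hyperreals or infinite disutilities; the outcome fields are illustrative assumptions:

```python
def t1_utility(outcome):
    """Rank outcomes by (did not lie at t1, ordinary welfare), in that order.

    The first component dominates: every outcome in which the agent lied at
    t1 ranks below every outcome in which it did not, whatever the welfare.
    """
    return (0 if outcome["lied_at_t1"] else 1, outcome["welfare"])

outcomes = [
    {"lied_at_t1": True,  "welfare": 1000},
    {"lied_at_t1": False, "welfare": 3},
    {"lied_at_t1": False, "welfare": 7},
]
print(max(outcomes, key=t1_utility))   # {'lied_at_t1': False, 'welfare': 7}
```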

All well and good for t1, but then I need a new utility function for the next moment, t2, that places infinite weight on lying at t2 (edit: where the t1 utility function did not). The indexical description of the utility function hides the fact that we need a different ranking of consequences for most every moment and situation. I can't have a stable "Kantian utility function" that values weightings over world-histories and is consistent over time.

There are also some problems with the definition of acts and epistemic procedures such that one can have 100% certainty that one is not violating the deontological rules (otherwise they override any other lesser consequences).

All well and good for t1, but then I need a new utility function for the next moment, t2, that places infinite weight on lying at t2 but not at t1.

Why at t2 must you no longer place infinite weight against lying at t1? It would seem that if you did not, in fact, lie at t1 (and you can infallibly achieve this) then leaving the infinite disutility for lying at t1 makes no practical difference. Sure, if you ever tell a single lie all subsequent behavior will become arbitrary, but that possibility has been assumed away.

Provided you have infinite confidence in the impossibility of time travel or timeless decision theory style entanglement of past events with your choices now, that's right. It's not as problematic as placing infinite weight on lying at t2 when it's still t1 (which would license lying now to avoid future lying, contra deontology).

Provided you have infinite confidence in the impossibility of time travel or timeless decision theory style entanglement of past events with your choices now, that's right.

This would seem to be a problem only provided both of:

  • The "can do so infallibly" assumption is interpreted weakly - such that the infallibility is only assumed to hold at Time.now.
  • The meaning of "I" in "I lie" - that is, the construction of identity - is such that objects at (Time.now + x) and (Time.now - x) that are made of similar matter to "me" but that do not implement my decision algorithm, or one that my decision algorithm endorses or would seek to create, are still called me. The crude illustration being "Omega arbitrarily appears and hacks my utility function such that the end product wants to do stuff that I currently assign negative infinity utility to."

Without a model of "I" that includes (milder, more subtle) versions of that kind of modification as also instances of "I" there is not a problem leaving the negative utility for "I lie" at !Time.now in place. Apart from, you know, the fact that you have an agent with an idiotic absolute deontological injunction in place. But that was the specification we were implementing.

The implementation of mechanisms for what constitutes the "self" actor in any absolute-deontological injunction is ridiculously complicated and introduces all sorts of problems and potential pitfalls. However, I don't think they are complexities and pitfalls that I am arbitrarily introducing for my convenience. They seem to be actual problems intrinsic to the practical task of implementing a deontological agent in a physical universe. In fact, they are largely problems intrinsic to actually defining a deontological rule precisely.

It's not as problematic as placing infinite weight on lying at t2 when it's still t1

I would grant both the factors you mention - less than 1 confidence in the impossibility of time travel and acausal influences - as valid reasons to qualitatively accept all the problems of Time.greater_than.now infinities as problems for Time.less_than.now infinities. So I would make the same claim that they can work correctly if you either have an encompassing "infallible" or a correctly defined "I". (By 'correct' I mean "the same as what you would intend it to mean if you said the rule aloud".)

(which would license lying now to avoid future lying, contra deontology).

I assume we are imagining a different utility function here. I am imagining a utility function defined over universe-histories where negative infinity is returned when "I lie" is true at any point in time and normal stuff (like lots of utility for delicious cookies, volcano lairs and catgirls) for all the rest. Of all the undesirable outcomes that would come from executing such a utility function, lying now to avoid future lying would not be one of them. No lying behavior will ever be returned as the expected utility maximising action of that function except in cases where (according to your model) every available behavior has a non-zero chance of resulting in a lie. In that case behavior is, and should be, totally arbitrary.
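
A minimal sketch of that universe-history utility function; the history format is an illustrative assumption:

```python
def history_utility(history):
    """history: list of (i_lied, ordinary_utility) pairs, one per timestep."""
    if any(i_lied for i_lied, _ in history):
        return float("-inf")           # a lie anywhere in the history dominates
    return sum(u for _, u in history)  # otherwise, ordinary utility adds up

print(history_utility([(False, 5), (False, 10)]))     # 15
print(history_utility([(False, 5), (True, 10**9)]))   # -inf
```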

Thanks for this. Your point substantially undermines the importance of Peterson's result.

There is an important point here: even if you can show that you can give an agent a utility function that represents following a particular moral theory, that utility function might not be the same from person to person. For example, if you believe lying violates the categorical imperative, you might not lie even to prevent ten people from lying in the future. What you are trying to minimize in this situation is incidences of you lying, rather than of lying, full stop.

But any other moral agent would (by hypothesis) also be trying to minimize their lying, and so you lose the right to say things like, "You ought to maximize the good consequences (according to some notion of good consequences)," which some would say are defining of consequentialism.

At any rate, you end up with a kind of "consequentialism" that's in a completely different boat to, say, utilitarianism, and TBH isn't that interesting.

Certainly, other moral theories are not equivalent to utilitarianism, but why does that make them uninteresting to you?

Sorry, that wasn't what I meant to convey! My point is that if you weaken the conditions for a theory being "consequentialism" enough, then obviously you'll eventually be able to get everything in under that umbrella. But that may not be an interesting fact, it may in fact be nearly trivial. If you broaden the notion of consequences enough, and allow the good to be indexed to the agent we're thinking about, then yes, you can make everyone a consequentialist. But that shouldn't be that surprising. And all the major differences between, say, utilitarianism and Kantianism would remain.

Who is weakening the conditions for a theory being "consequentialism"? The thing described by Peterson seems perfectly in line with consequentialism. And his point about asymmetry among moral theories remains.

Well, there are a lot of things that get called "consequentialism" (take a look at the SEP article for a similar point). I personally find that "consequentialism" connotes to "agent-neutral" in my head, but that may just be me. I feel like requiring neutrality is a more interesting position precisely because bare consequentialism is so weak: it's not really surprising that almost everything is a form of it.

There's also the possibility of accidental equivocation, since people use "consequentialism" to stand for so many things. I actually think the stronger interpretations are pretty common (again, the SEP article has a little discussion on this), and so there is some danger of people thinking that this shows a stronger result than it actually does.

Nah, people argue all the time about agent neutrality. Agent-neutral consequentialism is simply one form of consequentialism, albeit a popular one.

The problem with Kantianism-as-"consequentialism" is that the consequences you have to portray the agent as pursuing, are not very plausible ultimate goals, on the face of it. What makes the usual versions of consequentialism appealing is, in large part, the immediate plausibility of the claim that these goals (insert the particular theory of the good here) are what really and ultimately matter. If we specify particular types of actions in the goal (e.g., lying is to be minimized) and index to the agent, that immediate plausibility fades.

I don't think so, if I understand Alicorn correctly.

Alicorn says that a "consequentialist doppelganger"

applies the following transformation to some non-consequentialist theory X:

  1. What would the world look like if I followed theory X?
  2. You ought to act in such a way as to bring about the result of step 1.

But that's not what Peterson is doing. Instead, his approach (along with several previous, incomplete and failed attempts to do this) merely captures whatever rules and considerations the deontologist cares about in what a decision-theoretic agent (a consequentialist) calls the "outcome." For example, the agent's utility function can be said to assign very, very low utility to an outcome in which (1) the agent has just lied, or (2) the agent has just broken a promise previously sworn to, or (3) the agent has just violated the rights of a being that counts as a moral agent according to criterion C. Etc.
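
A minimal sketch of that consequentializing move, in which the deontologist's rules are folded into what the decision-theoretic agent calls the "outcome"; the outcome flags and the penalty value are illustrative assumptions, not Peterson's actual construction:

```python
BIG_PENALTY = -10**9   # stands in for "very, very low utility"

def utility(outcome):
    """The outcome records whether the agent has just violated each rule."""
    u = outcome["ordinary_value"]
    if outcome["agent_just_lied"]:
        u += BIG_PENALTY
    if outcome["agent_just_broke_promise"]:
        u += BIG_PENALTY
    if outcome["agent_just_violated_rights"]:   # per some criterion C
        u += BIG_PENALTY
    return u

print(utility({"ordinary_value": 100,
               "agent_just_lied": True,
               "agent_just_broke_promise": False,
               "agent_just_violated_rights": False}))   # -999999900
```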

What is the important difference between (1) assigning low utilities to outcomes in which the agent has just lied, and (2) attempting, consequentialistically, to make the world look just like it would if the agent didn't lie? I mean, surely the way you do #2 is precisely by assigning low utilities to outcomes in which the agent lies, no?

It's trivial to put any act-based moral theory in simple deontological terms. Simply see what act the theory would recommend, and then insist that all must follow the rule "Subject S must take action A at time T".
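
A minimal sketch of that trivial restatement; theory_recommends is a hypothetical stand-in for whatever act-based theory is being "deontologized":

```python
def deontological_restatement(theory_recommends, subject, situation, time):
    # Ask the act-based theory what to do, then restate it as an indexed rule.
    action = theory_recommends(subject, situation)
    return f"{subject} must take action {action!r} at time {time}"

print(deontological_restatement(lambda s, sit: "donate", "S", {}, "T"))
# S must take action 'donate' at time T
```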

As was helpfully alluded to at the end of the second paper, virtue ethics can't necessarily be consequentialized. In particular, some virtue/character ethics proponents suggest that acts are not the proper purview of ethics - those variations of virtue ethics do not necessarily provide any concrete recommendations as to whether one action is preferred to another, in general or in a particular situation. Rather, on some views, virtue ethics tries to answer "What is a good man?" or "What is the good life?" - questions that simply are not addressed by an act-based ethics.

Hm.
I can certainly see how "Everyone must take those actions that they most expect to improve the state of the world" can be treated as a restatement of certain kinds of nominally non-rule-based moral systems in terms of rules.
But trying to restate that principle as a set of rules governing what specific acts specific individuals must perform at specific times strikes me as far from trivial.

Trivial at one end, very involved at the other. I'm not sure the various methods of consequentializing that were suggested fare significantly better.

That bit was introduced in the same spirit as:

  • Q:"Bungee jumping is so safe, how could someone die doing it?"
  • A:"I could drink poison while bungee jumping."

At which point "Why the hell would you want to do that?!" might be an appropriate response.

You can represent any form of agency with a utility function that is 0 for doing what the agent does not want to do, and 1 for doing what the agent wants to do. This looks like a special case of such triviality, as true as it is irrelevant. Generally, one of the problems with insufficient training in math is the lack of training in not reading extra purpose into mathematical definitions.
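
A minimal sketch of that trivial representation; agent_would_do is a hypothetical black box standing in for whatever agent is being modeled:

```python
def indicator_utility(agent_would_do, context, action):
    """1 for the acts the agent would do, 0 for the acts it would not."""
    return 1 if agent_would_do(context, action) else 0

# Any maximizer of this "utility function" just reproduces the original
# agent's behavior, which is why the representation adds no content.
agent = lambda context, action: (context, action) == ("greeting", "wave")
print(indicator_utility(agent, "greeting", "wave"))   # 1
print(indicator_utility(agent, "farewell", "wave"))   # 0
```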

What is a classic or particular illustrative example of the difference between consequentialism and deontological ethics?

Many forms of classical deontological ethics have rules like "don't lie" or "don't murder" as absolutes. A while ago, I saw a book by a Catholic writer that argued that, when hiding people wanted by the Nazis, if one was asked directly by a Nazi whether one was hiding someone, the ethical thing would be to say yes. A consequentialist will look at that and disagree quite strongly. In practice, many deontological systems won't go that far and will assign different priority levels to some deontological constraints, so most Catholic theologians would, as I understand it, see nothing wrong with lying in that context. Similarly, in classical ethics as discussed in both Islam and Judaism, lying in such a context would be considered the right thing to do. But some other deontological constraints may override - for example, in Orthodox Judaism, idolatry cannot be engaged in even to save a life.

In practice, most strong deontologists end up having systems that in most contexts look very similar to what the consequentialist would do, outside a few circumstances.

Thanks! OK, so a classical deontological rule might be, "don't lie" as an absolute. Suppose that a person has this particular rule in their ethical system. It is entirely context and consequence independent.

Since I don't have time to read articles about meta-ethics at the moment, I wanted to guess the gist of the argument that this could be expressed in consequentialist terms.

Is it as simple as: if someone tells a lie, then there is something that is now 'bad' about the universe, and this is framed as a consequence? For example, something as simple as 'a lie has been told', or perhaps a little more subtle: that a person is now in the negative state of being a liar. So there is a negative 'consequence' of the lie, but it just happens to be an absolutely immediate consequence.

Then the distinction would be that deontologists compute over consequences that are immediate results of an action or state (that is, of the action or state itself), while consequentialists will compute over the consequences of that action or state (where consequence has the usual meaning of second or third or nth effects).

The paper is both exciting and enlightening.

What? If the paper is anything like the excerpt you provided it is dull and trivially wrong. How did this ever get published?

Without trying to be too offensive, what checklist did you apply while endorsing it? Before endorsing the quotation, did you consider for yourself, for at least 10 seconds, whether the core premise could be easily falsified? Sure, that's not something I expect most people to do, but when applying the ever-useful heuristic "What would lukeprog do?" it is part of what I would come up with. My idealism is shattered!

Rights, duty or virtue-based theories cannot account for the fundamental moral importance of consequences.

In past conversations I have had far more trouble convincing people that consequentialist systems can completely incorporate (for example) deontological systems. Indeed, trying to correctly and completely model deontological values in a consequentialist framework is far, far harder than making either deontological or virtue-based systems account for the moral importance of consequences. I mean, come on, it is one deontological rule!

Update: see Carl's criticism.

This I like.

(The wedrifid persona is aware that its 'real world' alter ego should not make criticisms of lukeprog's posting, given that lukeprog gives him money. It would seem that wedrifid values its freedom of expression more than the slight reduction in expected economic value.)

It seems like this could plausibly have interesting consequences for dealing with (moral) normative uncertainty - it might make the whole process a fair bit easier if we could consequentialise all moral theories as a starting point (there would still be work to do but it seems like a good start...)

"seems it would be preferable to become a consequentialist – indeed, iti s be virtually impossible not to be a consequentialist."

A consequentialist about what? No one can be a consequentialist about their own actions, because no one has perfect knowledge of the ultimate outcomes of their actions. Practical morality cannot be consequentialism.

Expected utility doesn't have to be deterministic.

How is that relevant?

If you mean you can approximate expected utility with heuristics, that is arguably doing deontology and calling it consequentialism, or at least compromising between the two.

One thing that goes along with this is the idea that possible courses of action in any given situation can be sorted according to moral desirability. Of course in practice people differ about the exact ordering. But I've never heard anyone claim that in the moral sphere, B > A, C > B and simultaneously A > C. If, in a moral scheme, you always find that A > B and B > C imply A > C, then you ought to be able to map it to a utility function.
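
A minimal sketch of that mapping: if the moral comparisons never cycle, the actions can be numbered so that "morally better" always means "higher number", i.e. they admit a utility function. The actions and comparisons here are made up for illustration.

```python
from graphlib import TopologicalSorter   # standard library, Python 3.9+

# better_than[x] = the set of actions x is ranked above.
better_than = {"keep promise": {"tell white lie"},
               "tell white lie": {"betray friend"},
               "betray friend": set()}

# A topological order lists worse actions before better ones; a genuine cycle
# (A > B, B > C, C > A) would raise graphlib.CycleError instead.
worst_to_best = list(TopologicalSorter(better_than).static_order())
U = {action: rank for rank, action in enumerate(worst_to_best)}
print(U)   # {'betray friend': 0, 'tell white lie': 1, 'keep promise': 2}
```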

The only thing I'd add is that this doesn't map onto a materialist consequentialism. If you were part of the crew of a spacecraft unavoidably crashing into the Sun, with no power left and no communications - is there still a moral way to behave - when nothing you do will show in the material world in an hour or so? Many moral theories would hold so, but there isn't a material consequence as such...

is there still a moral way to behave - when nothing you do will show in the material world in an hour or so?

Suppose the universe has an inescapable Big Crunch or Heat Death ahead - is there a moral way to behave, when nothing you do will show in the material world in a googolplex years or so?

Either way the answer is yes: all the materialist consequentialists need is a utility functional which has support at all times t rather than just at t_infinity.

One thing that goes along with this is the idea that possible courses of action in any given situation can be sorted according to moral desirability.

This isn't necessary for the proof to work, AFAICT. All you need is to be able to say "In context A, action X is the moral action," i.e., there just needs to be a "best" action. Then set U(best action) > U(anything else).

The only thing I'd add is that this doesn't map onto a materialist consequentialism. If you were part of the crew of a spacecraft unavoidably crashing into the Sun, with no power left and no communications - is there still a moral way to behave - when nothing you do will show in the material world in an hour or so? Many moral theories would hold so, but there isn't a material consequence as such...

Every action you take has material consequences. You are, after all, made of material.