I have previously been saying things like "consequentialism is obviously correct". But this morning it occurred to me that this was gibberish.

I maintain that, for any consequentialist goal, you can construct a set of deontological rules which will achieve approximately the same outcome. The more fidelity you require, the more rules you'll have to make (so of course it's only isomorphic in the limit).

Similarly, for any given deontological system, one can construct a set of virtues which will cause the same behavior (e.g., "don't murder" becomes "it is virtuous to be the sort of person who doesn't murder").

The opposite is also true. Given a virtue ethics system, one can construct deontological rules which will cause the same things to happen. And given deontological rules, it's easy to get a consequentialist system by predicting what the rules will cause to happen and then calling that your desired outcome.
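To make the two constructions concrete, here is a minimal toy sketch (the actions, outcomes, and utility numbers are all invented for illustration): one direction compiles a utility function into "forbidden" rules at a chosen fidelity threshold, and the other predicts which outcomes rule-followers can produce and declares those the goal.

```python
# Toy sketch only: invented actions, outcomes, and utility numbers.
ACTIONS = ["murder", "steal", "donate", "do_nothing"]

OUTCOME_OF = {"murder": "victim_dead", "steal": "victim_robbed",
              "donate": "charity_funded", "do_nothing": "nothing_changed"}

UTILITY = {"victim_dead": -100, "victim_robbed": -10,
           "charity_funded": 5, "nothing_changed": 0}

# Consequentialism -> deontology: forbid every action whose predicted outcome
# scores below a threshold. A finer grid of thresholds and conditions means
# more rules and a closer approximation.
def compile_rules(threshold=0):
    return {a for a in ACTIONS if UTILITY[OUTCOME_OF[a]] < threshold}

# Deontology -> consequentialism: predict which worlds rule-followers can
# bring about, and call that set your desired outcome.
def induced_goals(forbidden):
    return {OUTCOME_OF[a] for a in ACTIONS if a not in forbidden}

rules = compile_rules()       # {'murder', 'steal'}
print(induced_goals(rules))   # {'charity_funded', 'nothing_changed'}
```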

Given that you can phrase your desired (outcome, virtues, rules) in any system, it's really silly to argue about which system is the "correct" one.

Instead, recognize that some ethical systems are better for some tasks. Want to compute actions given limited computation? Better use deontological rules or maybe virtue ethics. Want to plan a society that makes everyone "happy" for some value of "happy"? Better use consequentialist reasoning.

Last thought: none of the three frameworks actually gives any insight into morality. Deontology leaves the question of "what rules?", virtue ethics leaves the question of "what virtues?", and consequentialism leaves the question of "what outcome?". The hard part of ethics is answering those questions.

(ducks before accusations of misusing "isomorphic")

In principle, you can construct a utility function that represents a deontologist who abhors murder: you assign a large negative value to worlds in which the deontologist commits murder. But it's kludgy. If a consequentialist says that murder is bad, they mean that it's bad whoever does it.

It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.

Instead, recognize that some ethical systems are better for some tasks.

If you choose your ethical system based on how it fulfils a task, you are already a consequentialist. Deontology and virtue ethics don't care about getting things done.

All ethical frameworks are equal in the same way that all coordinate systems are equal.

But I'll be damned if it isn't easier to graph circles with polar coordinates than it is with Cartesian coordinates.

Easier if the center of the circle is at the origin of the coordinate system.

(Sorry for slow response. Super busy IRL.)

If a consequentialist says that murder is bad, they mean that it's bad whoever does it.

Not necessarily. I'm not saying it makes much sense, but it's possible to construct a utility function that values agent X not having performed action Y, but doesn't care if agent Z performs the same action.
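For concreteness, a minimal sketch of such a function (the agent names, the action, and the penalty are all made up):

```python
# Agent-relative toy utility: it penalizes agent X performing action Y,
# but is indifferent to any other agent performing the same action.
def agent_relative_utility(history, agent_x="X", action_y="murder"):
    """history is a list of (agent, action) pairs; the -1000 is arbitrary."""
    return sum(-1000 for agent, action in history
               if agent == agent_x and action == action_y)

print(agent_relative_utility([("X", "murder")]))  # -1000
print(agent_relative_utility([("Z", "murder")]))  # 0 -- it doesn't care
```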

It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.

a) After reading Luke's link below, I'm still not certain if what I've said about them being (approximately) isomorphic is correct... b) Assuming my isomorphism claim is true enough, I'd claim that the "meaning" carried by your preferred ethical framework is just framing.

That is, (a) imagine that there's a fixed moral landscape. (b) Imagine there are three transcriptions of it, one in each framework. (c) Imagine agents would all agree on the moral landscape, but (d) in practice differ on the transcription they prefer. We can then pessimistically ascribe this difference to the agents preferring to make certain classes of moral problems difficult to think about (i.e., shoving them under the rug).

Deontology and virtue ethics don't care about getting things done.

I maintain that this is incorrect. The framework of virtue ethics could easily have the item "it is virtuous to be the sort of person who gets things done." And "Make things happen, or else" could be a deontological rule. (Just because most examples of these moral frameworks are lame doesn't mean that it's a problem with the framework as opposed to the implementation.)

This reminds me of a part of the Zombies sequence, specifically the Giant Lookup Table. Yes, you can approximate consequentialism by a sufficiently complex set of deontological rules, but the question is: Where did those rules come from? What process generated them?

If we somehow didn't have any consequentialist intuitions, what is the probability that we would invent a "don't murder" deontological rule, instead of any of the possible alternatives? Actually, why would we even feel a need to have any rules?

Deontological rules seem analogous to a lookup table. They are precomputed answers to ethical questions. Yes, they may be correct. Yes, using them is probably much faster than trying to compute them from scratch. But the reason why we have these deontological rules instead of some other deontological rules is partly consequentialism and partly historical accidents.

partly consequentialism and partly historical accidents

Memetic evolution as well. Communes and societies with "bad" deontological rules do not survive.

[anonymous]:

Ethical memes don't randomly mutate or come into being fully formed. The victors look at why they won and why their opponents lost and adjust accordingly, which is more in line with the quote.

But the reason why we have these deontological rules instead of some other deontological rules is partly consequentialism and partly historical accidents.

Why is it partly consequentialism? In what sense did consequentialism have any causal role to play in the development of deontological ethical systems? I highly doubt that the people who developed and promulgated them were closet consequentialists who chose the rules based on their consequences.

Where did those rules come from? What process generated them?

Where did your utility function come from? What process generated it?

Evolution, of course.

We could classify reflexes and aversions as deontological rules. Some of them would even sound moral-ish, such as "don't hit a person stronger than you" or "don't eat disgusting food". Not completely unlike what some moral systems say. I guess more convincing examples could be found.

But if the rule is more complex, if it requires some thinking and modelling of the situation and other people... then consequences are involved. Maybe imaginary consequences (if we don't offer sacrifices to the gods, they will be angry and harm us). Though this could be considered merely a rationalization of a rule created by memetic evolution.

That one differs in only reducing virtue ethics and deontology to consequentialism, without giving the reverse formulation. Nonetheless, a relevant link.

Hm, thanks.

xnn:

(ducks before accusations of misusing "isomorphic")

Old joke:

Q: Are these two objects isomorphic?

A: The first one is, but the second one isn't.

Can deontology and/or virtue ethics include "keep track of the effects of your actions, and if the results are going badly wrong, rethink your rules"?

I think so. I know they're commonly implemented without that feedback loop, but I don't see why that would be a necessary "feature".

Sort of! But not exactly. This is a topic I've been meaning to write a long post on for ages, and have given a few short impromptu presentations about.

Consequentialism, deontology, and virtue ethics are classifiers over world-histories, actions, and agents, respectively. They're mutually reducible, in that you can take a value system or a value-system fragment in any one of the three forms, and use it to generate a value system or value-system fragment in either of the other two forms. But value-system fragments are not equally naturally expressed in different forms; if you take a value from one and try to reframe it in the others, you sometimes get an explosion of complexity, particularly if you want to reduce value-system fragments which have weights and scaling properties, and have those weights and scaling properties carry through.
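As a rough sketch of what I mean by "classifiers" over those three domains, with hypothetical type names and one crude reduction (nothing here is meant as a worked-out formalism):

```python
from typing import Callable, List, Tuple

WorldHistory = List[str]                 # e.g. ["alice lies to bob", ...]
Action = Tuple[str, str]                 # (agent, act)
Agent = str

Consequentialist = Callable[[WorldHistory], float]        # scores world-histories
Deontologist = Callable[[Action], bool]                    # permits or forbids actions
VirtueEthicist = Callable[[Agent, WorldHistory], float]    # scores agents

# One direction of the reduction: an action-classifier becomes a world-history
# scorer by counting the forbidden actions that occur in the history. Weights
# and scaling are exactly what this crude translation fails to carry through.
def deontic_to_consequentialist(permitted: Deontologist,
                                parse: Callable[[str], Action]) -> Consequentialist:
    def score(history: WorldHistory) -> float:
        return -sum(1 for event in history if not permitted(parse(event)))
    return score
```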

Henry Sidgwick in "The Methods of Ethics" actually makes the argument that Utilitarianism can be thought of as having a single predominant rule, namely the Greatest Happiness Principle, and that all other correct moral rules could follow from it, if you just looked closely enough at what a given rule was really saying. He noted that when properly expanded, a moral rule is essentially an injunction to act a certain way in a particular circumstance, that is universal to any person in identical circumstances. He also had some interesting things to say about the relationships between virtues and Utilitarianism, and more or less tried to show that the various commonly valued virtues could be inferred from a Utilitarian perspective.

Of course Sidgwick was arguing in a time before the clear-cut delineation of moral systems into "Deontological", "Consequentialist", and "Virtue Ethics". But I thought it would be useful to point out that early classical Utilitarian thinkers did not see these clear-cut delineations and instead often made use of the language of rules and virtues to further their case for Utilitarianism as a comprehensive and inclusive moral theory.

[anonymous]:

I agree. In principle, you could construct a total order over all possible states of the world. All else is merely a pretty compression scheme. That being said, the scheme is quite necessary.

I don't believe the isomorphism holds under the (imo reasonable) assumption that rulesets and utility functions must be of finite length, correct?

Which is why I said "in the limit". But I think, if it is true that one can make reasonably close approximations in any framework, that's enough for the point to hold.

I've been wondering if it makes sense to think of ethical philosophies as different classes of modeling assumptions:

Any moral statement can be expressed using the language of consequentialism, deontology or virtue ethics. The statements can therefore be translated from one framework to another. In that sense, the frameworks are equivalent. However, some statements are much easier to express in a given language.

Sometimes, we make models of ethics to explore the underlying rules that make an ethical statement "true". We try to predict whether an ethical statement is true using information from other, closely related ethical statements. However, ethical statements are multidimensional and therefore vary across many different axes. Two ethical statements can be closely related on one axis, and completely different from each other on another axis. In order to learn about the underlying rules, we have to specify which axis we are going to make modeling assumptions on. The choice will determine whether you call yourself a "consequentialist" or a "deontologist".

Problems such as "the repugnant conclusion" and "being so honest that you tell a murderer where your children are" occur when we extrapolate too far along this axis, and end up way beyond the range of problems that the model is fit to.

You can take a set of object-level answers and construct a variety of ethical systems that produce those answers, but it still matters which ethical system you use because your justification for those answers would be different, and because while the systems may agree on those answers, they may diverge on answers outside the initial set.

If indeed the frameworks are isomorphic, then this is actually just another case of humans allowing their judgment to be affected by an issue's framing. Which demonstrates only that there is a bug in human brains.

Isn't that a necessary step for the claims Derek Parfit makes about convergence in his "On What Matters"?

I have been saying this for quite some time. I regret not posting it first. It would be nice to have a more formal proof of all of this with utility functions, deontics and whatnot. If you are up for it, let me know. I could help, give feedback, or we could work together. Perhaps someone else has done it already. It has always struck me as pretty obvious, but this is the first time I've seen it stated like this.

Check out the previous discussion Luke linked to: http://lesswrong.com/lw/c45/almost_every_moral_theory_can_be_represented_by_a/

It seems there's some question about whether you can phrase deontological rules consequentially-- to make this more formal that needs to be settled. My first thought is that the formal version of this would say something along the lines of "you can achieve an outcome that differs by only X%, with a translation function that takes rules and spits out a utility function, which is only polynomially larger." It's not clear to me how to define a domain in such a way as to allow you to compute that X%.
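Maybe something like this toy setup (the indicator-style translation, the tie-breaking policies, and the "X%" measured as a disagreement rate over a finite list of situations are all invented for illustration, not a worked-out formalism):

```python
def rules_to_utility(forbidden):
    """Indicator-style translation: forbidden actions get a flat penalty."""
    return lambda action: -1.0 if action in forbidden else 0.0

def rule_follower(forbidden):
    """Pick the first permitted action (falling back to the first action)."""
    return lambda available: next((a for a in available if a not in forbidden),
                                  available[0])

def maximizer(utility):
    return lambda available: max(available, key=utility)

def disagreement_rate(policy_a, policy_b, situations):
    """The 'X%': fraction of situations where the two policies choose differently."""
    return sum(policy_a(s) != policy_b(s) for s in situations) / len(situations)

# Example: the translated utility reproduces the rule-follower's behavior here,
# so the disagreement rate ('X%') comes out as 0.0.
situations = [["murder", "lie", "help"], ["lie", "help"], ["murder", "help"]]
rules = {"murder", "lie"}
print(disagreement_rate(rule_follower(rules),
                        maximizer(rules_to_utility(rules)),
                        situations))  # 0.0
```

Of course, here the domain is just a hand-picked list of situations, which dodges exactly the question of how to define the domain so that the X% is meaningful.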

...unfortunately, as much as I would like to see people discuss the moral landscape instead of the best way to describe it, I have very little time lately. :/

Ok. If I ever get to work on this I will let you know; perhaps you can help/join.

[anonymous]:

I think the choice of systems is less important than their specifics, and that giving a choice but no specifics is being obtuse.

[This comment is no longer endorsed by its author]
Shmi:

On isomorphism: every version of utilitarianism I know of leads to a repugnant conclusion in one way or another, or even to multiple ones. I don't think that deontology and virtue ethics are nearly as susceptible. In other words, you cannot construct a utilitarian equivalent of an ethical system which is against suffering (without explicitly minimizing some negative utility) but does not value torture over dust specks.

EDIT: see the link in lukeprog's comment for the limits of consequentialization.

I agree with your point about utilitarianism, but it is only one form of consequentialism, not the entire class. Consequentialism doesn't need to lead to a repugnant conclusion.

If you translate Kantian ethics into consequentialism, you get a utility function with a large negative value for you torturing someone, lying, and doing several other things. Suppose a sadist comes up to you and asks where your children are, so that he can torture them. Lying to him has huge negative utility. Telling him where your children are does not. He'll torture them, but since it's not you that's torturing them, it doesn't matter. It's his problem, or rather it would be if he was a Kantian.

Does a utility function that only prohibits torture if a specific person does it really count as being against suffering?

"He'll torture them, but since it's not you that's torturing them, it doesn't matter."

That doesn't remotely follow. Kantians are supposed to abstain from lying, etc., because of the knock-on effects, because they would not wish lying to become general law. So "it's not me who's doing it" is the antithesis of Kantianism.

If you're unwilling to lie to prevent torture, then it seems pretty clear that you're more okay with you lying than the other guy torturing.

Under deontological ethics, you are not responsible for everything. If someone is going to kill someone, and it doesn't fall under your responsibility, you have no ethical imperative to stop them. In what sense can you be considered to care about things you are not responsible for?

[anonymous]:

If someone is going to kill someone, and it doesn't fall under your responsibility, you have no ethical imperative to stop them.

This isn't true, at any rate, for Kant. Kant would say that you have a duty to help people in need when it doesn't require self-destructive or evil behavior on your part. It's permissible, perhaps, to help people in need self-destructively, and it's prohibited to help them by doing something evil (like lying). You are responsible for the deaths or torture of the children in the sense that you're required to do what you can to prevent such things, but you're not responsible for the actions of other people, and you can't be required (or permitted) to do forbidden things (this is true of any consistent ethical theory).

And of course, Kant thinks we can and do care about lots of things we aren't morally responsible for. Morality is not about achieving happiness, but becoming worthy of happiness. Actually being happy will require us to care about all sorts of things.

This isn't true, at any rate, for Kant. Kant would say that you have a duty to help people in need when it doesn't require self-destructive or evil behavior on your part.

In other words, if it costs you nothing. You consider having no self-destructive or evil behavior on your part to be infinitely more valuable.

this is true of any consistent ethical theory

It is true by definition. That's what "forbidden" means.

And of course, Kant thinks we can and do care about lots of things we aren't morally responsible for.

We are not using the same definition of "care". I mean whatever motivates you to action. If you see no need to take action, you don't care.

[anonymous]:

In other words, if it costs you nothing. You consider having no self-destructive or evil behavior on your part to be infinitely more valuable.

No, there's a lot of room between 'costs you nothing' and 'self-destructive'. The question is whether or not a whole species or society could exist under universal obedience to a duty, and a duty that requires self-destruction for the sake of others would make life impossible. But obviously, helping others at some cost to you doesn't.

Also, I was pretty careful to say that you can't have a DUTY to help others self-destructively. But it's certainly permissible to do so (so long as it's not aimed at self-destruction). You are however prohibited from acting wrongly for the sake of others, or yourself. And that's just Kant saying "morality is the most important thing in the universe." That's not so weird a thought.

"We are not using the same definition of "care". I mean whatever motivates you to action. If you see no need to take action, you don't care."

No, we're using the same definition. So again, Kant thinks we can and do care about, for example, the moral behavior of others. We're not morally responsible for their behavior (but then, no ethical theory I know of asserts this), but we can certainly care about it. You ought to prevent the murderer at the door from finding the victim. You should do everything in your power, and it's permissible to die trying if that's necessary. You just can't do evil. Because that would be to place something above the moral law, and that's irrational.

It's not plausible to think that if someone doesn't act, they don't care. If someone insults me, I generally won't strike them or even respond, but that doesn't mean I'm not pissed off. I just think obeying the law and being civil is more important than my feelings being hurt.

But I'm just channeling Kant here, I'm not saying I agree with this stuff. But, give credit...there are very few ethical ideas as compelling and powerful and influential as his.

No, there's a lot of room between 'costs you nothing' and 'self-destructive'.

I got the impression that you aren't allowed any self-harm or evil acts. If you won't commit epsilon evil to stop something, then you care about it less than epsilon evil. If this is true for all epsilon, you only care an infinitesimal amount.

I don't mean "costs nothing" as in "no self-harm". I mean that a Kantian cares about not directly harming others, so directly harming others would be a cost to something. You could measure how much they care about something by how much they're willing to harm others for it. If they're only willing to harm others by zero, they care zero about it.

Also, I was pretty careful to say that you can't have a DUTY to help others self-destructively. But it's certainly permissible to do so (so long as it's not aimed at self-destruction).

It's also permissible under nihilist ethics. I'm not going to say that nihilism is anti-suffering just because nihilism allows you to prevent it.

I judge an ethical system based on what someone holding to it must do, not what they can.

You are however prohibited from acting wrongly for the sake of others, or yourself. And that's just Kant saying "morality is the most important thing in the universe."

If you are prohibited from acting wrongly under any circumstances, then the most important thing is that you, personally, are moral. Everyone else acting immorally is an infinitely distant second.

No, we're using the same definition.

If someone insults me, I generally won't strike them or even respond, but that doesn't mean I'm not pissed off.

We are not using the same definition. When I say that someone following an ethical framework should care about suffering, I don't mean that it should make them feel bad. I mean that it should make them try to stop the suffering.

Although my exact words were "In what sense can you be considered to care about things you are not responsible for?", so technically the answer would be "In the sense that you feel bad about it."

[anonymous]:

I got the impression that you aren't allowed any self-harm or evil acts. If you won't commit epsilon evil to stop something, then you care about it less than epsilon evil. If this is true for all epsilon, you only care an infinitesimal amount.

This sounds right to me, so long as 'self-harm' is taken pretty restrictively, and not so as to include things like costing me $20.

In his discussion of the 'murderer at the door' case Kant takes pains to distinguish between 'harm' and 'wrong'. So while we should never wrong anyone, there's nothing intrinsically wrong with harming people (he grants that you're harming, but not wronging, the victim by telling the truth to the murderer). So in this sense, I think you're right that Kantian deontology isn't worried about suffering in any direct sense. Kant will agree that suffering is generally morally significant, and that we all have an interest in minimizing it, but he'll say that it's not immediately a moral issue. (I think he's right about that). So this isn't to say that a Kantian shouldn't care about suffering, just that it's as subordinate to morality as is pleasure, wealth, etc.

I judge an ethical system based on what someone holding to it must do, not what they can.

It seems to me arbitrary to limit your investigation of ethics in this way. The space of permissibility is interesting, not least because there's a debate about whether or not that space is empty.

then the most important thing is that you, personally, are moral. Everyone else acting immorally is an infinitely distant second.

Agreed, though everything is an infinitely distant second, including your own happiness. But no one would say that you aren't therefore passionately attached to your own happiness, or that you're somehow irrational or evil for being so attached.

Are you saying that some consequentialist systems don't even have deontological approximations?

It seems like you could have rules of the form "Don't torture... unless by doing the torture you can prevent an even worse thing", together with a checklist to compare badness... so I'm not convinced?

Shmi:

Are you saying that some consequentialist systems don't even have deontological approximations?

Actually, this one is trivially true, with the rule being "maximize the relevant utility". I am saying the converse need not be true.