I think there’s a confusion in our discussions of deontology and consequentialism. I’m writing this post to try to clear up that confusion. First let me say that this post is not about any territorial facts. The issue here is how we use the philosophical terms of art ‘consequentialism’ and ‘deontology’.

The confusion is often stated thusly: “deontological theories are full of injunctions like ‘do not kill’, but they generally provide no (or no interesting) explanations for these injunctions.” There is of course an equivalently confused, though much less common, complaint about consequentialism.

This is confused because the term ‘deontology’ in philosophical jargon picks out a normative ethical theory, while the question ‘how do we know that it is wrong to kill?’ is not a normative but a meta-ethical question. Similarly, consequentialism contains in itself no explanation for why pleasure or utility is morally good, or why consequences should matter to morality at all. Nor does consequentialism/deontology make any claims about how we know moral facts (if there are any). That is also a meta-ethical question.

Some consequentialists and deontologists are also moral realists. Some are not. Some believe in divine commands, some are hedonists. Consequentialists and deontologists in practice always also subscribe to some meta-ethical theory which purports to explain the value of consequences or the source of injunctions. But consequentialism and deontology as such do not. In order to avoid strawmanning either the consequentialist or the deontologist, it’s important to either discuss the comprehensive views of particular ethicists, or to carefully leave aside meta-ethical issues.

This Stanford Encyclopedia of Philosophy article provides a helpful overview of the issues in the consequentialist-deontologist debate, and is careful to distinguish between ethical and meta-ethical concerns.

SEP article on Deontology

28Jack

This is right in spirit but wrong in letter:

The confusion is often stated thusly: “deontological theories are full of injunctions like ‘do not kill’, but they generally provide no (or no interesting) explanations for these injunctions.”

It's not a confusion; it's just something that isn't true. Deontological theories routinely provide explanations for these injunctions, and some of these explanations are interesting (though I guess that's subjective).

This is confused because the term ‘deontology’ in philosophical jargon picks out a normative ethical theory, while the question ‘why is it wrong to kill?’ is not a normative but a meta-ethical question.

No it isn't. "Why is it wrong to kill?" is a great example of a normative question! Utilitarianism provides an answer. So does deontology. A meta-ethical question would be "what does it mean to say, 'it's wrong to kill'". An applied ethics question would be "in circumstances x, y and z, is it wrong to kill?". Normative theories are absolutely supposed to answer this question.

Some consequentialists and deontologists are also moral realists. Some are not.

While I guess this could be logically possib... (read more)

3[anonymous]
To be absolutely clear, my post is about the way academic philosophy happens to organize a certain debate, and I cite that SEP article as my major source. It will be very helpful to me if you point out where you disagree with the SEP article (and on what basis), or where you think I've misread it. (Look specifically at this section: http://plato.stanford.edu/entries/ethics-deontological/#DeoTheMet) Again, there is no fact of the matter about what is a normative and what is a meta-ethical question, just a convention. Being a moral anti-realist is compatible with having, and following, a moral theory: you just think you have reasons to be moral which are not based on mind-independent facts. For example, you might think convention gives you reason to be moral, where conventionalism is traditionally described as a form of non-realism. (See: http://plato.stanford.edu/entries/moral-anti-realism/#ChaMorAntRea) Being a deontologist (I think, and my post assumes) is even compatible with being a moral nihilist: "Moral principles must come in the form of injunctions, and there are no such injunctions."
5Jack
Well, there is a fact of the matter; it's just a fact about a convention. Yes, I understand what your post was arguing, and I'm familiar with the way academic philosophy organizes this debate. And yes, deontology does not presume any particular metaethics. Your error, as far as I can tell, is in not getting what counts as a meta-ethical question and what doesn't. "Why is murder wrong?" is a straightforward question for normative theory. Kantian deontology, for instance, answers by saying "Murder is wrong because it violates the Categorical Imperative." And then there are a lot of details about what the Categorical Imperative is and how murder violates it. Rule utilitarianism says that murder is wrong because a rule that prohibits murder provides for the greatest good for the greatest number. And so on. Normative theories exist precisely to explain why certain actions are moral and other actions are immoral. A normative theory that can't explain why murder is (usually) immoral is a terribly incomplete normative theory. Meta-ethics isn't about asking why normative claims are true. It is about asking what it means to make a moral claim. Thus the "meta". E.g. questions like "are there moral facts?" At no point have I mentioned credentials to try and win a philosophical debate on Less Wrong. But if there is anything my philosophy degree makes me a minimal expert in, it's jargon. I realize this, but this resembles just about no one interested in debating consequentialism vs. deontology. Right. Like I said, it isn't logically impossible. It's just silly and sociologically implausible.
0bogus
Um, that's not a very interesting question, is it? Making a moral claim means, more or less: "I am right and you are wrong and you should do what I say". Note that this is not a morally absolutist view in the meta-ethical sense: even moral relativists make such claims all the time; they just admit that one's peculiar customs or opinions might affect the kinds of moral claims one makes. A more interesting question is: "what should happen when folks make incompatible moral claims, or claim incompatible rights?" This is what ethics (in the Rushworth Kidder sense of setting "right against right") is all about. When we do ethics, we abandon what might be called (in a perhaps naïve and philosophically incorrect way) "moral absolutism", or the simple practice of just making moral claims, and start debating them in public. Law, politics and civics are a further complication: they arise when societies get more complex and less "tribal", so simple ethical reasoning is no longer enough and we need more of a formal structure.
1Jack
Well, your attempt to explain what a normative claim is actually includes a normative claim, so I don't think you've successfully dissolved the question. You are "right" about what? Facts? The world? What kind of facts? What kind of evidence can you offer to demonstrate that you are right and I am wrong? That "should" is there again. I don't imagine there ever was a "simple practice of just making moral claims". Moral claims are generally claims made on others, and they are speech acts, which means they exist to communicate something. People don't spend a lot of time making moral claims that everyone agrees with and abides by, which means it's pretty much in the nature of a moral claim to be part of a debate or discussion. I can't see the importance or the force of the distinction you are trying to make.
-3bogus
Who says I need "evidence" to argue that you should do something? I could rely on my perceived authority - in fact, you could take this as a definition of what "moral authority" is all about. Sometimes that moral authority comes from religion (or cosmology, more generally), sometimes it's derived from tradition, etc. So I have to dispute your claim that it's in the nature of a moral claim to be part of a debate or discussion, since it is quite self-evident that many people and institutions have made moral claims in the past that were not perceived as properly being part of a "debate" or "discussion". It's true that, sometimes, moral claims are seen in such a way - especially when they're seen as originating from individual instinct and cognition, and thus leading people to think of themselves as being on the "right side" of an ethical dilemma or conflict. And yet, at some level, more formalized systems like law and politics presumably rely on widespread trust in the "system" itself as a moral authority, if only one with a very limited scope. So, you're never going to get an answer to the question of "what a normative claim is", because the whole concept involves a kind of tension. There's an "authority to be followed" side, and an "internal moral cognition" side, and both can be right to some degree and even interact in a fruitful way.
-1[anonymous]
I still feel like we're talking past each other. I made a straightforward empirical claim in my post. So all we need to do is find some empirical evidence. If you accept that the SEP typically, and in this case, represents the academic state of the art and conventional usage, then look at the last section of the SEP article I linked to. It agrees with me (I think). If you don't think the SEP article represents the convention accurately, just say that and we can move on to another source. There's no sense in arguing about whether or not the distinction between normative ethics and meta-ethics reported in the SEP article makes sense. I agree that it does not. But we're not arguing about that. We're arguing about what the convention actually is.
3Jack
The SEP does not agree with you. Nowhere in that section does it say that "Why is murder wrong?" is a meta-ethical question. All it says is that deontology does not assume a meta-ethical position, though certain meta-ethical positions are more hospitable to it. I agree with you and the SEP here. I'm not saying deontology is a meta-ethical theory. It isn't. As I said: by convention, "why is murder wrong?" is a question for normative theory. Your sentence in the post, the one claiming that 'why is it wrong to kill?' is not a normative but a meta-ethical question, is wrong. The SEP does not say otherwise. In any way. "Why is it wrong to kill?" is a normative question. Maybe what is tripping you up is this sentence from the SEP? I could see how that could be read as "reasons for the truth of deontological morality". But these are questions actually about the epistemology of moral claims -- "how do we know x is immoral?" is actually different from "why is x immoral?" Obviously these questions are usually connected, but they don't have to be. It is logically possible to think that the Categorical Imperative makes murder wrong but that the way we learn that is by God speaking to us or by studying physics or whatever. The distinction makes plenty of sense. It just isn't what you think it is.
2[anonymous]
Great, I assume this means you think the SEP article is representing the convention. Let me know if that's not the case, since if it isn't, we're wasting our time talking about my interpretation of it. Anyway, suppose someone were to come along and say 'Moral truths come primarily in the form of absolute injunctions!' (or whatever would fix him as a deontologist). We ask him for an example of such an injunction, and he says 'Do not kill.' So far, we agree that this whole discussion has taken place within normative ethics. Now we ask him 'Why shouldn't we kill?' This is a pretty ambiguous question, and we could be asking a clearly normative question to which the answer might be 'because there's an injunction to the effect that you shouldn't'. But this isn't the kind of question I'm talking about in my (perhaps poorly phrased) initial post. What the confused person I discuss wants from the deontologist is not an answer to the question 'what is right and wrong'; he wants answers to questions like 'what makes a particular injunction true?', 'How do you know this injunction is true?', and so on. What this confused person often complains about (I know you've had some recent experience with this on "Philosophical Landmines") is that the only explanations they get, explanations which are obviously inadequate, are explanations like 'Because God said so in the Bible'. In complaining about this, the confused person implies that this is the kind of answer they want, but that it's a very poor one. A deontologist who gives this kind of answer is, I think we will agree, endorsing some form of divine command theory. So what kind of a thing is 'divine command theory', and what kind of answer is 'because God said so'? Is it meta-ethical, or normative? Well, the SEP article says this: Notice that Divine command theory is on the list of things next to 'expressivist', 'constructivist', and other meta-ethical positions, implying that 'because God said so' (the kind of answer the conf
4Jack
Okay. I think I see what is happening. The whole issue gets weirdly skewed by divine command theory, which is so simple it is hard to see the distinction, and which implies a very particular formula for a normative theory. Let me outline the position: Metaethics: Divine Command theory. In answer to the question "What is morality?" they answer "the will/decree of God". Normative Ethics: In answer to the question "Why is murder immoral?" they provide a proof that God decrees murder to be immoral, say, a justification for the Bible as the word of God and a citation of the Ten Commandments. Non-Judeo-Christian divine command theorists would say something else. Some normative theories under the umbrella of divine command theory could even be consequentialist: "God told me in a dream to maximize preference satisfaction." These answers assume divine command theory, but they're still normative theory. Now in a real-life debate with a divine command theorist, they may emphasize the "God said so" part instead of the "here is where he said it" part. But that's just pragmatics: you don't care about the normative proof until you share the meta-ethic, so it is reasonable for a divine command theorist to skip straight to the major point of contention. In the case of divine command deontology, the "non-answer" issue is pretty much entirely about the meta-ethical assumptions and not the actual normative theory. So I can see why you were emphasizing the fact that deontology is logically independent of any particular meta-ethical framework. It might be less confusing to just emphasize that "deontology" isn't a particular normative theory -- just a class of normative theory determined by a particular feature (just like consequentialism) -- and that there is nothing necessarily mysterious or magical about that feature; that that association is due to a particular sort of deontological normative theory which is popular among non-philosophers, a theory which assumes a stupid meta-ethics eve
2[anonymous]
Yes, and I don't think we have any further disagreement. Thanks for the interesting discussion.
2bogus
I'm not sure that divine command theory implies "a very particular formula for a normative theory". In practice, many divine command theorists pay a lot of attention to things like casuistry (i.e. case-based reasoning) and situational ethics. In other words, they do morality "case by case" or "fable by fable". Surely any such moral theory must contain a lot of non-trivial normative content. It's not at all the case that all arguing happens on the meta-ethical, "God said it" level.
1Jack
This is a good point.
0bogus
The answer to this question actually depends on whether you are doing normative ethics, or talking about morality. In the former case, a sensible answer would be: "because, as a matter of fact, most individuals and societies agree that "non-killing" is a morally relevant 'value', where 'value' means a conative ambition (i.e. what "should" we do?). As a normative ethicist, I fall back on such widely-shared values". When doing morality in a sort of common-sense way, the answer is more complicated. Generally speaking, you're going to find that such 'values' (or, again, conative ambitions of the "should" variety) are a part of the "moral core" of individuals, what they take their "morality" to be about. This moral core is influenced by many factors, including their biology (so, yes, they're generally going to share most other humans' values), society, perceived moral authorities, etc. It can also be influenced by ethical debates they take part in: most people can be convinced that they should drop some moral values and take up others. All of this means that the real world is quite complicated, and does not fully reflect any of the "moral positions" that philosophers like to talk about.
0[anonymous]
That is doubtless true, though I wonder if it's an entirely fair criterion. While most ethicists would agree that the right view should reflect actual everyday moral judgements, nothing in particular holds them to that. It's simply possible that no one is presently good, and that the everyday moral judgements people make are terribly corrupt and over-complicated compared to the correct judgements.
4bogus
Note that "the way academic philosophy happens to organize" debates about ethics and morality should be taken with a huge grain of salt. Most people who engage in moral/ethical judgment in everyday life pay very little attention to moral philosophy in the academic sense. In fact, as it happens, most of the public debate about ethics and morals takes place outside academic philosophy, and is hard to disentangle from debate involving politics, law and general worldviews or "cosmologies" (in the anthropological sense).
1[anonymous]
Very true, though I think it's important to acknowledge two things: a) philosophers like Mill and Kant have had a huge impact on everyday moral thinking in the west, and b) the kinds of moral debates we typically have on this site are not independent of academic philosophy.
0buybuydandavis
A moral non-realist can have moral theories in the "If, then" form. If you value A.B.C, then you value D. If you're a paper clip maximizer, then ...
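A minimal sketch of what such an "if, then" moral theory could look like as code. Everything here (the value names, the rule table, the derive_values helper) is a hypothetical illustration of the form, not something from the comment itself:

```python
# Hedged sketch: a moral theory as conditional rules that derive further
# values from values an agent already holds, without asserting that anyone
# must hold the antecedents. All names and rules are invented examples.

RULES = [
    # (set of antecedent values, derived value)
    (frozenset({"A", "B", "C"}), "D"),
    (frozenset({"maximize_paperclips"}), "acquire_raw_materials"),
]

def derive_values(held_values):
    """Close a set of held values under the if-then rules."""
    values = set(held_values)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= values and consequent not in values:
                values.add(consequent)
                changed = True
    return values

print(derive_values({"A", "B", "C"}))  # contains "D": the theory binds you
print(derive_values({"X"}))            # no rule fires: the theory is silent
```

The point the sketch tries to capture: each rule is a hypothetical imperative, so the "theory" makes no unconditional demands.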
2BerryPick6
Except that, since those are simply hypothetical imperatives, the Moral Non-Realist won't see the need to call these theories 'moral' in nature. The Error Theorist agrees that if you want A then you should do B, but he wouldn't call that a theory of morality.
0buybuydandavis
There are all kinds of preferences, and distinguishing moral preferences from other types of preferences is still useful, even if you don't believe that those preferences are commands from existence. The Error Theorist might not call that a theory of morality. My reply to him is that what others call moral preferences have practical differences from other preferences. Treating them all the same is throwing out the conceptual baby with the bathwater. And others, perhaps you, might not want to call these theories "moral" either, because you seem to want "imperatives", and my account of morality doesn't include imperatives from the universe, or anything else.
0TimS
The problem is that the line between what has felt like a "moral" preference and what has felt like some other kind of preference has been different in different social contexts. There may not even be agreement within a particular culture. For example, some folks think an individual's sexual preferences are "moral preferences," such that a particular preference can be immoral. Other folks think a sexual preference is more like a gastric preference. Some people like broccoli, some don't. Good and evil don't enter into that discussion at all. If the error theory were false, I would expect the line dividing different types of preferences to be more stable over time, even if value drift caused moral preferences to change over time. In other words, the Aztecs thought human sacrifice was good; we now think it is evil. But the question has always been understood as a moral question. I'm asserting that some questions have not always been seen as "moral" questions, and the movement of that line is evidence for the error theory.
0Eugine_Nier
The line between "truth" and "belief" is also not stable across cultures.
0TimS
The line between "true" and "not true" is different in different cultures? I wasn't aware that airplanes don't work in China.
0Eugine_Nier
I meant in the same sense that you meant the statement about cultures, i.e., if you ask an average member of the culture, you'll get different answers for what is true depending on the culture.
0TimS
I was talking about community consensus, not whatever nonsense is being spouted by the man-on-the-street. As you noted, the belief of the average person is seldom a reliable indicator (or even all that coherent). That's why we don't measure a society's scientific knowledge that way.
0Eugine_Nier
Ok, my point still stands.
0Eugine_Nier
That's still a moral theory.
3buybuydandavis
Which was the point I was making. "A moral non-realist can have moral theories ..." So I presented the form of the moral theory a moral non-realist could have.
0Eugine_Nier
Sorry, I was in a hurry when I posted the grandparent and was unclear: Specifically my point was that the form of extreme be-yourself-ism implicit in your statement is still a moral theory, one that would make statements like: "If you're a paper clip maximizer, then maximize paperclips." "If you're a Nazi, kill Jews." "If you're a liberal, try to stop the Nazis."
0buybuydandavis
Those aren't accurate statements of the kinds of moral theories I was speaking of. I gave the example "If you value A.B.C, then you value D." That's not an imperative, it's an identification of the relationship between different values, in this case that A, B, C imply D.
0Eugine_Nier
Ok, that's not a moral theory unless you're sneaking in the statements I made in the parent as connotations.
0buybuydandavis
To me, a theory that identifies a moral value implied by other moral values would count as a moral theory. What kind of theory do you want to call it?
0TimS
I think I agree with Eugine_Nier that it isn't a moral theory to be able to draw conclusions. One doesn't need to commit to any ethical or meta-ethical principles to notice that Clippy's preferences will be met better if Clippy creates some paperclips. At the level of abstraction we are talking in now, moral theories exist to tell us what preferences to have, and meta-ethical theories tell us what kinds of moral theories are worth considering.
0buybuydandavis
Does one need to commit to a theory to have one? It sounds to me like you only think a person has a moral theory when the moral theory has them. For you, under your moral theories. Not for me. I'm happy to have theories that tell me what moral values I do have, and what moral values other people have. What do you want to call those kinds of theories?
0TimS
Obviously not - but it isn't your moral theory that tells you how Clippy will maximize its preferences. Alice the consequentialist and Bob the deontologist disagree about moral reasoning. But Bob does not need to become a consequentialist to predict what Alice will maximize, and vice versa. Reasoning? More generally, thinking about (and caring about) the consequences of actions is not limited to consequentialists. A competent deontologist knows that pointing guns at people and pulling the trigger tends to cause murder - that's why she tends not to do that. I should be working now, but I don't want to. So I'm here, relaxing and discussing philosophy. But I am committing a minor wrong in that I am acting on a preference that is inconsistent with my moral obligation to support my family (as I see my obligations). Does that type of inconsistency between preference and right action never happen to you?

I wonder if it would be more useful, instead of talking about consequentialist vs. deontological positions, to talk about consequence-based and responsibility/rights-based inference steps, which can possibly coexist in the same moral system; or possibly consequence-based and responsibility/rights-based descriptions of morally desirable conditions?

2[anonymous]
I think that's an excellent suggestion.

TL;DR: I see lots of debates flinging around "consequentialism" and "utilitarianism" and "moral realism" and "subjectivism" and various other philosophical terms, but each time I look up one of them or ask for an explanation, it inevitably ends up being something I already believe, even when it comes from both sides of a heated argument. So it turns out "I am an X" for nearly all X I've ever seen on LessWrong. Here's what I think about all of this, in honest lay-it-out-there form. For a charitable reading, ... (read more)

2Jack
If you think of your map as a set of sentences that models the territory, an objective fact can be defined as a sentence in this set. So morality is objective in this regard if what determines your moral judgments are sentences in your map. Now consider the following counterfactual: in this world the algorithms that determine your decisions are very different. They are so different that counterfactual-you thinks torturing and murdering innocent people is the most moral thing one can do. Now I ask (non-counterfactual) you: is it moral for counter-factual you to torture and murder innocent people? Most people say "no". This is because our moral judgments aren't contingent on our beliefs about the algorithms in our head. That is, they are not objective facts. We just run the moral judgment software we have and project that judgment onto the map. I developed this argument further here.
1buybuydandavis
I think we're fairly close, but have one major difference. I'd say there are moral facts. These moral facts are objective features of the universe. These facts are about the evaluations that could be made by the moral algorithms in our heads. Where I differ with you is in the number of black boxes. "We" don't have "a" black box. "Each" of us has our own black box. Moral, as evaluated by you, is the result of your algorithm given the relevant information and sufficient processing time. I think this is somewhat in line with EY, though I can never tell if he is a universalist or not. Moral is the result of an idealized calculation of a moral algorithm, where the result of the idealization is often different than the actual because of lack of information and processing time. A case could be made for this view to fall into many of the usual categories. Moral relativism. Ethical subjectivism. Moral realism. Moral anti-realism. About the only thing ruled out is universalism. For Deontology vs. Consequentialism, it gets similarly murky. Do consequentialists really do de novo analysis of the entire state of the universe again and again all day? If I shoot a gun at you, but miss, is it "no harm, no foul"? When a consequentialist actually thinks about it, all of a sudden I expect a lot of rules of behavior to come up. There will be some rule consequentialism. Then "acts" will be seen as part of the consequences too. Very quickly, we're seeing all sorts of aspects of deontology when a consequentialist works out the details. The same thing with deontologists. Does the rule absolutely always apply? No? Maybe it depends on context? Why? Does it have something to do with the consequences in the different contexts? I bet it often does. Similarly, the "though the heavens fall, I shall do right" attitude is rarely taken in hypotheticals, and would be more rarely taken in actual fact. You won't tell a lie to keep everyone in the world from a fiery death? Really? I doubt it. I'd
0DaFranker
This doesn't seem to be a point on which we differ at all. In this later comment I'm saying pretty much the same thing. Indeed, I wouldn't be surprised if each of us has hundreds of processes that feel like they're calculating "morality", and aren't evaluating according to the same inputs. Some might have outputs that are not quite easy to compare directly, or that are impossible to compare at all.
0buybuydandavis
OK. I see your other comment. I think I was mainly responding to this: You can't extract "an" objective algorithm even if you do specify a group of people, unless your algorithm returns the population distribution of their moral evaluations, and not a singular moral evaluation. Any singular statistic would be one of an infinite set of statistics on that distribution.
1[anonymous]
Thanks for the very clear direct account of your view. I do have one question: it seems that on your view it should be impossible to act according to your preferences, but morally wrongly. This is at least a pretty counterintuitive result, and may explain some of the confusion people have experienced with your view.
0DaFranker
As stated, this is correct. I don't quite think this is what you were going for, though ;) Basically, yes fully true even in spirit, IFF: morality is the only algorithm involved in human decision-making AND human decision-making is the only thing that determines what the rest of my brain, nervous system, and my body actually end up doing. Hint: All of the above conditions are, according to my evidence, CLEARLY FALSE. Which means there are competing elements within human brains that do not seek morality, and these are a component of what people usually refer to when they think of their "preferences", such as "I would prefer having a nice laptop even though I know it costs one dead kid." If we recenter the words, in terms of "it is impossible to decide that it is more moral to act against one's moral preferences", then... yeah. I think that logically follows, and to me sounds almost like a tautology. Once the equations are balanced and x is solved for and isolated, one's moral preferences is what one decides is more moral to act in accordance with, which is what one morally prefers as per the algorithm that runs in some part of the brain. So judging from this, the solution might simply be to taboo and reduce more stuff. Thanks, your question and comment were directly useful to me and clear.
1[anonymous]
Okay, thanks for clarifying. I still have a similar worry though: it seems to be impossible that anyone should act on their own moral preferences, yet morally wrongly. This still seems quite counterintuitive.
2DaFranker
You are correct in that conclusion. I think it is impossible for one to act on their own (true) moral preferences yet morally wrongly. There are two remaining points, for me. First is that it's difficult to figure out one's own exact moral preferences. The second is that it becomes extremely important to never forget to qualify "morally wrongly" with a parent. Frank can never act on Frank's true moral preferences and yet act Frank's-Evaluation-Of morally wrongly. Bob can never act on Bob's true moral preferences and yet act Bob's-Evaluation-Of morally wrongly. However, since it is not physically required in the laws of the universe that Frank's "Evaluation of Morally Wrong" function == Bob's "Evaluation of Morally Wrong" function, this can mean that: Frank CAN act on Frank's true moral preferences and yet act Bob's-Evaluation-Of morally wrongly. So to attempt to resolve the whole brain-wracking nightmare that ensues, it becomes important to see whether Bob and Frank have common parts in their evaluation of morality. It also becomes important to notice that it's highly likely that a fraction of Frank's evaluation of morality depends on the results of Bob's evaluation of morality, and vice-versa. Thus, we can get cases where Frank's moral preferences will depend on the moral preferences of Bob, at least in part, which means if Frank is really acting according to what Frank's moral preferences really say about Frank not wanting to act completely against Bob's moral preferences, then Frank is usually also acting partially according to most of Bob's preferences. It is counterintuitive, I'll grant that. I find it much less counterintuitive than Quantum Physics, though, and as the latter exemplifies it's not uncommon for human brains to not find reality intuitive. I don't mean this association connotatively; I don't really have other examples. My point is that human intuition is a poor tool to evaluate advanced notions like these.
0bogus
This is sensible enough as a theory of morality, but you still haven't accounted for ethics, or the practice of engaging in interpersonal arguments about moral values. If Bob!morality is so clearly distinct from Frank!morality, why would Bob and Frank even want to engage in ethical reasoning and debate? Is it just a coincidence that we do, or is there some deeper explanation? A possible explanation: we need to use ethical debate as a way of compromising and defusing potential conflicts. If Bob and Frank couldn't debate their values, they would probably have to resort to violence and coercion, which most folks would see as morally bad.
0DaFranker
Well, I agree with your second paragraph as a possible reason, which on its own I think would be enough to make most actual people do ethics. And while Bob and Frank have clearly distinct moralities, since both of them were created by highly similar circumstances and processes (i.e. those that produce human brains), it seems very likely that there's more than just one or two things on which they would agree. As for other reasons to do ethics, I think the part of Frank!morality that takes Bob!morality as an input is usually rather important, at least in a context where Frank and Bob are both humans in the same tribe. Which means Frank wants to know Bob!morality; otherwise Frank!morality has incomplete information with which to evaluate things, which is more likely to lead to sub-optimal estimates of Frank's moral preferences than if Frank had known Bob's true moral preferences. Frank wants to maximize the true Frank!morality, which has a component for Bob!morality, and probability says incomplete information on Bob!morality leads to lower expected Frank!morality. If we add more players, eventually it gets to a point where you can't keep track of all the X!morality, and so you try to build approximations and aggregations of common patterns of morality and shared values among members of the groups that Frank!morality evaluates over. Frank also wants to find the best possible game-theoretic "compromise", since others having more of their morality satisfied means they are less likely to act against Frank!morality by social commitment, ethical reasoning, game-theoretic reasoning, or any other form of cooperation. Ethics basically appears to me like a natural Nash equilibrium, and meta-ethics the best route towards Pareto optima. These are rough pattern-matching guesses, though, since what numbers would I be crunching? I don't have the actual algorithms of actual humans to work with, of course.
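A toy sketch of that interdependence, under the purely illustrative assumption that each agent's all-things-considered evaluation is a linear mix of an "own" term and the other agent's evaluation. Iterating to a fixed point is one way to picture Frank!morality taking Bob!morality as an input:

```python
# Hedged sketch: two mutually-referential moral evaluations. The linear
# form, the weights, and the scores are invented; only the idea that each
# agent's evaluation takes the other's as an input comes from the comment.

own_score = {"frank": 1.0, "bob": -0.5}       # each agent's own-term score for some act
weight_on_other = {"frank": 0.4, "bob": 0.3}  # how much each weighs the other's evaluation

def fixed_point_evaluations(iterations=100):
    """Iterate the mutually-referential evaluations until they settle."""
    evals = {"frank": 0.0, "bob": 0.0}
    for _ in range(iterations):
        evals = {
            "frank": own_score["frank"] + weight_on_other["frank"] * evals["bob"],
            "bob": own_score["bob"] + weight_on_other["bob"] * evals["frank"],
        }
    return evals

print(fixed_point_evaluations())
# Frank's approval is pulled down by Bob's disapproval, and vice versa:
# a crude picture of the game-theoretic "compromise" described above.
```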
0torekp
Point 2 is terrific, and bears repeating in some other threads.
0whowhowho
But those "objective" facts would only be about the intuitions of individual minds. Same problem. A thinks it is moral to kill B, B thinks it is not moral to be killed by A. Where is the objective moral fact there? Objective moral facts (or at least intersubjective ones) need to resolve conflicts between individuals. You have offered nothing that can do that. Morality cannot just be a case of what an individual should do, because individuals interact. Then morality is not so objective that it is graven into the very fabric of the universe. The problem remains that what you have presented is too subjective to do anything useful. By all means present a theory of human morality that is indexed to humans, but let it regulate interactions between humans. That is hard to interpret. Why should opinions be what is "objectively moral"? You might mean there is nothing more to morality than people's judgements about what is good or bad, but that is not an objective feature of the universe, it is mind projection. That the neural mechanisms involved are objective does not make what is projected by them objective. If objective neural activity makes me dream of unicorns, unicorns are not thereby objective. And in any case, what is important is co-ordinating the judgements of individuals in the case of conflict. "We" individually, or "we" collectively? That is a very important point to skate over. That seems to be saying that it is instrumentally in people's interests to be moral. But if that were always straightforwardly the case, then there would be no issues of sacrifices and self-restraint involved in morality, which is scarcely credible. If I lay down my life for my country, that might lead to the greater good, but how good is it for me? The issue is much more complex than you have stated.
0DaFranker
(part 2 of two-part response, see below or above for the first) See this later comment but this one especially (the first is mostly for context) to see that I do indeed take that into account. The key point is that "morality" isn't straightforwardly "what people want" at all. What people consider moral when they evaluate all the information available to them and what people actually do (even with that information available) are often completely different things. Note also that context and complicated conditionals become involved in Real Issues™. To throw out a toy example: Julie might find it moral to kill three humans because she values the author of this post saying "Shenanigans" out loud only a bit less than their lives, and the author has committed to saying it three times out loud for each imaginary person dead in this toy example. However, Jack doesn't want those humans dead, and has credibly signaled that he will be miserable forever if those three people die. Jack also doesn't care about me saying "Shenanigans". Thus, because Julie cares about Jack's morality (most humans, I assume, have values in their morality for "what other people of my tribe consider moral or wrong"), she will "make a personal sacrifice and use self-restraint" to not kill the three nameless, fortunate toy humans. The naive run of her morality over the immediate results says "Bah! Things could have been more fun.", but game-theoretically she gains an advantage in the long term - Jack now cooperates with her, which means she incurs far fewer losses overall and still gains some value from her own people-alive moral counter and from Jack's people-alive moral counter as well. I think you are vastly confusing "good", "greater good", and "good for me". These need to be tabooed and reduced. Again, example time: Tom the toy soldier cares about his life. Tom cares about the lives of his comrades. Tom cares about the continuation of the social system that can be summarized as "his country".
0whowhowho
Tom will sacrifice himself if his values lead him to, and not if they don't. He might desert or turn traitor. You would still call that all moral because it is an output of the neurological module you have labelled "moral". I think it isn't. If someone tries to persuade you that you are wrong about morality, it is useful to consider the "what is morality for" question. Do you think any of this adds up to any extent of a solution to the philosophical problems of morality/ethics?
0DaFranker
Yes! . (this space intentionally left blank) . . What specific philosophical problems? Because yes, it does help me clarify my thoughts and figure out better methods of arriving at solutions. Does it directly provide solutions to some as-yet-unstated philosophical problems? Well, probably not, since the search space of possible philosophical problems related to morality or ethics is pretty, well, huge. The odds that my current writings provide a direct solution to any given random one of them are pretty low. If the question is whether or not my current belief network contains answers to all philosophical problems pertaining to morality and ethics, then a resounding no. Is it flabbergasted by many of the debates and many of the questions still being asked, and does it consider many of them mysterious and pointless? A resounding yes.
0whowhowho
Consequentialism versus deontology, objectivism versus subjectivism, as in the context. Any would be good. Metaethics is sometimes touted as a solved problem on LW.
0DaFranker
Oh. Yep. As I said originally, both of those "X versus Y" and many others are just confusing and mysterious-sounding to me. They seem like the difference between Car.Accelerate() and AccelerateObject(Car) in programming. Different implementations, some slightly more efficient for some circumstances than others, and both executing the same effective algorithm - the car object goes faster. Oh. Well, yeah, it does sound kind-of solved. Judging by the wikipedia description of "meta-ethics" and the examples it gives, I find the meta-ethics sequence on LW gives me more than satisfactory answers to all of those questions.
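The programming analogy, made concrete in a short sketch (the Car class body, the free function, and the speed-increment detail are stand-ins invented here; only the method-versus-function contrast comes from the comment):

```python
# Hedged sketch: a method call and a free function that implement the
# same effective algorithm, differing only in packaging.

class Car:
    def __init__(self):
        self.speed = 0.0

    def accelerate(self, delta=1.0):
        # Car.Accelerate(): the behaviour lives on the object
        self.speed += delta

def accelerate_object(car, delta=1.0):
    # AccelerateObject(Car): same algorithm, different calling convention
    car.speed += delta

car = Car()
car.accelerate()
accelerate_object(car)
print(car.speed)  # 2.0 -- either way, the car object goes faster
```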
0whowhowho
You previously said something much more definite-sounding: "I believe that there is an objective system of verifiable, moral facts which can be true or false" ... although it has turned out you meant something like "there are objective facts about de facto moral reasoning". The alleged solution seems as elusive as the Snark to me.
0DaFranker
You seem to misunderstand most of my beliefs, so I'll try to address that first before I go any further, to avoid confusion. No. Just no. No no no no no no no no no no no no no. NO! NO! The objective fact is that there is a brain made mostly of neurons and synapses and blood and other kinds of juicy squishiness, inside which a certain bundle of those synapses is set in a certain particularly complex (as far as we know) arrangement, and when something is sent as input to that bundle of synapses of the form "Kill this child?", the bundle sends queries to other bundles: "Benefits?" "People who die if child lives?" "Hungry?" "Have we had sex recently?" "Is the child real?" etc. Then, an output is produced, "KILLING CHILD IS WRONG" or "KILLING CHILD IS OKAY HERE". Human consciousness, the "you" that is you and that wouldn't randomly decide to start masturbating in public while sleepwalking (you don't want to be the guy whom that happened to, seriously), doesn't have access to the whole thing that the bundle of synapses called "morality" inside the brain actually does. It only has the output, and sometimes glimpses of some of the queries that the bundle sent to other bundles. In other words, intuitions. What I refer to as an "objective fact", the "objective" morality of that individual, is the entire sum of the process, the entire bundle + reviewing by the conscious mind of each individual process + what the conscious mind would want to fix in order to be even more moral by the morals of the same bundle of synapses (i.e. self-reflectivity). The exact "objective morality" of each human is a complicated thing that I'm not even sure I grasp entirely and can describe adequately, but I'm quite certain that it is not limited to intuitions and that those intuitions are not entirely accurate. The "objective moral fact" (to use your words), in this toy problem, is that IF AND ONLY IF A is correct when A thinks it is moral for A's morality system to kill B, and B is correct when B t
2whowhowho
That's still not the point. The entire bundle still isn't Objective Morality, because the entire bundle is still inside one person's head. Objective morality is what all ideal agents would converge on. The way you have expressed this is contradictory. You said "it is moral", simpliciter, rather than: it is moral-for-A, but immoral-for-B. Although to do that would have made it obvious you are talking about subjective morality. And no, it isn't the universe's fault. The universe allows agents to have contradictory and incompatible impulses, but it is your choice to call those impulses "moral" despite the fact that they don't resolve conflicts, or take others' interests into account. I wouldn't call them that. I think the contradiction means at least one of the agents' I-think-this-is-moral beliefs is wrong. I don't think so. Ethics: "Moral principles that govern a person's or group's behavior." "1. (used with a singular or plural verb) a system of moral principles: the ethics of a culture. 2. the rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc.: medical ethics; Christian ethics. 3. moral principles, as of an individual: His ethics forbade betrayal of a confidence. 4. (usually used with a singular verb) that branch of philosophy dealing with values relating to human conduct, with respect to the rightness and wrongness of certain actions and to the goodness and badness of the motives and ends of such actions." Then what are you doing? The observation that facts about brains are relevant to descriptive ethics is rather obvious. If you allow individual drivers to choose which side of the road to drive on, you have a uselessly subjective system of traffic law. Their own something. I don't think you are going to convince an error theorist that morality exists by showing them brain scans. And the terms "conscience" and "superego" cover internal regulation of behaviour without prejudice to the philo
-2DaFranker
Okay. That is clearly a word problem, and you are arguing about my definition. You assumed I was being deliberately sophistic and creating confusion on purpose, after I explicitly requested twice that things be interpreted the other way around where possible. I thought that it was very clear from context that what I meant was that:

IFF it is moral-A that A kills B
&& it is moral-B that B is not killed by A
&& there are no other factors influencing moral-A or moral-B
THEN: it is moral for A that A kills B, and it is likewise moral for B to not be killed by A.

Let the fight begin. Really? You're going there? Please stop this. I'm seeing more and more evidence that you're deliberately ignoring my arguments and what I'm trying to say, and that you're just equating everything I say with "This is not a perfect system of normative ethics, therefore it is worthless". I have a hard time even inferring what you mean by this rather irrelevant-seeming metaphor. I'm not talking about laws and saying "The law should only punish those that act against their intuitions of morality, oh derp!" -- I'm not even talking about justice or legal systems or ideal societies at all! Have I somewhere accidentally made the claim that we should just let every single human build their own model of their own system of morality with incomplete information and let chaos ensue? Yes. And in case that wasn't painfully obvious yet, this "something" of their own is exactly what I mean to say when I use the word "morality"! I'm not attempting to convince anyone that "morality" "exists". To engage further on this point would require those two terms to be tabooed, because I honestly have no idea what you're getting at or what you even mean by that sentence or the one after it. Yup. If I agree to use your words, then yes. There's an intersection of a set of tablets. These tablets give us some slightly iffy commandments that even the owner of the tablet would want to fix. The counterfactual edited version of
0whowhowho
I can see that it is a word problem, and I would argue that anyone would be hard pressed to guess what you meant by "objective moral facts". What fight? You have added the "for A" and "for B" clauses that were missing last time. Are you holding me to blame for taking you at your word? You claimed a distinction in meaning between "morality" and "ethics" that doesn't exist. Pointing that out is useful for clarity of communication. It was not intended to prove anything at the object level. I don't know how accidental it was, but your "moral for A" and "moral for B" comment does suggest that two people can be in contradiction and yet both be right. I am totally aware of that. But you don't get to call anything by any word. I was challenging the appropriateness of making substantive claims based on a naming ceremony. You said there were objective facts about it! You haven't explained that, or how or why different individuals would converge on a single objective reality by refining their intuitions. And no, EY doesn't either. If they haven't already. So values and intuitions are a necessary ingredient. Any number of others could be as well.
-1bogus
If individual moralities have enough of a common component that we can point to principles and values that are widely-shared among living people and societies, that would certainly count as a "fact" about morality, which we could call a "moral fact". And that fact is certainly "objective" from the POV of any single individual, although it's not objective at all in the naïve Western sense of "objectivity" or God's Eye View. Dictionary definitions are worthless, especially in specialized domains. Does a distinction between "morality" and "ethics" (or even between "descriptive morality" and "normative morality", if you're committed to hopelessly confused and biased naming choices by academic philosophers) cut reality at its joints? I maintain that it does.
2whowhowho
And it is still not an objective moral fact in the sense of Moral Objectivism, in the sense of a first-order fact that makes some moral propositions mind-independently true. It's a second-order fact. I've never seen that distinction in the specialised domain in question.
1bogus
I don't think that's a coincidence. Whether there is some kind of factual (e.g. biological) base for morality is an interesting question, but it's generally a question for psychology and science, not philosophy. People who try to argue for such a factual basis in a naïve way usually end up talking about something very different than what we actually mean by "morality" in the real world. For an unusually clear example, see Ayn Rand's moral theory, incidentally also called "Objectivism".
0joaolkf
Just got bashed several times, while presenting the fragility-of-values idea in Oxford, for using the term "descriptive morality". I was almost certain Eliezer used the term; hence, I was blaming him for my bashing. But it seems he doesn't, and the above comment is the sole instance of the term I could find. I'm blaming you then! Not really though; it seems I've invented this term on my own - and I'm not proud of it. So far, I've failed to find a correlated term either in meta-ethics or in the Sequences. In my head, I was using it to mean what would be the step 0 for CEV. It could be seen as the object of study of descriptive ethics (a term that does exist), but it seems descriptive ethics takes a pluralistic or relativistic view, while I needed a term to describe the morality shared by all humans.
0bogus
So it's even worse than I thought? When ethicists do any "descriptive" research, they are studying morality, whether they care to admit it or not. The problem with calling such things "ethics" is not so much that it implies a pluralist/relativist view - if anything, it makes the very opposite mistake: it does not take moralities seriously enough, as they exist in the real world. In common usage, the term "ethics" is only appropriate for very broadly-shared values (of course, whether such values exist after all is an empirical question), or else for the kind of consensus-based interplay of values or dispute resolution that we all do when we engage in ethical (or even moral!) reasoning in the real world.
0Jack
Sooo, not objective then. Definition debates are stupid, but there is no reason at all to be this loose with language. Seriously, this reads like a deconstructionist critique of a novel from an undergraduate majoring in English. Complete with scare quotes around words that are actually terms of art.
0bogus
Well, yes. I'm using scare quotes around the terms "objective" and "fact", precisely to point out that I am using them in a more general way than the term of art is usually defined. Nonetheless, I think this is useful, since it may help dissolve some philosophical questions and perhaps show them to be ill-posed or misleading. Needless to say, I do not think this is "being loose with language". And yes, sometimes I adopt a distinctive writing style in order to make a point as clearly as possible.
-2BerryPick6
If I've understood your position correctly, it's extremely similar to what I would call the "high-level LW metaethical consensus." Luke's sequence on Pluralistic Moral Reductionism, Eliezer's more recent posts about metaethics and a few posts by Jack all illustrate comparable theories to yours. If others have written extensively about metaethics on LW, I may have missed them.
0whowhowho
These seem different from each other to me.
0BerryPick6
How so?
-1whowhowho
I don't see (explicit) pluralism in EY. Jack's approach is so deflationary it could be an error theory.

This is confused because the term ‘deontology’ in philosophical jargon picks out a normative ethical theory, while the question ‘why is it wrong to kill?’ is not a normative but a meta-ethical question. Similarly, consequentialism contains in itself no explanation for why pleasure or utility are morally good, or why consequences should matter to morality at all. Nor does consequentialism/deontology make any claims about how we know moral facts (if there are any). That is also a meta-ethical question.

Either D-ology or C-ism can be taken meta-ethically or... (read more)

The confusion is often stated thusly: “deontological theories are full of injunctions like ‘do not kill’, but they generally provide no (or no interesting) explanations for these injunctions.”

I think if someone said this, what they probably mean (i.e., would say once you cleared up their confusion about terminology and convention) is something like "deontology does not seem compatible with any meta-ethical theories that I find plausible, while consequentialism does, and that is one reason why I'm more confident in consequentialism than in deontology." Is this statement sufficiently unconfused?

0[anonymous]
Yes, that sounds perfectly clear and unproblematic to me, as well as a good way to get at issues which may help decide the consequentialism vs deontology debate.

The best distinction I've seen between the two consists in whether you honour or promote your values.

Say I value not-murdering.

If I'm a consequentialist, I'll act on this by trying to maximise the amount of non-murdering (or minimise the amount of murdering). This might include murdering someone who I knew was a particularly prolific murderer.

If I'm a deontologist, I'll act on this value by honouring it: I'll refrain from murdering anyone, even if this might increase the total amount of murdering.

Unfortunately I can't remember offhand who came up with this analysis.
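One way to make the honour/promote distinction concrete is as two decision procedures over the same stated value. The scenario below (murdering one prolific murderer prevents his three murders) and all its numbers are invented for illustration:

```python
# Hedged sketch of honouring vs. promoting a value. The toy world and its
# numbers are invented; only the honour/promote contrast is from the text.

def total_murders(i_murder_the_murderer):
    """Toy world: killing one prolific murderer (1 murder) prevents the
    three murders he would otherwise commit."""
    return 1 if i_murder_the_murderer else 3

def consequentialist_choice():
    # Promote the value: pick whichever act minimises total murdering.
    return min([True, False], key=total_murders)

def deontologist_choice():
    # Honour the value: never murder, whatever the totals come out to.
    return False

print(consequentialist_choice())  # True  -- murders once, three are prevented
print(deontologist_choice())      # False -- refrains, accepting three murders
```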

2DaFranker
This sounds like they are, in fact, valuing different things altogether. The consequentialist negvalues the amount of murdering there is, while the deontologist negvalues doing the murdering. If the deontologist and consequentialist both value not-murdering-people, then the consequentialist takes the action which leads to them not having murdered someone (so they don't murder, even if it means more total murdering), and the deontologist is as quoted. If they both negvalue the total amount of murders, the deontologist will honour not-doing-things-which-are-more-total-murder, which by logical necessity implies ¬(not murdering this one time), which means they also murder for the sake of less murdering. It seems the distinction is, again, merely one of degree and probability estimates, and a difference in the general conceptspace of where people from both "camps" tend to usually pinpoint their values. To rephrase, this means it seems like the only real difference between consequentialists and deontologists is the language and the general empirical clusters of things they value more, including different probability estimates for certain values of the likelihood of some things.
0prase
I think it isn't precise to say that they value different things, since the deontologist doesn't decide in terms of values. Speaking of values is practical from the point of view of a consequentialist, who compares different possible states (or histories) of the world; values are then functions defined over the set of world states which the decider tries to maximise. A pure ideal deontologist doesn't do that; his moral decisions are local (i.e. they take into account only the deontologist's own action and perhaps its immediate context) and binary (i.e. the considered action is either approved or not, it isn't compared to other possible actions). If more actions are approved the deontologist may use whatever algorithm to choose between them, but this choice is outside the domain of deontologist ethics. Deontologist rules can't force one to act as if one valued some total amount of murders (low or high), as the total amount of murders isn't one's own action. Formulating the preference as a "deontological" rule of "you shouldn't do things that would lead you to believe that the total amount of murders would increase" is sneaking consequentialism into deontology.
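The structural contrast prase draws can be sketched as two differently-shaped procedures. The action names and outcome values below are invented; only the shapes, local and binary versus global and comparative, come from the comment:

```python
# Hedged sketch: a deontological rule approves or rejects the agent's own
# action; a consequentialist procedure ranks actions by the value of the
# world-states they produce. All example data is invented.

def deontologist_permits(action, forbidden):
    # Local and binary: looks only at this action, answers yes or no.
    return action not in forbidden

def consequentialist_choose(actions, outcome_value):
    # Global and comparative: maximises a value function over outcomes.
    return max(actions, key=outcome_value)

forbidden = {"murder"}
outcome_value = {"lie": -1.0, "murder": -100.0, "do_nothing": 0.0}

print(deontologist_permits("lie", forbidden))  # True -- approved; nothing more is said
print(consequentialist_choose(
    ["lie", "murder", "do_nothing"],
    outcome_value=outcome_value.get,
))  # 'do_nothing' -- the action producing the best-valued world-state
```

Note that, as prase says, choosing among several approved actions is outside the deontological procedure itself; the sketch deliberately leaves that step out.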
0bogus
This is not at all clear to me. The Kantian Categorical Imperative is usually seen as a deontological rule, even though it's really a formulation of 'reflective' concerns (viz., 'you should not act as you would not have everyone act', akin to the Silver and Golden Rule) that could be seen as meta-ethical in their own right.
0bogus
Good point. This also explains why we are so willing to delegate "killing" to external entities, such as job occupations (when the "killing" involves chickens and cattle) and authorities (when we target war enemies, terrorists and the like. Of course this comes with very strict safeguards and due processes.) More recently, we have also started delegating our "killing" to machines such as drones; admittedly, this ignores the truism that drones don't kill people, people kill people. Maybe if we were less deontological and more consequentialist in our outlook, there would be less of this kind of delegation.
0Eugine_Nier
Depends; a deontological outlook with a maxim that you are responsible for what is done in your name would be even more effective.

To make sure I understood this post correctly:

This would mean the correct common argument would instead be "The type of moral theory that leads to deontology provides no (or no interesting) explanation for the specific injunctions that are in the type of deontology followed."

Is this correct?
Also, is there a name for the philosophy being criticized in the above argument?

1[anonymous]
Right, a non-confused attack on the deontologist in the spirit of the confused attack would say something like "your meta-ethical theory does not sufficiently explain the injunctions included in your normative, deontological theory." But as you imply, this is a criticism of a meta-ethical theory, or better yet an ethicist's whole view. This is not an attack on deontology as such. And I don't think there's any name for those who make the mistake I point out. It's not even really a mistake, just a confusion about how a certain academic discussion is organized, which leads, in this case, to a lot of strawmanning.
0falenas108
Sorry, looks like I should have been clearer on the last point. I wasn't asking for the name of a fallacy, I was asking if there is a name for the type of meta-ethics that leads to deontology.
0[anonymous]
As to the name of the fallacy, I'm not sure. I suppose it's something like a misplaced expectation? The mistake is thinking that a certain theoretical moving part should do more work than it is rightly expected to do, while refusing to examine those moving parts which are rightly expected to do that work. EDIT: An example of a similar mistake might be thinking that a decision theory should tell you what to value and why, or that evolution should give an account of biogenesis. The SEP article's last section, on deontology and metaethics, is very helpful here.