Consequentialists see morality through consequence-colored lenses.  I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.

Consequentialism1 is built around a group of variations on the following basic assumption:

  • The rightness of something depends on what happens subsequently.

It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article.  "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act2 consequentialism".  I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints.  All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple".  But the bottom line is, to get a consequentialist theory, something that happens after the act you judge is the basis of your judgment.
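For concreteness, here is that long descriptive name unpacked as a data structure — a toy sketch only; the field names are my own glosses of the jargon, not standard terminology:

```python
from dataclasses import dataclass

# A consequentialist theory as a point in a feature space. Field names
# are invented glosses, not standard philosophical vocabulary.
@dataclass(frozen=True)
class ConsequentialistTheory:
    outcomes: str      # "actual" vs. "expected"
    evaluation: str    # "direct" vs. "indirect"
    criterion: str     # "maximizing" vs. "satisficing"
    aggregation: str   # "aggregative total" vs. "average", etc.
    scope: str         # "universal" vs. some narrower group
    weighting: str     # "equal-consideration" vs. weighted
    standpoint: str    # "agent-neutral" vs. "agent-relative"
    value: str         # "hedonic" vs. "preferentist", etc.
    unit: str          # "act" vs. "rule" vs. "world", etc.

classic_utilitarianism = ConsequentialistTheory(
    outcomes="actual", evaluation="direct", criterion="maximizing",
    aggregation="aggregative total", scope="universal",
    weighting="equal-consideration", standpoint="agent-neutral",
    value="hedonic", unit="act")
```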

To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.

Deontology relies on things that do not happen after the act being judged in order to judge that act.  This leaves facts about times prior to the act, and the time of the act itself, to determine whether the act is right or wrong.  These may include, but are not limited to:

  • The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
  • The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
  • Historical facts (e.g. having made a promise, sworn a vow)
  • Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
  • Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
  • The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)
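The structural contrast can be put in code.  A toy sketch — not anyone's actual theory, just an illustration of which facts each kind of judge is allowed to consult:

```python
# A toy contrast, not anyone's actual theory; the structural point is
# which facts each judge may look at.

def consequentialist_judge(outcome_value: float, best_alternative: float) -> bool:
    # Rightness is a function of what happens after the act.
    return outcome_value >= best_alternative

def deontic_judge(act_type: str, promise_broken: bool, intent: str) -> bool:
    # Rightness is fixed by facts at or before the act: its reference
    # class, the agent's history, and the agent's intention.
    forbidden = {"murder", "theft", "lying"}  # stand-in for a rule list
    return act_type not in forbidden and not promise_broken and intent != "malicious"

# The deontic judge never receives a "what happened next" argument.
print(deontic_judge("truth-telling", promise_broken=False, intent="well-meaning"))  # True
```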

Individual deontological theories will have different profiles, just like different consequentialist theories.  And some of the theories you can generate using the criteria above have overlap with some consequentialist theories3.  The ultimate "overlap", of course, is the "consequentialist doppelganger", which applies the following transformation to some non-consequentialist theory X:

  1. What would the world look like if I followed theory X?
  2. You ought to act in such a way as to bring about the result of step 1.

And this cobbled-together theory will be extensionally equivalent to X: that is, it will tell you "yes" to the same acts and "no" to the same acts as X.
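Here is the doppelganger move as a toy program (the one-line "world model" is a deliberately silly stand-in):

```python
ACTS = ["lie", "tell the truth", "stay silent"]

def theory_x_permits(act: str) -> bool:
    return act != "lie"          # X: never lie, as a deontic rule, full stop

def world_after(act: str) -> str:
    # One-step toy "world model": a world is just a record of what was done.
    return f"a world in which the agent did: {act}"

def doppelganger_permits(act: str) -> bool:
    # Step 1: the worlds that following X would bring about;
    # Step 2: permit exactly the acts that bring about such a world.
    worlds_under_x = {world_after(a) for a in ACTS if theory_x_permits(a)}
    return world_after(act) in worlds_under_x

# Extensional equivalence: the same "yes"es and "no"s for every act.
assert all(theory_x_permits(a) == doppelganger_permits(a) for a in ACTS)
```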

But extensional definitions are terribly unsatisfactory.  Suppose4 that as a matter of biological fact, every vertebrate is also a renate and vice versa (that all and only creatures with spines have kidneys).  You can then extensionally define "renate" as "has a spinal column", because only creatures with spinal columns are in fact renates, and no creatures with spinal columns are in fact non-renates.  The two terms will tell you "yes" to the same creatures and "no" to the same creatures.

But what "renate" means intensionally has to do with kidneys, not spines.  To try to capture renate-hood with vertebrate-hood is to miss the point of renate-hood in favor of being able to interpret everything in terms of a pet spine-related theory.  To try to capture a non-consequentialism with a doppelganger commits the same sin.  A rabbit is not a renate because it has a spine, and an act is not deontologically permitted because it brings about a particular consequence.

If a deontologist says "lying is wrong", and you mentally add something that sounds like "because my utility function has a term in it for the people around believing accurate things.  Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them.  And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie.  But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet."5... you, my friend, have missed the point.  The deontologist wasn't thinking any of those things.  The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to"6.

But the deontologist is not thinking anything with the terms "utility function", and probably isn't thinking of extreme cases unless otherwise specified, and might not care whether anybody will believe the words of the hypothetical lie or not, and might hold to the prohibition against lying though the world burn around them for want of a fib.  And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory.  (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again.  And then you can put it back under consequentialism again: "and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.")  The voices' instruction "happened" before the prospective act of lying.  The explosion at the North Pole is a subsequent potential event.  The promise to the reindeer is in the past.  The vengeful haunting comes up later.

A confusion crops up when one considers forms of deontology where the agent's epistemic state - real7 or ideal8 - is a factor.  It may start to look like the moral agent is in fact acting to achieve some post-action state of affairs, rather than in response to a pre-action something that has moral weight.  It may even look like that to the agent.  Per footnote 3, I'm ignoring expected utility "consequentialist" theories; however, in actual practice, the closest one can come to implementing an actual utility consequentialism is to deal with expected utility, because we cannot perfectly predict the effects of our actions.

The difference is subtle, and how it gets implemented depends on one's epistemological views.  Loosely, however: Suppose a deontologist judges some act X (to be performed by another agent) to be wrong because she predicts undesirable consequence Y.  The consequentialist sitting next to her judges X to be wrong, too, because he also predicts Y if the agent performs the act.  His assessment stops with "Y will happen if the agent performs X, and Y is axiologically bad."  (The evaluation of Y as axiologically bad might be more complicated, but this is all that goes into evaluating X qua X.)  Her assessment, on the other hand, is more complicated, and can branch in a few places.  Does the agent know that X will lead to Y?  If so, the wrongness of X might hinge on the agent's intention to bring about Y, or an obligation from another source on the agent's part to try to avoid Y which is shirked by performing X in knowledge of its consequences.  If not, then another option is that the agent should (for other, also deontic reasons) know that X will bring about Y: the ignorance of this fact itself renders the agent culpable, which makes the agent responsible for ill effects of acts performed under that specter of ill-informedness.
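Schematically, and with all the names invented for illustration, the two assessments look like this:

```python
# The two assessments, schematically (all names invented; True = permissible).

def consequentialist_assessment(x_leads_to_y: bool, y_is_bad: bool) -> bool:
    # His evaluation of X qua X stops here.
    return not (x_leads_to_y and y_is_bad)

def deontic_assessment(agent_knows_x_leads_to_y: bool, intends_y: bool,
                       shirks_duty_to_avoid_y: bool, should_have_known: bool) -> bool:
    # Her evaluation branches on the agent's epistemic state.
    if agent_knows_x_leads_to_y:
        # Wrongness can hinge on intending Y, or on a shirked duty to avoid it.
        return not (intends_y or shirks_duty_to_avoid_y)
    # Otherwise: if the agent *should* have known, that culpable ignorance
    # makes the agent responsible for the ill effects.
    return not should_have_known

# Example: the agent knowingly performs X, intending Y -- wrong on her view.
print(deontic_assessment(True, intends_y=True,
                         shirks_duty_to_avoid_y=False, should_have_known=True))  # False
```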

 

1Having taken a course on weird forms of consequentialism, I now compulsively caveat anything I have to say about consequentialisms in general.  I apologize.  In practice, "consequentialism" is the sort of word that one has to learn by familiarity rather than definition, because any definition will tend to leave out something that most people think is a consequentialism.  "Utilitarianism" is a type of consequentialism that talks about utility (variously defined) instead of some other sort of consequence.

2Because it makes it dreadfully hard to write readably about consequentialism if I don't assume I'm only talking about act consequentialisms, I will only talk about act consequentialisms.  Transforming my explanations into rule consequentialisms or world consequentialisms or whatever other non-act consequentialisms you like is left as an exercise to the reader.  I also know that preferentism is more popular than hedonism around here, but hedonism is easier to quantify for ready reference, so if called for I will make hedonic rather than preferentist references.

3Most notable in the overlap department is expected utility "consequentialism", which says that maximizing expected utility is not only the best you can in fact do, but also what you absolutely ought to do.  Depending on how one cashes this out and who one asks, this may overlap so far as to not be a real form of consequentialism at all.  I will be ignoring expected utility consequentialisms for this reason.

4I say "suppose", but in fact the supposition may be actually true; Wikipedia is unclear.

5This is not intended to be a real model of anyone's consequentialist caveats.  But basically, if you interpret the deontologist's statement "lying is wrong" to have something to do with what happens after one tells a lie, you've got it wrong.

6As far as I know, no one seriously endorses "schizophrenic deontology".  I introduce it as a caricature of deontology that I can play with freely without having to worry about accurately representing someone's real views.  Please do not take it to be representative of deontic theories in general.

7Real epistemic state means the beliefs that the agent actually has and can in fact act on.

8Ideal epistemic state (for my purposes) means the beliefs that the agent would have and act on if (s)he'd demonstrated appropriate epistemic virtues, whether (s)he actually has or not.

255 comments

This might be unfair to deontologists, but I keep getting the feeling that deontology is a kind of "beginner's ethics". In other words, deontology is the kind of ethical system you get once you build it entirely around ethical injunctions, which is entirely reasonable if you don't have the computing power to calculate the probable consequences of your actions with a very high degree of confidence. So you resort to what are basically cached rules that seem to work most of the time, and elevate those to axioms instead of treating them as heuristics.

And before I'm accused of missing the difference between consequentialism and deontology: no, I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the reason (whether developmental-psychological or evolutionary) why people end up adopting deontology.

[This comment is no longer endorsed by its author]

I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the reason (whether developmental-psychological or evolutionary) why people end up adopting deontology.

Indeed, I get the impression from the article that a deontologist is someone who makes moral choices based on whether they will feel bad about violating a moral injunction, or good for following it... and then either ignorantly or indignantly denies this is the case, treating the feeling as evidence of a moral judgment's truth, rather than as simply a cached response to prior experience.

Frankly, a big part of the work I do to help people is teaching them to shut off the compelling feelings attached to the explicit and implicit injunctions they picked up in childhood, so I'm definitely inclined to view deontology (at least as described by the article) as a hopelessly naive and tragically confused point of view, well below the sanity waterline... like any other belief in non-physical entities, rooted in mystery worship.

I also seem to recall that previous psychology research showed that that sort of thinking was something people ...

Do you think it is likely that the emotional core of your claim was captured by the statement that "everything I'm reading here seems to closely resemble something that I had to grow out of... making it really hard for me to take it seriously"?

And then assuming this question finds some measure of ground.... how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?

I haven't read into your writings super extensively, but from what I read you have quite a lot of practice doing something like "soul dowsing" to find emotional reactions. Then you trace them back to especially vivid "formative memories" which can then be rationally reprocessed using other techniques - the general goal being to allow clearer thinking about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences. (I'm sure there's a huge amount more, but this is my gloss that's r...

how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?

That's an interesting question. I don't think an ideal-belief-reality conflict is involved, though, as an IBRC motivates someone to try to convince the "wrong" others of their error, and I didn't feel any particular motivation to convince deontologists that they're wrong! I included the disclaimer because I'm honestly frustrated by my inability to grok the concept of deontological morality except in terms of a feeling-driven injunctions model. (Had I been under the influence of an IBRC, I'd have been motivated to express greater certainty, as has happened occasionally in the past.)

So, if there's any emotional reaction taking place, I'd have to say it was frustration with an inability to understand something... and the intensity level was pretty low.

In contrast, I've had discussions here last year where I definitely felt an inclination to convince pe...

[split from parent comment due to length]

Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I'll work through it "live" right now.

I am frustrated at being unable to find common ground with what seems like abstract thoughts taken to the point of magical and circular thinking... and it seems the emotional memory is arguing theism and other subjects with my mother at a relatively young age... she would tie me in knots, not with clever rhetoric, but with sheer insanity -- logical rudeness writ large.

But I couldn't just come out and say that to her... not just because of the power differential, but also because I had no handy list of biases and fallacies to point to, and she had no attention span for any logically-built-up arguments.

Huh. No wonder I feel frustrated trying to understand deontology... I get the same, "I can't even understand this craziness well enough to be able to say it's wrong" feeling.

Okay, so what abilities did I lose to learned helplessness in this context? I learned that there was nothing I could say or do about logical craziness... which would certainly explain why I...

This comment has done more than anything else you've written to convince me that you aren't generally talking nonsense.

pjeby (+8, 14y)
Thank you, that's very kind of you to say. Overnight, I continued working on that thread of thoughts, and dug up several related issues. One of them was that I've also not been nearly as generous with giving positive attention and appreciation as I would've liked others to be. So I made a change to fix that this morning, and I actually felt genuine warmth and gratitude in response to your comment... something that I generally haven't felt, even towards very positive comments here in the past. So really, thank you, as it was indeed both kind and generous of you to say it.
JenniferRM (+9, 14y)
Thanks for the response. That was way more than I was hoping to get back and went in really interesting directions - the corrections about the way the "reprocessing" works and the limits of reprocessing were helpful. The detail about the way vivid memories can no longer be accessed through the same "index" and become more like stories was totally unexpected and fascinating. Also, that was very impressive in terms of just... raw emotional openness, I guess. I don't know about other readers, but it stirred up my emotions just reading about your issues as you worked through them. I have a hard time imagining the courage it would take for me to make similar emotional disclosures in a place like this if they were my own. I'm a little frightened by how much trust you gave me I think? But I'm very grateful too. (And yes, "soul dowsing" is a term I made up for the post for the sake of trying to summarize things I've read by you in the past in my own words to see if I was hearing what you were trying to say.)

I have a hard time imagining the courage it would take for me to make similar emotional disclosures in a place like this if they were my own.

Not as much as you might think. Bear in mind that by the time anybody reads anything I've written about something like that, it's no longer the least bit emotional for me -- it has become an interesting anecdote about something "once upon a time".

If it was still emotional for me after I made the changes, I would have more trouble sharing it, here or even with my subscribers. In fact, the reason I cut off the post where I did was because there was some stuff I wasn't yet "done" with and wanted to work on some more.

Likewise, it's a lot easier to admit to your failures and shortcomings if you are acutely aware that 1) "you" aren't really responsible, and 2) you can change. It's easier to face the truth of what you did wrong, if you know that your reaction will be different in the future. It takes out the "feeling of being a bad person" part of the equation.

Seth_Goldin (+4, 14y)
Yes! Both you and Kaj Sotala seem right on the money here. Deontology falls flat. A friend once observed to me that consequentialism is a more challenging stand to take because one needs to know more about any particular claim to defend an opinion about it. I know it's been discussed here on Less Wrong, but Jonathan Haidt's research is really great, and relevant to this discussion. Professor Haidt's work has validated David Hume's assertions that we humans do not reason to our moral conclusions. Instead, we intuit about the morality of an action, and then provide shoddy reasoning as justification one way or the other.
Alicorn (+9, 14y)
Deciding whether a rule "works" based on whether it usually brings about good consequences, and following the rules that do and calling that "right", is called rule consequentialism, not deontology.

That's if you do it consciously, which I wasn't suggesting. My suggestion was that this would be a mainly unconscious process, similar to the process of picking up any other deeply-rooted preference during childhood / young age.

Johnicholas (0, 14y)
How about this formulation: Suppose that humans' aggregate utility function includes both path-independent ("ends") terms, and path-dependent ("means") terms. A (pseudo) deontologist in this scenario is someone who is concerned that all this talk about "achieving the best possible state of affairs" means that the path-dependent terms may be being neglected. If you think about it, any fixed "state of affairs" is undesirable, simply because it is FIXED. I don't know for sure, but I think almost everything that you value is actually a path unfolding in time - possibilities might include: falling in love, learning something new, freedom/self-determination, growth and change.
roystgnr (-1, 12y)
"Deontologists are just elevating intermediate heuristics to terminal values" is true, but also misleading and unfair unless you prepend "Consequentialists and " first. After all, it seems quite likely that joy, curiosity, love, and all the other things we value are also merely heuristics that evolution found to be useful for its terminal goal of "Make more mans. More mans!" But if our terminal values happen to match some other optimizing process' instrumental values, so what? That's an interesting observation, not a devastating criticism.

Sometimes I believe that

  • consequentialism calls possible worlds good
  • deontology calls acts good
  • virtue ethics calls people good

Of course, everyone uses "good" to label all three, but the difference is what is fundamental. cf Richard Chappell
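Rendered as toy type signatures (placeholder types, invented examples, nothing rigorous):

```python
from typing import Callable

World, Act, Person = str, str, str  # toy placeholder types

# consequentialism: "good" fundamentally attaches to possible worlds
world_is_good: Callable[[World], bool] = lambda w: "suffering" not in w
# deontology: "good" (right) fundamentally attaches to acts
act_is_good: Callable[[Act], bool] = lambda a: a not in {"lying", "murder"}
# virtue ethics: "good" fundamentally attaches to people
person_is_good: Callable[[Person], bool] = lambda p: p == "an honest person"
```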

MichaelVassar (+6, 14y)
Possible worlds, however, encompass acts and people.
wedrifid (+1, 14y)
To be fair, deontology encompasses possible worlds in a similar way to consequentialism encompassing acts.
MichaelVassar (+3, 14y)
I don't think so, but I'd be happy to hear why you say that.
Paul Crowley (+4, 14y)
I don't know whether this bears directly on this point, but I am reminded of a discussion in Toby Ord's PhD thesis on how just as the consequences of an action propagate forwards in time, rightness propagates backwards. If it is right to pull the lever, it is right to push the button that pulls the lever, right to throw the ball that pushes the button that pulls the lever and so on. This struck me as an argument for consequentialism in itself, since this observation is a natural consequence of consequentialism and doesn't follow so obviously from deontology, but perhaps this kind of thinking is built in in a way I don't see.
DanielVarga (+4, 14y)
Can consequentialism handle the possibility of time-travel? If not, then something may be wrong with consequentialism, regardless of whether time-travel is actually possible or not. One of the intuitions leading me to deontology is exactly the time-symmetry of physics. Almost by definition, the rightness of an act can only be perfectly decided by an outside observer of the space-time continuum. (I could call the observer God, but I don't want to be modded down by inattentive mods.) Now, maybe I have read too much Huw Price and Gary Drescher, but I don't think this fictional outside observer would care too much about the local direction of the thermodynamic arrow of time.
pengvado (+6, 14y)
I don't see any problem whatsoever with time travel + consequentialism. As a consequentialist, I have preferences about the past just as much as about the future. But I don't know how to affect the past, so if necessary I'll settle for optimizing only the future. The ideal choice is: argmax over actions A of utility( what happens if I do A ). Time travel may complicate the predicting of what happens (as if that wasn't hard enough already), but doesn't change the form of the answer. Btw, my favorite model of time travel is described here (summary: locally ordinary physical causality plus closed timelike curves is still consistent). Causal decision theory probably chokes on it, but that's nothing new, and has to do with a bad formalization of "if I do A", not due to the focus on outcomes.
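That schema, taken literally as a toy program (the world model and utilities here are invented placeholders, not pengvado's code):

```python
# argmax over actions A of utility(what happens if I do A)

def what_happens_if(action: str) -> str:
    return {"pull lever": "trolley diverted, one dies",
            "do nothing": "five die"}[action]

def utility(outcome: str) -> float:
    return {"trolley diverted, one dies": -1.0, "five die": -5.0}[outcome]

best = max(["pull lever", "do nothing"], key=lambda a: utility(what_happens_if(a)))
print(best)  # pull lever -- time travel would complicate what_happens_if,
             # but not the form of the answer
```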
Alicorn (+6, 14y)
This seems like a pretty good first pass classification to me.
thomblake (+5, 14y)
I think you're right on, in broad brushstrokes. I've actually diagrammed this for people, showing the [person]->[action]->[result] system, and haven't seen a philosopher object to the rough characterization.
TheAncientGeek (0, 10y)
That's about my theory: different theories of morality are talking about different (but interconnected) things... what counts as a desirable outcome, or not; what counts as a culpable act, or not.
Morendil (0, 14y)
...and I'm wondering where contractualism fits in there.
Douglas_Knight (+3, 14y)
Contracts tend to be about acts. Social contract theories, including Scanlon's, sound deontological to me.

My issue with deontology-as-fundamental is that, whenever someone feels compelled to defend a deontological principle, they invariably end up making a consequentialist argument.

E.g. "Of course lying is wrong, because if lying were the general habit, communication would be impossible" or variants thereof.

The trouble, it seems to me, is that consequentialist moralities are easier to ground in human preferences (current and extrapolated) than are deontological ones, which seem to beg for a Framework of Objective Value to justify them. This is borne out by the fact that it is extremely difficult to think of a basic deontological rule which the vast majority of people (or the vast majority of educated people, etc.) would uphold unconditionally in every hypothetical.

If someone is going to argue that their deontological system should be adopted on the basis of its probable consequences, fine, that's perfectly valid. But in that case, as in the story of Churchill, we've already established what they are, we're just haggling over the price.

Jack (+4, 14y)
Afaict this is true for any ethical principle, consequentialist ones included. I'm skeptical that there are unconditional principles.
Alicorn (+2, 14y)
Dude. "Counterfactuals." Fourth thing on the bulleted list, straight outta Kant. I take exception to your anthropocentric morality! And if we lived on the Planet of the Sociopaths, what then? Ethics leap out a window and go splat? See here for what this is like.

"Counterfactuals." Fourth thing on the bulleted list, straight outta Kant.

Any talk about consequences has to involve some counterfactual. Saying "outcome Y was a consequence of act X" is an assertion about the counterfactual worlds in which X isn't chosen, as well as those where it is. So if you construct your counterfactuals using something other than causal decision theory, and you choose an act (now) based on its consequences (in the past), is that another overlap between consequentialism and deontology?

Alicorn (0, 14y)
I can't parse your comment well enough to reply intelligently.
loqi (+4, 14y)
What I think pengvado is getting at is that the concept of "consequence" is derived from the concept of "causal relation", which itself appears to require a precise notion of "counterfactual". I read Newcomb's paradox as a counter-example to the idea that causality must operate forward in time. Essentially, one-boxing is choosing an act in the present based on its consequences in the past. This smells a bit like a Kantian counterfactual to me, but I haven't read Kant.
Alicorn (+8, 14y)
There are many accounts of causation; some of them work in terms of counterfactuals and some don't. (I don't have many details; I've never taken a class on causation.) There is considerable disagreement about the extent to which causation must operate forward in time, especially in things like discussions of free will. Don't. It's a miserable pastime.
loqi (+7, 14y)
I'm pretty satisfied with Pearl's formulation of causality, it seems to capture everything of interest about the phenomenon. An account of causality that involves free will sounds downright unsalvageable, but I'd be interested in pointers to any halfway decent criticism of Pearl's approach. Thanks for affirming my suspicions regarding Kant.
Jack (+6, 14y)
I wouldn't characterize Kant this way. He isn't thinking about a possible world in which the maxim is universalized, whether a maxim can or cannot be universalized has to do with the form of the maxim, nothing else. It might be the case that he sneaks in some counter-factual thinking but it isn't his intention to make his ethics rely on it. It wouldn't be a priori otherwise.
Alicorn (+3, 14y)
No two people can agree on how to characterize Kant, but it is a legitimate interpretation that I have heard advanced by a PhD-having philosopher that you can think about that formulation of the CI as referring to a possible world where the maxim is followed like a natural law.
bogus (+7, 14y)
This is what Kant seems to do in practice whenever he illustrates normative application of the CI. But his notion of a priori does appear to preclude this. Then again, Kant also managed to develop Newtonian physics a priori, so maybe he just knew something we don't.
Breakfast (+1, 14y)
What has never stopped bewildering me is the question of why anyone should consider such a possible world relevant to their individual decision-making. I know Kant has some... tangled, Kantian argument regarding this, but does anyone who isn't a die-hard Kantian have any sensible reason on hand for considering the counterfactual "What if everyone did the same"? Everyone doing X is not even a remotely likely consequence of me doing X. Maybe this is to beg the question of consequences mattering in the first place. But I suppose I have no idea what use deontology is if it doesn't boil down to consequentialism at some level... or, particularly, I have no idea what use it is if it makes appeals to impossibly unlikely consequences like "Everyone lying all the time," instead of likely ones.

Everyone doing X is not even a remotely likely consequence of me doing X.

AAAAAAAAAAAAH

*ahem* Excuse me.

I meant: Wow, have I ever failed at my objective here! Does anyone want me to keep trying, or should I give up and just sob quietly in a corner for a while?

Breakfast (0, 14y)
Sorry. But then I said: And added, ?
Alicorn (+6, 14y)
Yeah, if you have no idea what "use" deontology is unless it's secretly just tarted-up consequentialism, I have failed.
Breakfast (+7, 14y)
Huh? To be fair, I don't think you were setting out to make the case for deontology here. All I am saying about its "use" is that I don't see any appeal. I think you gave a pretty good description of what deontologists are thinking; the North Pole - reindeer - haunting paragraph was handily illustrative. Anyway, I think Kant may be to blame for employing arguments that consider "what would happen if others performed similar acts more frequently than they actually do". People say similar things all the time -- "What if everyone did that?" -- as though there were a sort of magical causal linkage between one's individual actions and the actions of the rest of the world.

There is a "magical causal connection" between one's individual actions and the actions of the rest of the world.

Other people will observe you acting and make reasonable inferences on the basis of their observation. Depending on your scientific leanings, it's plausible to suppose that these inferences have been so necessary to human survival that we may have evolutionary optimizations that make moral reasoning more effective than general reasoning.

For example, if they see you "get away with" an act they will infer that if they repeat your action they will also avoid reprisal (especially if you and they are in similar social reference classes). If they see you act proudly and in the open they will infer that you've already done the relevant social calculations to determine that no one will object and apply sanctions. If they see you defend the act with words, they will assume that they can cite you as an authority and you'll support them in a factional debate in order not to look like a hypocrite... and so on ad nauseam.

There are various reasons people might deny that they function as role models in society. Perhaps they are hermits? Or perhaps they are no...

Very insightful comment (and the same for your follow-up). I don't have much to add except to shamelessly link a comment I found on Slashdot that it reminded me of. (I had also posted it here.) For those who don't want to click the link, here goes:

I also disagree that our society is based on mutual trust. Volumes and volumes of laws backed up by lawyers, police, and jails show otherwise.

That's called selection/observation bias. You're looking at only one side of the coin.

I've lived in countries where there's a lot less trust than here. The notion of returning an opened product to a store and getting a full refund is based on trust (yes, there's a profit incentive, and some people do screw the retailers [and the retailers their customers -- SB], but the system works overall). In some countries I've been to, this would be unfeasible: Almost everyone will try to exploit such a retailer.

When a storm knocks out the electricity and the traffic lights stop working, I've always seen everyone obeying the rules. I doubt it's because they're worried about cops. It's about trust that the other drivers will do likewise. Simply unworkable in other places I've lived in.

I've had neighbors who

...
Breakfast (+1, 14y)
I'm newish here too, JenniferRM! Sure, I have an impact on the behaviour of people who encounter me, and we can even grant that they are more likely to imitate/approve of how I act than disapprove and act otherwise -- but I likely don't have any more impact on the average person's behaviour than anyone else they interact with does. So, on balance, my impact on the behaviour of the rest of the world is still something like 1/6.5 billion. And, regardless, people tend to invoke this "What if everyone ___" argument primarily when there are no clear ill effects to point out, or which are private, in my experience. If I were to throw my litter in someone's face, they would go "Hey, asshole, don't throw your litter in my face, that's rude." Whereas, if I tossed it on the ground, they might go "Hey, you shouldn't litter," and if I pressed them for reasons why, they might go "If everyone littered here this place would be a dump." This also gets trotted out in voting, or in any other similar collective action problem where it's simply not in an individual's interests to 'do their part' (even if you add in the 1/6.5-billion quantity of positive impact they will have on the human race by their effect on others). "You may think it was harmless, but what if everyone cheated on their school exams like you did?" -- "Yeah, but, they don't; it was just me that did it. And maybe I have made it look slightly more appealing to whoever I've chosen to tell about it who wasn't repelled by my doing so. But that still doesn't nearly get us to 'everyone'."

Err... I suspect our priors on this subject are very different.

From my perspective you seem to be quibbling over an unintended technical meaning of the word "everyone" while not tracking consequences clearly. I don't understand how you think littering is a coherent example of how people's actions do not affect the rest of the world via social signaling. In my mind, littering is the third most common example of a "signal crime" after window breaking and graffiti.

The only way your comments are intelligible to me is that you are enmeshed in a social context where people regularly free ride on community goods or even outright ruin them... and they may even be proud to do so as a sign of their "rationality"?!? These circumstances might provide background evidence that supports what you seem to be saying - hence the inference.

If my inference about your circumstances is correct, you might try to influence your RL community, as an experiment, and if that fails an alternative would be to leave and find a better one. However, if you are in such a context, and no one around you is particularly influenced by your opinions or actions, and you can't get out of the context, then I agree that your small contribution to the ruin of the community may be negligible (because the people near to you are already ruining the broader community, so their "background noise" would wash out your potentially positive signal). In that case, rule breaking and crime may be the only survival tactic available to you, and you have my sympathy.

In contrast, when I picture littering, I imagine someone in a relatively pristine place who throws the first piece of garbage. Then they are scolded by someone nearby for harming the community in a way that will have negative long term consequences. If the litterbug walks away without picking up their own litter, the scolder takes it upon themselves to pick up the litter and dispose of it properly on behalf of the neighborhood. In this scenario, the cos...

Alicorn (+6, 14y)
I wasn't trying to make the case for deontology, no - just trying to clear up the worst of the misapprehensions about it. Which is that it's not just consequentialism in Kantian clothing, it's a whole other thing that you can't properly understand without getting rid of some consequentialist baggage. There does not have to be a causal linkage between one's individual actions and those of the rest of the world. (Note: my ethics don't include a counterfactual component, so I'm representing a generalized picture of others' views here.) It's simply not about what your actions will cause! A counterfactual telling you that your action is un-universalizeable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if the world is about to end and your act will have no consequences at all beyond being the act it is. It can be informative even if you'd never have dreamed of performing the act were it a common act type (in fact, especially then!). The counterfactual is a place to stop. It is, if justificatory at all, inherently justificatory.
Breakfast (0, 14y)
Okay, I get that. But what does it inform you of? Why should one care in particular about the universalizability of one's actions? I don't want to just come down to asking "Why should I be moral?", because I already think there is no good answer to that question. But why this particular picture of morality?
Alicorn (+6, 14y)
I don't have an arsenal with which to defend the universalizeability thing; I don't use it, as I said. Kant seems to me to think that performing only universalizeable actions is a constraint on rationality; don't ask me how he got to that - if I had to use a CI formulation I'd go with the "treat people as ends in themselves" one. It suits some intuitions very nicely. If it doesn't suit yours, fine; I just want people to stop trying to cram mine into boxes that are the wrong shape.
Breakfast (+3, 14y)
I suppose that's about as good as we're going to get with moral theories! Well, I hope I haven't caused you too much corner-sobbing; thanks for explaining.
bogus (+7, 14y)
Kant's point is not that "everyone doing X" matters, it's that ethical injunctions should be indexically invariant, i.e. "universal". If an ethical injunction is affected by where in the world you are, then it's arguably no ethical injunction at all. Wei_Dai and EY have done some good work in reformulating decision theory to account for these indexical considerations, and the resulting theories (UDT and TDT) have some intuitively appealing features, such as cooperating in the one-shot PD under some circumstances. Start with this post.
Breakfast (+3, 14y)
I'm (obviously) no Kant scholar, but I wonder if there is any possible way to flesh out a consistent and satisfactory set of such context-invariant ethical injunctions. For example, he infamously suggests not lying to a murderer who asks where your friend is, even if you reasonably expect him to go murder your friend, because lying is wrong. Okay -- even if we don't follow our consequentialist intuitions and treat that as a reductio ad absurdum for his whole system -- that's your 'not lying' principle satisfied. But what about your 'not betraying your friends' principle? How many principles have we got in the first place, and how can we weigh them against one another?
bogus (+2, 14y)
Actually, Kant only defended the duty not to lie out of philanthropic concerns. But if the person inquired of was actually a friend, then one might reasonably argue that you have a positive duty not to reveal his location to the murderer, since to do otherwise would be inconsistent with the implied contract between you and your friend. To be fair, you might also have a duty to make sure that your friend is not murdered, and this might create an ethical dilemma. But ethical dilemmas are not unique to deontology. ETA: It has also been argued that Kant's reasoning in this case was flawed since the murderer engages in a violation of a perfect duty, so the maxim of "not lying to a known murderer" is not really universalizable. But the above reasoning would go through if you replaced the murderer with someone else whom you wished to keep away from your friend out of philanthropic concerns.
Jack (+2, 14y)
This just isn't true. Lying is one of the examples used to explain the universalization maxim. It is forbidden in all contexts. Can't right now, but I'll come back with cites.
bogus (+6, 14y)
Actually I'm going to save you the effort and provide the cite myself: Specifically, in the Metaphysics of Morals, Kant states that "not suffer[ing our] rights to be trampled underfoot by others with impunity" is a perfect duty of virtue.
Douglas_Knight (+1, 14y)
I don't see how lying to the murderer fails the test you quote, yet Kant does forbid it elsewhere. ETA: perhaps it's OK to lie out of love of money, but not out of love of man? Added, years later: by "love of money," I mean that Kant says that it is OK to lie to the thief, but not to the murderer.
Jack (0, 14y)
We're allowed self-defense and punishment, according to Kant (indeed, it is required). It may, for example, be acceptable to lie to a murderer if he lies to you, since we are obligated to punish those who violate the CI. (EDIT: It could also mean that we don't have to say anything to murderers, we aren't obligated to tell the truth in every situation, but we are obligated to tell the truth in every case where we tell something.) That said, I'm not sure exactly what you mean by the original line "Kant only defended the duty not to lie out of philanthropic concerns". It could mean, "Kant defended the duty not to lie, but his reasons for this duty were mere philanthropic ones." It could also mean "With respect to truth-telling, Kant only says we have a duty when we might prefer to lie for philanthropic reasons." Both interpretations are wrong. Here is a quote from Kant's explicit tackling of the issue in the appropriately titled "On a supposed right to lie from philanthropy." Apologies for the long quote but I don't want to have to debate context.
Breakfast (+1, 14y)
Huh! Okay, good to know. ... So not-lying-out-of-philanthropic-concerns isn't a mere context-based variation?
Kaj_Sotala (+5, 14y)
I thought of one possible reason that would make deontology "justifiable" in consequentialist terms. Those classic "my decision has negligible effect by itself, but if everyone made the same decision, it would be good/bad" situations, like "should I bother voting" or "is it okay if I shoplift". If everyone were consequentialists, each might individually decide that the effect of their action is negligible, and thus end up not voting or deciding that shoplifting was okay, with disastrous effects for society. In contrast, if more people were deontologists, they'd do the right thing even if the effect of their individual decision probably didn't change anything.
Jack (0, 14y)
Erm. I agree with the PhD-having philosopher that you can think about the formulation that way. But my PhD-having philosophers are pretty clear that even if Kant ends up implicitly relying on this it can't be what he is really trying to argue since it obviously precludes a priori knowledge of the CI. And if you can't know it a priori then Kant's entire edifice falls apart. And below, Breakfast is wondering why one should consider possible worlds relevant to decision making and says "I know Kant has some... tangled, Kantian argument regarding this". But of course Kant has no such argument! Because that isn't his argument. The argument for the CI stems from Kant's conception of freedom (that it is self-governance and that the only self-governance we could have a priori comes from the form of self-governance itself). The argument fails, I think, but it has nothing to do with counterfactuals. So when you say "Counterfactuals, straight out of Kant", it seems a lot of people who haven't read Kant are going to be misled. I know you're just using Kant illustratively, but maybe qualify it as "some formulations of Kant"?
TheAncientGeek (0, 10y)
For us hybridists, it is the function of consequentialism to justify rules, and the function of rules to justify sanctions.
Nornagest (0, 10y)
That seems to lead to a logical cycle. What is the function of sanctions? To modify the behavior of other agents. Why do we want to modify the behavior of other agents? Because we find some actions undesirable. Why do we find them undesirable? Because of their consequences, or because they violate established rules...
TheAncientGeek (-4, 10y)
Not all cycles are bad.

As someone who is on the fence between noncognitivism and deontic/virtue ethics, I seem to be witnessing a kind of incommensurability of ethical theories going on in this thread. It is almost like Alicorn is trying to show us the rabbit, but all we are seeing is the duck and talking about the "rabbit" as if it is some kind of bad metaphor for a duck.

On Less Wrong, consequentialism isn't just another ethical theory that you can swap in and out of our web of belief. It seems to be something much more central and interwoven. This might be due to the fact that some disciplines like economics implicitly assume some kind of vague utilitarianism and so we let certain ethical theories become more central to our web of belief than is warranted.

I predict that Alicorn would have similar problems trying to get people on Less Wrong to understand Aristotelian physics, since it is really closer to common sense biology than Einsteinian physics (which I am guessing is very central to our web of belief).

wnoise (+7, 14y)
You're confusing "understand" and "accept as useful or true". Alicorn's post was a good summary of deontology. I understand it, I just don't agree with it. Richard Garfinkle's SF novel Celestial Matters, in addition to being a great read, also elucidates some consequences of Aristotelian physics, increasing the intuition of the reader. I certainly think that Garfinkle understands Aristotelian physics, and just as assuredly is unwilling to use it for orbital calculations in practice (though quite capable of doing the same for fiction purposes). EDIT: reading further in the comments, I do indeed see plenty of people who don't understand deontic ethics. But just your comment about "not being able to swap in or out" does not at all demonstrate lack of understanding. EDIT: I'd also appreciate a comment by the person who downvoted me about their reasoning (or anyone else who disagrees with the substance). I obviously think this is a fairly straightforward point -- understanding and accepting are two different things. Wanting to swap a framework in or out of our web of belief is not purely about understanding it, but about accepting it. Related, certainly (it really helps to understand something in order to accept it), but not the same.

Deontology relies on things that do not happen after the act being judged in order to judge that act. This leaves facts about times prior to the act, and the time of the act itself, to determine whether the act is right or wrong.

I'm not convinced that this 'backward-looking vs. forward-looking' contrast really cuts to the heart of the distinction. Note that consequentialists may accept an 'holistic' axiology according to which whether some future event is good or bad depends on what has previously happened. (For a simple example, retributivists may hold that it's positively good when those who are guilty of heinous crimes suffer. But then in order to tell whether we should relieve Bob's suffering, we need to look backwards in time to see whether he's a mass-murderer.) It strikes me as misleading to characterize this as involving a form of "overlap" with deontological theories. It's purely consequentialist in form; it merely has a more complex axiology than (say) hedonism.

The distinction may be better characterised in terms of the relative priority of 'the right' and 'the good'. Consequentialists take goodness (i.e. desirability, or what you ought to want) as fundamental, and thus have a tel...

Alicorn (+6, 14y)
As I specify in my first footnote, consequentialism is wickedly hard to define. It may be that the teleological aspect is more important than the subsequence aspect, but either one leaves some things to be desired, and my post was already awfully long without going into "teleology". I like your article, though!

The deontologist wasn't thinking any of those things. The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to". But the deontologist is not thinking anything with the terms "utility function" [...]

Right, but what about Dutch book-type arguments? Even if I agree that lying is wrong and not because of its likely consequences, I still have to make decisions under uncertainty. The reason for trying to bludgeon everything into being a utility function is not that "the rightness of something depends on what happens subsequently." It's that, well, we have these theorems that say that all coherent decisionmaking processes have to satisfy these-and-such constraints on pain of being value-pumped. Anything you might say about rights or virtues is fine qua moral justification, but qua decision process, it either has to be eaten by decision theory or it loses.
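A toy illustration of the value-pump point (numbers invented; any agent with cyclic preferences can be milked this way):

```python
# Toy value pump: an agent with cyclic preferences (A > B > C > A) pays a
# penny for each "upgrade" and ends up holding its original item, poorer.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

holding, cash = "A", 0.0
for offered in ["C", "B", "A"]:          # each offer beats the current holding
    if (offered, holding) in prefers:    # strict preference => trade and pay
        holding, cash = offered, cash - 0.01
print(holding, round(cash, 2))           # A -0.03 -- same item, three cents poorer
```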

Alicorn (+4, 14y)
If you think that lying is just wrong, can't you just... not lie? I don't see the problem here.

The problem with unbreakable rules is that you're only allowed to have one. Suppose I have a moral duty to tell the truth no matter what and a moral duty to protect the innocent no matter what. Then what do I do if I find myself in a situation where the only way I can protect the innocent is by lying?

More generally, real life finds us in situations where we are forced to make tradeoffs, and furthermore, real life is continuous in a way that is not well-captured by qualitative rules. What if I think I have a 98% chance of protecting the innocent by lying?---or a 51% chance, or a 40% chance? What if I think a statement is 60% probable but I assert it confidently; is that a "lie"? &c., &c.

"Lying is wrong because I swore an oath to be honest" or "Lying is wrong because people have a right to the truth" may be good summaries of more-or-less what you're trying to do and why, but they're far too brittle to be your actual decision process. Real life has implementation details, and the implementation details are not made out of English sentences.

The problem with unbreakable rules is that you're only allowed to have one.

I second the question. Is there a standard reply in deontology? The standard reply of a consequentialist, of course, is the utility function.

wedrifid (+9, 14y)
I don't know whether there is a standard reply in deontology, but the appropriate reply is to use a function equivalent to the consequentialist's utility function:

  • Take the concept of the utility function.
  • Rename it to something suitably impressive (but I'll just go with the bland 'deontological decision function').
  • Replace 'utility of this decision' with 'rightness of this decision'.
  • A primitive utility function may include a term for 'my bank balance'. A primitive deontological decision function would have a term for "Not telling a lie".

Obviously, the 'deontological decision function' sacrifices the unbreakable criterion. This is appropriate when making a fair comparison between consequentialist and deontological decisions. The utility function sacrifices absolute reliance on one particular desideratum in order to accommodate all the others.

For the sake of completeness I'll iterate what seem to be the only possible approaches that actually allow having multiple unbreakable rules.

1) Only allow unbreakable rules that never contradict each other. This involves making the rules more complex. For example:

  • Always rescue puppies.
  • Never lie, except if it saves the life of puppies.
  • Do not commit adultery unless you are prostituting yourself in order to donate to the (R)SPCA.

Such a system results in an approximation of the continuous deontological decision function.

2) Just have a single unbreakable meta-rule. For example:

  • Always do the most right thing in the deontological decision function.

Or,

  • Always maximise utility.

These responses amount to "Hack a deontological system with unbreakable rules to work around the spirit of either 'unbreakable' or 'deontological'" and I include them only for completeness. My main point is that a deontological approach can be practically the same as the consequentialist 'utility function' approach.
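A minimal sketch of that recipe (the terms and weights are invented for illustration, not anyone's actual moral code):

```python
# A deontological decision function with the same form as a utility
# function -- "rightness" terms in place of "utility" terms.

def rightness(decision: dict) -> float:
    score = 0.0
    score += -10.0 if decision.get("tells_a_lie") else 0.0    # term: "Not telling a lie"
    score += 5.0 if decision.get("keeps_a_promise") else 0.0  # term: keeping promises
    return score

# Same decision procedure as the consequentialist's: take the best-scoring option.
options = [{"tells_a_lie": True,  "keeps_a_promise": True},
           {"tells_a_lie": False, "keeps_a_promise": False}]
print(max(options, key=rightness))
# -> {'tells_a_lie': False, 'keeps_a_promise': False}: no term is unbreakable,
#    but here honesty outweighs the broken promise
```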
1wedrifid14y
It disappoints me that this comment is currently at -1. Of all the comments I have made in the last week this was probably my favourite, and it remains so now. If "the standard reply of a consequentialist is the utility function", then the analogous reply of a deontologist is something very similar. It is unreasonable to compare consequentialism equipped with a utility function against a deontological system in which rules are 'unbreakable'. The latter is an absurd caricature of deontological reasoning that is only worth mentioning at all because deontologists are on average less inclined to follow their undeveloped thoughts through to the natural conclusion.

Was my post downvoted because...?

1. Someone disagrees that a 'function' system applies to deontology just as it applies to consequentialism.
2. I have missed the fact that the conclusion is universally apparent and I am stating the obvious.
3. I included an appendix to acknowledge the consequences of the 'universal rule' system and elaborate on what a coherent system will look like if this universality cannot be let go.
4Alicorn14y
I haven't voted on your comment. I like parts of it, but found other parts very hard to interpret, to the point where they might have altered the reading of the parts I like, and so I was left with no way to assess its content. If I had downvoted, it would be because of the confusion and a desire to see fewer confusing comments.
1wedrifid14y
Thank you. A reasonable judgement. Not something that is trivial to rectify, but certainly not an objection I would object to.
5[anonymous]14y
I'm pretty sure the standard reply is, "Sometimes there is no right answer." These are rules for classifying actions as moral or immoral, not rules that describe the behavior of an always moral actor. If every possible action (including inaction) is immoral, then your actions are immoral.
4Tyrrell_McAllister14y
In my experience, deontologists treat this as a feature rather than a bug. The absolute necessity that the rules never conflict is a constraint, which, they think, helps them to deduce what those rules must be.
3drnickbone12y
This assumes that deontological rules must be unbreakable, doesn't it? That might be true for Kantian deontology, but probably isn't true for Rossian deontology or situation ethics. We can, for instance, imagine a deontological system (moral code) with three rules A, B and C. Where A and B conflict, B takes precedence; where B and C conflict, C takes precedence; where C and A conflict, A takes precedence (and there are no circumstances where rules A, B and C all apply together). That would give a clear moral conclusion in all cases, but with no unbreakable rules at all. True, there would be a complex, messy rule which combines A, B and C in such a way as not to create exceptions, but the messy rule is not itself part of the moral code, so it is not strictly a deontological rule.
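A minimal sketch of that three-rule code, assuming (as stipulated) that rules A, B and C never all apply together:

```python
# drnickbone's three-rule code: pairwise precedence, no unbreakable rule.

PRECEDENCE = {
    frozenset({"A", "B"}): "B",  # where A and B conflict, B takes precedence
    frozenset({"B", "C"}): "C",  # where B and C conflict, C takes precedence
    frozenset({"C", "A"}): "A",  # where C and A conflict, A takes precedence
}

def rule_to_follow(applicable):
    applicable = frozenset(applicable)
    if len(applicable) == 1:
        return next(iter(applicable))  # no conflict: follow the lone rule
    return PRECEDENCE[applicable]      # two-way conflict: the pairwise winner

print(rule_to_follow({"A"}))       # -> A
print(rule_to_follow({"A", "B"}))  # -> B (so A is breakable)
print(rule_to_follow({"B", "C"}))  # -> C (so B is breakable)
print(rule_to_follow({"C", "A"}))  # -> A (so C is breakable)
```

Every rule loses somewhere, so none is unbreakable, yet every case covered by the stipulation gets a clear verdict.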
2Unknowns14y
All unbreakable rules in a deontological moral system are negative; you would never have one saying "protect the innocent". But you can have "don't lie" and "don't murder" and so on. And no, if you answer the question truthfully and thereby fail to protect the innocent, they don't count that as murdering (unless there was some other choice you could have made that involved neither lying nor failing to protect the person).
6Alicorn14y
This isn't necessarily the case. You can have positive requirements in a deontic system.
1Unknowns14y
Yes, but not "unbreakable" ones. In other words there will be exceptions on account of some other positive or negative requirement, as in the objections above.

The problem with unbreakable rules is that you're only allowed to have one.

"Allowed"?

It is quite common for moral systems found in the field to have multiple unbreakable rules and for subscribers to be faced with the bad moral luck of having to break one of them. The moral system probably has a preference on the choice, but it still condemns the act and the person.

5Alicorn14y
A really clever deontic theory either doesn't permit those conflicts, or has a meta-rule that tells you what to do when they happen. (My favored solution is to privilege the null action.) A deontic theory might take into account your probability assessments, or ideal probability assessments, regarding the likely outcome of your action. And of course if you're going to fully describe what a rule means, you have to define things in it like "lie", just as to fully describe utilitarianism you have to define "utility".
2Unknowns14y
It's true that the detail of real life is an objection to deontology, but it is also an objection to every other moral system, for much the same reasons.
9wedrifid14y
Yes. It may or may not cause the extinction of humanity but if you want to 'just... not lie' you can certainly do so.
5komponisto14y
Can a deontologist still care about consequences? Suppose you believe that lying is wrong for deontic reasons. Does it follow that we should program an AI never to lie? If so, can a consequentialist counter with arguments about how that would result in destroying the universe and (assuming those arguments were empirically correct) have a hope of changing your mind?
1Alicorn14y
A deontologist may care about consequences, of course. I think whether and how much you are responsible for the lies of an AI you create probably depends on the exact theory. And of course knowingly doing something to risk destroying the world would almost certainly be worse than lying-by-proxy, so such arguments could be effective.

+10 karma for you!

I have a bit of a negative reaction to deontology, but upon consideration the argument would be equally applicable to consequentialism: the prescriptions and proscriptions of a deontological morality are necessarily arbitrary, and likewise the desideratum and disdesideratum (what is the proper antonym? Edit: komponisto suggests "evitandum", which seems excellent) of a consequentialist morality are necessarily arbitrary.

...which makes me wonder if the all-atheists-are-nihilists meme is founded in deontological intuitions.

desideratum...(what is the proper antonym?)

"Evitandum"?

Sounds even better in the plural: "The evitanda of the theory..."

6Alicorn14y
Oh, I like that, it's adorable.
5Eliezer Yudkowsky14y
I initially associated this to "evidence" but I suppose it would be easy enough to learn.
0RobinZ14y
...how do you pronounce that? And what is the etymology? The only obvious source I can see is "evil", which is Germanic rather than Latinate. (A carping complaint, to be sure, but even if I fold on this one, I still maintain that many mismatched combinations - particularly "ombudsperson" - are abominations unto good taste.)
5komponisto14y
What Alicorn said. "Evitare" is Latin for "to avoid"; if "X-are" is a Latin verb meaning "to Y", then an "X-andum" is a "thing to be Y-ed".
2ABranco14y
"Avoidum" (pl. "avoida") could be an alternative — but "evitandum", having more syllables, does sound better.
0JohnWittle11y
I never came across that word during my four years of studying Latin. What declension is it?
0Richard_Kennaway11y
From my two years of studying Latin I know that evitandum is second declension neuter gender, being a gerund. In Latin the word can also be an adjective, in which case it is second declension and inflected for all genders. Cf. the English word "inevitable" = unavoidable.
3JohnWittle11y
err, I meant 'Avoidum'
4Richard_Kennaway11y
Ok, that's just a made-up mish-mash of English and Latin.
4Alicorn14y
From "evitable", which is the opposite of "inevitable" - so it means "thing to be avoided".
1RobinZ14y
All is clear! Approved! (Would have edited in, but no natural way to do so and preserve thread of conversation.) (Edit: Have edited into the parenthetical.)
3Breakfast14y
Certainly, many theists immediately lump atheism, utilitarianism and nihilism together. There are heaps of popular depictions framing utilitarian reasoning as being too 'cold and calculating' and not having 'real heart'. Which follows from atheists 'not having any real values' and from accepting the nihilistic, death-obsessed Darwinian worldview, etc.

I can perfectly understand the idea that lying is fundamentally bad, not just because of its consequences. My problem is understanding why that doesn't imply that something else can be bad because it leads to other people lying.

The only way I can understand it is that deontology is fundamentally egoist. It's not hedonist; you worry about things besides your well-being. But you only worry about things in terms of yourself. You don't care if the world descends into sin so long as you are the moral victor. You're not willing to murder one Austrian to save him from murdering six million Jews.

Am I missing something?

6Kevin14y
Hitler may not be the best example since it's not obvious to me that Hitler's death would have resulted in fewer lives lost during the genocides of the 20th century, because a universe without Hitler would have had a more powerful USSR.
1Strange714y
For that matter, Germany could've picked a different embittered, insane would-be dictator. They weren't in short supply.
0[anonymous]14y
I don't think your assessment is accurate, because of the following facts:

  • USSR actually ended up more powerful, enlarged, and with greater prestige in 1945 -- for the exact reason that Germany, its main strategic rival, went on to pursue a suicidal attack against it under Hitler.
  • The German-Soviet war itself opened the opportunities for genocidal and near-genocidal campaigns by both sides, especially Germans, and it would have to have been an awfully large decrease in Soviet-perpetrated genocide to balance that.
  • Not counting the deaths related to the military operations, the overwhelming number of killings done by Stalin had already been finished by 1941. After that, the situation under him was of course awfully bad, but there was nothing like the enormous, Holocaust-scale mass killing projects he undertook in the 1930s.
3Alicorn14y
If I know he's going to murder six million Jews, that's relevant. If I stab him because he took my parking space and for this reason, he does not go on to murder six million Jews, I have achieved no moral victory.
5cousin_it14y
I'm not sure this scenario enlightens me. It seems to be about available information rather than deontologism vs consequentialism. From the way you describe it, both the deontologist and the consequentialist will murder Hitler if they know he's going to become Hitler, and won't if they don't.
0Alicorn14y
The consequentialist will not in fact kill Hitler if they don't know he's Hitler, but it's part of their theory that they should.
[-][anonymous]13y110

That seems like a fairly useless part of consequential theory. In particular, when retrospecting about one's previous actions, a consequentialist should give more weight to the argument "yes, he turned out to become Hitler, but I didn't know that, and the prior probability of the person who took my parking space being Hitler is so low I would not have been justified in stabbing him for that reason" than "oh no, I've failed to stab Hitler". It's just a more productive thing to do, given that the next person who takes the consequentialist's parking space is probably not Stalin.

Real-life morality is tricky. But when playing a video game, I am a points consequentialist: I believe that the right thing to do in the video game is that which maximizes the amount of points I get at the end.

Suppose one of my options will be randomly chosen to lead to losing the game. I analyze the options and choose the one that has the lowest probability of being the one chosen to lose. It turns out I was unlucky and lost the game. Does that make my choice any less the right one? I don't believe that it does.
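Worked with invented numbers (the probabilities and point values are assumptions for illustration, not from the comment), the argument looks like this:

```python
# The video-game case: one of three options will be flagged as the loser,
# with the probabilities below. Picking the least likely loser maximizes
# expected points, even on the unlucky runs where it loses anyway.

p_loser = {"left": 0.5, "middle": 0.3, "right": 0.2}
win_points, lose_points = 100, 0

def expected_points(option):
    p = p_loser[option]
    return p * lose_points + (1 - p) * win_points

for option in p_loser:
    print(option, expected_points(option))
# left 50.0, middle 70.0, right 80.0: "right" is the right choice, and it
# stays the right choice even when the 20% case comes up and it loses.
```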

0[anonymous]14y
Same for the consequentialist, no?
0Jack14y
In Kantian deontology the actions of someone else can generate a positive obligation. In particular, you are obligated to punish those who violate the Categorical Imperative. You're definitely obligated to punish Hitler after the fact. Beforehand is trickier (but this is more a metaphysics-of-time issue than an ethical issue; you could probably make a case for timeless punishment under Kantian deontology).
3DanielLC14y
My point wasn't punishment. If I were to kill an innocent man to keep Hitler from getting into power, that would still save him from murdering millions, and there'd be a net decrease in the murder of innocent people. If anything, this version is even more clear-cut, since it's not clear if you're really saving someone if they end up dead. The only way you can justify not doing it is if you think it's more important for you not to be a murderer than for Hitler not to be one.
0lessdazed13y
Your excellent comment would have been improved by instead saying "murdering 11 million in concentration camps" or, better yet, "beginning a war that led to over fifty million dead".

Many consequentialist theories might have a special term in which it is bad (for people, to be sure) if a culture or a people is targeted and destroyed. If Poland had ~35 million people, including ~3 million Jews, and Hitler killed ~6 million Polish civilians, including nearly all the Jews, he did worse than if he had killed 6 million at random, as he killed ~6 million innocent people and one innocent people (sic). In another sense, there may have been similar suffering among Polish Jews and non-Jews (perhaps more aggregate suffering among non-Jews if the non-fatal suffering of the other Poles is included, but as a point of historical fact the average suffering of a Jew before death was probably greater than the average Pole's before death 1939-45). Perhaps killing a people isn't very bad, and our condemnation of it has to do with how hard it is to kill a people without killing people, the second of which is the important bad thing.

Similarly, the moral weight given to the mode of death and the capacity of the dead varies greatly between consequentialism and deontology, but a singular mention of the murdered somewhat indicates deontological thinking. How much worse is a murder than a killing (of a volunteer soldier? Of a draftee? Of a weapons manufacturing worker? Of a power plant worker? Of an apprentice florist who has nearly reached draft age)? Broadly speaking, when generalizing over consequentialism I wouldn't focus on those murdered, but on those who died. Doing so would have more clearly indicated that your point wasn't punishment.
1DanielLC13y
My reference to historical events would have been slightly more complete? Referencing historical events isn't important. I wasn't even so much referencing the event as referencing the fact that it always gets referenced. Hitler just happened to end up in the middle of a popular thought experiment. So?

My point is that, even if you accept that a given action is inherently bad, if it's bad for anyone to do it, it may be worthwhile for you to do it. It only works out as deontology if you assume that actions can only be bad if you're the one doing them. More specific thought experiments can show that it only works if they're only bad if you're doing them right at this very moment. If it was just bad to die, no deontologist would argue that there's anything wrong with killing one guy to keep him from killing another. I was assuming, for the sake of argument, that it was just murder that was bad.

Deontology treats morality as terminal. Consequentialism treats morality as instrumental.

Is this a fair understanding of deontology? Or is this looking at deontology through a consequentialism lens?

3Alicorn14y
This looks okay as an interpretation of deontology to me. This may be because it sounds like a nice thing to say about it, and a comparatively mean thing to say about consequentialism, but I can't claim to get consequentialism on an emotional level, so I guess I don't know what's considered mean to say about it.
8wedrifid14y
For comparison, as I read that it sounded like a mean thing to say about deontology and a neutral thing to say about consequentialism. This may be because I have internalized consequentialist thinking so consequentialist related things sound better. Or maybe it is because I naturally associate 'morality as terminal' with 'lies you tell people and stuff you try to force other people to do'.
0Alicorn14y
That's very interesting. If it happened in one direction - if morality being instrumental started out sounding good to you and bad to me - that could explain a lot of the apparent disconnect between consequentialists and non-.
6sark14y
I'm a consequentialist, and treating morality as terminal seems to me like missing the point of morality entirely. I'm glad I got it right that deontologists think that way. But I can't understand why you would consider treating morality as terminal correct. As a consequentialist I would say: "Morality concerns what you care about, not the fact that you care." What does the deontologist think of that?
5Alicorn14y
I'd say it doesn't matter if you care: you should do what's right anyway. Even psychopaths should do what's right.
7AndyWood14y
Does the question of "why" simply not enter into a deontologist's thinking? My mind seems to leap to complete the statement "you should do what's right" with something along the lines of "because society will be more harmonious". Also, I wish that psychopaths would do what's right, but what seems to be missing is any force of persuasion. And that seems important.
5Alicorn14y
We can have "whys", but they look a little different. Mine look like "because people have rights", mostly. Or "because I am a moral agent", looking from the other direction.
1AndyWood14y
I think one reason that so many people here are consequentialists is that these kinds of ideas do not hit bottom. I think LW attracts people who like to chase explanations down as far as possible to foundations. Do you yourself apply reductionism to morality?
2Alicorn14y
"Reductionism" is one of those words that can mean about seventeen things. I think rights/moral agency both drop out of personhood, which is a function of cognitive capacities, which I take to be software instantiated on entirely physical bases - does that count as whatever you had in mind?
3AndyWood14y
From Wikipedia: The bold part is plenty close enough to what I have in mind. Your response definitely counts. Next question that immediately leaps to mind is: how do you determine which formulations of morality best respect the personhood and rights of others, and best fulfill your duty as a moral agent?
1Alicorn14y
My theory isn't finished, so I can't present you with a list, or anything, but I've just summarized what I've got so far here.
2Jack14y
Have you given a description of your own ethical philosophy anywhere? If not, could you summarize your intuitions/trajectory? Doesn't need to be a complete theory or anything, I'm just informally polling the non-utilitarians here. (Any other non-utilitarians who see this feel free to respond as well)

I feel like I've summarized it somewhere, but can't find it, so here it is again (it is not finished, I know there are issues left to deal with):

Persons (which includes but may not be limited to paradigmatic adult humans) have rights, which it is wrong to violate. For example, one I'm pretty sure we've got is the right not to be killed. This means that any person who kills another person commits a wrong act, with the following exceptions: 1) a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong; 2) someone who has committed a contextually relevant wrong act, in so doing, forfeits eir contextually relevant rights. I don't yet have a full account of "contextual relevance", but basically what that's there for is to make sure that if somebody is trying to kill me, this might permit me to kill him, but would not grant me license to break into his house and steal his television.

However, even once a right has been waived or forfeited or (via non-personhood) not had in the first place, a secondary principle can kick in to offer some measure of moral protection. I'm calling it "the principle of needless destruction" ...
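One possible encoding of this structure as code (my reading, not Alicorn's formalization; "contextual relevance" is stubbed out because the account above explicitly leaves it unfinished):

```python
# Killing a person is wrong unless the right not to be killed was waived,
# or was forfeited via a contextually relevant wrong act.

def contextually_relevant(prior_wrong, act):
    # Illustrative stub only: treat a prior wrong as relevant when it is an
    # act of the same kind (e.g. killing forfeits the right against killing).
    return prior_wrong == act

def wrong_to_kill(victim_is_person, right_waived, victims_prior_wrongs):
    if not victim_is_person:
        return False  # rights attach to persons (secondary principles aside)
    if right_waived:
        return False  # e.g. uncoerced suicide or assisted suicide
    if any(contextually_relevant(w, "killing") for w in victims_prior_wrongs):
        return False  # e.g. killing an attacker who is trying to kill you
    return True

print(wrong_to_kill(True, False, []))           # -> True: plain murder
print(wrong_to_kill(True, True, []))            # -> False: right waived
print(wrong_to_kill(True, False, ["killing"]))  # -> False: right forfeited
```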

9Jack14y
Upvoted for spelling out so much, though I disagree with the whole approach (though I think I disagree with the approach of everyone else here too). This reads like pretty run-of-the-mill deontology -- but since I don't know the field that well, is there anywhere you differ from most other deontologists? Also, are rights axiomatic or is there a justification embedded in your concept of personhood (or somewhere else)?
3Alicorn14y
The quintessential deontologist is Kant. I haven't paid too much attention to his primary sources because he's miserable to read, but what Kant scholars say about him doesn't sound like what goes through my head. One place I can think of where we'd diverge is that Kant doesn't forbid cruelty to animals except inasmuch as it can deaden humane intuitions; my principle of needless destruction forbids it on its own demerits. The other publicly deontic philosopher I know of is Ross, but I know him only via a two-minute unsympathetic summary which - intentionally or no - made his theory sound very slapdash, like he has sympathies to the "it's sleek and pretty" defense of utilitarianism but couldn't bear to actually throw in his lot with it. The justification is indeed embedded in my concept of personhood. Welcome to personhood, here's your rights and responsibilities! They're part of the package.
7simplicio14y
Ross is an interesting case. Basically, he defines what I would call moral intuitions as "prima facie duties." (I am not sure what ontological standing he thinks these duties have.) He then lists six important ones: beneficence, honour, non-maleficence, justice, self-improvement and... goodness, I forget the 6th. But essentially, all of these duties are important, and one determines the rightness of an act by reflection - the most stringent duty wins and becomes the actual moral duty.

E.g., you promised a friend that you would meet them, but on the way you come upon the scene of a car crash. A person is injured, and you have first aid training. So basically Ross says we have a prima facie duty to keep the promise (honour), but also to help the motorist (beneficence), and the more stringent one (beneficence) wins.

I like about it that: it adds up to normality, without weird consequences like act utilitarianism (harvest the traveler's organs) or Kantianism (don't lie to the murderer).

I don't like about it that: it adds up to normality, i.e., it doesn't ever tell me anything I don't want to hear! Since my moral intuitions are what decides the question, the whole thing functions as a big rubber stamp on What I Already Thought. I can probably find some knuckle-dragging bigot within a 1-km radius who has a moral intuition that fags must die. He reads Ross & says: "Yeah, this guy agrees with me!" So there is a wrong moral intuition. On the other hand, before reading Peter Singer (a consequentialist), I didn't think it was obligatory to give aid to foreign victims of starvation & preventable disease; now I think it is as obligatory as, in his gedanken experiment, pulling a kid out of a pond right beside you (even though you'll ruin your running shoes). Ross would not have made me think of that; whatever "seemed" right to me, would be right.

I am also really, really suspicious of the a priori and the prima facie. It seems very handwavy to jump straight to these "duties"
1AlexanderRM9y
"The whole thing functions as a big rubber stamp on What I Already Thought" Speaking as a (probably biased) consequentialist, I generally got the impression that this was pretty much the whole point of Deontology. However, the example of Kant being against lying seems to go against my impression. Kantian deontology is based on reasoning things about your rules, so it seems to be consistent in that case. Still, it seems to me that more mainstream Deontology allows you to simply make up new categories of acts (ex. lying is wrong, but lying to murderers is OK) in order to justify your intuitive response to a thought experiment. How common is it for Deontologists to go "yeah, this action has utterly horrific consequences, but that's fine because it's the correct action", the way it is for Consequentialists to do the reverse? (again noting that I've now heard about the example of Kant, I might be confusing Deontology with "inuitive morality" or "the noncentral fallacy".)
3Jack14y
So I think I have pretty good access to the concept of personhood but the existence of rights isn't obvious to me from that concept. Is there a particular feature of personhood that generates these rights?
2Alicorn14y
That's one of my not-finished things, is spelling out exactly why I think you get there from here.
3lessdazed13y
Rather than take the "horrible consequences" tack, I'll go in the other direction. How possible is it that something can be deontologically right or wrong if that something is something no being cares about, nor do they care about any of its consequences, by any extrapolation of their wants, likes, conscious values, etc., nor should they think others care? Is it logically possible?
1Alicorn13y
You seem to answer your own question in the quote you chose, even though it seems like you chose it to critique my inconsistent pronoun use. If no being cares about something, nor wants others to care about it, then they're not likely to want to retain their rights over it, are they? The sentences in which I chose "ey" are generic. The sentences in which I used "he" are about a single sample person.
4lessdazed13y
So if they want to retain without interruption their right to, say, not have a symmetrical spherical stone at the edge of their lawn rotated without permission, they perforce care whether or not it is rotated? They can't merely want a right? Or if they want a right, and have a right, and they don't care to exercise the right, but want to retain the right, they can't? What if the only reason they care to prohibit stone turning is to retain the right? Does that work? Is there a special rule saying it doesn't? As part of testing theories to see when they fail rather than succeed, my first move is usually to try recursion. Least convenient possible world, please.

Regardless, you seem to believe that some other forms of deontology are wrong but not illogical, and believe consequentialist theories wrong or illogical. For example, a deontology otherwise like yours that valued attentiveness to evidence more you would label wrong and not illogical. I ask if you would consider a deontological theory invalid if it ignored wants, cares, etc. of beings, not whether or not that is part of your theory. If it's not illogical and merely wrong, then is that to say you count it among the theories that may be true, if you are mistaken about facts, but not mistaken about what is illogical and not?

I think such a deontology would be illogical, but am to various degrees unsure about the other theories, which is right and which wrong, and about the severity and number of wounds in the wrong ones. Because this deontology seems illogical, it makes me suspicious of its cousin theories, as it might be a salient case exhibiting a common flaw. I think it is more intellectually troubling than the hypothetical of committing a small badness to prevent a larger one, but as it is rarely raised presumably others disagree or have different intuitions.

I don't see the point of mucking with the English language and causing confusion for the sake of feminism if the end result is that singular sample murde
-10Alicorn13y
0Peterdjones11y
Kant's answer, greatly simplified, is that rational agents will care about following moral rules, because that is part of rationality.
3AngryParsley14y
Why those particular rights? It seems rather convenient that they mostly arrive at beneficial consequences and jive with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics. If you lived in a world where your system of rights didn't typically lead to beneficial consequences, would you still believe them to be correct?
5Alicorn14y
What do you mean, "these particular rights"? I haven't presented a list. I mentioned one right that I think we probably have. Oh, now, that was low. Do you mean: does Alicorn's nearest counterpart who grew up in such a world share her opinions? Or do you mean: if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context? They're different questions.
3AngryParsley14y
Yeah, but most people don't come up with a moral system that arrives at undesirable consequences in typical circumstances. Ditto for going against human intuitions/culture. Now I'm curious. Is your answer to them different? Could you please answer both of those hypotheticals? ETA: If your answer is different, then isn't your morality in fact based solely on the consequences and not some innate thing that comes along with personhood?
1Alicorn14y
Almost certainly, she does not. Otherworldly-Alicorn-Counterpart (OAC) has a very different causal history from me. I would not be surprised to find any two opinions differ between me and OAC, including ethical opinions. She probably doesn't even like chocolate chip cookie dough ice cream. No. However: after an adjustment period in which I became accustomed to the new world, my epistemic state about the likely consequences of various actions would change, and that epistemic state has moral force in my system as it stands. The system doesn't have to change at all for a change in circumstance and accompanying new consequential regularities to motivate changes in my behavior, as long as I have my eyes open. This doesn't make my morality "based on consequences"; it just means that my intentions are informed by my expectations which are influenced by inductive reasoning from the past.
4AngryParsley14y
I guess the question I meant to ask was: in a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?

A ridiculous example: if an orphanage exploded every time someone did nothing in a moral dilemma, wouldn't OAC be likely to invent a moral system saying inaction is more bad than action? Wouldn't OAC also likely believe that inaction is inherently bad? I doubt OAC would say, "I privilege the null action, but since orphanages explode every time we do nothing, we have to weigh those consequences against that (lack of) action."

Your right not to be killed has a list of exceptions. To me this indicates a layer of simpler rules underneath. Your preference for inaction has exceptions for suitably bad consequences. To me this seems like you're peeking at consequentialism whenever the consequences of your deontology are bad enough to go against your intuitions.
4Alicorn14y
It seems likely indeed that someone would do that. I think that in this case, one ought to go about getting the orphans into foster homes as quickly as possible. One thing that's very complicated and not fully fleshed out that I didn't mention is that, in certain cases, one might be obliged to waive one's own rights, such that failing to do so is a contextually relevant wrong act and forfeits the rights anyway. It seems plausible that this could apply to cases where failing to waive some right will lead to an orphanage exploding.
2Jack14y
Agreed. It is also rather convenient that maximizing preference satisfaction rarely involves violating anyone's rights and mostly jives with human intuitions. And that's because normative ethics is just about trying to come up with nice-sounding theories to explain our ethical intuitions.
6AngryParsley14y
Umm... torture vs dust specks is both counterintuitive and violates rights. Utilitarian consequentialists also flip the switch in the trolley problem, again violating rights. It doesn't sound nice or explain our intuitions. Instead, the goal is the most good for the most people.

I said:

maximizing preference satisfaction rarely involves violating anyone's rights and mostly jives with human intuitions.

Those two examples are contrived to demonstrate the differences between utilitarianism and other theories. They hardly represent typical moral judgments.

2wedrifid14y
Because she says so. Which is a good reason. Much as I have preferences for possible worlds because I say so.
2AndyWood14y
Thanks for writing this out. I think you'll be unsurprised to learn that this substantially matches my own "moral code", even though I am (if I understand the terminology correctly) a utilitarian.

I'm beginning to suspect that the distinction between these two approaches comes down to differences in background and pre-existing mental concepts. Perhaps it is easier, more natural, or more satisfying for certain people to think in these (to me) very high abstractions. For me, it is easier, more natural, and more satisfying to break down all of those lofty concepts and dynamics again, and again, until I've arrived (at least in my head) at the physical evolution of the world into successive states that have ranked value for us.

EDIT: FWIW, you have actually changed my understanding of deontology. Instead of necessarily involving unthinking adherence to rules handed down from on-high/outside, I can now see it as proceeding from more basic moral concepts.
0blacktrance10y
I find myself largely in agreement with most of this, despite being a consequentialist (and an egoist!).
0Alicorn10y
Where's the point of disagreement that makes you a consequentialist, then?
1blacktrance10y
Because while I agree that people have rights and that it's wrong to violate them, rights are themselves derived from consequences and preferences (via contractarian bargaining), and "rights" refers to what governments ought to protect, not necessarily what individuals should respect (though most of the time, individuals should respect rights). For example, though in normal life justice requires you* not to murder a shopkeeper and steal his wares, murder would be justified in a more extreme case, such as pushing a fat man in front of a trolley, because in that case you're saving more lives, which is more important.

My main disagreement, though, is that deontology (and traditional utilitarianism, and all agent-neutral ethical theories in general) fails to give a sufficient explanation of why we should be moral.

* By which I mean something like "in order to derive the benefits of possessing the virtue of justice". I'm also a virtue ethicist.
0TheAncientGeek10y
Consequentialism can override rules just where consequences can be calculated...which is very rarely.
-1wedrifid14y
Wow. You would try to stop me from saving the world. You are evil. How curious.
-2Alicorn14y
Why, what wrong acts do you plan to commit in attempting to save the world? Do you believe that the world's inhabitants have a right to your protection? Because if they do, that'll excuse some things.
1wedrifid14y
Evil and cunning. No! I shall not be revealing my secret anti-diabolical plans. Now is the time for me to assert with the utmost sincerity my devotion to a compatible deontological system of rights (and then go ahead and act like a consequentialist anyway).

Absolutely! Ok, give me some perspective here. Just how many babies' worth of excuse?

Consider this counterfactual: Robin has been working in secret with a crack team of biomedical scientists in his basement. He has fully functioning brain uploading and emulating technology at his fingertips. He believes wholeheartedly that releasing em technology into the world will bring about some kind of economist utopia, a 'subsistence paradise'. The only chance I have to prevent the release is to beat him to death with a cute little puppy. Would that be wrong?

Perhaps a more interesting question is: would it be wrong for you not to intervene and stop me from beating Robin to death with a puppy? Does it matter whether you have been warned of my intent? Assume that all you knew was that I assign a low utility to the future Robin seeks, Robin has a puppy weakness, and I have just discovered that Robin has completed his research. Would you be morally obliged to intervene?

Now, Robin is standing with his hand poised over the button, about to turn the future of our species into a hardscrapple dystopia. I'm standing right behind him wielding a puppy in a two-handed grip and you are right there with me. Would you kill the puppy to save Robin?
3Alicorn14y
Aw, thanks...? If there is in fact something morally wrong about releasing the tech (your summary doesn't indicate it clearly, but I'd expect it from most drastic actions Robin seems like he would be disposed to take), you can prevent it by, if necessary, murderously wielding a puppy, since attempting to release the tech would be a contextually relevant wrong act. Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.
0wedrifid14y
That is promising. Would you let me kill Dave too?
3Alicorn14y
If you're in the room with Dave, why wouldn't you just push the AI's reset button yourself?
-1wedrifid14y
See link. Depends on how I think he would update. I would kill him too if necessary.
-2wedrifid14y
I don't know about morals, but I hope it was clear that the consequences were assigned a low expected utility. The potential concern would be that your morals interfered with my seeking desirable future outcomes for the planet.
1sark14y
So morality is one shard of desire from godshatter which deontologists think matters a lot?
1Alicorn14y
I really don't think desire enters into it, except in certain very indirect ways.
0wedrifid14y
Yes, although a deontologist will likely not want to describe it that way. It makes their entire ethical framework look like a kludgy hack.
1Jack14y
I can't understand how anyone could think there was a fact of the matter about this. How could I possibly decide which was the better way to treat morality?
2AndyWood14y
I don't think there has to be an objective fact of the matter about it, in order for a person to find one way better. I don't endorse the use of the word correct here, but I would say that consequentialism seems to be more fundamental. Deontology seems to stop too soon. Because I prefer reductionist explanations in general, it's easy for me to decide that some form of consequentialism is "better", given my preference. I'm interested to learn the reasons why others find deontology more appealing.
1Jack14y
Got it. Fwiw, it seems to me that reduction is for natural objects not social constructions. You shouldn't try to apply reductionism to the rules of a sporting event, for example. (I'm not a deontologist but I don't think it is any less appealing than consequentialism.)
1AndyWood14y
It may be a different flavor of reductionism from finding out how a clock works, but I still apply a kind of reductionism to social constructions, and well, pretty much everything. Social constructions, for example, have histories - origins and evolution - that I greatly enjoy digging into. You can read about how basketball originated, and what its creator was thinking when he selected the rules.
1Jack14y
Sure, I see what you mean. But you can't just change the rules of basketball because you don't think they fit what the creator was trying to do. Similarly, the relevant reduction for ethics is cultural evolution and evolutionary psychology, but those fields of study won't tell you how to act. ETA: Can't, not can. Oops.
2DanielLC14y
I, a consequentialist, consider morality terminal. I take actions that result in moral consequences.
1sark14y
OK, but what does this morality you consider terminal consist of? And why do you take it to be the way you take it to be?

You can then extensionally define "renate" as "has a spinal column"

But what "renate" means intensionally has to do with kidneys, not spines.

I don't think this has been covered here yet, so for those not familiar with these two terms: inferring something extensionally means you infer it based on the set to which an object belongs. Inferring something intensionally means you infer it based on the actual properties of the object.

Wikipedia formulates these as

An extensional definition of a concept or term form

...
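In code, the contrast between the two kinds of definition looks something like this (the creatures and their properties are invented for illustration):

```python
# Extensional vs. intensional. The extensional stand-in defines "renate" by
# spine-possession, which happens to pick out the same actual creatures;
# the intensional definition is about kidneys. A merely possible
# spined-but-kidneyless creature pulls the two apart.

properties = {
    "rabbit": {"spine", "kidneys"},
    "trout":  {"spine", "kidneys"},
}

def renate_extensional(creature):  # the spine-based stand-in
    return "spine" in properties[creature]

def renate_intensional(creature):  # what "renate" means
    return "kidneys" in properties[creature]

# They agree on every actual case...
assert all(renate_extensional(c) == renate_intensional(c) for c in properties)

# ...until a possible counterexample is considered:
properties["gorgon"] = {"spine"}     # spined, kidneyless
print(renate_extensional("gorgon"))  # -> True  (the spine says yes)
print(renate_intensional("gorgon"))  # -> False (no kidneys)
```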
5Vladimir_Nesov14y
It was covered in Extensions and Intensions.
0Kaj_Sotala14y
You're right, I'd forgotten about that.
1arbimote14y
Are extensional and intensional definitions related to outside views and inside views? I suppose extensional definitions and the outside view are about drawing conclusions from a class of things, while the intensional definition and the inside view use specific details more unique to the thing in question.
0Kaj_Sotala14y
It seems to me that they are at least somewhat related. Recently, I've been wondering to what degree extensional/intensional definitions, the outside/inside view and the near/far view might be different ways of looking at the same two modes of reasoning. (I had a longer post on it in mind, and thought I had come up with something important, but now I've forgotten what the important part of it was. :-( )

It seems to me that this addresses two very different purposes for moral judgments in one breath.

When trying to draw a moral judgment on the act of another, what they knew at the time and their intentions will play a big role. But this is because I'm generally building a predictive model of whether or not they're going to do good in the future. By contrast, when I'm trying to assess my own future actions, I don't see what need concern me except whether act A or act B brings about more good.

3Nisan14y
A possible role for deontic morality is assigning blame or approbation. For this it is necessary to take the agent's intent and level of knowledge into consideration. Blaming or approving people for their actions is part of the enforcement process in society. I'm not trying to justify deontology; I'm observing that this feature and others (like the use of reference classes and the bit about keeping promises) make deontology well-suited to (or a natural feature of) prerational societies.
7Alicorn14y
It's usually a province of consequentialism to separate rightness/wrongness from praiseworthiness/blameworthiness. They come together in other accounts. Appropriating deontic rules for only the latter purpose isn't using deontic morality proper, it's using a deontic-esque structure for blame and praise alone.
8AndyWood14y
This point seems very important to me. I wonder how much disagreement is due to this, which I see as conflation. How should I act? and How should I assign blame/praise? are very different questions. For one thing, when asking how to assign blame/praise, the framework for deciding blame/praiseworthiness is obviously key. However, when asking how oneself should act, the agent will have any number of considerations, and how praise or blame will be assigned by others may be a small or non-existent factor, depending on the situation. In general, it seems like praisers and blamers will tend to be in a position of advocating for society, and actors will tend to be in a position of advocating for their individual interests. Is there some motivation for wanting to unify these differing angles under one framework?
4Alicorn14y
Perhaps germane to the distinction, at least for me, is that I find myself more interested in how to avoid blame and seek praise, so conflating them lets me figure out how to do that and also how to do what I should do simultaneously. How one should assign blame/praise and how those things will in fact be assigned, however, are almost completely unrelated. One could be both blamed and unblameworthy, or praised and unpraiseworthy.
3RobinZ14y
You are a consequentialist. Your reply is precisely accurate, complete, and well-reasoned from a consequentialist perspective, but misses the essential difference between consequentialism and deontology. Edit: Quoting the OP:

For those curious about what kind of case can be made for deontology vs. consequentialism:

-1drnickbone12y
A big issue I have with act utilitarianism is the way it self-destructs pragmatically. It looks like better consequences will arise if we teach a form of deontology, reward or punish people who (respectively) follow or break the moral rules, call it "right" to follow the rules and "wrong" to break them, etc. So a true act consequentialist will encourage everyone to become a deontologist (and to the extent others copy him, will act like a deontologist).

"Rule utilitarianism" seems immune from this problem, though arguably rule utilitarianism is a form of deontology; it just has an underlying rationale for selecting a particular set of rules (i.e. the optimal moral code).

A different objection is that it is simply too demanding: the best way for me to maximize utility is to give nearly all my money to humanitarian charities, so why aren't I doing that? (Answer: because my personal utility function has very weak correlation with a global additive or average utility function; though it does seem to have a strongly weighted component towards me personally following deontological rules. Funny, that.)

Deontological arguments (apart from helping with "running on corrupted hardware") are useful for the compression of moral values. It's much easier to check your one-line deontology, than to run a complicated utility function through your best estimate of the future world.

A simple "do not murder" works better, for most people, than a complex utilitarian balancing of consequences and outcomes. And most deontological aguments are not rigid; they shade towards consequentialism when the consequences get too huge:

  • Freedom of speech is absolu
...
6Wei Dai14y
I think what you're talking about isn't deontology, but rule utilitarianism.
6thomblake14y
"rule utilitarianism" collapses into deontology or regular utilitarianism when pushed; otherwise, it's inconsistent. Though it is generally accepted by utilitarians that acting according to general rules will in practice generate more utility than trying to reason about every situation anew.
1Stuart_Armstrong14y
Possibly; are they distinguished in practice?

extensional definitions are terribly unsatisfactory

True enough, but it's worth noting that what we have here (between a deontological theory and its 'consequentialized' doppelganger) is necessary co-extension. Less chordates and renates, more triangularity and trilaterality. And it's philosophically controversial whether there can be distinct but necessarily co-extensive properties. (I think there can be; but I just thought this was worth flagging.)

1Jack14y
Good point. If we do another survey (and it is about time) I'd like to know how people here stand on the existence of abstract objects (universals, types, etc.)
-4CannibalSmith14y
Abstract objects exist in my mind. The end.
-1[anonymous]14y
No they don't. The end.

What was the difference between hedonism and preferentialism again?

8wedrifid14y
Give people pleasure vs. give people what they really want.

Typo: "prise apart" not "prize apart".

EDIT: another typo: "tl;dr" at the start of the post. Please consider getting rid of this habit. Your writing is, as a rule, improved by moving your main point to the top, and this reader appreciates your doing that; the cutesy Internetism is an needless distraction.

5wedrifid14y
Typo: "a" not "an". I too find "tl;dr" irritating. It is entirely unintuitive and looks like a rendering error. Too Obfuscated; Don't Decode. ETA Typo: missing 'is' ;)
4Alicorn14y
Is there a suitable substitute for tl;dr that you would find less distracting? I do want to signal "this is an ultra-short summary" to avoid people interpreting it as part of the "flow" of the whole article.

Signaling might not be necessary, as your summary normally serves as a "hook" to draw readers into the body of the article.

That said, you could italicize or bold (my preference) the summary, or set it off from the body with a horizontal rule.

5Alicorn14y
Italicized. Thanks for your input.
7Vladimir_Golovin14y
I recently had a need for such a substitute to summarize a long email to an extremely busy, non-chatty, high-status person. I went with "In a nutshell", and it worked -- I got a nice reply. (TL;DR is perfectly fine with me, but I don't think it's appropriate when addressing people who are unlikely to keep up with the latest Internet slang.)
6HughRistik14y
The more academic substitute is "abstract." Not that I have anything against good ol' TL;DR.
5komponisto14y
How about "(Ultra-Short) Summary:..."?
-1k3nt14y
tl;dr to me indicates something you say about somebody else's post (which you didn't bother to read because you found it too long). Used w/r/t one's own post it's very confusing. I use "Shorter me:" for what that's worth.
1arbimote14y
I personally don't mind "tl;dr", but I agree that where practical it is best to use language that will be understood by as wide an audience as possible. (Start using "tl;dr" again when it becomes mainstream :) )
1wedrifid14y
Please don't. I need to budget my downvotes!
1Alicorn14y
Thanks, I'll fix it ^^
0MrHen14y
"Prise" is a variant spelling of "prize" in my dictionary. Are we looking for the word "pry"?
0Morendil14y
Dunno - I didn't actually check the dictionary, just a Google search for relative frequency of "prize apart" which I found jarring, vs. "prise apart" which sounded no alarm. The first mostly appears with "prize" being a noun not a verb, so I supposed my gut feel was correct. Call that the Language Log method. ;) The dictionary method does suggest "prize apart" is also correct, if less common. Looks like I made a wrong call.
2MrHen14y
I looked at it in more detail and it appears that "prise" is a valid variant of "prize" only when using it as a synonym of "pry." So... that is a little confusing but now I know something new. :) Dictionary.com
-4CannibalSmith14y
This is the Internet. Nobody says "abstract" or "summary" on the Internet.

I think your definition of consequentialism (and deontology) is too broad because it makes some contractarian theories consequentialist. In "Equality," Nagel argues that the rightness of an act is determined by the acceptability of its consequences for those to whom they are most unacceptable. This is similar to Rawls's view that inequalities are morally permissible if they result in a net-benefit to the most disadvantaged members of society. These views are definitely deontological (and self-labeled as such), and since consequentialism and deont...

1Alicorn14y
I haven't made a close study of Rawls, but what I know inclines me towards an interpretation under which the difference principle is a prediction about what agents would agree to behind the veil of ignorance, and only via their agreeing upon it does it gain moral force.

I don't think they are necessarily either of these things. You can have considerable overlap - even doppelgangers blur the lines - and you're neglecting virtue ethics, which doesn't have a clear allegiance with either.

This neglects satisficing theories, and (depending on how strict you mean this to be) theories that talk about things other than acts or rules. Defining deontology in terms of consequentialism is something I'd like to avoid.

If I understand you, you're claiming that the "justification" for a deontological principle need not be phrased in terms of consequences, and consequentialists fail to acknowledge this. But can't it always be re-phrased this way?

I prefer to inhabit worlds where I don't lie [deontological]. Telling a lie causes the world to contain a lying version of myself [definition of "cause"]. Therefore, lying is wrong [consequentialist interpretation of preference violation].

This transformation throws away the original justification, but from a...
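The doppelganger move itself is easy to write down as a higher-order function, which also makes vivid how little it adds (a sketch; the toy theory is invented):

```python
# Given any theory X, expressed as a predicate over acts, produce a
# "consequentialized" theory that tells you to bring about the result of
# following X. By construction the two say yes and no to the same acts.

def doppelganger(theory_x):
    def consequentialized(act):
        # Step 1: what would the world look like if I followed theory X?
        # Step 2: act so as to bring that world about, i.e. permit exactly
        # the acts X permits. The verdicts are inherited wholesale.
        return theory_x(act)
    return consequentialized

never_lie = lambda act: act != "lie"      # a toy deontic theory
never_lie_twin = doppelganger(never_lie)  # its consequentialist twin

for act in ("lie", "tell the truth"):
    assert never_lie(act) == never_lie_twin(act)  # extensionally equivalent
```

The wrapper inherits its verdicts wholesale from X, which is the point under dispute in this thread: the extensional match carries no information about what X is actually tracking.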

2Alicorn14y
Yes, you can doppelganger any deontic theory you want. And from the perspective of a consequentialist who doesn't care about annoying eir deontologist friends, the doppelganger is just as good, probably better. But it misses the deontologist's point.
3Wei Dai14y
As someone who has no deontologist friends, should I bother reading this post?

Deontologists are common. Someday, you may need to convince a deontologist on some matter where their deontology affects their thinking. If you are ignorant about an important factor in how their mind works, you will be less able to bring their mind to a state that you desire.

I find this answer strange. There are lots of Christians, but we don't do posts on Christian theology in case we might find it useful to understand the mind of a Christian in order to convince them to do something.

Come on, why did Alicorn write a post on deontology without giving any explanation why we should learn about it? What am I missing here? If she (or anyone else) thinks that we should put some weight into deontology in our moral beliefs, why not just come out and say that?

Well, apart from the fact that it looked like people wanted me to write it, I'm personally irritated by the background assumption of consequentialism on this site, especially since it usually seems to come from incomprehension more than anything else. People phrasing things more neutrally, or at least knowing exactly what it is they're discarding, would be nice for me.

1Wei Dai14y
Thanks. I suggest that you write a bit about the context and motivation for a post in the post itself. I skipped most of the cryonics threads and never saw the parts where people talked about deontology, so your post was pretty bewildering to me (and to many others, judging from the upvotes my questions got).
8Tyrrell_McAllister14y
How often do you need to convince a Christian to do something where their Christianity in particular is important? That is, how often does it matter that their worldview is Christian specifically, rather than some other mysticism? The more often you need to do that, the more helpful it is to understand the Christian mindset specifically. But I can easily imagine that you will never need to do that.

In contrast, it seems much more likely that you will someday need to convince a deontologist to do something that they perceive as somehow involving duty. You will be better able to do that if you better understand how their concept of duty works.

The purpose of this site is to refine the art of human rationality. That requires knowing how humans are, and many humans are deontologists. If there were something specific to Christianity that made certain techniques of rationality work on it and only it, then time spent understanding Christianity would be time well-spent. It seems to me, though, that general remedies, such as avoiding mysterious answers to mysterious questions, do as well as anything targeted specifically at Christianity. So it happens that there is little to be gained from discussing the particulars of the Christian worldview.

Deontology, however, seems more like the illusion of free will* than like Christianity. Deontology has something to do with how a large number of people conceive of human action at a very basic level. Part of refining human rationality is improving how humans conceive of their actions. Since so many of them conceive of their action deontologically, we should understand how deontology works.

----------------------------------------

*. . . the illusion of the illusory kind of free will, that is.
4Jack14y
1. I'm pretty sure I remember a couple of comments suggesting this topic.
2. I can't speak for Alicorn, but I'll come out and say that I think the metaethics sequence is the weakest of the sequences and the widespread preference utilitarianism here has not been well justified. I'm not a deontologist, but I think understanding the deontologist perspective will probably lead to less wrong thinking about ethics.
2Alicorn14y
Yes, there was some enthusiasm about the topic here.
4wedrifid14y
Yes. If (for example) some well-meaning fool makes a utilitarian-friendly AI, then there will be a super-intelligence at large maximizing "aggregative total equal-consideration preference consequentialism" across all living humans. Being able to understand how deontologists think will better enable you to predict how their deontological beliefs will be resolved to preferences by the utilitarian AI. It may be that the best preference translation of a typical deontologist belief system turns out to be something that gives rise in aggregate to a dystopia. If that is the case, you should engage in the mass murder of deontologists before the run button is pressed on the AI.

I also note that as I wrote "you should engage in mass murder" I felt bad. This is despite the fact that the act has extremely good expected consequences in the hypothetical situation. Part of the 'bad feeling' I get for saying that is due to inbuilt deontological tendencies, and part is because my intuitions anticipate negative social consequences for making such a statement, due to deontological ethical beliefs being more socially rewarded. Both of these are also reasons that reading the post and understanding the reasoning that deontologists use may turn out to be useful.
loqi (0, 14y)
I didn't think this was the sort of doppelgangering you were talking about. I'm not trying to ascribe additional consequentialist justifications; I'm just jettisoning the entire justification and calling a preference a spade. If the deontologist's point is that (some of) their preferences somehow possess extra justification, then they've already succeeded in annoying me with their meaningless moral grandstanding.

If Anton Chigurh delivers an eloquent defense of his personal philosophy, it won't change my opinion of his moral status. This doesn't seem related to my consequentialist outlook - if your position is that "murder is always wrong, all of the time", I would expect a similar reaction.

I feel like I'm still missing whatever it is that your post is trying to convey about the "deontologist's point". What is the point of deontological justification? The vertebrate/renate example doesn't do it for me, because there's a clear way to distinguish between the intensional and extensional definitions: postulate a creature with a spine and no kidneys. Such an organism seems at least conceivable. But I don't see what analogous recourse a deontologist has when attempting to make this distinction. It all just reduces to a chain of "because if"s that terminates with preferences. Even in the case of "X is only wrong if the agent performing X is aware it leads to outcome Y", a preference over the rituals of cognition employed by another agent is still a preference. It just seems like an awfully weird one.
Alicorn (+1, 14y)
I find your complaints a bit slippery to get a hold of, so I'm going to say some things that floated into my brain while I read your comment and see if that helps.

A preference is one sort of thing that a deontic theory can take into account when evaluating an action. For instance, one could hold that a moral right can be waived by its holder at eir option: this takes into account someone's preference. But it is only one type of thing that could be included. There is no special reason to privilege preferences as an excellent place to stop when justifying a moral theory. They're unusually actionable, which makes theories that stop there more usable than theories that stop in some other places, but they are not magic. The fact that stopping where deontologists like to stop (I'm fond of "personhood", myself) does not come naturally to you does not make deontology an inherently bizarre system in comparison to consequentialism.
loqi (+6, 14y)
But I don't see preference as justifying a moral theory; I see it as explaining a moral theory. I don't see how a moral theory could possibly be justified; the concept appears nonsensical to me. About the closest thing I can make sense of would be soundly demonstrating that one's theory doesn't contradict itself.

Put another way, I can imagine invalidating a moral theory by demonstrating the lack of a necessary condition (like consistency), but I can't imagine validating the theory by demonstrating the presence of a "sufficient" condition.
Alicorn (+3, 14y)
Perhaps you can tell me a little about your ethical beliefs so I know where to start when trying to explain?
loqi (-1, 14y)
No real framework to speak of. Hanson's efficiency criterion appeals to me as a sort of baseline morality. It's hard to imagine a better first-order attack on the problem than "everyone should get as much of what they want as possible", but of course one can imagine an endless stream of counter-examples and refinements. I presumably have most standard human "pull the child off the tracks" sorts of preferences.

I'm not sure I know what you're looking for. Unusual moral beliefs or ethical injunctions? I think lying is simultaneously

  • Despicable by default
  • Easily justified in the right context
  • Usually unpleasant to perform even when feeling justified in doing so, but occasionally quite enjoyable

if that helps.
Alicorn (+1, 14y)
I'm not sure what to do with that as stated at all, I'm afraid. But "as possible" seems like a load-bearing phrase in the sentence "everyone should get as much of what they want as possible", because this isn't literally possible for everyone simultaneously (two people could desire the same thing, such that either of them could get it, but not both), and you have to have some kind of mechanism to balance contradictory desires. What mechanism looks right to you?
loqi (+1, 14y)
Agreed, "as possible" is quite heavy, as is "everyone". But it at least slightly refines the question "what's right?" to "what's fair?". Which is still a huge question.

The quasi-literal answer to your question is: a Voronoi diagram. It looks right - I don't quite know what it means in practice, though. In general, the further a situation is from my baseline intuitions concerning fairness and respect for apparent volition, the weaker my moral apprehension of it is. Life is full of trade-offs of wildly varying importance and difficulty. I'd be suspicious of any short account of them.
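(For concreteness, here is a minimal sketch of the Voronoi intuition, assuming, purely for illustration, that each person's desires can be summarized as a point in some space of outcomes, and that each contested outcome goes to whoever is nearest to it. The function name, the coordinates, and the two-person example are all made up.)

    from math import dist  # Euclidean distance, available since Python 3.8

    def voronoi_allocate(desires, outcomes):
        # Assign each contested outcome to the person whose desire-point
        # is nearest; the cell boundaries of this partition form the
        # Voronoi diagram of the desire-points.
        allocation = {name: [] for name in desires}
        for outcome in outcomes:
            nearest = min(desires, key=lambda name: dist(desires[name], outcome))
            allocation[nearest].append(outcome)
        return allocation

    # Two people who want nearly the same thing split the contested
    # outcomes along the boundary halfway between their desire-points.
    print(voronoi_allocate(
        {"a": (0.0, 0.0), "b": (1.0, 0.0)},
        [(0.2, 0.1), (0.8, -0.1), (0.4, 0.5)],
    ))
    # -> {'a': [(0.2, 0.1), (0.4, 0.5)], 'b': [(0.8, -0.1)]}

All the hard parts of fairness are hidden in the choice of the space and the distance metric, which is roughly why it's hard to say what the diagram means in practice.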
bogus (0, 14y)
Good point. There is a lot of fuzziness around "preferences", "ethics", "aesthetics", "virtues" etc. Ultimately all of these seem to involve some axiological notion of "good", or "the good life", or "good character" or even "goods and services". For instance, what should we make of the so-called "grim aesthetic"? Is grimness a virtue? Should it count as an ethic? If not, why not?
loqi (0, 14y)
The second virtue is relinquishment: I think the necessary and sufficient conditions for "grimness" are found there.

I very much appreciated reading this article.

As a general comment, I think that this forum falls a bit too much into groupthink. Certain things are assumed to be correct that have not been well argued. A presumption that utilitarianism of some sort or another is the only even vaguely rational ethical stance is definitely one of them.

Not that groupthink is unusual on the internet, or worse here than elsewhere! Au contraire. But it's always great to see less of it, and to see it challenged where it shows up.

Thanks again for this, Mr. Corn.

Jack (+4, 14y)
That's Ms. Corn.

Please, call me Ali. Ms. Corn is my mother.

...No, seriously, folks, it's a word; abbreviating it doesn't make sense. "Alicorn".

k3nt (+5, 14y)
I was making a silly, foolish joke and didn't even think about how obviously I would be opening myself up to charges (by myself if not others) of implicit sexism. Sigh. I'm so busted.
[anonymous] (+3, 11y)
What else does one abbreviate?
Alicorn (+1, 11y)
Phrases. Names. Long words.
MrHen (+3, 14y)
I have a similar reaction when people call me Mr. Hen. My name isn't actually Hen. I just thought it was a funny oxymoron and periods aren't normally exacted in usernames. And I meant "accepted," not "exacted." I think I need some sleep.
[anonymous] (0, 14y)
Shucks. (Get it? Eh? Eh? Aw... nevermind. I sorry.)
mattnewport (0, 14y)
We're not all utilitarians. It does seem to be a bafflingly popular view here but there are dissenting voices.
Jack (0, 14y)
Where do you stand? (If you can explain without a full-page essay.) I'm something of a utilitarian skeptic as well... I'd like to see if the rest of us have overlapping views.
Morendil (+2, 14y)
My own ethical position is easy to state: confused. ;)

There should be a post, intended for people about to embark on the topic of ethics and metaethics, providing guidance on even figuring out what your current intuitions are, and where they position you on the map of the standard debates.

My (post-school) readings on the topic have included Singer's Practical Ethics and Rawls' Theory of Justice. I was definitely more impressed and influenced by the latter. If pressed, I would call myself a contractarian. (Being French, I had early encounters with Rousseau, but I don't remember those with any precision.)

I'm skeptical of the way "utility function" is often used, as a lily-gilding equivalent of "what I want". I'm skeptical that interpersonal comparisons of utility have any value, such that "my utility function", assuming there is such a thing, can be meaningfully aggregated with "your utility function". Thus I'm skeptical that utility provides a useful guide to moral decisions.
mattnewport (+1, 14y)
I'll try to summarize, but my position isn't fully worked out, so this is just a rough outline. I think it's important to distinguish the descriptive and prescriptive/normative elements of any moral or ethical theory. That distinction sometimes seems to get lost in discussions here.

Descriptively, I think that what human morality actually is is a system of biologically and culturally evolved rules, principles and dispositions that have tended to lead to reproductive success. The details of what those rules are are largely an empirical question, but most people have some reasonable insight into them based on being human and living in human society. There is a naive view of evolution that fails to understand how behaviour that we would generally call altruistic, or that is not immediately obviously self-interested, can be explained in such a framework. Hopefully most people here don't explicitly hold such a view, but it seems that remnants of it still infect the moral thinking of people who 'know better'. I think if you look at human society from a game-theoretic / evolutionarily stable strategy perspective, it becomes fairly clear that most of what we call altruism or non-self-interested behaviour makes good sense for self-interested agents. I don't believe there is any mystery about such behaviour that needs to be explained by invoking some view of morality or ethics that is not ultimately rooted in evolutionary success.

Prescriptively, I think that people should behave so as to maximize their own self-interest, where self-interest is understood in the broad sense that can account for altruism, self-sacrifice and other elements that a naive interpretation of self-interest would miss. In this sense I am something of an ethical egoist. That's a simple basis for morality, but it does not necessarily produce a simple answer to any particular moral question. On top of this basic principle of morality, I have an ethical framework that I believe would tend to produce good results...
[anonymous] (-10, 14y)

And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory. (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again. And then you can put it back under consequentialism again: "and

...

How about a post on understanding consequentialism for us deontologists? :-)

Wikipedia defines deontological ethics as an "approach to ethics that judges the morality of an action based on the action's adherence to a rule or rules."

This definition implies that the scientific method is a deontological ethic. It's called the "scientific method", after all, not the "scientific result".

The scientific method is rule-based. Therefore, if there is not a significant overlap between the consequentialist and deontologist approaches, then...

wedrifid (+8, 14y)
Before anyone replies to this, could you please confirm whether you are actually trying to make a serious point or just being facetious? You are conflating issues all over the place in ways that don't really seem to make sense.
RobinZ (+1, 14y)
Most of the vocal population here are consequentialists - if there proves to be widespread interest, such a post may appear at a later date.