loqi comments on Deontology for Consequentialists - Less Wrong

46 points. Post author: Alicorn 30 January 2010 05:58PM


Comment author: loqi 30 January 2010 10:05:34PM 1 point

If I understand you, you're claiming that the "justification" for a deontological principle need not be phrased in terms of consequences, and consequentialists fail to acknowledge this. But can't it always be re-phrased this way?

I prefer to inhabit worlds where I don't lie [deontological]. Telling a lie causes the world to contain a lying version of myself [definition of "cause"]. Therefore, lying is wrong [consequentialist interpretation of preference violation].

This transformation throws away the original justification, but from a consequentialist perspective that justification is only as relevant as an evolutionary explanation for current human preferences - the preference is what matters, its origin is incidental.

If you've ever run across the concept of a meta-circular interpreter, this seems akin to "bootstrapping" a new language using an existing one. The first interpreter you write is a complete throw-away, as its only purpose is to boost you up to another, self-sustaining level of abstraction.
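The bootstrapping idea above can be sketched concretely. Below is a minimal illustration (my own, not from the original comment, and in Python rather than a Lisp): a tiny interpreter for prefix arithmetic written in a host language. Once the guest language grew expressive enough, you could rewrite this interpreter *in* the guest language and throw the host-language version away, which is the throw-away-first-interpreter pattern being described.

```python
def evaluate(expr):
    """Evaluate a nested-tuple expression like ('+', 1, ('*', 2, 3))."""
    if isinstance(expr, (int, float)):
        return expr  # literals evaluate to themselves
    op, *args = expr
    vals = [evaluate(a) for a in args]  # evaluate sub-expressions recursively
    if op == '+':
        return sum(vals)
    if op == '*':
        product = 1
        for v in vals:
            product *= v
        return product
    raise ValueError(f"unknown operator: {op}")

print(evaluate(('+', 1, ('*', 2, 3))))  # → 7
```

The analogy in the comment maps the deontological justification to this host-language interpreter: it does real work getting you started, but once the preference is in place, it can be discarded.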

Comment author: Alicorn 30 January 2010 10:15:18PM 2 points

Yes, you can doppelganger any deontic theory you want. And from the perspective of a consequentialist who doesn't care about annoying eir deontologist friends, the doppelganger is just as good, probably better. But it misses the deontologist's point.

Comment author: Wei_Dai 31 January 2010 03:05:07AM 2 points

> And from the perspective of a consequentialist who doesn't care about annoying eir deontologist friends, the doppelganger is just as good, probably better.

As someone who has no deontologist friends, should I bother reading this post?

Comment author: Tyrrell_McAllister 31 January 2010 05:50:37AM 8 points

Deontologists are common. Someday, you may need to convince a deontologist on some matter where their deontology affects their thinking. If you are ignorant about an important factor in how their mind works, you will be less able to bring their mind to a state that you desire.

Comment author: Wei_Dai 31 January 2010 07:07:09AM 10 points

I find this answer strange. There are lots of Christians, but we don't do posts on Christian theology in case we might find it useful to understand the mind of a Christian in order to convince them to do something.

Come on, why did Alicorn write a post on deontology without giving any explanation why we should learn about it? What am I missing here? If she (or anyone else) thinks that we should put some weight into deontology in our moral beliefs, why not just come out and say that?

Comment author: Tyrrell_McAllister 31 January 2010 08:11:02AM 6 points

> I find this answer strange. There are lots of Christians, but we don't do posts on Christian theology in case we might find it useful to understand the mind of a Christian in order to convince them to do something.

How often do you need to convince a Christian to do something where their Christianity in particular is important? That is, how often does it matter that their worldview is Christian specifically, rather than some other mysticism? The more often you need to do that, the more helpful it is to understand the Christian mindset specifically. But I can easily imagine that you will never need to do that.

In contrast, it seems much more likely that you will someday need to convince a deontologist to do something that they perceive as somehow involving duty. You will be better able to do that if you better understand how their concept of duty works.

The purpose of this site is to refine the art of human rationality. That requires knowing how humans are, and many humans are deontologists.

If there were something specific to Christianity that made certain techniques of rationality work on it and only it, then time spent understanding Christianity would be time well-spent. It seems to me, though, that general remedies, such as avoiding mysterious answers to mysterious questions, do as well as anything targeted specifically at Christianity. So it happens that there is little to be gained from discussing the particulars of the Christian worldview.

Deontology, however, seems more like the illusion of free will* than like Christianity. Deontology has something to do with how a large number of people conceive of human action at a very basic level. Part of refining human rationality is improving how humans conceive of their actions. Since so many of them conceive of their actions deontologically, we should understand how deontology works.


*. . . the illusion of the illusory kind of free will, that is.

Comment author: Alicorn 31 January 2010 03:14:31PM 9 points

Well, apart from the fact that it looked like people wanted me to write it, I'm personally irritated by the background assumption of consequentialism on this site, especially since it usually seems to come from incomprehension more than anything else. People phrasing things more neutrally, or at least knowing exactly what it is they're discarding, would be nice for me.

Comment author: Wei_Dai 31 January 2010 07:55:51PM 0 points

Thanks. I suggest that you write a bit about the context and motivation for a post in the post itself. I skipped most of the cryonics threads and never saw the parts where people talked about deontology, so your post was pretty bewildering to me (and to many others, judging from the upvotes my questions got).

Comment author: Jack 31 January 2010 08:13:40AM 4 points
  1. I'm pretty sure I remember a couple of comments suggesting this topic.

  2. I can't speak for Alicorn, but I'll come out and say that I think the metaethics sequence is the weakest of the sequences, and the widespread preference utilitarianism here has not been well justified. I'm not a deontologist, but I think understanding the deontologist perspective will probably lead to less wrong thinking about ethics.

Comment author: Alicorn 31 January 2010 03:12:51PM 1 point

Yes, there was some enthusiasm about the topic here.

Comment author: wedrifid 31 January 2010 07:39:22AM 3 points

> As someone who has no deontologist friends, should I bother reading this post?

Yes. If (for example) some well-meaning fool makes a utilitarian-friendly AI, then there will be a super-intelligence at large maximizing "aggregative total equal-consideration preference consequentialism" across all living humans. Being able to understand how deontologists think will better enable you to predict how their deontological beliefs will be resolved into preferences by the utilitarian AI. It may be that the best preference translation of a typical deontologist belief system turns out to be something that gives rise, in aggregate, to a dystopia. If that is the case, you should engage in the mass murder of deontologists before the run button is pressed on the AI.

I also note that as I wrote "you should engage in mass murder" I felt bad, despite the fact that the act has extremely good expected consequences in the hypothetical situation. Part of the 'bad feeling' I get for saying that is due to inbuilt deontological tendencies, and part is because my intuitions anticipate negative social consequences for making such a statement, since deontological ethical beliefs are more socially rewarded. Both of these are also reasons that reading the post and understanding the reasoning that deontologists use may turn out to be useful.

Comment author: loqi 30 January 2010 11:04:51PM 1 point

I didn't think this was the sort of doppelgangering you were talking about. I'm not trying to ascribe additional consequentialist justifications, I'm just jettisoning the entire justification and calling a preference a spade. If the deontologist's point is that (some of) their preferences somehow possess extra justification, then they've already succeeded in annoying me with their meaningless moral grandstanding.

If Anton Chigurh delivers an eloquent defense of his personal philosophy, it won't change my opinion of his moral status. This doesn't seem related to my consequentialist outlook - if your position is that "murder is always wrong, all of the time", I would expect a similar reaction.

I feel like I'm still missing whatever it is that your post is trying to convey about the "deontologist's point". What is the point of deontological justification? The vertebrate/renate example doesn't do it for me, because there's a clear way to distinguish between the intensional and extensional definitions: postulate a creature with a spine and no kidneys. Such an organism seems at least conceivable. But I don't see what analogous recourse a deontologist has when attempting to make this distinction. It all just reduces to a chain of "because if"s that terminates with preferences. Even in the case of "X is only wrong if the agent performing X is aware it leads to outcome Y", a preference over the rituals of cognition employed by another agent is still a preference. It just seems like an awfully weird one.

Comment author: Alicorn 30 January 2010 11:14:25PM 1 point

I find your complaints a bit slippery to get ahold of, so I'm going to say some things that floated into my brain while I read your comment and see if that helps.

A preference is one sort of thing that a deontic theory can take into account when evaluating an action. For instance, one could hold that a moral right can be waived by its holder at eir option: this takes into account someone's preference. But it is only one type of thing that could be included.

There is no special reason to privilege preferences as an excellent place to stop when justifying a moral theory. They're unusually actionable, which makes theories that stop there more usable than theories that stop in some other places, but they are not magic. The fact that stopping in the places deontologists like to stop (I'm fond of "personhood", myself) does not come naturally to you does not make deontology an inherently bizarre system in comparison to consequentialism.

Comment author: loqi 30 January 2010 11:34:34PM 4 points

> There is no special reason to privilege preferences as an excellent place to stop when justifying a moral theory.

But I don't see preference as justifying a moral theory, I see it as explaining a moral theory. I don't see how a moral theory could possibly be justified, the concept appears nonsensical to me. About the closest thing I can make sense of would be soundly demonstrating that one's theory doesn't contradict itself.

Put another way, I can imagine invalidating a moral theory by demonstrating the lack of a necessary condition (like consistency), but I can't imagine validating the theory by demonstrating the presence of a "sufficient" condition.

Comment author: Alicorn 30 January 2010 11:41:47PM 1 point

Perhaps you can tell me a little about your ethical beliefs so I know where to start when trying to explain?

Comment author: loqi 31 January 2010 07:53:17AM 0 points

No real framework to speak of. Hanson's efficiency criterion appeals to me as a sort of baseline morality. It's hard to imagine a better first-order attack on the problem than "everyone should get as much of what they want as possible", but of course one can imagine an endless stream of counter-examples and refinements. I presumably have most standard human "pull the child off the tracks" sorts of preferences.

I'm not sure I know what you're looking for. Unusual moral beliefs or ethical injunctions? I think lying is simultaneously

  • Despicable by default
  • Easily justified in the right context
  • Usually unpleasant to perform even when feeling justified in doing so, but occasionally quite enjoyable

if that helps.

Comment author: Alicorn 31 January 2010 02:52:42PM 1 point

I'm not sure what to do with that as stated at all, I'm afraid. But "as possible" seems like a load-bearing phrase in the sentence "everyone should get as much of what they want as possible", because this isn't literally possible for everyone simultaneously (two people could desire the same thing, such that either of them could get it but not both), and you have to have some kind of mechanism to balance contradictory desires. What mechanism looks right to you?

Comment author: loqi 01 February 2010 07:18:52AM 1 point

Agreed, "as possible" is quite heavy, as is "everyone". But it at least slightly refines the question "what's right?" to "what's fair?". Which is still a huge question.

The quasi-literal answer to your question is: a Voronoi diagram. It looks right - I don't quite know what it means in practice, though.
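For readers unfamiliar with the term: a Voronoi diagram partitions a space into cells, one per seed point, where each cell contains everything closer to its seed than to any other. A minimal sketch (my own illustration, not from the comment) of how that half-joking answer might cash out, with agents as seeds and contested resource points assigned to whoever is nearest:

```python
import math

def nearest_agent(point, agents):
    """Return the label of the agent whose position is closest to `point`
    (Euclidean distance). This is exactly the Voronoi-cell membership test."""
    return min(agents, key=lambda name: math.dist(point, agents[name]))

# Two hypothetical agents at fixed positions in a 2-D "resource space".
agents = {"A": (0.0, 0.0), "B": (4.0, 0.0)}

print(nearest_agent((1.0, 1.0), agents))  # → A
print(nearest_agent((3.5, 0.5), agents))  # → B
```

Of course, this only relocates the hard question: the fairness of the partition depends entirely on where the agents' "positions" are placed to begin with.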

In general, the further a situation is from my baseline intuitions concerning fairness and respect for apparent volition, the weaker my moral apprehension of it is. Life is full of trade-offs of wildly varying importance and difficulty. I'd be suspicious of any short account of them.

Comment author: bogus 31 January 2010 02:15:49PM 0 points

> I'm just jettisoning the entire justification and calling a preference a spade.

Good point. There is a lot of fuzziness around "preferences", "ethics", "aesthetics", "virtues" etc. Ultimately all of these seem to involve some axiological notion of "good", or "the good life", or "good character" or even "goods and services".

For instance, what should we make of the so-called "grim aesthetic"? Is grimness a virtue? Should it count as an ethic? If not, why not?

Comment author: loqi 01 February 2010 07:38:08AM 0 points

> The second virtue is relinquishment:
>
> Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts.

I think the necessary and sufficient conditions for "grimness" are found there.