Alicorn comments on Deontology for Consequentialists - Less Wrong

46 Post author: Alicorn 30 January 2010 05:58PM




Comment author: Alicorn 03 February 2010 03:19:10PM 4 points [-]

I'd say it doesn't matter if you care: you should do what's right anyway. Even psychopaths should do what's right.

Comment author: AndyWood 03 February 2010 03:54:49PM *  5 points [-]

Does the question of "why" simply not enter into a deontologist's thinking? My mind seems to leap to complete the statement "you should do what's right" with something along the lines of "because society will be more harmonious".

Also, I wish that psychopaths would do what's right, but what seems to be missing is any force of persuasion. And that seems important.

Comment author: Alicorn 03 February 2010 03:58:45PM *  5 points [-]

We can have "whys", but they look a little different. Mine look like "because people have rights", mostly. Or "because I am a moral agent", looking from the other direction.

Comment author: AndyWood 03 February 2010 04:41:03PM 0 points [-]

I think one reason that so many people here are consequentialists is that these kinds of ideas do not hit bottom. I think LW attracts people who like to chase explanations down as far as possible to foundations. Do you yourself apply reductionism to morality?

Comment author: Alicorn 03 February 2010 04:59:54PM *  2 points [-]

"Reductionism" is one of those words that can mean about seventeen things. I think rights/moral agency both drop out of personhood, which is a function of cognitive capacities, which I take to be software instantiated on entirely physical bases - does that count as whatever you had in mind?

Comment author: AndyWood 03 February 2010 05:07:48PM 2 points [-]

From Wikipedia:

an approach to understand the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things

The bold part is plenty close enough to what I have in mind. Your response definitely counts. Next question that immediately leaps to mind is: how do you determine which formulations of morality best respect the personhood and rights of others, and best fulfill your duty as a moral agent?

Comment author: Alicorn 03 February 2010 05:28:53PM 1 point [-]

My theory isn't finished, so I can't present you with a list, or anything, but I've just summarized what I've got so far here.

Comment author: sark 03 February 2010 04:19:11PM 1 point [-]

So morality is one shard of desire from godshatter which deontologists think matters a lot?

Comment author: wedrifid 03 February 2010 04:48:44PM 1 point [-]

So morality is one shard of desire from godshatter which deontologists think matters a lot?

Yes, although a deontologist will likely not want to describe it that way. It makes their entire ethical framework look like a kludgy hack.

Comment author: Alicorn 03 February 2010 04:20:05PM 1 point [-]

I really don't think desire enters into it, except in certain very indirect ways.

Comment author: Jack 03 February 2010 05:06:03PM 1 point [-]

Have you given a description of your own ethical philosophy anywhere? If not, could you summarize your intuitions/trajectory? Doesn't need to be a complete theory or anything, I'm just informally polling the non-utilitarians here.

(Any other non-utilitarians who see this feel free to respond as well)

Comment author: Alicorn 03 February 2010 05:28:13PM *  10 points [-]

I feel like I've summarized it somewhere, but can't find it, so here it is again (it is not finished, I know there are issues left to deal with):

Persons (which includes but may not be limited to paradigmatic adult humans) have rights, which it is wrong to violate. For example, one I'm pretty sure we've got is the right not to be killed. This means that any person who kills another person commits a wrong act, with the following exceptions: 1) a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong; 2) someone who has committed a contextually relevant wrong act, in so doing, forfeits eir contextually relevant rights. I don't yet have a full account of "contextual relevance", but basically what that's there for is to make sure that if somebody is trying to kill me, this might permit me to kill him, but would not grant me license to break into his house and steal his television.

However, even once a right has been waived or forfeited or (via non-personhood) not had in the first place, a secondary principle can kick in to offer some measure of moral protection. I'm calling it "the principle of needless destruction", but I'm probably going to re-name it later because "destruction" isn't quite what I'm trying to capture. Basically, it means you shouldn't go around "destroying" stuff without an adequate reason. Protecting a non-waived, non-forfeited right is always an adequate reason, but apart from that I don't have a full explanation; how good the reason has to be depends on how severe the act it justifies is. ("I was bored" might be an adequate reason to pluck and shred a blade of grass, but not to set a tree on fire, for instance.) This principle has the effect, among others, of ruling out revenge/retribution/punishment for their own sakes, although deterrence and preventing recurrence of wrong acts are still valid reasons to punish or exact revenge/retribution.

In cases where rights conflict, and there's no alternative that doesn't violate at least one, I privilege the null action. (I considered denying ought-implies-can, instead, but decided that committed me to the existence of moral luck and wasn't okay.) "The null action" is the one where you don't do anything. This is because I uphold the doing-allowing distinction very firmly. Letting something happen might be bad, but it is never as bad as doing the same something, and is virtually never as bad as performing even a much more minor (but still bad) act.

I hold agents responsible for their culpable ignorance and anything they should have known not to do, as though they knew they shouldn't have done it. Non-culpable ignorance and its results are exculpatory. Culpability of ignorance is determined by the exercise of epistemic virtues like being attentive to evidence etc. (Epistemologically, I'm an externalist; this is just for ethical purposes.) Ignorance of any kind that prevents something bad from happening is not exculpatory - this is the case of the would-be murderer who doesn't know his gun is unloaded. No out for him. I've been saying "acts", but in point of fact, I hold agents responsible for intentions, not completed acts per se. This lets my morality work even if solipsism is true, or we are brains in vats, or an agent fails to do bad things through sheer incompetence, or what have you.

Comment author: Jack 03 February 2010 05:57:50PM 6 points [-]

Upvoted for spelling out so much, though I disagree with the whole approach (though I think I disagree with the approach of everyone else here too). This reads like pretty run-of-the-mill deontology-- but since I don't know the field that well, is there anywhere you differ from most other deontologists?

Also, are rights axiomatic or is there a justification embedded in your concept of personhood (or somewhere else)?

Comment author: Alicorn 03 February 2010 06:03:12PM 3 points [-]

The quintessential deontologist is Kant. I haven't paid too much attention to his primary sources because he's miserable to read, but what Kant scholars say about him doesn't sound like what goes through my head. One place I can think of where we'd diverge is that Kant doesn't forbid cruelty to animals except inasmuch as it can deaden humane intuitions; my principle of needless destruction forbids it on its own demerits. The other publicly deontic philosopher I know of is Ross, but I know him only via a two-minute unsympathetic summary which - intentionally or no - made his theory sound very slapdash, like he has sympathies to the "it's sleek and pretty" defense of utilitarianism but couldn't bear to actually throw in his lot with it.

The justification is indeed embedded in my concept of personhood. Welcome to personhood, here's your rights and responsibilities! They're part of the package.

Comment author: simplicio 21 May 2010 07:54:40PM *  4 points [-]

Ross is an interesting case. Basically, he defines what I would call moral intuitions as "prima facie duties." (I am not sure what ontological standing he thinks these duties have.) He then lists six important ones: beneficence, honour, non-maleficence, justice, self-improvement and... goodness, I forget the 6th. But essentially, all of these duties are important, and one determines the rightness of an act by reflection - the most stringent duty wins and becomes the actual moral duty.

E.g., you promised a friend that you would meet them, but on the way you come upon the scene of a car crash. A person is injured, and you have first aid training. So basically Ross says we have a prima facie duty to keep the promise (honour), but also to help the motorist (beneficence), and the more stringent one (beneficence) wins.

I like about it that: it adds up to normality, without weird consequences like act utilitarianism (harvest the traveler's organs) or Kantianism (don't lie to the murderer).

I don't like about it that: it adds up to normality, i.e., it doesn't ever tell me anything I don't want to hear! Since my moral intuitions are what decides the question, the whole thing functions as a big rubber stamp on What I Already Thought. I can probably find some knuckle-dragging bigot within a 1-km radius who has a moral intuition that fags must die. He reads Ross & says: "Yeah, this guy agrees with me!" So there is a wrong moral intuition. On the other hand, before reading Peter Singer (a consequentialist), I didn't think it was obligatory to give aid to foreign victims of starvation & preventable disease; now I think it is as obligatory as, in his gedanken experiment, pulling a kid out of a pond right beside you (even though you'll ruin your running shoes). Ross would not have made me think of that; whatever "seemed" right to me, would be right.

I am also really, really suspicious of the a priori and the prima facie. It seems very handwavy to jump straight to these "duties" when the whole point is to arrive at them from something that is not morality - either consequences or through some sort of symmetry.

Comment author: AlexanderRM 27 March 2015 07:02:31PM *  0 points [-]

"The whole thing functions as a big rubber stamp on What I Already Thought"

Speaking as a (probably biased) consequentialist, I generally got the impression that this was pretty much the whole point of Deontology.

However, the example of Kant being against lying seems to go against my impression. Kantian deontology is based on reasoning things about your rules, so it seems to be consistent in that case.

Still, it seems to me that more mainstream Deontology allows you to simply make up new categories of acts (e.g. lying is wrong, but lying to murderers is OK) in order to justify your intuitive response to a thought experiment. How common is it for Deontologists to go "yeah, this action has utterly horrific consequences, but that's fine because it's the correct action", the way it is for Consequentialists to do the reverse? (again noting that I've now heard about the example of Kant, I might be confusing Deontology with "intuitive morality" or "the noncentral fallacy".)

Comment author: Jack 04 February 2010 05:55:42AM 2 points [-]

So I think I have pretty good access to the concept of personhood but the existence of rights isn't obvious to me from that concept. Is there a particular feature of personhood that generates these rights?

Comment author: Alicorn 04 February 2010 02:09:28PM 1 point [-]

That's one of my not-finished things: spelling out exactly why I think you get there from here.

Comment author: lessdazed 24 July 2011 01:13:22AM 2 points [-]

Rather than take the "horrible consequences" tack, I'll go in the other direction. How possible is it that something can be deontologically right or wrong if that something is something no being cares about, nor do they care about any of its consequences, by any extrapolation of their wants, likes, conscious values, etc., nor should they think others care? Is it logically possible?

a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong...the would-be murderer who doesn't know his gun is unloaded.

Comment author: Alicorn 24 July 2011 02:38:16AM 1 point [-]

You seem to answer your own question in the quote you chose, even though it seems like you chose it to critique my inconsistent pronoun use. If no being cares about something, nor wants others to care about it, then they're not likely to want to retain their rights over it, are they?

The sentences in which I chose "ey" are generic. The sentences in which I used "he" are about a single sample person.

Comment author: lessdazed 24 July 2011 03:24:23AM 3 points [-]

So if they want to retain without interruption their right to, say, not have a symmetrical spherical stone at the edge of their lawn rotated without permission, they perforce care whether or not it is rotated? They can't merely want a right? Or if they want a right, and have a right, and they don't care to exercise the right, but want to retain the right, they can't? What if the only reason they care to prohibit stone turning is to retain the right? Does that work? Is there a special rule saying it doesn't?

As part of testing theories to see when they fail rather than succeed, my first move is usually to try recursion.

not likely to want to retain their rights over it

Least convenient possible world, please.

Regardless, you seem to believe that some other forms of deontology are wrong but not illogical, and believe consequentialist theories wrong or illogical. For example, you would label a deontology otherwise like yours, but one that valued attentiveness to evidence more, as wrong but not illogical. I ask whether you would consider a deontological theory invalid if it ignored wants, cares etc. of beings, not whether or not that is part of your theory.

If it's not illogical and merely wrong, then is that to say you count that among the theories that may be true, if you are mistaken about facts, but not mistaken about what is illogical and not?

I think such a deontology would be illogical, but am to various degrees unsure about other theories - which are right and which wrong, and about the severity and number of wounds in the wrong ones. Because this deontology seems illogical, it makes me suspicious of its cousin theories, as it might be a salient case exhibiting a common flaw.

I think it is more intellectually troubling than the hypothetical of committing a small badness to prevent a larger one, but as it is rarely raised presumably others disagree or have different intuitions.

I don't see the point of mucking with the English language and causing confusion for the sake of feminism if the end result is that singular sample murderers are gendered. It seems like the worst of both worlds.

Comment author: Peterdjones 18 January 2013 06:21:49PM 0 points [-]

Kant's answer, greatly simplified, is that rational agents will care about following moral rules, because that is part of rationality.

Comment author: AndyWood 03 February 2010 06:41:58PM *  2 points [-]

Thanks for writing this out. I think you'll be unsurprised to learn that this substantially matches my own "moral code", even though I am (if I understand the terminology correctly) a utilitarian.

I'm beginning to suspect that the distinction between these two approaches comes down to differences in background and pre-existing mental concepts. Perhaps it is easier, more natural, or more satisfying for certain people to think in these (to me) very high abstractions. For me, it is easier, more natural, and more satisfying to break down all of those lofty concepts and dynamics again, and again, until I've arrived (at least in my head) at the physical evolution of the world into successive states that have ranked value for us.

EDIT: FWIW, you have actually changed my understanding of deontology. Instead of necessarily involving unthinking adherence to rules handed down from on-high/outside, I can now see it as proceeding from more basic moral concepts.

Comment author: AngryParsley 03 February 2010 07:14:52PM *  2 points [-]

Why those particular rights? It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics.

If you lived in a world where your system of rights didn't typically lead to beneficial consequences, would you still believe them to be correct?

Comment author: Alicorn 03 February 2010 07:22:12PM 4 points [-]

Why those particular rights?

What do you mean, "those particular rights"? I haven't presented a list. I mentioned one right that I think we probably have.

It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics.

Oh, now, that was low.

If you lived in a world where your system of rights didn't typically lead to beneficial consequences, would you still believe them to be correct?

Do you mean: does Alicorn's nearest counterpart who grew up in such a world share her opinions? Or do you mean: if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context? They're different questions.

Comment author: AngryParsley 03 February 2010 07:34:53PM *  1 point [-]

I haven't presented a list.

Yeah, but most people don't come up with a moral system that arrives at undesirable consequences in typical circumstances. Ditto for going against human intuitions/culture.

They're different questions.

Now I'm curious. Is your answer to them different? Could you please answer both of those hypotheticals?

ETA: If your answer is different, then isn't your morality in fact based solely on the consequences and not some innate thing that comes along with personhood?

Comment author: Alicorn 03 February 2010 08:13:13PM 0 points [-]

does Alicorn's nearest counterpart who grew up in such a world share her opinions?

Almost certainly, she does not. Otherworldly-Alicorn-Counterpart (OAC) has a very different causal history from me. I would not be surprised to find any two opinions differ between me and OAC, including ethical opinions. She probably doesn't even like chocolate chip cookie dough ice cream.

if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context?

No. However: after an adjustment period in which I became accustomed to the new world, my epistemic state about the likely consequences of various actions would change, and that epistemic state has moral force in my system as it stands. The system doesn't have to change at all for a change in circumstance and accompanying new consequential regularities to motivate changes in my behavior, as long as I have my eyes open. This doesn't make my morality "based on consequences"; it just means that my intentions are informed by my expectations which are influenced by inductive reasoning from the past.

Comment author: AngryParsley 03 February 2010 11:20:35PM 3 points [-]

I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?

A ridiculous example: If an orphanage exploded every time someone did nothing in a moral dilemma, wouldn't OAC be likely to invent a moral system saying inaction is more bad than action? Wouldn't OAC also likely believe that inaction is inherently bad? I doubt OAC would say, "I privilege the null action, but since orphanages explode every time we do nothing, we have to weigh those consequences against that (lack of) action."

Your right not to be killed has a list of exceptions. To me this indicates a layer of simpler rules underneath. Your preference for inaction has exceptions for suitably bad consequences. To me this seems like you're peeking at consequentialism whenever the consequences of your deontology are bad enough to go against your intuitions.

Comment author: Alicorn 03 February 2010 11:39:21PM 2 points [-]

I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?

It seems likely indeed that someone would do that.

If an orphanage exploded every time someone did nothing in a moral dilemma

I think that in this case, one ought to go about getting the orphans into foster homes as quickly as possible.

One thing that's very complicated and not fully fleshed out that I didn't mention is that, in certain cases, one might be obliged to waive one's own rights, such that failing to do so is a contextually relevant wrong act and forfeits the rights anyway. It seems plausible that this could apply to cases where failing to waive some right will lead to an orphanage exploding.

Comment author: Jack 04 February 2010 06:35:09AM 2 points [-]

It seems rather convenient that they mostly arrive at beneficial consequences and jive with human intuitions.

Agreed. It is also rather convenient that maximizing preference satisfaction rarely involves violating anyone's rights and mostly jibes with human intuitions.

And that's because normative ethics is just about trying to come up with nice-sounding theories to explain our ethical intuitions.

Comment author: AngryParsley 04 February 2010 12:54:49PM 4 points [-]

Umm... torture vs dust specks is counterintuitive and violates rights. Utilitarian consequentialists also flip the switch in the trolley problem, again violating rights.

It doesn't sound nice or explain our intuitions. Instead, the goal is the most good for the most people.

Comment author: Jack 04 February 2010 07:39:28PM 9 points [-]

I said:

maximizing preference satisfaction rarely involves violating anyone's rights and mostly jibes with human intuitions.

Those two examples are contrived to demonstrate the differences between utilitarianism and other theories. They hardly represent typical moral judgments.

Comment author: wedrifid 03 February 2010 07:36:36PM 1 point [-]

Why those particular rights?

Because she says so. Which is a good reason. Much as I have preferences for possible worlds because I say so.

Comment author: wedrifid 03 February 2010 06:04:14PM 0 points [-]

Wow. You would try to stop me from saving the world. You are evil. How curious.

Comment author: Alicorn 03 February 2010 06:06:40PM *  0 points [-]

Why, what wrong acts do you plan to commit in attempting to save the world?

Do you believe that the world's inhabitants have a right to your protection? Because if they do, that'll excuse some things.

Comment author: wedrifid 03 February 2010 06:49:26PM 1 point [-]

Why, what wrong acts do you plan to commit in attempting to save the world?

Evil and cunning. No! I shall not be revealing my secret anti-diabolical plans. Now is the time for me to assert with the utmost sincerity my devotion to a compatible deontological system of rights (and then go ahead and act like a consequentialist anyway).

Do you believe that the world's inhabitants have a right to your protection? Because if they do, that'll excuse some things.

Absolutely!

Ok, give me some perspective here. Just how many babies worth of excuse? Consider this counterfactual:

Robin has been working in secret with a crack team of biomedical scientists in his basement. He has fully functioning brain uploading and emulating technology at his fingertips. He believes wholeheartedly that releasing em technology into the world will bring about some kind of economist utopia, a 'subsistence paradise'. The only chance I have to prevent the release is to beat him to death with a cute little puppy. Would that be wrong?

Perhaps a more interesting question is would it be wrong for you not to intervene and stop me from beating Robin to death with a puppy?

Does it matter whether you have been warned of my intent? Assume that all you knew was that I assign a low utility to the future Robin seeks, Robin has a puppy weakness and I have just discovered that Robin has completed his research. Would you be morally obliged to intervene?

Now, Robin is standing with his hand poised over the button, about to turn the future of our species into a hardscrapple dystopia. I'm standing right behind him wielding a puppy in a two handed grip and you are right there with me. Would you kill the puppy to save Robin?

Comment author: Alicorn 03 February 2010 06:58:30PM 3 points [-]

Evil and cunning.

Aw, thanks...?

If there is in fact something morally wrong about releasing the tech (your summary doesn't indicate it clearly, but I'd expect it from most drastic actions Robin seems like he would be disposed to take), you can prevent it by, if necessary, murderously wielding a puppy, since attempting to release the tech would be a contextually relevant wrong act. Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.

Comment author: wedrifid 03 February 2010 07:11:37PM *  -1 points [-]

If there is in fact something morally wrong about releasing the tech

I don't know about morals, but I hope it was clear that the consequences were assigned a low expected utility. The potential concern would be that your morals interfered with me seeking desirable future outcomes for the planet.

Comment author: wedrifid 03 February 2010 07:25:47PM 0 points [-]

Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.

That is promising. Would you let me kill Dave too?

Comment author: Alicorn 03 February 2010 08:06:02PM 2 points [-]

If you're in the room with Dave, why wouldn't you just push the AI's reset button yourself?

Comment author: wedrifid 04 February 2010 02:22:36AM *  -1 points [-]

See link. Depends on how I think he would update. I would kill him too if necessary.

Comment author: blacktrance 11 February 2014 08:53:29PM 0 points [-]

I find myself largely in agreement with most of this, despite being a consequentialist (and an egoist!).

Comment author: Alicorn 11 February 2014 09:00:24PM 0 points [-]

Where's the point of disagreement that makes you a consequentialist, then?

Comment author: blacktrance 11 February 2014 09:08:52PM *  1 point [-]

Because while I agree that people have rights and that it's wrong to violate them, I hold that rights are themselves derived from consequences and preferences (via contractarian bargaining), and that "rights" refers to what governments ought to protect, not necessarily what individuals should respect (though most of the time, individuals should respect rights). For example, though in normal life, justice requires you* to not murder a shopkeeper and steal his wares, murder would be justified in a more extreme case, such as pushing a fat man in front of a trolley, because in that case you're saving more lives, which is more important.

My main disagreement, though, is that deontology (and traditional utilitarianism, and all agent-neutral ethical theories in general) fails to give a sufficient explanation of why we should be moral.

* By which I mean something like "in order to derive the benefits of possessing the virtue of justice". I'm also a virtue ethicist.

Comment author: TheAncientGeek 11 February 2014 09:21:53PM 0 points [-]

Consequentialism can override rules just where consequences can be calculated...which is very rarely.