AngryParsley comments on Deontology for Consequentialists - Less Wrong

46 points | Post author: Alicorn 30 January 2010 05:58PM


Comment author: Alicorn 03 February 2010 05:28:13PM | 10 points

I feel like I've summarized it somewhere, but can't find it, so here it is again (it is not finished, I know there are issues left to deal with):

Persons (which includes but may not be limited to paradigmatic adult humans) have rights, which it is wrong to violate. For example, one I'm pretty sure we've got is the right not to be killed. This means that any person who kills another person commits a wrong act, with the following exceptions: 1) a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong; 2) someone who has committed a contextually relevant wrong act, in so doing, forfeits eir contextually relevant rights. I don't yet have a full account of "contextual relevance", but basically what that's there for is to make sure that if somebody is trying to kill me, this might permit me to kill him, but would not grant me license to break into his house and steal his television.

However, even once a right has been waived or forfeited or (via non-personhood) not had in the first place, a secondary principle can kick in to offer some measure of moral protection. I'm calling it "the principle of needless destruction", but I'm probably going to re-name it later because "destruction" isn't quite what I'm trying to capture. Basically, it means you shouldn't go around "destroying" stuff without an adequate reason. Protecting a non-waived, non-forfeited right is always an adequate reason, but apart from that I don't have a full explanation; how good the reason has to be depends on how severe the act it justifies is. ("I was bored" might be an adequate reason to pluck and shred a blade of grass, but not to set a tree on fire, for instance.) This principle has the effect, among others, of ruling out revenge/retribution/punishment for their own sakes, although deterrence and preventing recurrence of wrong acts are still valid reasons to punish or exact revenge/retribution.

In cases where rights conflict, and there's no alternative that doesn't violate at least one, I privilege the null action. (I considered denying ought-implies-can instead, but decided that doing so committed me to the existence of moral luck, which wasn't okay.) "The null action" is the one where you don't do anything. This is because I uphold the doing-allowing distinction very firmly. Letting something happen might be bad, but it is never as bad as doing the same something, and is virtually never as bad as performing even a much more minor (but still bad) act.

I hold agents responsible for their culpable ignorance and anything they should have known not to do, as though they knew they shouldn't have done it. Non-culpable ignorance and its results are exculpatory. Culpability of ignorance is determined by the exercise of epistemic virtues like being attentive to evidence etc. (Epistemologically, I'm an externalist; this is just for ethical purposes.) Ignorance of any kind that prevents something bad from happening is not exculpatory - this is the case of the would-be murderer who doesn't know his gun is unloaded. No out for him. I've been saying "acts", but in point of fact, I hold agents responsible for intentions, not completed acts per se. This lets my morality work even if solipsism is true, or we are brains in vats, or an agent fails to do bad things through sheer incompetence, or what have you.

Comment author: AngryParsley 03 February 2010 07:14:52PM | 2 points

Why those particular rights? It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics.

If you lived in a world where your system of rights didn't typically lead to beneficial consequences, would you still believe them to be correct?

Comment author: Alicorn 03 February 2010 07:22:12PM | 4 points

Why those particular rights?

What do you mean, "those particular rights"? I haven't presented a list. I mentioned one right that I think we probably have.

It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics.

Oh, now, that was low.

If you lived in a world where your system of rights didn't typically lead to beneficial consequences, would you still believe them to be correct?

Do you mean: does Alicorn's nearest counterpart who grew up in such a world share her opinions? Or do you mean: if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context? They're different questions.

Comment author: AngryParsley 03 February 2010 07:34:53PM | 1 point

I haven't presented a list.

Yeah, but most people don't come up with a moral system that arrives at undesirable consequences in typical circumstances. Ditto for going against human intuitions/culture.

They're different questions.

Now I'm curious. Is your answer to them different? Could you please answer both of those hypotheticals?

ETA: If your answer is different, then isn't your morality in fact based solely on the consequences and not some innate thing that comes along with personhood?

Comment author: Alicorn 03 February 2010 08:13:13PM | 0 points

does Alicorn's nearest counterpart who grew up in such a world share her opinions?

Almost certainly, she does not. Otherworldly-Alicorn-Counterpart (OAC) has a very different causal history from me. I would not be surprised to find any two opinions differ between me and OAC, including ethical opinions. She probably doesn't even like chocolate chip cookie dough ice cream.

if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context?

No. However: after an adjustment period in which I became accustomed to the new world, my epistemic state about the likely consequences of various actions would change, and that epistemic state has moral force in my system as it stands. The system doesn't have to change at all for a change in circumstance and accompanying new consequential regularities to motivate changes in my behavior, as long as I have my eyes open. This doesn't make my morality "based on consequences"; it just means that my intentions are informed by my expectations, which are influenced by inductive reasoning from the past.

Comment author: AngryParsley 03 February 2010 11:20:35PM | 3 points

I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?

A ridiculous example: If an orphanage exploded every time someone did nothing in a moral dilemma, wouldn't OAC be likely to invent a moral system saying inaction is worse than action? Wouldn't OAC also likely believe that inaction is inherently bad? I doubt OAC would say, "I privilege the null action, but since orphanages explode every time we do nothing, we have to weigh those consequences against that (lack of) action."

Your right not to be killed has a list of exceptions. To me this indicates a layer of simpler rules underneath. Your preference for inaction has exceptions for suitably bad consequences. To me this seems like you're peeking at consequentialism whenever the consequences of your deontology are bad enough to go against your intuitions.

Comment author: Alicorn 03 February 2010 11:39:21PM | 2 points

I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?

It seems likely indeed that someone would do that.

If an orphanage exploded every time someone did nothing in a moral dilemma

I think that in this case, one ought to go about getting the orphans into foster homes as quickly as possible.

One thing that's very complicated and not fully fleshed out that I didn't mention is that, in certain cases, one might be obliged to waive one's own rights, such that failing to do so is a contextually relevant wrong act and forfeits the rights anyway. It seems plausible that this could apply to cases where failing to waive some right will lead to an orphanage exploding.

Comment author: Jack 04 February 2010 06:35:09AM | 2 points

It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions.

Agreed. It is also rather convenient that maximizing preference satisfaction rarely involves violating anyone's rights and mostly jibes with human intuitions.

And that's because normative ethics is just about trying to come up with nice-sounding theories to explain our ethical intuitions.

Comment author: AngryParsley 04 February 2010 12:54:49PM | 4 points

Umm... torture vs. dust specks is counterintuitive and violates rights. Utilitarian consequentialists also flip the switch in the trolley problem, again violating rights.

It doesn't sound nice or explain our intuitions. Instead, the goal is the most good for the most people.

Comment author: Jack 04 February 2010 07:39:28PM | 9 points

I said:

maximizing preference satisfaction rarely involves violating anyone's rights and mostly jibes with human intuitions.

Those two examples are contrived to demonstrate the differences between utilitarianism and other theories. They hardly represent typical moral judgments.

Comment author: wedrifid 03 February 2010 07:36:36PM | 1 point

Why those particular rights?

Because she says so. Which is a good reason. Much as I have preferences for possible worlds because I say so.