Alicorn comments on Deontological Decision Theory and The Solution to Morality - Less Wrong

-7 [deleted] 10 January 2011 04:15PM


Comment author: Alicorn 10 January 2011 07:17:19PM 4 points [-]

Um, my prudential considerations do indeed work more or less consequentialistically. That's not news to me. They just aren't morality.

Comment author: jimrandomh 10 January 2011 07:25:26PM 6 points [-]

Wait a second - is there a difference of definitions here? That sounds a lot like what you'd get if you started with a mixed consequentialist and deontological morality, drew a boundary around the consequentialist parts and relabeled them not-morality, but didn't actually stop following them.

Comment author: shokwave 10 January 2011 07:29:20PM 2 points [-]

I presume prudential concerns are non-moral concerns. In the way that maintaining an entertainment budget next to your charity budget while kids are starving in poorer countries is not often considered a gross moral failure, I would consider the desire for entertainment to be a prudential concern that overrides or outweighs morality.

Comment author: Alicorn 10 January 2011 07:28:23PM 1 point [-]

I guess that would yield something similar. It usually looks to me like consequentialists just care about the thing I call "prudence" and not at all about the thing I call "morality".

Comment author: TheOtherDave 10 January 2011 08:35:18PM 1 point [-]

That seems like a reasonable summary to me. Does it seem to you that we ought to? (Care about morality, that is.)

Comment author: Alicorn 10 January 2011 09:10:39PM 1 point [-]

I think you ought to do morally right things; caring per se doesn't seem necessary.

Comment author: TheOtherDave 10 January 2011 09:15:58PM 0 points [-]

Fair enough.

Does it usually look to you like consequentialists just do prudential things and not morally right things?

Comment author: Alicorn 10 January 2011 09:23:18PM 0 points [-]

Well, the vast majority of situations have no conflict. Getting a bowl of cereal in the morning is both prudent and right if you want cereal and don't have to do anything rights-violating or uncommonly destructive to get it. But in thought experiments it looks like consequentialists operate (or endorse operating) solely according to prudence.

Comment author: TheOtherDave 10 January 2011 10:08:30PM 0 points [-]

Agreed that it looks like consequentialists operate (1) solely according to prudence, if I understand properly what you mean by "prudence."

Agreed that in most cases there's no conflict.

I infer you believe that in cases where there is a conflict, deontologists do (or at least endorse) the morally right thing, and consequentialists do (or at least endorse) the prudent thing. Is that right?

I also infer from other discussions that you consider killing one innocent person to save five innocent people an example of a case with conflict, where the morally right thing to do is to not-kill an innocent person. Is that right?

===

(1) Or, as you say, at least endorse operating. I doubt that we actually do, in practice, operate solely according to prudence. Then again, I doubt that anyone operates solely according to the moral principles they endorse.

Comment author: Alicorn 10 January 2011 10:14:18PM 0 points [-]

Right and right.

Comment author: TheOtherDave 10 January 2011 10:33:56PM *  2 points [-]

OK, cool. Thanks.

If I informed you (1) that I would prefer that you choose to kill me rather than allow five other people to die so I could go on living, would that change the morally right thing to do? (Note I'm not asking you what you would do in that situation.)

===

(1) I mean convincingly informed you, not just posted a comment about it that you have no particular reason to take seriously. I'm not sure how I could do that, but just for concreteness, suppose I had Elspeth's power.

(EDIT: Actually, it occurs to me that I could more simply ask: "If I preferred...," given that I'm asking about your moral intuitions rather than your predicted behavior.)

Comment author: jimrandomh 10 January 2011 07:41:10PM 0 points [-]

Does the importance of prudence ever scale without bound, such that it dominates all moral concerns if the stakes get high enough?

Comment author: Alicorn 10 January 2011 07:50:24PM 0 points [-]

I don't know about all moral concerns. A subset of moral concerns are duplicated and folded into my prudential ones.

Comment author: Vladimir_Nesov 10 January 2011 07:44:35PM 3 points [-]

Can't parse.

Comment author: Alicorn 10 January 2011 07:49:39PM 0 points [-]

Easy reader version for consequentialists: I'm like a consequentialist with a cherry on top. I think this cherry on top is very, very important, and like to borrow moralistic terminology to talk about it. Its presence makes me a very bad consequentialist sometimes, but I think that's fine.

Comment author: Vladimir_Nesov 10 January 2011 08:06:13PM *  4 points [-]

Its presence makes me a very bad consequentialist sometimes, but I think that's fine.

If this cherry on top costs people lives, it's not "fine", it's evil incarnate. You should cut this part of yourself out without mercy.

(Compare to your Luminosity vampires, that are sometimes good, nice people, even if they eat people.)

Comment author: jimrandomh 10 January 2011 08:36:14PM 3 points [-]

I don't think cutting out deontology entirely would be a good thing. I do think that the relative weights of deontological and consequentialist rules needs to be considered, and that choosing inaction in a 5 lives:1 life trolley problem strongly suggests misweighting. But that's just a thought experiment; and I wouldn't consider it wrong to choose inaction in, say, a 1.2 lives:1 life trolley problem.

Comment author: Vladimir_Nesov 10 January 2011 09:18:21PM 3 points [-]

I don't think cutting out deontology entirely would be a good thing. I do think that the relative weights of deontological and consequentialist rules needs to be considered, and that choosing inaction in a 5 lives:1 life trolley problem strongly suggests misweighting. But that's just a thought experiment; and I wouldn't consider it wrong to choose inaction in, say, a 1.2 lives:1 life trolley problem.

I agree (if not on 1.2 figure, then still on some 1+epsilon).

It's analogous to, say, prosecuting homosexuals. If some people feel bad emotions caused by others' homosexuality, this reason is weaker than disutility caused by the prosecution, and so sufficiently reflective bargaining between these reasons results in not prosecuting it (it's also much easier to adjust attitude towards homosexuality than one's sexual orientation, in the long run).

Here, we have moral intuitions that suggest adhering to moral principles and virtues, with disutility of overcoming them (in general, or just in high-stakes situations) bargaining against disutility of following them and thus making suboptimal decisions. Of these two, consequences ought to win out, as they can be much more severe (while the psychological disutility is bounded), and can't be systematically dissolved (while a culture of consequentialism could eventually make it psychologically easier to suppress non-consequentialist drives).

Comment author: Alicorn 10 January 2011 09:24:11PM 1 point [-]

I think you mean "persecuting", although depending on what exactly you're talking about I suppose you could mean "prosecuting".

Comment author: Vladimir_Nesov 10 January 2011 09:38:29PM *  0 points [-]

Unclear. I wanted to refer to legal acceptance as reflective distillation of social attitude as much as social attitude itself. Maybe still incorrect English usage?

Comment author: Armok_GoB 13 January 2011 10:49:50PM -1 points [-]

I interpret this as meaning that he currently acts consequentialist, but feels guilty after breaking a deontological principle, would behave in a more deontological fashion if he had more willpower, and would self-modify to be purely deontological if he had the chance. Is this correct?

Comment author: Alicorn 13 January 2011 11:20:34PM 1 point [-]

Who are you talking about?