Kaj_Sotala comments on Are Deontological Moral Judgments Rationalizations? - Less Wrong

37 points · Post author: lukeprog 16 August 2011 04:40PM




Comment author: Kaj_Sotala 16 August 2011 07:22:33PM 5 points [-]

But this still doesn't answer the question. If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why don't anti-abortionists want to change the laws so that abortion is considered murder and a woman who has an abortion can be charged with paying a doctor to commit murder? Psychologist Robert Kurzban cites this as a classic case of moral rationalization.

IAWYC, but the obvious alternative explanation in this example is that the person in question does believe that killing a fetus is murder and that the doctor should be tried for it, but also realizes that expressing such a radical opinion would harm the cause. So he refuses to answer. Of course, the fact that he can get away with simply refusing to answer is suspicious, and there are plenty of more damning examples.

Regardless, I find this to be a great post. Though readers would also do well to remind themselves that you can't derive ought from is - even if deontological judgements are defended by rationalizations, that alone wouldn't make the judgements themselves wrong. (As moral judgments can't be right or wrong, only something you agree or disagree with. No XML tags in the universe, and so forth.)

Comment author: Will_Newsome 16 August 2011 08:24:26PM 2 points [-]

As moral judgments can't be right or wrong, only something you agree or disagree with.

This is highly contentious; did you mean to state it so confidently?

Comment author: Kaj_Sotala 16 August 2011 09:52:56PM 0 points [-]

Yes, but see also my response to Luke.

Comment author: lukeprog 16 August 2011 07:44:49PM 1 point [-]

This is a bit off topic I know, but...

moral judgments can't be right or wrong, only something you agree or disagree with

I think moral judgments can be correct or incorrect if you actually define your moral terms. They might also be correct or incorrect if we define moral terms with reference to whichever meanings for moral terms pop out of a completed cognitive neuroscience and a careful analysis of the brain of the person whose moral judgments we are evaluating.

Do you disagree?

Comment author: Kaj_Sotala 16 August 2011 09:51:16PM *  2 points [-]

I generally consider "you ought to do X" to mean "I'd prefer it if you did X", and do not think judgements of "ought" can be wrong in this sense. (Setting aside the usual questions of "what does a preference mean", which I don't find relevant in this particular situation.) I agree that there are definitions of "ought" by which moral judgements can actually be wrong.

Incidentally, since I just read your comment over at Remind Physicalists, where you pointed out that upvotes (or, by extension, my "great post" comment) don't convey information about what it was about the post that was good: what I found most valuable in this post was that it made the general argument that our moral arguments tend to be rationalizations, with better citations and backing than I'd previously seen. The fact that it also made the case that deontological judgements in particular tend to be rationalizations was interesting, but not as valuable.

Comment author: Vladimir_Nesov 16 August 2011 10:20:00PM *  3 points [-]

I generally consider "you ought to do X" to mean "I'd prefer it if you did X", and do not think judgements of "ought" can be wrong in this sense. (Aside for the normal questions of "what does a preference mean", but I don't find those relevant in this particular situation.)

That a judgment or opinion can be changed on further reflection (and this goes for all actions; perhaps you ate the incorrect sort of cheese) motivates introducing the (more abstract) idea of correctness. Even if something is just a behavior, one can look back and rewrite the heuristics that generated it, so as to act differently next time. When this process is itself abstracted from the details of its implementation, you get a first draft of a notion of correctness. With its help, you can avoid what you would otherwise correct, and do the improved thing instead.

Comment author: Kaj_Sotala 16 August 2011 11:09:21PM 1 point [-]

You're right, though I'm not sure if "correctness" is the word I'd use for that, as it has undesirable connotations. Maybe something like "stable (upon reflection)".

Comment author: Wei_Dai 17 August 2011 02:08:58AM 1 point [-]

What are the undesirable connotations of "correctness"?

Comment author: Kaj_Sotala 17 August 2011 08:23:40AM *  1 point [-]

"Correct" is closely connected with a moral "ought", which in turn has a number of different definitions (and thus connotations) depending on who you speak with. The statement "it would be correct for Clippy to exterminate humanity and turn the planet into a paperclip factory" might be technically right if we equate "stable" and "correct", but it sure does sound odd. People who are already into the jargon might be fine with it, but it's certain to create unneeded misunderstandings with newcomers.

Also, I suspect that taking a criterion like stability under reflection and calling it correctness may act as a semantic stopsign. If we just call it stability, it's easier to ask questions like "should we require moral judgements to be stable?" and "are there things other than stability that we should require?". If we call it correctness, we have already framed the default hypothesis as "stability is all that's required".

Comment author: Wei_Dai 17 August 2011 09:45:39AM 2 points [-]

Now I'm confused about what your position is. What you said originally was:

As moral judgments can't be right or wrong, only something you agree or disagree with.

But if you're now saying that it makes sense to ask questions like "should we require moral judgements to be stable", that seems to imply that moral judgments can be wrong (or at least it's unclear that moral judgements can't be wrong). Because asking that question implies that you think the answer might be yes, in which case unstable moral judgments would be wrong. Am I missing something here?

Comment author: Kaj_Sotala 17 August 2011 11:40:51AM 2 points [-]

You're right, I was being unclear. Sorry.

When I originally said that moral judgments couldn't be right or wrong, I was using "ought" in its common-sense meaning, which I believe roughly corresponds to emotivism.

When I said that we shouldn't use the word correctness to refer to stability, and that we might have various criteria for correctness, I meant "ought" or "correct" in the sense of some hypothetical goal system we may wish to give an AI.

There's some sort of a complex overlap/interaction between those two meanings in my mind, which contributed to my initial unclear usage and which prompted the mention in my original comment. Right now I'm unable to untangle my intuitions about that connection, as I hadn't realized the existence of the issue before reading your comment.

Comment author: Wei_Dai 17 August 2011 11:37:21PM 4 points [-]

When I originally said that moral judgments couldn't be right or wrong, I was defining "ought" in the common sense meaning of the word, which I believe to roughly correspond to emotivism.

Here's my argument against emotivism. First, I don't dispute that empirically most people form moral judgments from their emotional responses with little or no conscious reflection. I do dispute that this implies that when they state moral judgements, those judgements express no propositions but only emotional attitudes (and therefore can't be right or wrong).

Consider an analogy with empirical judgements. Suppose someone says "Earth is flat." Are they stating a proposition about the way the world is, or just expressing that they have a certain belief? If it's the latter, then they can't be wrong (assuming they're not deliberately lying). I think we would say that a statement like "Earth is flat" does express a proposition and not just a belief, and therefore can be wrong, even if the person stating it did so based purely on gut instinct, without any conscious deliberation.

You might argue that the analogy isn't exact, because it's clear what kind of proposition is expressed by "Earth is flat", but we don't know what kind of proposition moral judgements could be expressing, nor could we find out by asking the people who are stating those moral judgements. I would answer that it's actually not obvious what "Earth is flat" means, given that the true ontology of the world is probably something like Tegmark's Level 4 multiverse with its infinite copies of both round and flat Earths. Certainly the person saying "Earth is flat" couldn't tell you exactly what proposition they are stating. I could also bring up other examples of statements whose meanings are unclear, which we nevertheless do not think "can't be right or wrong", such as "UDT is closer to the correct decision theory than CDT is" or "given what we know about computational complexity, we should bet on P!=NP".

(To be clear, I think it may still turn out to be the case that moral judgments can't be said to mean anything, and are mere expressions of emotional attitude (or, more generally, brain output). I just don't see how anyone can state that confidently at this point.)

Right now I'm unable to untangle my intuitions about that connection, as I hadn't realized the existence of the issue before reading your comment.

I'd be interested in your thoughts once you've untangled them.

Comment author: Vladimir_Nesov 17 August 2011 08:53:53AM *  0 points [-]

I was responding to a slightly different situation: you suggested that sometimes, considerations of "correctness" or "right/wrong" don't apply. I pointed out that we can get a sketch of these notions for most things quite easily. This sketch of "correctness" is in no way intended as something taken to be the accurate principle with unlimited normative power. The question of not drowning the normative notions (in more shaky opinions) is distinct from the question of whether there are any normative notions to drown to begin with.

Comment author: Kaj_Sotala 17 August 2011 11:45:49AM 0 points [-]

I think I agree with what you're saying, but I'm not entirely sure whether I'm interpreting you correctly, or whether you're being vague enough that I'm falling prey to the double illusion of transparency. Could you reformulate that?

Comment author: lukeprog 16 August 2011 10:04:02PM 0 points [-]

Thanks for the detail!