In philosophy, the Principle of Charity is a technique in which you evaluate your opponent's position as if it made as much sense as possible given the wording of the argument. That is, if you could interpret your opponent's argument in multiple ways, you go for the most reasonable version. This is a good idea for several reasons: it counteracts the illusion of transparency and correspondence bias; it makes you look gracious; if your opponent really does believe a bad version of the argument, sometimes he'll say so; and, most importantly, it helps you focus on getting to the truth, rather than just trying to win a debate.

Recently I was in a discussion online, and someone argued against a position I'd taken. Rather than evaluating his argument, I looked back at the posts I'd made. I realized that my previous posts would be just as coherent if I'd written them while believing a position that was slightly different from my real one, so I replied to my opponent as if I had always believed the new position. There was no textual evidence that showed that I hadn't. In essence, I got to accuse my opponent of using a strawman regardless of whether or not he actually was. It wasn't until much later that I realized I'd applied the Principle of Charity to myself.

Now, this is bad for basically all the same reasons that applying it to other people is good. You get undeserved status points for being good at arguing. You exploit the fact that your beliefs aren't transparent to others. It helps you win a debate rather than maintain consistent and true beliefs. And maybe worst of all, if you're good enough at getting away with it, no one knows you're doing it but you... and sometimes not even you.

Like most bad argument techniques, this wasn't something I was doing at a conscious level. I've probably been doing it for a long time but just didn't recognize it. I'd heard about not giving yourself too much credit, and about not just trying to "win" arguments, but I had no idea I was doing both of those things in this particular way. I think it's likely that this habit started from realizing that posting your opinion doesn't give people a temporary flash of insight and the ability to look into your soul and see exactly what you mean; all they have to go by is the words, and (what you hope are) connotations similar to your own. Once you've internalized this truth, be very careful not to abuse it and take advantage of the fact that people don't know that you don't always believe the best form of the argument.

It's also unfair to your opponent to make them think they've misunderstood your position when they haven't. If this happens enough, they could recalibrate their argument decoding techniques, when really they were accurate to start with, and you'll have made both of you that much worse at looking for the intended version of arguments.

Ideally, this would be frequently noticed, since you are in effect lying about a large construct of beliefs, and there's probably some inconsistency between the new version and your past positions on the subject. Unfortunately though, most people aren't going to go back and check nearly as many of your past posts as you just did. If you suspect someone's doing this to you, and you're reasonably confident you don't just think so because of correspondence bias, read through their older posts (try not to go back too far though, in case they've just silently changed their mind). If that fails, it's risky, but you can try to call them on it by asking about their true rejection.

How do you prevent yourself from doing this? If someone challenges your argument, don't look for ways you could (retroactively) have been right all along. Say "Hm, I didn't think of that," both to yourself and to your opponent, and then present the new version of your argument as a new version. You'll be more transparent to both yourself and your opponent, which is vital for actually gaining something from any debate.


tl;dr: If someone doesn't apply the Principle of Charity to you, and they're right, don't apply it to yourself; realize that you might just have been wrong.


Principle of charity trades off epistemic rationality for efficiency and signaling of respect. An improved version of the technique is LCPW (or "steel man"), where you focus your attention on the best version of the argument made by your opponent that you yourself can reconstruct. You don't require the improved argument to satisfy the exact wording used by the interlocutor, and more importantly don't need to assume that it's what they really mean, or pretend that you believe it's what they mean. The evidence about which position the other person really holds often points in a different direction from where the best version of their position can be found, so it's useful to keep separate mental buckets for these ideas.

So the improved version of your thesis is, "Don't apply the principle of charity at all, use LCPW technique instead". (As a bonus, steel manning your own beliefs means fixing your own epistemic errors.)

Principle of charity trades off epistemic rationality

I'm not sure exactly what you mean by this, but the principle of charity entails not naively believing one's first-order guess of what the other person means, and instead using motivated thinking in an attempt to somewhat counteract one's biases, such as the desire to win an argument. Fighting bias with bias is obviously problematic, but I wouldn't describe it as trading away epistemic rationality.

An improved version of the technique is LCPW

As you said, "...importantly don't need to assume that it's what they really mean...it's useful to keep separate mental buckets for these ideas, and very important to avoid conflating them." The principle of charity is useful for reconstructing what was really meant, unlike LCPW, so each has an advantage over the other, and "improved" is not apt.

The principle of charity is useful for reconstructing what was really meant, unlike LCPW

Agreed that LCPW is not directly useful this way. On the other hand, if what you want is correct understanding of a question, it doesn't matter what anyone really thought originally, only what is the answer; or for arguments, not what were the original arguments, but what are the actually important considerations. There is a failure mode in disagreements where opponents start arguing about what they really meant, finding ground for the verbal fight unrelated to the original topic, preventing useful progress.

If taken as a counter-bias against the tendency to assume a less reasonable position than should be expected, the principle of charity can somewhat help, but then there are better third alternatives to this technique that don't share its glaring anti-epistemic flaws. For example, you search harder for possible reasonable interpretations, to make sure they are available for consideration, but retain expected bad interpretations in the distribution of possible intended meanings.
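To make the "distribution of possible intended meanings" concrete, here is a minimal sketch in Python (the interpretation labels and all the numbers are hypothetical, purely for illustration): you update your credences over the possible readings as evidence comes in, engage with the strongest reading, but never throw the others away.

```python
def bayes_update(prior, likelihood):
    """Posterior over interpretations: posterior is proportional to prior times likelihood."""
    unnormalized = {k: prior[k] * likelihood[k] for k in prior}
    total = sum(unnormalized.values())
    return {k: v / total for k, v in unnormalized.items()}

# Prior credences over possible intended meanings of an ambiguous claim.
prior = {
    "strong version": 0.2,
    "common flawed version": 0.6,
    "strawman version": 0.2,
}

# How likely the interlocutor's follow-up remark is under each reading.
likelihood = {
    "strong version": 0.7,
    "common flawed version": 0.3,
    "strawman version": 0.05,
}

posterior = bayes_update(prior, likelihood)
most_likely_meant = max(posterior, key=posterior.get)

# Engage the strongest available reading (the steel man), while the
# posterior still records which reading they probably intended.
print(posterior, most_likely_meant)
```

With these made-up numbers, the posterior still favors the "common flawed version" even after an update toward the strong one, which is the point of the separate mental buckets: the reading you choose to argue against need not be the reading you believe was intended.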

Huh, I'd never realized the connection between PoC and LCPW before. I'll have to think about that, although I wouldn't necessarily say LCPW is a replacement for PoC. They solve different problems in practice--like lessdazed said, PoC can be more effective at countering overconfidence in knowing what you think your opponent meant, if that's the goal. Would you mind giving an example though?

ETA:

For example, you search harder for possible reasonable interpretations, to make sure they are available for consideration, but retain expected bad interpretations in the distribution of possible intended meanings.

I agree that if you're going to use PoC, you shouldn't apply it internally and unilaterally--if responding as though your opponent made a good argument requires some unlikely assumptions, you should still be well aware of that.


Also, there is the danger of seeking the best hypothesis consistent with one's own previous statements rather than just the best hypothesis. While it is usually better than no updating at all, one can end up with complex, convoluted, untestable ideas, as has happened to many theologians.

The damaging effects of trusting yourself and worshiping your own ideas are only partially damped by the injunction that principle of charity shouldn't apply to yourself, for if you apply it to others, and others apply it back to you, a wrong belief can still be amplified beyond reason. (See also information cascade, affective death spiral.)

Note that this is my first main post, so in addition to feedback being appreciated, I also hope this hasn't been written about before here. It seemed original when I wrote it, but I easily could have read and forgotten about it.

Well written and an original extension of Less Wrong concepts!

And maybe worst of all, if you're good enough at getting away with it, no one knows you're doing it but you.

Or, rather, as you go on to say in the next paragraphs, the worst part is that you might not know that you're doing it. In the vein of feedback, I would probably make this clearer in this sort of punch-line sentence. To help you decide how much to update on this, I read lots, but haven't yet written many blog posts.

Excellent point, I've added to that sentence both for consistency and to make it flow better into the next paragraph. Thanks!

Great post!

Both the principle of charity and the least convenient possible world principle often do not point to just one argument.

Wikipedia says:

Since the time of Quine et al., other philosophers[who?] have formulated at least four versions of the principle of charity. These alternatives may conflict with one another, so that charity becomes a matter of taste. The four principles are:
The other uses words in the ordinary way;
The other makes true statements;
The other makes valid arguments;
The other says something interesting.

I disagree with "charity becomes a matter of taste," and think instead that one should always apply each version.

I disagree with "charity becomes a matter of taste," and think instead that one should always apply each version.

I must disagree. Sometimes it really is more important to maintain an accurate model of reality. Enabling the other's bullshit is not always virtuous or useful.

The principle of charity is a counter-bias for one's (estimated amount of) motivated cognition, underestimation of inferential distance, typical mind fallacy, etc. It works toward an accurate model of reality concerning the person one is speaking with.

It is superseded as a way to find truth about the subject in contention, rather than about the other person, by LCPW.

It is also a guide for responding, which doesn't mean you have to believe anything in particular, just as one can always guess "red" for the color of the next card if most are red and some are blue. As for the social utility of it: sure, it's not always useful, but "always" is a pretty extreme standard. It's not always useful to be honest, or to refrain from faking a seizure in a debate, either.
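To put hypothetical numbers on the card analogy: always guessing the majority color beats matching your guesses to the observed frequencies, even though you don't believe every card is red. A quick sketch, assuming a made-up 70/30 red/blue deck:

```python
# Hypothetical 70/30 red/blue deck.
p_red = 0.7

# Always guessing "red" is right whenever the card is red.
always_guess_majority = p_red  # 0.70

# "Probability matching" (guessing red on 70% of draws, blue on 30%)
# is right only when the guess happens to agree with the card:
# 0.7*0.7 + 0.3*0.3 = 0.58.
probability_matching = p_red ** 2 + (1 - p_red) ** 2

print(always_guess_majority, probability_matching)  # ~0.70 vs ~0.58
```

The optimal response policy (always guess red) diverges from your actual beliefs (30% of the cards are blue), which is the sense in which a guide for responding doesn't commit you to believing anything in particular.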

The principle of charity is a counter-bias for one's (estimated amount of) motivated cognition, underestimation of inferential distance, typical mind fallacy, etc. This is towards an accurate model of reality about the person one is speaking with.

No, basically it just isn't. The principle of charity as used here and in general is not "be charitable to the extent that unadorned Bayesian reasoning would tell you to anyway". For most purposes applying charity to the extent that it seeks accuracy is utterly insufficient. Actually applying these principles consistently implies the use of motivated cognition to achieve perceived pragmatic goals.

No, basically it just isn't. The principle of charity as used here and in general is not "be charitable to the extent that unadorned Bayesian reasoning would tell you to anyway".

I have looked into the matter by reading articles in the Stanford Encyclopedia of Philosophy, searching through Google, reading journal articles by the originator and popularizers of the phrase, etc., and I now know much more about this than I did.

It is probably worth a separate discussion post.

It is probably worth a separate discussion post.

If you are willing to put some detail in, it would be worth a main post too.

I think a separate discussion post would be useful. When I wrote this, I was thinking of the PoC as something like an axiom that's not explicitly built into logic, but is necessary for productive discussion because otherwise people would constantly nitpick or strawman each other, there would be no way to stop them, and so on. Based on the discussion here, though, it's seeming more like a tool intended for social situations that's usually suboptimal for truth-finding purposes, although again, it's still better than always going with your initial interpretation or always going with the least logical interpretation.

The other uses words in the ordinary way;
The other makes true statements;
The other makes valid arguments;
The other says something interesting.

I disagree with "charity becomes a matter of taste," and think instead that one should always apply each version.

There are some items missing from the list, such as:

The Great Leader is always right;
The Holy Scripture makes true statements;
The Universe is just and caring.

Motivated cognition is your enemy. Don't invite it to feast on your mind.

Well that's a butter-coated 10-degree incline if ever I've seen one. Alternatively, please elaborate on how we can't have the former without the latter.

Summary: unreflective beliefs about others that don't consider that the lens that saw them has its flaws, when the lens can see those flaws, are mere rationality-as-ritual.

Those last three are importantly different from the first four because they don't refer to the possible intent of a person.

As a tool to combat inferential distance, I can do better than taking my first-order best guess of what someone means, and can in addition give special consideration to the possibility that they are using words in an unusual way to say something true/valid/interesting. I can then modify my first best-guess such that it is more likely they mean the true/valid/interesting thing than I had previously thought.

It might be that different interpretations involve them saying something true and valid but uninteresting, or potentially interesting and valid but untrue, etc. That is why special attention should be paid to applying the principle of charity multiple times. One reading might have them only committing fallacy A, another fallacy B, and it would be mistaken to modify my first-order guess of what they intended based on my idiosyncratic distaste for a particular fallacy instead of doing my best to model their likely beliefs, preferences, etc.

There is no reason to believe my interpretation of what others mean is much clouded by a wrongful bias toward disproportionately disbelieving that the "Holy Scripture makes true statements". Inferential distance and a motivated desire to win the argument are far more likely to leave me wrongly misinterpreting someone, being wrong about the facts in the world that relate to them, than being wrong about random facts of the world. To the extent I am wrong about facts in the world, I expect to mislead myself only slightly, and this would only be noticeable for questions whose truths are unclear.


The principle of charity is an interpretive framework, not a method for evaluating truth. I can read an argument charitably and still think it is wrong. In other words, the principle of charity can be paraphrased as "Assume your debater is not being Logically Rude." DH7 arguments help ensure that your discussions are improving the accuracy of your beliefs, and they require reading your opponent generously.

I did many hours of reading about this yesterday. I recommend holding off on arguing about it or spending time researching it until I have (or haven't) posted on this topic in the near future.

DH7 arguments [...] require reading your opponent generously.

False, see this comment.