In philosophy, the Principle of Charity is a technique in which you evaluate your opponent's position as if it made as much sense as possible given its wording. That is, if you could interpret your opponent's argument in multiple ways, you go for the most reasonable version. This is a good idea for several reasons: it counteracts the illusion of transparency and correspondence bias; it makes you look gracious; if your opponent really does believe a bad version of the argument, sometimes he'll say so; and, most importantly, it helps you focus on getting to the truth rather than just trying to win a debate.
Recently I was in a discussion online, and someone argued against a position I'd taken. Rather than evaluating his argument, I looked back at the posts I'd made. I realized that my previous posts would be just as coherent if I'd written them while believing a position that was slightly different from my real one, so I replied to my opponent as if I had always believed the new position. There was no textual evidence that showed that I hadn't. In essence, I got to accuse my opponent of using a strawman regardless of whether or not he actually was. It wasn't until much later that I realized I'd applied the Principle of Charity to myself.
Now, this is bad for basically every reason that applying the principle to other people is good. You get undeserved status points for being good at arguing. You exploit the absence of transparency. It helps you win a debate rather than maintain consistent and true beliefs. And maybe worst of all, if you're good enough at getting away with it, no one knows you're doing it but you... and sometimes not even you.
As with most bad argument techniques, I wasn't consciously aware I was doing this. I've probably been doing it for a long time without recognizing it. I'd heard about not giving yourself too much credit, and about not just trying to "win" arguments, but I had no idea I was doing both of those things in this particular way. I think it's likely that this habit started from realizing that posting your opinion doesn't give people a temporary flash of insight and the ability to look into your soul and see exactly what you mean: all they have to go by is the words, and (what you hope are) connotations similar to your own. Once you've internalized this truth, be very careful not to abuse it by taking advantage of the fact that people can't tell that you don't always believe the best form of the argument.
It's also unfair to your opponent to make them think they've misunderstood your position when they haven't. If this happens enough, they may recalibrate their argument-decoding techniques when those techniques were accurate to start with, and you'll have made both of you that much worse at finding the intended version of arguments.
Ideally, this would be frequently noticed, since you are in effect lying about a large construct of beliefs, and there's probably some inconsistency between the new version and your past positions on the subject. Unfortunately though, most people aren't going to go back and check nearly as many of your past posts as you just did. If you suspect someone's doing this to you, and you're reasonably confident you don't just think so because of correspondence bias, read through their older posts (try not to go back too far though, in case they've just silently changed their mind). If that fails, it's risky, but you can try to call them on it by asking about their true rejection.
How do you prevent yourself from doing this? If someone challenges your argument, don't look for ways by which you can (retroactively) have been right all along. Say "Hm, I didn't think of that" to both yourself and your opponent, and then propose the new version of your argument explicitly as a new version. You'll be more transparent to both yourself and your opponent, which is vital for actually getting something out of any debate.
tl;dr: If someone doesn't apply the Principle of Charity to you, and they're right, don't apply it to yourself—realize that you might just have been wrong.
How are critical thinking skills acquired? Five perspectives: Tim van Gelder discusses the acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.
Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking, Debate tools: an experience report
In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis. This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice. We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems. To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach. In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students. (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)
LW has been introduced to argument mapping before.
You can download the audio and PDFs from the 2007 Cognitive Aging Summit in Washington DC here; they're good listening. But I want to draw your attention to the graphs on page 6 of Archana Singh-Manoux's presentation. It shows the "social gradient" of intelligence. The X-axis is decreasing socioeconomic status (SES); the Y-axis is increasing performance on tests of reasoning, memory, phonemic fluency, and vocabulary. Each graph shows a line sloping from the upper left (high SES, high performance) downwards and to the right.
Does anything leap out at you as strange about these graphs?
This is a response to Eliezer Yudkowsky's The Logical Fallacy of Generalization from Fictional Evidence and Alex Flint's When does an insight count as evidence? as well as komponisto's recent request for science fiction recommendations.
My thesis is that insight forms a category that is distinct from evidence, and that fiction can provide insight, even if it can't provide much evidence. To give some idea of what I mean, I'll list the insights I gained from one particular piece of fiction (published in 1992), which have influenced my life to a large degree:
- Intelligence may be the ultimate power in this universe.
- A technological Singularity is possible.
- A bad Singularity is possible.
- It may be possible to nudge the future, in particular to make a good Singularity more likely, and a bad one less likely.
- Improving network security may be one possible way to nudge the future in a good direction. (Side note: here are my current thoughts on this.)
- An online reputation for intelligence, rationality, insight, and/or clarity can be a source of power, because it may provide a chance to change the beliefs of a few people who will make a crucial difference.
So what is insight, as opposed to evidence? First of all, notice that logically omniscient Bayesians have no use for insight. They would have known all of the above without having observed anything (assuming they had a reasonable prior). So insight must be related to logical uncertainty, and a feature only of minds that are computationally constrained. I suspect that we won't fully understand the nature of insight until the problem of logical uncertainty is solved, but here are some of my thoughts about it in the meantime (a toy sketch in code follows the list):
- A main form of insight is a hypothesis that one hadn't previously entertained, but should be assigned a non-negligible prior probability.
- An insight is kind of like a mathematical proof: in theory you could have thought of it yourself, but reading it saves you a bunch of computation.
- Recognizing an insight seems easier than coming up with it, but still of nontrivial difficulty.
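To make the hypothesis-space point concrete, here is a minimal toy sketch in Python. This is my own illustration, not anything from the original post, and all the hypotheses and numbers in it are invented: it just shows how a bounded reasoner, who implicitly assigns probability zero to hypotheses it has never entertained, shifts probability mass when an insight adds a new hypothesis to the space.

```python
# Toy sketch (all numbers invented): a computationally bounded reasoner
# implicitly assigns probability 0 to any hypothesis it has never entertained.
# An "insight" adds a hypothesis to the space, forcing a renormalization.

def renormalize(weights):
    """Scale a dict of hypothesis -> weight so the probabilities sum to 1."""
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Before the insight: only two futures are in the hypothesis space at all.
beliefs = renormalize({
    "business as usual": 0.9,
    "slow stagnation": 0.1,
})

# After reading the fiction, "a technological Singularity is possible"
# enters the space with a non-negligible weight (a made-up number).
beliefs["technological Singularity"] = 0.2
beliefs = renormalize(beliefs)

for hypothesis, p in beliefs.items():
    print(f"{hypothesis}: {p:.2f}")
# business as usual: 0.75
# slow stagnation: 0.08
# technological Singularity: 0.17
```

Note that no Bayesian evidence arrives anywhere in this sketch; the probabilities move only because the hypothesis space grew. That is exactly why a logically omniscient agent, whose prior already contains every hypothesis, would have no use for insight.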
So a challenge for us is to distinguish true insights from unhelpful distractions in fiction. Eliezer mentioned people who let the Matrix and Terminator dominate their thoughts about the future, and I agree that we have to be careful not to let our minds consider fiction as evidence. But is there also some skill that can be learned, to pick out the insights, and not just to ignore the distractions?
P.S., what insights have you gained from fiction?
P.P.S., I guess I should mention the name of the book for the search engines: A Fire Upon the Deep by Vernor Vinge.
"Fashion is a form of ugliness so intolerable we have to alter it every six months."
-- Oscar Wilde
For the past few decades, I and many other men my age have been locked in a battle with the clothing industry. I want simple, good-looking apparel that covers my nakedness and maybe even makes me look attractive. The clothing industry believes someone my age wants either clothing laced with profanity, clothing that objectifies women, clothing that glorifies alcohol or drug use, or clothing that makes them look like a gangster. And judging by the clothing I see people wearing, on the whole they are right.
I've been working my way through Steven Pinker's How The Mind Works, and reached the part where he approvingly quotes Quentin Bell's theory of fashion. The theory provides a good explanation for why so much clothing seems so deliberately outrageous.
So this is Utopia, is it? Well
I beg your pardon, I thought it was Hell.
-- Sir Max Beerbohm, verse entitled
In a Copy of More's (or Shaw's or Wells's or Plato's or Anybody's) Utopia
This is a shorter summary of the Fun Theory Sequence with all the background theory left out - just the compressed advice to the would-be author or futurist who wishes to imagine a world where people might actually want to live:
- Think of a typical day in the life of someone who's been adapting to Utopia for a while. Don't anchor on the first moment of "hearing the good news". Heaven's "You'll never have to work again, and the streets are paved with gold!" sounds like good news to a tired and poverty-stricken peasant, but two months later it might not be so much fun. (Prolegomena to a Theory of Fun.)
- Beware of packing your Utopia with things you think people should do that aren't actually fun. Again, consider Christian Heaven: singing hymns doesn't sound like loads of endless fun, but you're supposed to enjoy praying, so no one can point this out. (Prolegomena to a Theory of Fun.)
- Making a video game easier doesn't always improve it. The same holds true of a life. Think in terms of clearing out low-quality drudgery to make way for high-quality challenge, rather than eliminating work. (High Challenge.)
- Life should contain novelty - experiences you haven't encountered before, preferably teaching you something you didn't already know. If there isn't a sufficient supply of novelty (relative to the speed at which you generalize), you'll get bored. (Complex Novelty.)
Reply to: Overconfidence is Stylish
I respectfully defend my lord Will Strunk:
"If you don't know how to pronounce a word, say it loud! If you don't know how to pronounce a word, say it loud!" This comical piece of advice struck me as sound at the time, and I still respect it. Why compound ignorance with inaudibility? Why run and hide?
How does being vague, tame, colorless, irresolute, help someone to understand your current state of uncertainty? Any more than mumbling helps them understand a word you aren't sure how to pronounce?
Goofus says: "The sky, if such a thing exists at all, might or might not have a property of color, but, if it does have color, then I feel inclined to state that it might be green."
Gallant says: "70% probability the sky is green."
Which of them sounds more confident, more definite?
But which of them has managed to quickly communicate their state of uncertainty?
(And which of them is more likely to actually, in real life, spend any time planning and preparing for the eventuality that the sky is blue?)
One of my pet topics, on which I will post more one of these days, is the Rationalist in Fiction. Most of the time - it goes almost without saying - the Rationalist is done completely wrong. In Hollywood, the Rationalist is a villain, or a cold emotionless foil, or a child who has to grow into a real human being, or a fool whose probabilities are all wrong, etcetera. Even in science fiction, the Rationalist character is rarely done right - bearing the same resemblance to a real rationalist, as the mad scientist genius inventor who designs a new nuclear reactor in a month, bears to real scientists and engineers.
Perhaps this is because most speculative fiction, generally speaking, is interested in someone battling monsters or falling in love or becoming a vampire, or whatever, not in being rational... and it would probably be worse fiction, if the author tried to make that the whole story. But that can't be the entire problem. I've read at least one author whose plots are not about rationality, but whose characters are nonetheless, in passing, realistically rational.
That author is Lawrence Watt-Evans. His work stands out for a number of reasons, the first being that it is genuinely unpredictable. Not because of a postmodernist contempt for coherence, but because there are events going on outside the hero's story, just like real life.
With most authors, if they set up a fantasy world with a horrible evil villain and give their main character the one sword that can kill that villain, you can guess that, at the end of the book, the main character is going to kill the evil villain with the sword.
Not Lawrence Watt-Evans. In a Watt-Evans book, it's entirely possible that the evil villain will die of a heart attack halfway through, that the character will decide to sell the sword because they'd rather have the money, and that they'll then use the money to set up an investment banking company.
Most witches don't believe in gods. They know that the gods exist, of course. They even deal with them occasionally. But they don't believe in them. They know them too well. It would be like believing in the postman.
—Terry Pratchett, Witches Abroad
Once upon a time, I was pondering the philosophy of fantasy stories—
And before anyone chides me for my "failure to understand what fantasy is about", let me say this: I was raised in an SF&F household. I have been reading fantasy stories since I was five years old. I occasionally try to write fantasy stories. And I am not the sort of person who tries to write for a genre without pondering its philosophy. Where do you think story ideas come from?
I was pondering the philosophy of fantasy stories, and it occurred to me that if there were actually dragons in our world—if you could go down to the zoo, or even to a distant mountain, and meet a fire-breathing dragon—while nobody had ever actually seen a zebra, then our fantasy stories would contain zebras aplenty, while dragons would be unexciting.
Now that's what I call painting yourself into a corner, wot? The grass is always greener on the other side of unreality.