"A man convinced against his will Is of the same opinion still."
I think you need a longer time span to see that this is quite often false. What has happened many times is that I argue with my friend or my parent and "win" while they defend their position tooth and nail. Months later, they present my argument to me as their own, as if the previous discussion never happened. Some people's forgetfulness amazes me, but I suspect I've changed my mind this way without noticing too.
Admitting you're wrong is quite different from changing your mind. Even so, I hope I no longer argue to win these days.
Yes yes yes yes yes.
This still amazes me every time it happens.
It can also happen without the "winning." That is, I've had experiences like:
Sam: "A, because X."
Me: "Well... I dunno. I mean, I've experienced X, sure, and I agree that X is evidence of A. But I've also experienced Y and Z, which seem to me to be evidence of B. And if I take (X and Y and Z) all together, (B and NOT A) seems much more plausible than (B and A)."
Sam: "I still think A."
Me: "Well, I agree that (X and Y and Z and A and B) is not absurdly improbable, I just think it's less likely than (X and Y and Z and not-A and B). So you might be right."
(wait a while)
Sam: "Not-A, because B."
And I know for certain that I've been Sam in exchanges like this as well. Worse, I know that in some cases I toggle. This is embarrassing, but it is what it is.
How do you know? Have people told you this?
Yeah, essentially. I've been living with the same guy for 20 years, and when he reminds me that I've said "A" in the past I can remember having said A, despite believing that I've always believed not-A, and it seems more likely that I'm mis-remembering my own earlier beliefs than that I was lying to him at the time. Similarly, when he reminds me that he's previously reminded me that I've said "A" in the past and I've had trouble believing that, I can remember that conversation, despite believing that I've always believed A.
Of course, it's certainly possible that I'm just being suggestible and editing memories in real time, but it doesn't feel that way. And if I'm that suggestible, which I might very well be, that makes it even more plausible that I've toggled. So I'm pretty confident that he's right.
What has happened many times is that I argue with my friend or my parent and "win" while they defend their position tooth and nail. Months later, they present my argument to me as their own, as if the previous discussion never happened.
I have known one person for whom this was a deliberate policy. He would never (he said) admit to changing his mind about anything. If he did change his mind as a result of an argument, he would merely cease advocating the view he now thought erroneous, and after some suitable lapse of time, advocate what he now believed, as if he had believed it all along.
He never said so, but I guess it was a status thing. Another curious feature of his discourse was that on mailing lists he would never post a direct reply to anything, with a "Re:" in the subject line. He engaged with the conversations, but always framed his postings as if they were entirely new contributions -- as if one were to participate here by only posting top-level articles. I assume this was also about status.
Magic the gathering analogy time!
Arguing is like getting to look at the top card of your deck and then put it on the bottom if you wish ("scry 1"). When you scry, what you want is to see that the top card of your deck is great and doesn't need to be bottomed. But in that case you gained very little from scrying - the actual value of scrying comes when your top card is terrible and you need to get rid of it.
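(For the quantitatively inclined, here's a toy simulation of that claim, with made-up numbers: cards are just "bad" (0) or "good" (1) with equal probability. It shows that all of scrying's gain comes from the cases where the top card was bad.)

```python
import random

def draw() -> int:
    """A card is 'bad' (0) or 'good' (1) with equal probability (made-up numbers)."""
    return random.randint(0, 1)

def next_card(scry: bool) -> int:
    top = draw()
    if scry and top == 0:
        top = draw()  # bottom the bad card; reveal a fresh one
    return top

trials = 100_000
print(sum(next_card(False) for _ in range(trials)) / trials)  # ~0.50 without scrying
print(sum(next_card(True) for _ in range(trials)) / trials)   # ~0.75: the entire gain comes from bad tops
```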
Which is to say that you don't want to lose an argument, but that losing and actually changing your mind is a major way of extracting value from them.
Another perspective on why it's hard to meaningfully win an argument: epistemic learned helplessness.* Most people, though perhaps not most people on this site, have known someone who could argue circles around them and "win" nearly any argument, to the point where "losing" to them is so likely either way that it's hardly even evidence of being wrong. If the fact that I've "lost" an argument (been confronted with reasoning that I am unable to effectively refute on a conversational timescale) forced me to actually change my mind, I could end up believing anything.
Just because my argument for why I like ham sandwiches is full of holes doesn't mean I don't really like ham sandwiches.
*Edit: blog has been locked since I posted this comment. See archived version of this post at https://web.archive.org/web/20130114194332/http://squid314.livejournal.com/350090.html. Bare URL because when I put in the actual hyperlink, LW gets confused and inserts a "<" in the middle, breaking the link.
Great post. This sort of perspective is something that I'd definitely like to see more of on LessWrong.
Actually, the reason for that title was a point I decided to leave out, but may as well spell out here: "Deciding to talk about politics, even though this may cause you to lose some of your audience" and "Deciding to tell people they're wrong, even though this may cause you to lose some of your audience" are both tradeoffs, and it's odd that LessWrong community norms go so far in one direction on one tradeoff and so far in the other direction on the other tradeoff (at least with regards to certain subjects).
I suspect the reason for this mostly has to do with Eliezer thinking politics are not very important, but also thinking that, say, telling certain people their AI projects are dangerously stupid is very important. But not everyone agrees, and the anti-politics norm is itself a barrier to talking about how important politics are. (Personally, I suspect government action will be important for the future of AI in large part because I expect large organizations in general to be important for the future of AI.)
Like a lot of advice of this sort, this benefits from being flipped around: become able to lose arguments so you can learn from them (which is the real winning). I don't have much concrete advice on doing that, but I know it's possible because some people are more capable than others. Being surrounded by a community that respects mind-changing helps. Simply being aware of the problem also might. As might having something to protect (though probably only when you actually do).
You can't win an argument
I'd add the caveat "it can't be known that you've won the argument". I've been in several conversations where I've gotten people to essentially argue away their entire position - but only as long as I don't point that out to them. As long as they can feel they haven't lost, they can end up with positions very different from their starting ones.
Firm endorsement of Carnegie, and firm endorsement of applying this rule basically everywhere. Even on Less Wrong, I do my best to clash with others' pride as little as possible. I've found the Socratic method to work fairly well. In my experience, it is most useful at finding your misunderstandings and preventing them from causing you trouble, which is well worth it.
I've had bad experiences using the Socratic method on people who are trying to win. I ask a question and they wander away from it to reiterate all of their points. And now I've used up my talking quota for a while.
On people who start out wanting to learn, it can be very effective.
It had the probably unintended effect, though, of helping to give me a deep cynicism about human nature, a cynicism which persists to this day.
I've had similar experiences:
-People refusing to draw conclusions that cast them in a negative light, and directing sadness / anger / annoyance at me for being critical.
-People accepting conclusions that cast them in a negative light, and subsequently reacting with sadness and self-anger.
However, take caution - this can lead to the following problem: "Person X has problem A, but what's the point of telling t...
I still that the human need to think highly of ourselves is a far more important source of human rationality
Missing verb after still, and I also think rationality should be irrationality.
To your alternative approaches I would also add Bruce Schneier's advice in Cryptography Engineering, where he talks a little about the human element in dealing with clients. It's similar to the Socratic approach, in that you ask about a possible flaw rather than argue that it exists.
Bad: "that doesn't work. Someone can just replay the messages."
Good: "what defenses does this system have against replay attacks?"
I think this happens because it takes skill to accept being wrong. I know this has essentially been mentioned on LW before (my most recent reading was in MrMind's comment on The 5-Second Level), but I don't think most people have learnt that skill.
What we learn is that if we say "yes, I was wrong", others jump on us, make fun of us, or make an example of us - this starts when we're kids, or in school, where, if we happen to be around teachers or parents with an inferiority complex, we quickly learn that it's better to be absolutely...
I have never been able to get the Socratic Method to work on the Internet. In theory the Socratic Method is effective because the student has to reason their own way to the conclusion, and so they end up knowing it more deeply and thoroughly than if they were just told the conclusion by the teacher. But somehow it never works for me.
I think part of the problem is that the Socratic Method relies on the participants agreeing to take on the appropriate roles in the discussion. In particular, the "student" has to agree to play the role of the student...
FWIW, I've stopped using the Socratic Method, because, in my experience, it always elicits a strong and immediate negative reaction. People basically respond to it by saying, "stop asking me questions to which you obviously already have all the answers; do you think I'm stupid and wouldn't notice?"
I generally agree (a lot) with this principle, especially during direct, in-person discussions. Though I still remain persuaded that there is a place for contradiction -- and even explicit ridicule of ideas -- in argument.
I'm thinking specifically of my experience with religion. You mentioned the example of lurkers being able to access direct arguments. For some large chunk of the fundamentalist theist lurker crowd out there, polite, Socratic-styled arguments against their religion may not do the trick. This, I think, is because (1) theists are super good a...
I listen to all these complaints about rudeness and intemperateness, and the opinion that I come to is that there is no polite way of asking somebody: have you considered the possibility that your entire life has been devoted to a delusion?
How about "Do you ever wonder if your entire life has been devoted to a delusion? It's a frightening possibility, and most horrifying of all, no matter what actually ends up being true, you have to agree that most people do this."
Admittedly, I suspect a lot of people would completely miss the point and tell ...
How many people on LessWrong realize that when you tell someone their AI project is dangerously stupid, or that their favorite charity is a waste of money, you risk losing them forever—and not because of anything to do with the subtler human biases, but just because most people hate being told they're wrong?
Well, the problem is, these two specific examples simply are not true. Many charities are reasonably effective in their stated purpose, even if "effective altruism" believers would hold that they are strictly suboptimal in terms of hum...
Relevant other post: Defecting by Accident - A Flaw Common to Analytical People
whoops, redundant, sorry!
Related to: Two Kinds of Irrationality and How to Avoid One of Them
When I was a teenager, I picked up my mom's copy of Dale Carnegie's How to Win Friends and Influence People. One of the chapters that most made an impression on me was titled "You Can't Win an Argument," in which Carnegie writes:
In the next chapter, Carnegie quotes Benjamin Franklin saying how he had made it a rule never to contradict anyone. Carnegie approves: he thinks you should never argue with or contradict anyone, because you won't convince them (even if you "hurl at them all the logic of a Plato or an Immanuel Kant"), and you'll just make them mad at you.
It may seem strange to hear this advice cited on a rationalist blog, because the atheo-skeptico-rational-sphere violates this advice on a routine basis. In fact I've never tried to follow Carnegie's advice—and yet, I don't think the rationale behind it is completely stupid. Carnegie gets human psychology right, and I fondly remember reading his book as the moment I first really got clued in about human irrationality.
An important point here is that people's resistance to being told they're wrong is quite general. It's not restricted to specific topics like religion or politics. The "You Can't Win an Argument" chapter begins with a story about a man who refused to accept that the quotation "There's a divinity that shapes our ends, rough-hew them how we will" came from Hamlet rather than the Bible. Carnegie correctly identifies the reason people can be irrational about such seemingly unimportant questions: pride.
In fact, if Carnegie's book has one overarching theme, it's the incredible power of the human need to think highly of ourselves (individually, not as a species). It opens with stories of a number of gangsters who insisted against all evidence that they were good people (including Al Capone, and a couple of now-forgotten names that were contemporary references at the time the book was written in 1936). By the end of that first chapter, those examples have been spun into what I suppose was intended to be a positive, upbeat message: "Don't criticize, condemn, or complain."
It had the probably unintended effect, though, of helping to give me a deep cynicism about human nature, a cynicism which persists to this day. In particular, I saw in a flash that what Carnegie was saying implied you could get people to support some deeply horrible causes, as long as you presented the cause in a way that told them how wonderful they are. I think I even had an inkling at the time that there was some evolutionary explanation for this. I can't claim to have exactly derived Robert Trivers' theory of self-deception on my own, but I certainly was primed to accept the idea when I got around to reading Steven Pinker in college.
(Around very roughly the same time as I read How to Win Friends and Influence People, I read Homer's epics, which served as the other early building block in my present cynicism. It was Homer who taught me there had once been a culture that held that raping women taken captive in war was a perfectly normal thing to do, even suitable behavior for "heroes.")
But such cynicism is a post for another day. When it comes to rationality, the effect of Carnegie's book was this: even after having read all of the sequences and all of HPMOR, I still think that the human need to think highly of ourselves is a far more important source of human irrationality than, oh, say, the fundamental attribution error or the planning fallacy. It does seem foolish to be so strongly influenced by one book I read in my early teens, but on the other hand the evidence I've encountered since then (for example, learning about Trivers' theory of self-deception) seems to me to confirm this view.
So why do I go on arguing with people and telling them they're wrong in spite of all this? Well, even if nine times out of ten arguing doesn't change anyone's mind, sometimes the one time out of ten is worth it. Sometimes. Not always. Actually, with most people I'm unlikely to try to argue with them in person. I'm much more likely to argue when I'm in a public internet forum, when even if I don't persuade the person I'm directly talking to, I might persuade some of the lurkers.
Now there are various tactics for trying to change people's minds without directly telling them they're wrong. Bryan Caplan's The Myth of the Rational Voter has a section on how to improve undergraduate economics classes, which includes the observation that "'I'm right, you're wrong,' falls flat, but 'I'm right, the people outside this classroom are wrong, and you don't want to be like them, do you?' is, in my experience, fairly effective." Of course, this doesn't work if the other person has definitely made up their mind.
There's also the Socratic Method, which Carnegie sings the praises of. I think many people get the wrong idea about the Socratic method, because the most famous source for it is Plato's dialogues, which are works of fiction and tend to have things go much better for Socrates than they ever would in real life. From reading Xenophon's Memorabilia, my impression is that the historical Socrates was probably something of a smartass who was not very good at winning friends or influencing most of his immediate contemporaries. (They did vote to kill him, after all.)
There may be a version of the Socratic method that's more likely to actually make progress changing people's minds. I recently read Peter Boghossian's A Manual for Creating Atheists, a how-to book for atheists who want to get better at talking to believers about religion. Boghossian's approach is heavily inspired by Socrates, and the examples of conversation he gives, based on actual conversations he's had with believers, are far more believable than Plato's—indeed, I'm left wondering if he used a tape recorder.
What most stands out about those conversations is Boghossian's patience. He politely keeps asking questions as the conversation seemingly goes round in circles, sometimes even shutting up and listening as his interlocutors spend several minutes basically repeating themselves, or going off on a tangent about the leadership structure of their church.
I bet Boghossian's techniques are great if you have the time and patience to master and apply them—but you won't always have that. So while I recommend the book, I don't think it will always be an alternative to straight-up telling people they're wrong.
Oh, and then there's just plain old-fashioned trying to be polite and direct at the same time. But that doesn't always work either. As Daniel Dennett once said, "I listen to all these complaints about rudeness and intemperateness, and the opinion that I come to is that there is no polite way of asking somebody: have you considered the possibility that your entire life has been devoted to a delusion?"
In spite of all this, there's still a tradeoff you're making when you criticize people directly. I've known that for roughly half my life, and have often made the tradeoff gladly. I tend to assume other rationalists know this too, and make the tradeoff consciously as well.
But sometimes I wonder.
How many people on LessWrong realize that when you tell someone their AI project is dangerously stupid, or that their favorite charity is a waste of money, you risk losing them forever—and not because of anything to do with the subtler human biases, but just because most people hate being told they're wrong?
If you are making a conscious tradeoff there, more power to you! Those things need saying! But if you're not... well, at the very least, you might want to think a little harder about what you're doing.