We see far too direct a correspondence between others’ actions and their inherent dispositions. We see unusual dispositions that exactly match the unusual behavior, rather than asking after real situations or imagined situations that could explain the behavior. We hypothesize mutants.
When someone actually offends us—commits an action of which we (rightly or wrongly) disapprove—then, I observe, the correspondence bias redoubles. There seems to be a very strong tendency to blame evil deeds on the Enemy’s mutant, evil disposition. Not as a moral point, but as a strict question of prior probability, we should ask what the Enemy might believe about their situation that would reduce the seeming bizarreness of their behavior. This would allow us to hypothesize a less exceptional disposition, and thereby shoulder a lesser burden of improbability.
On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?
Realistically, most people don’t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy’s story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you’ll end up flat wrong about what actually goes on in the Enemy’s mind.
But politics is the mind-killer. Debate is war; arguments are soldiers. If the Enemy did have an evil disposition, that would be an argument in favor of your side. And any argument that favors your side must be supported, no matter how silly—otherwise you’re letting up the pressure somewhere on the battlefront. Everyone strives to outshine their neighbor in patriotic denunciation, and no one dares to contradict. Soon the Enemy has horns, bat wings, flaming breath, and fangs that drip corrosive venom. If you deny any aspect of this on merely factual grounds, you are arguing the Enemy’s side; you are a traitor. Very few people will understand that you aren’t defending the Enemy, just defending the truth.
If it took a mutant to do monstrous things, the history of the human species would look very different. Mutants would be rare.
Or maybe the fear is that understanding will lead to forgiveness. It’s easier to shoot down evil mutants. It is a more inspiring battle cry to scream, “Die, vicious scum!” instead of “Die, people who could have been just like me but grew up in a different environment!” You might feel guilty killing people who weren’t pure darkness.
This looks to me like the deep-seated yearning for a one-sided policy debate in which the best policy has no drawbacks. If an army is crossing the border or a lunatic is coming at you with a knife, the policy alternatives are (a) defend yourself or (b) lie down and die. If you defend yourself, you may have to kill. If you kill someone who could, in another world, have been your friend, that is a tragedy. And it is a tragedy. The other option, lying down and dying, is also a tragedy. Why must there be a non-tragic option? Who says that the best policy available must have no downside? If someone has to die, it may as well be the initiator of force, to discourage future violence and thereby minimize the total sum of death.
If the Enemy has an average disposition, and is acting from beliefs about their situation that would make violence a typically human response, then that doesn’t mean their beliefs are factually accurate. It doesn’t mean they’re justified. It means you’ll have to shoot down someone who is the hero of their own story, and in their novel the protagonist will die on page 80. That is a tragedy, but it is better than the alternative tragedy. It is the choice that every police officer makes, every day, to keep our neat little worlds from dissolving into chaos.
When you accurately estimate the Enemy’s psychology—when you know what is really in the Enemy’s mind—that knowledge won’t feel like landing a delicious punch on the opposing side. It won’t give you a warm feeling of righteous indignation. It won’t make you feel good about yourself. If your estimate makes you feel unbearably sad, you may be seeing the world as it really is. More rarely, an accurate estimate may send shivers of serious horror down your spine, as when dealing with true psychopaths, or neurologically intact people with beliefs that have utterly destroyed their sanity (Scientologists or Jesus Campers).
So let’s come right out and say it—the 9/11 hijackers weren’t evil mutants. They did not hate freedom. They, too, were the heroes of their own stories, and they died for what they believed was right—truth, justice, and the Islamic way. If the hijackers saw themselves that way, it doesn’t mean their beliefs were true. If the hijackers saw themselves that way, it doesn’t mean that we have to agree that what they did was justified. If the hijackers saw themselves that way, it doesn’t mean that the passengers of United Flight 93 should have stood aside and let it happen. It does mean that in another world, if they had been raised in a different environment, those hijackers might have been police officers. And that is indeed a tragedy. Welcome to Earth.
That's a very difficult question.
One part of the answer lies in understanding the nature of the debate. Basically, I think there are two kinds of debates:
A debate between two people who trust each other (at least to a point, like friends or family members), with no witnesses. In that case, the whole point of the debate is to discover the truth, hoping that by the end the two will agree (one conceding he was wrong, finding common ground in the middle, or finding a third option neither thought of at the beginning). It's quite easy to ask the other about his real goal (immediate or distant) and to assume he'll be honest about it.
A debate between two people who know perfectly well they won't convince each other, and who instead try to convince the witnesses of the debate. That's the typical debate between political candidates in an election. In that kind of debate, you have to be very skeptical of the claimed values (terminal or instrumental) of everyone involved. That kind of debate is very hard to handle in a non-mind-killing way.
Most real-life debates fall somewhere in between those two archetypes.
So I make that double distinction: between disagreement in expectation and disagreement in utility function, and between debating in order to get closer to the truth and debating in order to convince third parties. I don't have a magical solution for finding out for sure which case we're in, but I'd be glad to hear some tips/insights on the topic.
Another way to state it: to me, a political debate in front of witnesses is very much like a prisoner's dilemma. You can cooperate, by being truthful about your terminal and instrumental values, being honest, and pointing out the flaws of your own side when you see them. Or you can defect, by hiding your true values, hiding facts, avoiding your weak points, and even lying about facts.
If both cooperate, the debate will go smoothly and is likely to end with everyone closer to the truth than when they started. If both defect, the debate will get dirty, and the two debaters will end up more convinced of their own views than they were initially, but the witnesses will still, on average, end up closer to the truth, because I do believe it's easier to defend something true than something false. But if one cooperates and the other defects, the defector is very likely to convince the witnesses, regardless of whether he is right or wrong.
So for myself I tend to use "tit-for-tat with initial cooperation and forgiveness," as I do with anything I identify as an iterated prisoner's dilemma: I cooperate initially; if I get the feeling the other is defecting, I'll resort to defecting too (though I'll never go as far as openly lying, which is against my ethical code of conduct); but I'll still try to fall back to cooperating every now and then to see whether the other cooperates in return.
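That strategy resembles what game theorists call "generous tit-for-tat." A minimal sketch of it, in an iterated prisoner's dilemma—the payoff values and the forgiveness probability are illustrative assumptions, not anything from the discussion above:

```python
import random

# Illustrative payoffs (my move, other's move) -> (my score, other's score).
# Standard prisoner's dilemma ordering: assumed values, chosen for illustration.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate: honest debate, everyone gains
    ("C", "D"): (0, 5),  # I cooperate, other defects: the defector sways the audience
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # both defect: dirty debate, little gained
}

def forgiving_tit_for_tat(opponent_history, forgive_prob=0.1):
    """Cooperate on the first round; afterwards mirror the opponent's last
    move, but occasionally forgive a defection to probe for renewed
    cooperation."""
    if not opponent_history:
        return "C"  # initial cooperation
    last_move = opponent_history[-1]
    if last_move == "D" and random.random() < forgive_prob:
        return "C"  # forgiveness: test whether the other will cooperate again
    return last_move  # otherwise, plain tit-for-tat
```

With `forgive_prob=0`, this reduces to ordinary tit-for-tat; raising it trades some exploitability for a better chance of escaping a mutual-defection spiral, which matches the commenter's habit of periodically returning to cooperation to see how the other side responds.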
I think that debates of the first type are probably even rarer than you estimate; even when two people who trust each other, and who both have a deliberate intent to seek the truth, are arguing alone, political instincts and biases kick in pretty hard.
I do really like your overall strategy; I'll try to remember to occasionally turn down my politics-face a bit and see if the other is willing to return to a more cooperative state.