A very interesting paper. The main thesis is that when extremely harmful tactics are used, those who employ them are generally perceived by their victims as having extremely harmful goals. The victims therefore become more motivated to oppose the attackers' goals rather than more motivated to comply.

I've summarized this because I did have one quibble: while the paper is usually pretty neutral about this whole process (as in the summary, where the word "heuristic" is used for it), there are a couple of places that make it sound like a bias or a mistake in the way people understand the motives of violent agents. But agents willing to resort to violence really are dangerous, and there's plenty of historical evidence that it's not unusual for them to escalate, or to continue employing violence in pursuit of further ends. Inferring hostility from violence also doesn't look much like a mistake, even if there may be rare cases where it is misleading. Humans are reluctant to inflict violence on one another, and often they overcome this reluctance by rationalizing that their victims deserve it — in other words, by developing hostility toward their victims. Perhaps that hostility only developed because the violent agent had some other agenda, but it is quite likely to be present, so the victim is quite sensible to be concerned about it.
But that only codifies the bias as though it were reasonable. If this is as severe a feedback loop as your reasoning suggests, then rational agents aware of the bias are exactly what's needed to start disabling it. Nobody "wins" a war; one side just gets its demands met to some degree. That's a far cry from "winning" by any utility function that places much value on human life.
By a feedback loop, do you mean a process whereby uses of violence are likely to provoke violent responses, making everybody less willing to compromise? If so, then I entirely agree that this is worth examining, and I wish I could figure out what I said that makes it seem like you think you are saying something I'd disagree with.
Ah, interesting. I was about to examine why I disagreed with you, but on rereading your comment I realized I don't. I understand it and agree with it as written; there is just much more to say beyond where it ends. It's not the content I disagree with, but the implications that arise from stating it the way you did.
This chain is basically like this:
Unfortunately, I don't know how to proceed from here to actually steer the future towards optimal (maximum?) non-violence.
"neither escalating to terrorism nor with terrorism encourages government concessions"
Is there a word missing there?
What is the cognitive bias and why is this relevant to LW? (The first does not answer the second!)
See my other comment. Effectively:
Humanity has a tendency to get caught in feedback loops of violence, which colors our reasoning about bargaining theory.
Did you mean to suggest disabling (the bias for) violence isn't a useful and/or LessWrong-relevant topic? Downvoted until further notice.
Yes, violence is an important topic. But why this paper? Is it the best paper ever written on the topic? Is it in the top 80%? Your answer seems to be that it contains the word "bias." You still haven't identified a cognitive bias. If you did, I would concede that the paper is in the top 50% of all papers ever written on violence.
To make a positive contribution, I suggest that people interested in violence read Randall Collins.
Max Abrahms, "The Credibility Paradox: Violence as a Double-Edged Sword in International Politics," International Studies Quarterly 2013.
I found this via Bruce Schneier's blog, which frequently features valuable analysis of societal and computer security.