Max Abrahms, "The Credibility Paradox: Violence as a Double-Edged Sword in International Politics," International Studies Quarterly 2013.

Abstract: Implicit in the rationalist literature on bargaining over the last half-century is the political utility of violence. Given our anarchical international system populated with egoistic actors, violence is thought to promote concessions by lending credibility to their threats. From the vantage of bargaining theory, then, empirical research on terrorism poses a puzzle. For non-state actors, terrorism signals a credible threat in comparison to less extreme tactical alternatives. In recent years, however, a spate of studies across disciplines and methodologies has nonetheless found that neither escalating to terrorism nor with terrorism encourages government concessions. In fact, perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise. The apparent tendency for this extreme form of violence to impede concessions challenges the external validity of bargaining theory, as traditionally understood. In this study, I propose and test an important psychological refinement to the standard rationalist narrative. Via an experiment on a national sample of adults, I find evidence of a newfound cognitive heuristic undermining the coercive logic of escalation enshrined in bargaining theory. Due to this oversight, mainstream bargaining theory overestimates the political utility of violence, particularly as an instrument of coercion.

I found this via Bruce Schneier's blog, which frequently features valuable analysis of societal and computer security.


A very interesting paper. The main thesis is that when extremely harmful tactics are used, those who employ them are generally perceived by their victims as having extremely harmful goals. The victims thus become more motivated to oppose those goals, rather than more motivated to comply.

I've summarized the thesis this way because I do have one quibble: while the paper is usually neutral about this whole process (as in the summary, where the word "heuristic" is used for it), there are a couple of places that make it sound like this is a bias or a mistake in the way people understand the motives of violent agents. But agents willing to resort to violence really are dangerous, and there's plenty of historical evidence that it's not unusual for them to escalate, or to continue to employ violence in pursuit of further ends. Inferring hostility from violence also doesn't look particularly like a mistake, even if there may be rare cases where it is misleading. Humans are reluctant to inflict violence on one another, and often they overcome this reluctance by rationalizing that their victims deserve it, or in other words by developing hostility toward their victims. Perhaps the hostility only developed because the violent agent had some other agenda, but it's quite likely to be present, so the victim is quite sensible to be concerned about it.

This only codifies the bias in our reasoning. If the feedback loop is as severe as your reasoning suggests, then rational agents aware of this bias are all the more necessary to start disabling it. Nobody "wins" a war; one side just gets its demands met to some degree. That's a far cry from "winning" by any utility function that places much value on human life.

By a feedback loop, do you mean a process whereby uses of violence are likely to provoke violent responses, making everybody less willing to compromise? If so, then I entirely agree that this is worth examining, and I wish I could figure out what I said that made it seem like you think you're saying something I'd disagree with.

Ah, interesting. I was about to explain why I disagreed with you, but on re-reading your comment, I realized I don't disagree. I understand your comment and agree with it as far as it goes, but there is much more to say about this than where your comment ends. It's not that I disagree with the content, but with the implications that arise from its having been stated the way you stated it.

This exchange basically went like this:

  1. Common reasoning.
  2. Reaction to common reasoning, seeing how it plays out over time.
  3. Perception of intended disagreement, while agreeing completely.
  4. Clarification that no disagreement or other ill will is present.

Unfortunately, I don't know how to proceed from here to actually steer the future towards optimal (maximum?) non-violence.

"neither escalating to terrorism nor with terrorism encourages government concessions"

Is there a word missing there?

What is the cognitive bias and why is this relevant to LW? (The first does not answer the second!)

See my other comment. Effectively:

Humanity has a tendency to get caught in feedback loops of violence, coloring our reasoning on bargaining theory.

Did you mean to suggest disabling (the bias for) violence isn't a useful and/or LessWrong-relevant topic? Downvoted until further notice.

[This comment is no longer endorsed by its author]

Yes, violence is an important topic. Why this paper? Is this the best paper ever written on the topic? Is it in the top 80%? Your answer seems to be that it contains the word "bias." You still haven't identified a cognitive bias. If you did, I would concede that it is in the top 50% of all papers ever written on violence.

To make a positive contribution, I suggest that people interested in violence read Randall Collins.

Downvote retracted.