General Juan Velasco Alvarado was the military dictator of Peru from 1968 to 1975. In 1964-65 he put down revolutionary peasant guerrilla movements, defending an unequal and brutally exploitative pattern of land ownership. Afterward he became frustrated with the bickering and gridlock of Peru’s parliament. With a small cadre of military co-conspirators, he planned a coup d’état. Forestalling an uprising by pro-peasant parties, he sent tanks to kidnap the democratically elected president. The parliament was closed indefinitely. On the one-year anniversary of his coup, Velasco stated, “Some people expected very different things and were confident, as had been the custom, that we came to power for the sole purpose of calling elections and returning to them all their privileges. The people who thought that way were and are mistaken”.[1]

What would you expect Velasco’s policy toward land ownership and peasants to be? You would probably expect him to continue their exploitation by the land-owning oligarchy. But you would be mistaken. Velasco and his successor redistributed 45% of all arable land in Peru to peasant-led communes, which were later broken up. Land redistribution is a rare spot of consensus in development economics: it both improves the lives of the poor and increases growth.[2]

I told you this story to highlight how your attitudes toward an actor affect your predictions. It is justifiable to dislike Velasco for his violence, for ending Peruvian democracy, and for his state-controlled economy. But our brains predict off of those value judgments. The affect heuristic (also known as the halo/horn effect) is when one positive or negative attribute of an actor leads people to assume positive or negative attributes in other areas. The affect heuristic causes attractive candidates to be hired more often and honest people to be rated as more intelligent. Subjects told about the benefits of nuclear power are likely to rate it as having fewer risks, et cetera. Our moral attitudes toward the coup are not evidence for Velasco’s policy preferences, but our brains treat them as evidence.[3] That is bad if you think predicting what policies autocrats will adopt is important. Which I do.

The same problem applies to agents you like. One of my research projects involved interviewing Jordanian policymakers. I studied policies for rural-urban water reallocation, which I broadly endorse. Because I agreed with some interviewees about this issue, I over-trusted them when selecting evidence. During fieldwork, I heard rumors of high radon activity in the water for the Disi-Amman conveyance. I even found a PowerPoint made by MWI staff advocating that Jordan’s drinking water standards be revised to protect the project. But I never looked deeply into the issue. I could have: I have a physics degree, I have operated spectroscopes, and I understand radiation physics well enough to parse radiology articles at a high level. But I never did, because I assumed that this evidence would support my positive attitude toward my interviewees. I never noticed the bias, because evidence selection is mostly subconscious. There is no time for probability-theory calculations when you have an hour to speak with a bureaucrat.

Eventually, I looked into the issue at the request of peer reviewers (when I had to). Even then, I looked for evidence only until it supported my conclusions. Radon evaporates out of water, so maybe it evaporates in the mixing reservoirs. I checked the long-time-horizon equilibrium, then assumed the evaporation was fast enough. Once the analysis agreed with my assumptions, I stopped looking for more information. I would have submitted this falsehood, but by chance I found a ministry paper revealing that transport reduces activity by just one eighth. At last I had to reinvestigate, and I concluded that the radon activity is a public health hazard worth considering. The effects of low doses of radiation are hotly debated, but the linear no-threshold model predicts an excess mortality risk greater than 10^-4 from a lifetime of consumption. Because I had a positive attitude toward the policymakers, I subconsciously avoided evidence that cast them in a negative light.
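
To make the arithmetic concrete, here is a minimal sketch of a linear no-threshold risk estimate in Python. Every number in it is an illustrative assumption chosen for the worked example: the tap-water activity, the intake rate, the dose coefficient, and the risk coefficient are not the figures from the ministry paper or from my analysis.

```python
# Minimal order-of-magnitude sketch of a linear no-threshold (LNT)
# risk estimate for radon ingested in drinking water. Every number
# below is an illustrative assumption, not a figure from the ministry
# paper or from my analysis.

activity_bq_per_l = 50.0   # assumed radon activity at the tap (hypothetical)
intake_l_per_day = 2.0     # typical adult drinking-water intake
lifetime_years = 70
dose_sv_per_bq = 1e-8      # assumed ingestion dose coefficient for Rn-222
risk_per_sv = 5e-2         # rough LNT mortality risk coefficient (assumed)

lifetime_intake_bq = activity_bq_per_l * intake_l_per_day * 365 * lifetime_years
lifetime_dose_sv = lifetime_intake_bq * dose_sv_per_bq
excess_risk = lifetime_dose_sv * risk_per_sv

print(f"Lifetime dose: {lifetime_dose_sv:.3f} Sv")
print(f"LNT excess mortality risk: {excess_risk:.1e}")
# With these invented inputs the risk lands near 1e-3, above the
# 10^-4 level mentioned in the text.
```

The point of the sketch is only that the order-of-magnitude check is cheap; the whole calculation fits in a dozen lines.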

What to do about the affect heuristic in comparative politics research? At first, I wrote an elegant ending paragraph suggesting that we neither care about nor assign moral value to people or institutions. But that answer relies on rejecting your own feelings, which is morally dubious and impossible anyway. If the goal of modern comparative politics is to connect political institutions to outcomes, we still have normative judgments attached to those outcomes. Brian Min wants poor people to have access to electricity. Michael Albertus wants poor people to own the land they work. Removing those concerns would not improve their work. Assigning all agency to institutions would merely attach the halo and horns to the institutions. In any case, waking up every day saying “I have no moral attachments” and talking like Spock does not “cure” you of preference.

I am unsure what researchers should do. My first two guesses are:

1. Use prediction exercises to calibrate yourself. I was overconfident in my assumptions, so prediction exercises should reduce my overconfidence (see the scoring sketch after this list).

2. Only write up the obvious conclusions from your case studies. I had plenty of subtle theories to explain Jordan’s policies, but only a few with strong evidence. I asked myself, “If a young man from Mafraq reproduced my methodology, would he arrive at the same conclusions?”, and left out ideas that failed the test.
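
For the first guess, here is a minimal sketch of one way to score such a prediction exercise in Python, assuming you write down a probability for each claim before checking it. The Brier score is a standard calibration measure; all the forecasts below are invented examples, not records from my project.

```python
# Minimal sketch of scoring a prediction exercise with the Brier score.
# The (probability, outcome) pairs are invented examples: before
# checking a claim, record your probability that it is true; after
# checking, record whether it was true (1) or false (0).

forecasts = [
    (0.90, 1),  # e.g. "the ministry will defend the project" -- true
    (0.80, 0),  # e.g. "radon evaporates in the reservoirs" -- false
    (0.60, 1),
    (0.95, 0),  # confident misses like this one inflate the score most
]

# Brier score: mean squared gap between stated probability and outcome.
# 0.0 is perfect; always answering 50% scores 0.25.
brier = sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")
```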

Bibliography

[1] Albertus, M. (2015). Autocracy and Redistribution. Cambridge University Press. p. 201.

[2] “Cultivating Equality: Land Reform’s Potential and Challenges.” World Politics Review. https://www.worldpoliticsreview.com/articles/13688/cultivating-equality-land-reforms-potential-and-challenges

[3] For the curious, Velasco claimed that while putting down the revolts he saw the conditions peasants lived in and resolved to change them. Albertus’s case study suggests he did it to destroy the power base of his rivals (rural elites) and prevent a second peasant uprising, thus securing his rule (Albertus, 2015).

I want to improve my writing and reasoning, so comments and critique are welcome! Write to the reader!

Comments

Only write up the obvious conclusions from your case studies. I had plenty of subtle theories to explain Jordan’s policies, but only a few with strong evidence. I asked myself, “If a young man from Mafraq reproduced my methodology, would he arrive at the same conclusions?”, and left out ideas that failed the test.

This feels like the traditional academic approach, but it results in at least a couple of weird effects based on patterns we see in academic publishing:

  • a large number of unpublished ideas that are known to insiders because they are shared only informally, but that still influence the results published in the field in ways that are opaque to outsiders and beyond comment or consideration
  • faking or exaggerating data/results in order to reach publication standards of evidence

This suggests attempts to reform academic publishing norms might be relevant here.

a large number of unpublished ideas that are known to insiders because they are shared only informally, but that still influence the results published in the field in ways that are opaque to outsiders and beyond comment or consideration

That is a great point. If I were describing my results to another expert who understood Bayesian reasoning, I would speak differently. Perhaps I will do a write-up in that framework.

faking or exaggerating data/results in order to reach publication standards of evidence

So fucking true. Or dropping disconfirming evidence, which is easy to do. I had peer reviewers ask me to do this. If I find time, I will post an anonymized quote.

Thanks for writing this post -- it's somewhat relevant to my work, and more importantly I really enjoyed reading about your experience with this project.