"Ask yourself at each step whether you would have made the same high or low evaluations had exactly the same study produced results on the other side of the issue." So, for example, if presented with a piece of research that suggested the death penalty lowered murder rates, the participants were asked to analyse the study's methodology and imagine the results pointed the opposite way.

They called this the "consider the opposite" strategy, and the results were striking. Instructed to be fair and impartial, participants showed the exact same biases when weighing the evidence as in the original experiment. (...) The "consider the opposite" participants, on the other hand, completely overcame the biased assimilation effect – they weren't driven to rate the studies which agreed with their preconceptions as better than the ones that disagreed, and didn't become more extreme in their views regardless of which evidence they read.

If someone showed me evidence that the Jews were killing Christian babies to bake their blood into matzohs, I would probably try to explain it away. But if someone were to show me evidence that Jews weren't doing such a thing, I'd probably accept it at face value. The reason: I already have good reason to believe that they don't. Given that I have good reason to believe that they don't, evidence that they do is a lot more likely to be flawed or have some nonobvious explanation than evidence that they don't.

Furthermore, the fact that someone is presenting evidence is itself evidence. If someone is presenting evidence that Jews eat babies, that may be evidence for widespread hatred of Jews, which would mean that the existing evidence that Jews eat babies, already weak, becomes weaker, and I should overall reduce my estimate of the probability that it is true. Of course, there must be some possible evidence that could increase my estimate, but it is certainly possible, for instance, to think "If a scientific consensus tells me X, my estimate of X goes up, but if a rabble-rouser in the street or a pollster tells me X in the absence of scientific consensus, my estimate of X goes down".
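
To make that concrete, here is a toy Bayes-in-odds-form calculation (all numbers and likelihood ratios are invented purely for illustration): the same claim X moves my estimate up or down depending on who is asserting it, because the source changes the likelihood ratio.

```python
# Toy illustration: posterior_odds = prior_odds * likelihood_ratio, where
# likelihood_ratio = P(source asserts X | X is true) / P(source asserts X | X is false).

def update(prior, likelihood_ratio):
    """Return the posterior probability of X after one odds-form Bayesian update."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.01  # I start out thinking X is quite unlikely

# A scientific consensus is far more likely to assert X when X is actually true.
print(update(prior, likelihood_ratio=20.0))  # ~0.17 -- estimate goes up

# A lone rabble-rouser or pollster, absent any consensus, is (if anything) slightly
# more likely to assert X when X is false, so the same claim pushes my estimate down.
print(update(prior, likelihood_ratio=0.8))   # ~0.008 -- estimate goes down
```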

I think the article is about something different. Something like this:

You prefer policy X.

You read in a newspaper "We asked random people on the street, and 9 out of 10 supported policy X."

You say: "Come on, they should finally give up, now that they see that pretty much everyone wants X."

Scenario A:

You are told to suspend all your biases for a moment, and evaluate impartially whether "9 out of 10 random people on the street" really makes a strong argument in favor of the policy.

Your brain automatically generates a clever excuse: "Of course, wisdom of the crowds, Aumann's agreement theorem, whatever you call it; if something makes sense so obviously that 9 out of 10 people notice it, we should take this very seriously!"

Scenario B:

You are told to imagine a parallel world, where the newspaper reported (let's assume truthfully) that when asking random people on the street, 9 out of 10 opposed policy X.

Your brain automatically generates a clever excuse: "But asking random people on the street is nonsense! If you want to evaluate the impact of a policy, you need to ask experts. Random people believe all kinds of stupid things. Also, how large was the random sample? Was it representative of the whole country? Without this information, the article shouldn't be taken seriously."

Then you return to the real world and notice that the same arguments apply in the opposite direction, too. Sure, your prior in favor of policy X remains the same, but you no longer consider the article in the newspaper strong evidence in your favor.
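
A toy calculation of that last step (numbers invented for illustration): whatever likelihood ratio your parallel-world objections would leave the opposite poll result also caps the ratio you can honestly assign to the real one, so it collapses toward 1 and your posterior barely moves from your prior.

```python
# Toy "consider the opposite" check:
# likelihood_ratio = P(poll reports 9/10 support | X is a good policy)
#                  / P(poll reports 9/10 support | X is a bad policy)

def posterior(prior, likelihood_ratio):
    """Posterior probability that X is a good policy, given the poll result."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.7  # you already favor policy X

# Before the check: the poll feels like strong evidence for X.
print(posterior(prior, likelihood_ratio=5.0))  # ~0.92

# After the check: the objections you raised in the parallel world (non-experts,
# unknown sample size) apply equally here, so the ratio shrinks toward 1.
print(posterior(prior, likelihood_ratio=1.1))  # ~0.72 -- barely above your prior
```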