Why do people end up with differing conclusions, given the same data?

 

Model

The information we get from others cannot always be 100% relied upon.  Some of the people telling you stuff are liars, some are stupid, and some are incorrectly or insufficiently informed.  Even when the person giving you an opinion is honest, smart and well informed, they are still unlikely to be able to tell you accurately how reliable their own opinion is.

So our brains use an 'unreliability' factor.  We automatically take what others tell us and discount it by a certain amount, depending on how 'unreliable' we estimate the source to be.

We also compare what people tell us about 'known reference points' - things we are already confident about - in order to update our estimates of their unreliability.

If Sally tells me that vaccines cause AIDS, and I am very much more certain that this is not the case than I am of Sally's reliability, then instead of modifying my opinion about what causes AIDS, I modify my opinion of how reliable Sally is.

If I'm only slightly more certain, then I might take the step of asking Sally her reason for thinking that, and looking at her data.

If I have a higher opinion of Sally than of my own knowledge of science, and I don't much care (or am unaware) what other people think about the relationship between vaccines and AIDS, then I might just accept what she says, provisionally, without checking her data.

If I have a very much higher opinion of Sally, then not only will I believe her, but my opinion of her reliability will actually increase as I assess her as some mould-breaking genius who knows things that others do not.
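As a rough illustration of those four cases, here is a minimal Python sketch of the decision rule. The function name, the numeric thresholds and the 0-to-1 confidence scale are all my own assumptions rather than part of the model above:

```python
def receive_claim(belief_confidence, source_reliability, margin=0.2):
    """
    Toy version of the update rule described above.

    belief_confidence  - how certain I am that the claim is wrong (0 to 1)
    source_reliability - how reliable I currently estimate the source to be (0 to 1)
    margin             - how big the gap has to be before I stop bothering
                         to look at the source's data (arbitrary choice)
    """
    gap = belief_confidence - source_reliability

    if gap > margin:
        # Much more certain of my existing belief than of the source:
        # keep the belief, downgrade the source (the Sally/vaccines case).
        return "keep belief; lower my estimate of the source's reliability"
    elif gap > 0:
        # Only slightly more certain: worth asking for reasons and data.
        return "ask for their reasons and look at their data"
    elif gap > -margin:
        # The source is slightly ahead: provisionally accept the claim.
        return "provisionally accept the claim without checking the data"
    else:
        # The source is far ahead: accept the claim AND trust them even more.
        return "accept the claim and raise my estimate of their reliability"


# Example: I'm 0.95 sure vaccines don't cause AIDS, and I rate Sally at 0.6.
print(receive_claim(belief_confidence=0.95, source_reliability=0.6))
```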

 

Importantly, once we have altered our opinion based upon input that we originally considered fairly reliable, we are very bad at reversing that alteration if the input later turns out to be less reliable than we originally thought.  This is called the "continued influence effect", and we can use it to explain a number of things...

 

Experiment

Let us consider a thought experiment in which two subjects, Peter and Paul, are exposed to input about a particular topic (such as "Which clothes washing powder is it best to use?") from multiple sources.   Both will be exposed to the same 200 sources - 100 in favour of using the Persil brand of washing powder and 100 in favour of using the Bold brand - but in a different order.

If they both start off with no strong opinion in either direction, would we expect them to end the experiment with roughly the same opinion as each other, or can we manipulate their opinions into differing, just by changing the order in which the sources are presented?

Suppose we start Peter off with 10 of the Persil side's most reputable and well-argued sources, to raise his confidence in sources that support Persil.

We can then run another 30 much weaker pro-Persil sources past him, and he is likely to just nod and accept them, without bothering to examine the validity of the arguments too closely, because he's already convinced.

At this point, when he will consider a source a bit suspect straight away just because it doesn't support Persil, we introduce him to the pro-Bold side, starting with the least reliable - the ones that are obviously stupid or manipulative.   Furthermore, we don't let the pro-Bold side build up momentum.   For every three poor pro-Bold sources, we interrupt with a medium-reliability pro-Persil source that rehashes pro-Persil points Peter is by now familiar with and agrees with.

After seeing the worst 30 pro-Bold sources, Peter doesn't just consider them a bit suspect - he considers them downright deceptive, and mentally categorises all such sources as not worth paying attention to.   Any further pro-Bold sources, even ones that seem impartial and well reasoned, he's going to put down as fakes created by malicious researchers in the pay of an evil company.

We can now safely expose Peter to the medium-reliability pro-Bold sources, and even the good ones, and we will need less and less to refute them - just a reminder to Peter of 'which side he is on' - because it is less about the data now and more about identity: he doesn't see himself as the sort of person who'd support Bold.   He's not a sheep.  He's not taken in by the hoax.

Finally, after 80 pro-Persil sources and 90 pro-Bold sources have gone by, we present the last 10 excellent pro-Bold sources, whose independence and science can't fairly be questioned.   But it is too late for them to have much effect, and there are still 20 good pro-Persil sources left to balance them.

For Paul we do the reverse, starting with pro-Bold sources and only later introducing the pro-Persil side once a known reference point has been established as an anchor.
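To make that ordering concrete, here is one way Peter's and Paul's presentation schedules could be written down in Python. The counts are taken from the description above; the tuple format, the 1-3 quality scale, and the exact mix of medium and good sources in the middle stretch are illustrative assumptions, since the text doesn't pin them down:

```python
import random

def peter_schedule():
    """One possible ordering matching the description above.
    Each source is a (side, quality) tuple; quality 1 = poor, 2 = medium, 3 = good."""
    schedule = []

    # 1. 10 of the most reputable pro-Persil sources, to build trust in that side.
    schedule += [("Persil", 3)] * 10

    # 2. 30 much weaker pro-Persil sources, accepted without close scrutiny.
    schedule += [("Persil", 1)] * 30

    # 3. The worst 30 pro-Bold sources, interrupted after every three
    #    by a medium-reliability pro-Persil rehash (10 interruptions).
    for _ in range(10):
        schedule += [("Bold", 1)] * 3 + [("Persil", 2)]

    # 4. The medium and good pro-Bold sources (60 of them), with the
    #    remaining 30 medium pro-Persil reminders mixed in.
    middle = [("Bold", 2)] * 40 + [("Bold", 3)] * 20 + [("Persil", 2)] * 30
    random.shuffle(middle)
    schedule += middle

    # 5. Finally, the 10 excellent pro-Bold sources, balanced by the
    #    20 good pro-Persil sources held back for the end.
    schedule += [("Bold", 3)] * 10 + [("Persil", 3)] * 20

    return schedule  # 100 pro-Persil and 100 pro-Bold sources in total

def paul_schedule():
    """Paul gets the mirror image: the same schedule with the brands swapped."""
    return [("Bold" if side == "Persil" else "Persil", quality)
            for side, quality in peter_schedule()]
```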

 

Simulation

Obviously, things are rarely that clear-cut in real life.   But people also don't often get data from both sides of an argument at a precisely equal rate.   Their exposure bumps around randomly, and once one side accumulates some headway, it is unlikely to be reversed.

We could add a third subject, Mary, and consider what is likely to happen if she is exposed to a random succession of sources, each with a 50% chance of supporting one side or the other, and each with a random value on a scale of 1 (poor) to 3 (good) for honesty, validity, and strength of conclusion supported by the claimed data.

If we use mathematics to build some actual models of the points at which a source agreeing or disagreeing with you affects your estimate of its reliability, we can use a computer simulation of the above thought experiment to predict how different orders of presentation will affect people's final opinions under each model.   Then we could compare that against real-world data, to see which model best matches reality.
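As a sketch of what such a simulation might look like - using one arbitrary choice of update rule, not something the thought experiment commits to - here is a small Python program that runs Mary through a random schedule of sources, with trust in each side's sources adjusting according to how well they agree with her current belief:

```python
import random

def simulate(schedule, step=0.05):
    """
    Run one subject through a sequence of (side, quality) sources,
    where side is "Persil" or "Bold" and quality runs from 1 (poor) to 3 (good).

    Returns the final belief: positive means pro-Persil, negative means pro-Bold.

    The (arbitrary) model:
      * each source nudges belief in its direction, weighted by its quality
        and by how much the subject currently trusts that side's sources;
      * sources that agree with an already-held belief gain trust for their
        side, sources that contradict it lose trust - a crude version of
        the reliability updating described in the Model section.
    """
    belief = 0.0                          # -1 (pro-Bold) .. +1 (pro-Persil)
    trust = {"Persil": 0.5, "Bold": 0.5}  # perceived reliability of each side

    for side, quality in schedule:
        direction = +1 if side == "Persil" else -1
        agreement = direction * belief    # positive if the source agrees with me

        # Reliability update: agreeing sources gain trust, disagreeing ones lose it.
        trust[side] = min(1.0, max(0.0, trust[side] + step * agreement))

        # Belief update, discounted by how little this side's sources are trusted.
        belief = max(-1.0, min(1.0, belief + step * direction * quality * trust[side]))

    return belief

def random_schedule(n=200, seed=0):
    """Mary's input: each source 50/50 per side, quality uniform from 1 to 3."""
    rng = random.Random(seed)
    return [(rng.choice(["Persil", "Bold"]), rng.randint(1, 3)) for _ in range(n)]

if __name__ == "__main__":
    for run in range(5):
        final = simulate(random_schedule(seed=run))
        print(f"Mary, run {run}: final belief {final:+.2f}")
```

The same simulate() function could be fed the peter_schedule() and paul_schedule() orderings from the earlier sketch, so the engineered orderings and Mary's random ones can be compared under the same model.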

 

Prediction

I think that, if this experiment were carried out, one of the properties that would emerge naturally from it would be the backfire effect:

" The backfire effect occurs when, in the face of contradictory evidence, established beliefs do not change but actually get stronger. The effect has been demonstrated experimentally in psychological tests, where subjects are given data that either reinforces or goes against their existing biases - and in most cases people can be shown to increase their confidence in their prior position regardless of the evidence they were faced with. "

 

Further Reading

https://en.wikipedia.org/wiki/Confirmation_bias
https://en.wikipedia.org/wiki/Attitude_polarization
http://www.dartmouth.edu/~nyhan/nyhan-reifler.pdf
http://www.tandfonline.com/doi/abs/10.1080/17470216008416717
http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/
http://www.tandfonline.com/doi/abs/10.1080/14640749508401422
http://rationalwiki.org/wiki/Backfire_effect

Comments

In your post you're dealing entirely in hypotheticals. It would probably be useful to discuss real-life experiments and the degree to which they match what you think should happen. There are some in your "further reading" list (mislabeled References :-/), but you don't talk about them.