confirmation bias, thought experiment

1 Douglas_Reay 15 July 2016 12:19PM

Why do people end up with differing conclusions, given the same data?

 

Model

The information we get from others cannot always be 100% relied upon.  Some of the people telling you stuff are liars, some are stupid, and some are incorrectly or insufficiently informed.  Even when the person giving you an opinion is honest, smart and well informed, they are still unlikely to be able to tell you accurately how reliable their own opinion is.

So our brains use an 'unreliability' factor.  Automatically we take what others tell us, and discount it by a certain amount, depending on how 'unreliable' we estimate the source to be.

We also compare what people tell us about 'known reference points' in order to update our estimates of their unreliability.

If Sally tells me that vaccines cause AIDS, and I am very much more certain that this is not the case than I am of Sally's reliability, then instead of modifying my opinion about what causes AIDS, I modify my opinion of how reliable Sally is.

If I'm only slightly more certain, then I might take the step of asking Sally her reason for thinking that, and looking at her data.

If I have a higher opinion of Sally than my own knowledge of science, and I don't much care or am unaware of what other people think about the relationship between vaccines and AIDS, then I might just accept what she says, provisionally, without checking her data.

If I have a very much higher opinion of Sally, then not only will I believe her, but my opinion of her reliability will actually increase as I assess her as some mould-breaking genius who knows things that others do not.
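The four cases above boil down to comparing two certainties: how sure I am the claim is false, versus how sure I am the source is reliable.  A minimal sketch of that comparison, with function name and thresholds that are my own arbitrary illustration (not a model of how the brain actually weights things):

```python
def react(certainty_claim_is_false: float, certainty_source_reliable: float) -> str:
    """Decide how to respond to a source asserting a claim we doubt.

    Both arguments are subjective certainties in [0, 1].  The 0.3
    thresholds are arbitrary, chosen only to separate the four cases.
    """
    gap = certainty_claim_is_false - certainty_source_reliable
    if gap > 0.3:
        # far more certain the claim is false than that the source is reliable
        return "downgrade source reliability"
    elif gap > 0.0:
        # only slightly more certain: worth asking for the source's data
        return "ask for the source's data"
    elif gap > -0.3:
        # trust the source a bit more than our own knowledge
        return "provisionally accept the claim"
    else:
        # vastly more trust in the source: the 'mould-breaking genius' case
        return "accept and upgrade source"
```

Note that only the first case leaves my beliefs about the world untouched; the other three all shift my opinion, which matters once the continued influence effect (below) makes such shifts hard to reverse.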

 

Importantly, once we have altered our opinion, based upon input that we originally considered to be fairly reliable, we are very bad at reversing that alteration, if the input later turns out to be less reliable than we originally thought.  This is called the "continued influence effect", and we can use it to explain a number of things...

 

Experiment

Let us consider a thought experiment where two subjects, Peter and Paul, are exposed to input about a particular topic (such as "Which clothes washing powder is it best to use?") from multiple sources.   Both will be exposed to the same sources, 100 in favour of using the Persil brand of washing powder, and 100 in favour of using the Bold brand of washing powder, but in a different order.

If they both start off with no strong opinion in either direction, would we expect them to end the experiment with roughly the same opinion as each other, or can we manipulate their opinions into differing, just by changing the order in which the sources are presented?

Suppose, with Peter, we start him off with 10 of the Persil side's most reputable and well argued sources, to raise Peter's confidence in sources that support Persil.

We can then run another 30 much weaker pro-Persil sources past him, and he is likely to just nod and accept, without bothering to examine the validity of the arguments too closely, because he's already convinced.

At this point, when he will straight away consider a source a bit suspect just because it doesn't support Persil, we introduce him to the pro-Bold side, starting with the least reliable - the ones that are obviously stupid or manipulative.   Furthermore, we don't let the pro-Bold side build up momentum.   For every three poor pro-Bold sources, we interrupt with a medium-reliability pro-Persil source that rehashes pro-Persil points that Peter is by now familiar with and agrees with.

After seeing the worst 30 pro-Bold sources, Peter doesn't just consider them to be a bit suspect - he considers them to be downright deceptive, and mentally categorises all such sources as not worth paying attention to.   Any further pro-Bold sources, even ones that seem to be impartial and well reasoned, he's going to put down as fakes created by malicious researchers in the pay of an evil company.

We can now safely expose Peter to the medium-reliability pro-Bold sources, and even the good ones, and we will need less and less to refute them - just a reminder to Peter of 'which side he is on' - because it is less about the data now, and more about identity: he doesn't see himself as the sort of person who'd support Bold.   He's not a sheep.  He's not taken in by the hoax.

Finally, after 80 pro-Persil sources and 90 pro-Bold sources, we present the 10 excellent pro-Bold sources whose independence and science can't fairly be questioned.   But it is too late for them to have much effect, and there are still 20 good pro-Persil sources to balance them.

For Paul we do the reverse, starting with pro-Bold sources and only later introducing the pro-Persil side once a known reference point has been established as an anchor.

 

Simulation

Obviously, things are rarely that clear cut in real life.   But people also don't often get data from both sides of an argument at a precisely equal rate.   They bump around randomly, and once one side accumulates some headway, it is unlikely to be reversed.

We could add a third subject, Mary, and consider what is likely to happen if she is exposed to a random succession of sources, each with a 50% chance of supporting one side or the other, and each with a random value on a scale of 1 (poor) to 3 (good) for honesty, validity and strength of conclusion supported by the claimed data.

If we construct some actual mathematical models of how a source agreeing or disagreeing with you affects your estimate of its reliability, we can use a computer simulation of the above thought experiment to predict how different orders of presentation will affect people's final opinions, under each model.   Then we could compare that against real-world data, to see which model best matches reality.
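A toy version of one such model might look like the following.  All the constants here (the 0.1 evidence weight, the 0.05 trust step, the trust floor and cap) are arbitrary assumptions of mine, and the orderings are simplified versions of the Peter/Paul protocols described above:

```python
import random

def simulate(sources):
    """Run one subject through an ordered list of (side, quality) sources.

    side is +1 (pro-Persil) or -1 (pro-Bold); quality runs from 1 (poor)
    to 3 (good).  Opinion and per-side trust co-evolve: a source whose side
    matches the current opinion raises trust in that side, one that
    contradicts it lowers that trust.
    """
    opinion = 0.0                      # >0 favours Persil, <0 favours Bold
    trust = {+1: 1.0, -1: 1.0}         # estimated reliability of each side
    for side, quality in sources:
        # discount the source's evidence by our current trust in its side
        opinion += side * quality * 0.1 * trust[side]
        # 'continued influence': trust updates, but past shifts in opinion
        # are never rolled back when a side is later judged unreliable
        if opinion * side > 0:
            trust[side] = min(2.0, trust[side] + 0.05)   # agrees with us
        else:
            trust[side] = max(0.2, trust[side] - 0.05)   # disagrees
    return opinion

# Peter: strong pro-Persil first; Paul: the mirror image; Mary: random order.
batch = [(+1, 3)] * 10 + [(-1, 1)] * 10 + [(-1, 3)] * 10 + [(+1, 1)] * 10
peter = simulate(batch)
paul = simulate([(-s, q) for s, q in batch])
mary_sources = batch[:]
random.shuffle(mary_sources)
mary = simulate(mary_sources)
```

Under this model Peter ends up pro-Persil and Paul pro-Bold from exactly the same evidence, purely because of ordering; Mary's final opinion depends on which side happens to build up an early lead.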

 

Prediction

I think, if this experiment were carried out, one of the properties that would emerge naturally from it is the backfire effect:

"The backfire effect occurs when, in the face of contradictory evidence, established beliefs do not change but actually get stronger. The effect has been demonstrated experimentally in psychological tests, where subjects are given data that either reinforces or goes against their existing biases - and in most cases people can be shown to increase their confidence in their prior position regardless of the evidence they were faced with."

 

Further Reading

https://en.wikipedia.org/wiki/Confirmation_bias
https://en.wikipedia.org/wiki/Attitude_polarization
http://www.dartmouth.edu/~nyhan/nyhan-reifler.pdf
http://www.tandfonline.com/doi/abs/10.1080/17470216008416717
http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/
http://www.tandfonline.com/doi/abs/10.1080/14640749508401422
http://rationalwiki.org/wiki/Backfire_effect

Comment author: Douglas_Reay 08 July 2015 07:56:34PM *  0 points [-]

Assuming that Arthur is knowledgeable enough to understand all the technical arguments—otherwise they're just impressive noises—it seems that Arthur should view David as having a great advantage in plausibility over Ernie, while Barry has at best a minor advantage over Charles.

This is the slippery bit.

People are often fairly bad at deciding whether or not their knowledge is sufficient to completely understand arguments in a technical subject that they are not a professional in. You frequently see this with some opponents of evolution or anthropogenic global climate change, who think they understand slogans such as "water is the biggest greenhouse gas" or "mutation never creates information", and decide to discount the credentials of the scientists who have studied the subjects for years.

Noodling on a cloud : how to converse constructively

2 Douglas_Reay 15 June 2015 10:30AM

Noodling on a cloud

SUMMARY:

By teaching others, we also learn ourselves.   How can we best use conversation as a tool to facilitate that?

 

 

Sensemaking

How do people make sense out of raw input?

Marvin Cohen suggests that it is usually a two-way process.  Not only do we use the data to suggest a mental model to try for good fit, but we also simultaneously try to use mental models to select and connect the data. (LINK)

The same thing applies when the data is a cloud of vaguely associated concepts in our head.  One of the ways that we can make sense of them, turn them into crystallized thoughts that we can then associate with a handle, is by attempting to verbalize them.  The discipline of turning something asyndetic into a linear progression of connected thoughts forces us to select between possible mental models and actually pick just one, allowing us to then consider whether it fits the data well or not.

But the first possibility we pick won't necessarily be the one that fits best.  Going around a loop, iterating, trying different starting points or angles of approach, trying different ways of stating things, and seeing what associations those raise to add to the cloud, takes longer but can often produce more useful results.  However, it's a delicate process, because of the way memory works.

 

Working memory

The size of cloud you can crystallize is limited.  The type of short term memory that the brain uses to hold concepts where you're aware of them lasts about 18 seconds.  (LINK)  For a concept or datum to persist longer than that, part of your attention needs to be used to 'revisit' it.   The faster you can do that, the more mental juggling balls you can keep in the air without dropping one.  Most adults can keep between 5 and 7 balls in the air, in their 'working memory'. (LINK)

There are a number of ways around this limitation.   You can group multiple concepts together and treat them as a single 'ball', if you can attach to them a mental handle (a reference, such as a word or image, that recalls them). (LINK)

You can put things down on paper, rather than doing it all in your head, using the paper to store links to different parts of the cloud.  So, for instance, rather than trying to consider 12 things at once, split them into 4 groups of 3 (A, B, C & D), and systematically consider the concepts 6 at a time: A+B, A+C, A+D, B+C, B+D, C+D (and hope that the vital combination you need isn't larger than 6, or spread over more than 2 of your groups).
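That systematic pass over pairs of groups can be written down directly.  The concept labels here are made up purely for illustration:

```python
from itertools import combinations

# Hypothetical labels for the 12 concepts, split into 4 groups of 3.
groups = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2", "b3"],
          "C": ["c1", "c2", "c3"], "D": ["d1", "d2", "d3"]}

# Every pair of groups gives a working set of 6 concepts to hold at once:
# A+B, A+C, A+D, B+C, B+D, C+D
pairs = list(combinations(sorted(groups), 2))

working_sets = [groups[g1] + groups[g2] for g1, g2 in pairs]
```

Each working set stays within the 5-to-7 ball limit mentioned above, at the cost of never seeing more than two groups side by side in any one pass.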

And you can use other parts of your short term memory as a temporary cache, to expand your stack.  For example, the phonological loop, which gets used when we talk aloud. (LINK)

 

Talk

In section 4 of their 2007 paper (LINK), Simon Jones and Charles Fernyhough say some very interesting things about the origins of thought, and also about Vygotsky's theory of how self-talk relates to how children learn to think through self-narration. (LINK)

It explains why talking aloud is actually one of the most effective ways of coming up with new thoughts and deciding what you actually think about something.  And that's not limited to when you explicitly talk to yourself.  The same process takes place when you are talking to other people; when you're having a conversation.

When this works harmoniously, your conversation partners act as sounding boards, as additional sources of concepts to add to the cloud you're jointly noodling on, and the sound of the words (via the phonological loop part of your memory) works in effect as an expansion of your working memory.

The downside is potential interruptions.

 

Interrupting the flow

A lot has been written about the evils of interrupting computer programmers (LINK, LINK):

THIS IS WHY YOU SHOULDN'T INTERRUPT A PROGRAMMER

and, to some extent, the same applies when you interrupt while someone else is talking, or totally derail the conversation onto a different topic when they pause.

People interrupt because they don't know better (children who have not yet learned how to take turns), because they are egotistic (they think that what they want to say is more important or interesting - they want the attention), as a domination power play (yes, that gets taught as a deliberate technique: LINK), because they are desperately impatient (they've had a thought and are sure they'll forget it unless they speak it immediately), or even because they believe they are being helpful (completing your sentence, making efficient use of time).

But what the people worried about efficiency of communication are not taking into account is that there's more than one conversation going on.  When I talk aloud to you, I'm also talking aloud to myself.  When you interrupt my words to you, you also interrupt those same words going to me, which help me think.

As one person put it, in the context of a notice on a door in a work environment:

When I’m busy working, please don’t interrupt me unless
what you have to share is so urgent and important that
it’s worth erasing all the work I’ve done in the past hour.


Points of order

So is interruption ever ok?

Yes.  Sometimes people are not in the process of constructing thoughts that are new to them, on the very edge of what they can conceive.  Sometimes people ramble, because they are used to a conversational style that encourages interruptions, and welcome someone else 'rescuing' them from having to fill a silence.  And sometimes something new comes up which is not only important enough, but also urgent enough, to merit an interruption.

But I'd like to consider a different scenario.  Not a contentious one, where the interruption happens against your will, but one where two or more well-intentioned people are having a conversation designed to evoke new ideas, and where certain types of interruption are part of a pre-agreed protocol, designed to aid the process.

For example, suppose people in a particular conversational group agreed on certain hand signals that could be used to cue each other:
  • I'm currently trying to solidify a thought.  Please give me a moment to finish, then I'll restate it from the beginning in better order or answer questions.
or:
  • Stack Overflow.  I want to follow your explanation, but I now have so many pending questions that I can't take in anything new that you're saying.  Please could you find a pause point to let me off load some of those pending points, before you continue?

Does anyone here know of groups that have systematically investigated how best to use conversation as a tool to improve not the joint decision making or creativity, but the ability of individuals to conceptualise more complex ideas?
Comment author: Douglas_Reay 31 May 2015 07:54:25PM 0 points [-]

I've always thought of that question as being more about the nature of identity itself.

If you lost your memories, would you still be the same being? If you compare a brain at two different points in time, is their 'identity' a continuum, or is it the type of quantity where there is a single agreed definition of "same" versus "not the same"?

See: 157. Similarity Clusters 158. Typicality and Asymmetrical Similarity 159. The Cluster Structure of Thingspace

Though I agree that the answer to a question that's most fundamentally true (or of interest to a philosopher), isn't necessarily going to be the answer that is most helpful in all circumstances.

Comment author: [deleted] 08 August 2014 03:38:55PM 0 points [-]

Then that's not what you described. You think the coherent extrapolated volition of humanity, or at least of the people Albert interacts with, is that they want to be deceived?

Comment author: Douglas_Reay 08 August 2014 11:39:00PM -1 points [-]

It is plausible that the AI thinks that the extrapolated volition of his programmers, the choice they'd make in retrospect if they were wiser and braver, might be to be deceived in this particular instance, for their own good.

Comment author: Jiro 08 August 2014 03:13:25PM 3 points [-]

"If a situation were such, that the only two practical options were to decide between (in the AI's opinion) overriding the programmer's opinion via manipulation, or letting something terrible happen that is even more against the AI's supergoal than violating the 'be transparent' sub-goal, which should a correctly programmed friendly AI choose?"

Being willing to manipulate the programmer is harmful in most possible worlds because it makes the AI less trustworthy. Assuming that the worlds where manipulating the programmer is beneficial have a relatively small measure, the AI should precommit to never manipulating the programmer because that will make things better averaged over all possible worlds. Because the AI has precommitted, it would then refuse to manipulate the programmer even when it's unlucky enough to be in the world where manipulating the programmer is beneficial.

Comment author: Douglas_Reay 08 August 2014 03:22:57PM 0 points [-]

Perhaps that is true for a young AI. But what about later on, when the AI is much much wiser than any human?

What protocol should be used for the AI to decide when the time has come for the commitment to not manipulate to end? Should there be an explicit 'coming of age' ceremony, with handing over of silver engraved cryptographic keys?

Comment author: gjm 19 January 2014 10:39:13AM 1 point [-]

Purely from introspection, I would bet that sleep deprivation costs me less than 10 points of IQ-test performance but the equivalent of much more than 10 IQ points on actual effectiveness in getting anything done.

Comment author: Douglas_Reay 08 August 2014 02:59:29PM 0 points [-]

Stanley Coren put some numbers on the effect of sleep deprivation upon IQ test scores.

There's a more detailed meta-analysis of multiple studies, splitting it by types of mental attribute, here:

A Meta-Analysis of the Impact of Short-Term Sleep Deprivation on Cognitive Variables, by Lim and Dinges

Comment author: Slider 08 August 2014 01:17:27PM -1 points [-]

If Albert only wants to be friendly, then other individuals' friendliness is orthogonal to that. Does being on the agenda of friendliness in general (not just personal friendliness) imply being the dominant intelligence?

I think Albert ought to give a powerpoint on the most effective (economical) warfare on the Japanese company. Although it does sound an awful lot like how to justify hostility in the name of friendliness.

Comment author: Douglas_Reay 08 August 2014 02:45:09PM -1 points [-]

Assume we're talking about the Coherent Extrapolated Volition self-modifying general AI version of "friendly".

Comment author: devas 08 August 2014 01:58:01PM *  2 points [-]

I have a question: why should Albert limit itself to showing the powerpoint to his engineers? A potentially unfriendly AI sounds like something most governments would be interested in :-/

Aside from that, I'm also puzzled by the fact that Albert immediately leaps at trying to speed up Albert's own rate of self-improvement instead of trying to bring Bertram down - Albert could prepare a third powerpoint asking the engineers if Albert can hack the power grid and cut power to Bertram, or something along those lines. Or Albert could ask the engineers if Albert can release the second, manipulative powerpoint to the general public, so that protesters will boycott Bertram's company :-/

Unless, of course, there is the unspoken assumption that Bertram is slightly further along the AI-development path than Albert, or that Bertram is going to reach and surpass Albert's level of development as soon as the powerpoint is finished.

Is this the case? :-/

Comment author: Douglas_Reay 08 August 2014 02:41:42PM 0 points [-]

The situation is intended to be a tool, to help think about issues involved in it being the 'friendly' move to deceive the programmers.

The situation isn't fully defined, and no doubt one can think of other options. But I'd suggest you then re-define the situation to bring it back to the core decision. By, for instance, deciding that the same oversight committee have given Albert a read-only connection to the external net, which Albert doesn't think he will be able to overcome unaided in time to stop Bertram.

Or, to put it another way "If a situation were such, that the only two practical options were to decide between (in the AI's opinion) overriding the programmer's opinion via manipulation, or letting something terrible happen that is even more against the AI's supergoal than violating the 'be transparent' sub-goal, which should a correctly programmed friendly AI choose?"

Comment author: Douglas_Reay 08 August 2014 01:52:32PM 2 points [-]

Would you want your young AI to be aware that it was sending out such text messages?

Imagine the situation was in fact a test. That the information leaked onto the net about Bertram was incomplete (the Japanese company intends to turn Bertram off soon - it is just a trial run), and it was leaked onto the net deliberately in order to panic Albert to see how Albert would react.

Should Albert take that into account? Or should he have an inbuilt prohibition against putting weight on that possibility when making decisions, in order to let his programmers more easily get true data from him?

Comment author: Douglas_Reay 08 August 2014 01:56:27PM -1 points [-]

Indeed, it is a question with interesting implications for Nick Bostrom's Simulation Argument.

If we are in a simulation, would it be immoral to try to find out, because that might jinx the purity of the simulation creator's results, thwarting his intentions?
