Related to: Belief in Belief, Convenient Overconfidence
"You've no idea of what a poor opinion I have of myself, and how little I deserve it."
-- W.S. Gilbert
In 1978, Steven Berglas and Edward Jones performed a study on the voluntary use of performance-inhibiting drugs. They asked subjects to solve certain problems. The control group received simple problems, the experimental group impossible ones. The researchers then told all subjects they'd solved the problems successfully, leaving the controls confident in their own abilities and the experimental group privately aware they'd just made a very lucky guess.
Then they offered the subjects a choice of two drugs to test. One drug supposedly enhanced performance, the other supposedly handicapped it.
You may want to pause here and predict what happened before reading on.
Males in the lucky-guesser group chose the performance-inhibiting drug significantly more often than males in the control group [1].
The researchers conjectured that the lucky guessers felt good about acing the first test. They anticipated failing the second whether they chose the helpful or the harmful drug. So they chose the harmful drug as an excuse: "Oh, I would have passed the test, only the drug was making me stupid." As the study points out, this is a win-win situation: if they fail, the drug excuses their failure, and if they succeed it's doubly impressive that they passed even with a handicap.
Since this study, psychologists have applied self-handicapping to interpret common phenomena like procrastination, hypochondria, and minority underachievement; they've also discovered factors that increase or decrease the tendency to self-handicap (for example, you can increase it by making the test seem more relevant to subjects' self-esteem: just say "this is a proven test of intellectual ability in general").
But some especially interesting studies investigated the effect of privacy on self-handicapping. For example, Hobden conducted an experiment similar to Berglas and Jones', albeit with tapes of performance-enhancing or -handicapping music instead of drugs. The twist: for half the subjects, the choice of tape and the test score were public; the other half believed both were anonymous. What happens when no one but the subject himself will ever know his test score? He self-handicaps just as often as everyone else. And it seems to *work*: the same set of studies showed that subjects who self-handicap on a test are less likely to attribute their failure on the test to their own incompetence.
In order to self-handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there's no self-esteem to protect. If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results. The only time I would want to explain away the results as caused by some external factor would be if I'd been going around thinking my real IQ was 100.
But subjects must also have an accurate assessment of their own abilities. Subjects who take an easy pre-test and expect an easy test do not self-handicap. Only subjects who understand their low chance of success can think "I will probably fail this test, so I will need an excuse" [2].
If this sounds familiar, it's because it's another form of the dragon problem from Belief in Belief. The believer says there is a dragon in his garage, but expects all attempts to detect the dragon's presence to fail. Eliezer writes: "The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse."
Should we say that the subject believes he will get an 80, but believes in believing that he will get a 100? This doesn't quite capture the spirit of the situation. Classic belief in belief seems to involve value judgments and complex belief systems, but self-handicapping seems more like simple overconfidence bias [3]. Is there any other evidence that overconfidence has a belief-in-belief aspect to it?
Last November, Robin described a study in which subjects were less overconfident when asked to predict their performance on tasks they would actually be expected to complete. He ended by noting that "It is almost as if we at some level realize that our overconfidence is unrealistic."
Religious belief in belief and overconfident self-assessment seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level. The specifics differ in each case, but the same general mechanism may underlie both. How many other biases use this same mechanism?
Footnotes
1: In most studies of this effect, it is observed mainly among males. The reasons are too complicated and controversial to be discussed in this post, but are left as an exercise for the reader with a background in evolutionary psychology.
2: Compare the ideal Bayesian, for whom the expected future expectation always equals the current expectation, and investors in an ideal stock market, who must always expect a stock's price tomorrow to be on average the same as its price today, with this poor creature, who accurately predicts that he will lower his estimate of his intelligence after taking the test, but who doesn't use that prediction to change his pre-test estimate (see the sketch after these footnotes).
3: I have seen "overconfidence bias" used in two different ways: to mean poor calibration on guesses (i.e., predictions made with 99% certainty that are right only 70% of the time) and to mean the tendency to overestimate one's own good qualities and chances of success. I am using the latter definition here to remain consistent with common usage on Overcoming Bias; other people may call this same error "optimism bias".
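To spell out the claim in footnote 2: the "expected future expectation equals current expectation" property is the law of iterated expectations, and the stock-price version says that ideal prices form a martingale. The following LaTeX sketch is my own illustration, not part of the studies above; E_t denotes expectation given the information available at time t, X the quantity being estimated (say, one's test score), and P_t a stock's price.

% A minimal sketch (assumes the amsmath/amssymb packages for \mathbb).
% Law of iterated expectations: the ideal Bayesian's current estimate
% already incorporates whatever he expects to learn later.
\[
  \mathbb{E}_t\!\left[\mathbb{E}_{t+1}[X]\right] = \mathbb{E}_t[X]
\]
% Ideal stock market: tomorrow's expected price equals today's price,
% i.e. prices form a martingale.
\[
  \mathbb{E}_t[P_{t+1}] = P_t
\]
% The self-handicapper confidently predicts E_{t+1}[X] < E_t[X], that is,
% that his post-test estimate of his ability will drop, yet leaves his
% pre-test estimate E_t[X] unchanged, violating the first identity.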