Related to: Belief in Belief, Convenient Overconfidence

     "You've no idea of what a poor opinion I have of myself, and how little I deserve it."

      -- W.S. Gilbert 

In 1978, Steven Berglas and Edward Jones performed a study on voluntary use of performance-inhibiting drugs. They asked subjects to solve certain problems. The control group received simple problems; the experimental group, impossible ones. The researchers then told all subjects they'd solved the problems successfully, leaving the controls confident in their own abilities and the experimental group privately aware they'd just made some very lucky guesses.

Then they offered the subjects a choice of two drugs to test. One drug supposedly enhanced performance, the other supposedly handicapped it.

There's a cut here in case you want to predict what happened.


Males in the lucky-guesser group chose the performance-inhibiting drug significantly more often than those in the control group.¹

The researchers conjectured that the lucky guessers felt good about acing the first test. They anticipated failing the second whether they chose the helpful or the harmful drug. So they chose the harmful drug as an excuse: "Oh, I would have passed the test, only the drug was making me stupid." As the study points out, this is a win-win situation: if they fail, the drug excuses their failure, and if they succeed it's doubly impressive that they passed even with a handicap.

Since this study, psychologists have applied self-handicapping to interpret common phenomena like procrastination, hypochondria, and minority underachievement; they've also discovered factors that increase or decrease self-handicapping tendencies (for example, you can increase them by making the test more relevant to the subject's self-esteem: just say "this is a proven test of intellectual ability in general").

But some especially interesting studies investigated the effect of privacy on self-handicapping. For example, Hobden conducted an experiment similar to Berglas and Jones's, albeit with tapes of performance-enhancing or performance-handicapping music instead of drugs. The twist: half the subjects made their choice of tape and received their test scores publicly, while the other half believed their choices and scores were anonymous. What happens when no one but the subject himself will ever know his test score? He self-handicaps just as often as everyone else. And it seems to *work*: the same set of studies showed that subjects who self-handicap on a test are less likely to attribute their failure on the test to their own incompetence.

In order to handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there's no self-esteem to protect. If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results. The only time I would want to explain away the results as based on some external factor would be if I'd been going around thinking my real IQ was 100.

But subjects also must have an accurate assessment of their own abilities. Subjects who take an easy pre-test and expect an easy test do not self-handicap. Only subjects who understand their low chances of success can think "I will probably fail this test, so I will need an excuse."²

If this sounds familiar, it's because it's another form of the dragon problem from Belief in Belief. The believer says there is a dragon in his garage, but expects all attempts to detect the dragon's presence to fail. Eliezer writes: "The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse." 

Should we say that the subject believes he will get an 80, but believes in believing that he will get a 100? This doesn't quite capture the spirit of the situation. Classic belief in belief seems to involve value judgments and complex belief systems, whereas self-handicapping seems more like simple overconfidence bias.³ Is there any other evidence that overconfidence has a belief-in-belief aspect to it?

Last November, Robin described a study where subjects were less overconfident when asked to predict their performance on tasks they would actually be expected to complete. He ended by noting that "It is almost as if we at some level realize that our overconfidence is unrealistic."

Religious faith and self-confidence seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level. The specifics are different in each case, but perhaps the same general mechanism underlies both. How many other biases use this same mechanism?

Footnotes

1: In most studies of this effect, it is observed mainly among males. The reasons are too complicated and controversial to discuss in this post, but are left as an exercise for the reader with a background in evolutionary psychology.

2: Compare the ideal Bayesian, for whom the expected future expectation is always the same as the current expectation, and investors in an ideal stock market, who must always expect a stock's price tomorrow to be on average the same as its price today, to this poor creature, who accurately predicts that he will lower his estimate of his intelligence after taking the test, but who doesn't use that prediction to change his pre-test estimate.
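
Formally, this is the law of iterated expectations. As a quick illustrative sketch (a standard identity, not specific to this post), with E for expectation and e ranging over the possible test results:

```latex
% Law of iterated expectations: the current estimate of X must equal
% the probability-weighted average of the estimates one would hold
% after observing each possible result e. A Bayesian who already
% expects his post-test estimate to be lower should lower it now.
\[
E[X] \;=\; \sum_{e} P(e)\, E[X \mid e]
\]
```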

3: I have seen "overconfidence bias" used in two different ways: to mean poor calibration on guesses (i.e. predictions made with 99% certainty that are right only 70% of the time), and to mean the tendency to overestimate one's own good qualities and chances of success. I am using the latter definition here to remain consistent with common usage on Overcoming Bias; other people may call this same error "optimism bias".
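
To make the first (calibration) sense concrete, here is a small illustrative sketch with made-up numbers, not data from any study:

```python
# Illustrative only: calibration-style overconfidence means stated
# confidence exceeds the actual hit rate. Hypothetical predictions.
predictions = [
    (0.99, True), (0.99, False), (0.99, True), (0.99, False),
    (0.99, True), (0.99, True), (0.99, False), (0.99, True),
    (0.99, True), (0.99, True),
]

stated = sum(conf for conf, _ in predictions) / len(predictions)
actual = sum(hit for _, hit in predictions) / len(predictions)

print(f"stated confidence: {stated:.0%}")  # 99%
print(f"actual accuracy:   {actual:.0%}")  # 70% -- poorly calibrated
```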

Comments
Nebu

In order to handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there's no self-esteem to protect. If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results. The only time I would want to explain away the results as based on some external factor would be if I'd been going around thinking my real IQ was 100.

I used to be pretty good at a video game called Dance Dance Revolution (DDR for short). I won several province-level tournaments (both in my own province and in neighboring ones), and in the official internet rankings I placed 10th in North America and 95th worldwide.

People would often ask to play a match against me, and I'd always accept (figuring it was the "polite" thing to do), though I had mixed feelings about it. I very quickly realized it was a losing proposition for me: If I won, nobody noticed or remarked upon it (because I was known to be the "best" in my area), but I figured if I ever lost, people would make a big deal about it.

I often self-handicapped. I claimed that this was to make the match more interesting (and I often won despite the handicap), but sometimes I wondered if perhaps I was also preparing excuses for myself, so that if I ever did lose, I could blame the handicaps (and probably do so accurately, since I truly believed I could have beaten them in a "fair" match).

I had the fortune of traveling to Japan and meeting a DDR player named Aaron who had ranked top 3 worldwide. He agreed to play a match with me, and I won, but it was very obvious to both of us that I had only won because of a glitch in the machine (basically, the game unexpectedly froze and locked up, something I had never seen before, and when it unfroze I was lucky enough to have anticipated this before Aaron did).

So after the match, I turned to him, pulled out my digital camera and jokingly said "I can't believe I actually beat you. I gotta get a picture of this." But he had a rather serious look on his face and said something like "No, no pictures." I was a bit surprised, but I put away my camera. We didn't talk about it, but I suspected that I understood how he felt. I often felt like my reputation as the best DDR player in my province was constantly under attack. I figured he felt the same way, except world-wide, instead of provincially.

It's not necessarily an excuse for failure.

If, on some level, you are looking to demonstrate fitness (perhaps as a signaling method to potential mates), then if you visibly handicap yourself and STILL win, you have demonstrated MORE fitness than if you had won normally. If you expect to win even with the self-handicap, then it's not just a matter of making excuses.

I think this is similar to how a chess master playing against a weaker player will often "give them rook odds", starting with only one rook instead of two. They still expect to win, but they know that if they can win even in that circumstance, they have demonstrated what a strong player they are.

Nebu

Coincidentally, I just saw this article which mentions self-handicapping: http://dsc.discovery.com/news/2008/10/09/puppies-play.html

roland

This reminded me of Carol Dweck's study: http://news-service.stanford.edu/news/2007/february7/dweck-020707.html

It is about having a fixed vs. growth theory of intelligence. If you think that your intelligence is fixed, you will avoid challenging tasks in order to preserve your self-image, whereas people with a growth mentality will embrace them in order to improve. Important: never tell a child that it is intelligent.

I think it's more like "never praise a child for being intelligent". You can tell them they're smart if they are, just don't do it often or put any importance on it.

While it was well-intentioned, this is by far the worst thing my parents did while raising me. Even now that I'm aware of the problem, it's a constant struggle to convince myself to approach difficult problems, even though I find working on them very satisfying. Does anyone know if there's been discussion here (or elsewhere, I suppose) about individual causes of akrasia? Childhood indoctrination into a particular theory of intelligence certainly seems to be one.

Not a direct answer, but your post reminds me of This Is Why I'll Never Be an Adult.

Note that the downward spiral starts with self-congratulation, which seems to be a part of my pattern.

Great link. I follow that pattern almost precisely, unfortunately. I'll have to spend some time analyzing my self-congratulatory habits and see what can be done.

I don't have a cite, but I've read an article (a book? The Now Habit?) which claimed that procrastination is driven by the belief that getting things done is a reflection on your value as a person.

And why is akrasia a common problem among LessWrongians rather than, say, high-energy impulsiveness?

I imagine akrasia is a more natural fit for a tendency to overthink things.

My first reaction is that the 80-IQ guy needs to carry around a mental model of himself as a 100-IQ guy for status purposes, and a mental model of himself as an 80-IQ guy for accuracy purposes. Possibly neither consciously.

(Is this availability bias at work because I have recently read lots of Robin's etc. writings on status?)

If true, I don't think there's any need to say he "believes" his IQ is 100 when it is in fact 80. We could just say he has at least one public persona which he'd like to signal has an IQ of 100, and that sometimes he draws predictions using this model rather than a more correct one, like when he's guaranteed privacy.

I agree with your first paragraph, but I don't quite understand your second.

In particular, I don't understand what you mean by there being no need to say he "believes". If upon being asked he would assert that his IQ is 100, and he wouldn't be consciously aware of lying, isn't that enough to say he believes his IQ is 100 on at least one level?

(also, when I say I agree with your first paragraph, I do so on the assumption that we mean the same thing by status. In particular, I would describe the "status" in this case as closer to "self-esteem" than "real position in a social hierarchy". Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?)

Yes, it's worth another post - I hadn't heard that theory before.

::runs off to do some Google searches::

Some difficult work with Google revealed that the technical term is the "sociometer" theory - and it's fairly recent (the oldest citation I see refers to 1995), which would help explain why I hadn't heard of it before. It seems consistent with my personal experiences, so I consider it credible.

For more information:

http://www.psychwiki.com/wiki/Sociometer_Theory

Okay, I'll definitely post on sociometer theory sometime.

[anonymous]

Thanks for the link CronoDAS. The 'sociometer' theory does seem credible, and certainly more so than some of the alternative theories presented there.

What I am not comfortable with is the emphasis placed on minimising the possibility of rejection from the tribe as a terminal value, to the exclusion of the other benefits of status. While expulsion from a tribe can lead to physical death or at least genetic extinction, avoiding it is hardly the only benefit of high status. Surely a sensitive sociometer serves a goal somewhat more nuanced than minimising this one negative outcome!

Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?

Why did I only stumble across this sentence two years after you wrote it?! It would've come in handy in the meanwhile, you know =) It will definitely come in handy now. Thanks!

Did Yvain end up writing said post? That theory is approximately how I model self-esteem and it serves me well but I haven't seen what a formal theory on the subject looks like.

http://lesswrong.com/lw/1kr/that_other_kind_of_status/ involves that idea; for the formal theory, Google "sociometer".

sociometer

Thanks!

If you've got more to say about it than that one line and you think it's possibly important, I'd call it another post.

[anonymous]

Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?

I'm aware of the theory, however I've mostly picked it up from popular culture. I'd appreciate a post that described an actual scientific theory, with evidence or at least some falsifiability.

Self-esteem is another one of those null concepts like "fear of success". In my own work, for example, I've identified at least two (and maybe three) distinct mental processes by which behaviors described as "low self-esteem" can be produced.

One of the two could be thought of as "status-based", but the actual mechanism seems more like comparison of behaviors and traits to valued (or devalued) behavioral examples. (For instance, you get called a crybaby and laughed at -- and thus you learn that crying makes you a baby, and to be a "man" you must be "tough".)

The other mechanism is based on the ability to evoke positive responses from others, and the behaviors one learns in order to evoke those responses. Which I suppose can also be thought of as status-based, but it's very different in its operation. Response evocation motivates you to try different behaviors and imprint on ones that work, whereas role-judgment makes you try to conceal your less desirable behaviors and the negative identity associated with them. (Or, it motivates you to imitate and display admired traits and behaviors.)

Anyway, my main point was just to support your comments about evidence and falsifiability: rationalists should avoid throwing around high-level psychological terms like "procrastination" and "self-esteem" that don't define a mechanism -- they're usually far too overloaded and abstract to be useful, à la "phlogiston". If you want to be able to predict (or engineer!) esteem, you need to know more than that it contains a "status-ative principle". ;-)

Oddly enough, I found that too abstract to follow.

pwno

Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?

I wasn't aware, but it makes a lot of sense. Especially because your perception of yourself is a self-fulfilling prophecy.

Imagine a room of 100 people where none of them have any symbols pre-validated to signal for status. Upon interacting over time, I would guess that the high self-esteem people would most likely be perceived as high status.

"If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results."

Really? I think it's pretty common to be (a) not particularly good at something, (b) aware you're not particularly good at it, and (c) nonetheless not want that fact rubbed in your face if rubbing is avoidable. (Not saying this is necessarily a good thing, but I do think it's pretty common.)

nolrai

I really wonder how this sort of result applies to cultures that don't expect everyone to have high self-esteem. Such as, say, Japan.

Excellent post - it makes me wish that the system gave out a limited number of super-votes, like 1 for every 20 karma, so that I could vote this up twice.

I hope you don't mind, but I did a quick edit to insert "a choice of" before "two drugs to test", because that wasn't clear on my first reading. (Feel free to revert if you prefer your original wording.) Also edited the self-deception tag to self_deception per previous standard.

Thank you. Since I learned practically everything I know about rationality either from you or from books you recommended, I'm very happy to earn your approval...but also a little amused, since I consciously tried to copy your writing style as much as I could without actually inserting litanies.

Heh! I almost wrote in my original comment: "How odd, an Eliezer post on Standard Biases written by Yvain", but worried that it might look like stealing credit, or that you might not like the comparison. I futzed around, deleted, and finally wrote "excellent post" instead. The wish for two upvotes is because my Standard Biases posts are the ones I feel least guilty about writing.

[anonymous]

I must admit that when I wrote my reply I was operating on the assumption that I was replying to Eliezer. In fact, I even addressed it as such.

Fortunately I checked to see that nobody else had written the same point I was making before I posted.

Brilliant work Yvain!

Surely you have enough of a following here that you effectively have super-votes? Just go ahead and tell people you're voting something up, and that should generate at least two or three votes for free.

Also, 'promoting' an article seems to be a good enough option.

The idea of super-votes sounds similar to the system they have at everything2, where users are awarded a certain number of "upvotes" and a certain number of "cools" every day, depending on their level. An upvote/downvote adds/subtracts 1 point to/from their equivalent of karma for the post, and a Cool gives the author a certain number of points, displays "Cooled" on the post, and promotes it to the site's main page.

(I reposted this as a reply because I was unfamiliar with the posting system when I first wrote it.)

[anonymous]

Note that the karma system for Everything2 has changed recently. Specifically, because of abuse, downvoting no longer subtracts karma.

'Cools' add twenty karma now; in the past, they only added three or so. This was changed to reflect the comparative scarcity of cools: whereas in the old system highly ranked users could cool multiple things per day, in the new system everyone is limited to one per day.

Their rationalization for these changes is listed here. I hope this information proves a bit useful to other people designing karma systems; at E2, we've been experimenting with karma systems since 1999, and it'd be a shame for that to go to waste.
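
As a toy illustration of the rules described above (my own sketch, not E2's actual code):

```python
# Toy model of the Everything2 karma rules as described above;
# purely illustrative, not E2's real implementation.
def apply_vote(karma, vote, new_system=True):
    if vote == "upvote":
        return karma + 1
    if vote == "downvote":
        # Because of abuse, downvotes no longer subtract karma.
        return karma if new_system else karma - 1
    if vote == "cool":
        # Cools now add 20 karma (previously ~3), reflecting their
        # new scarcity of one per user per day (not enforced here).
        return karma + (20 if new_system else 3)
    raise ValueError("unknown vote type: %s" % vote)

print(apply_vote(10, "cool"))      # 30
print(apply_vote(10, "downvote"))  # 10 (unchanged under new rules)
```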

[anonymous]

I like the sound of that system, PaulG. I like the idea that I have to 'spend' a finite resource to vote something up or down. Having a finite number of supervotes or cools would make me consider my voting more thoughtfully.

roland

Looks like it's related to learned helplessness to me.

http://en.wikipedia.org/wiki/Learned_helplessness

The relationship discussed in the literature mostly involves them as two competing explanations for underachievement. Learned helplessness is about internalizing the conception of yourself as worthless; self-handicapping is about trying as hard as you can to avoid viewing yourself as worthless. The studies I could find in ten minutes on Google Scholar mostly suggested a current consensus that run-of-the-mill underachievers are sometimes self-handicappers but not learned-helplessness victims - but ten minutes does not a literature review make.

Oh, and thank you for linking to that Wikipedia article. The sentence about how "people performed mental tasks in the presence of distracting noise...if the person could use a switch to turn off the noise, his performance improved, even though he rarely bothered to turn off the noise. Simply being aware of this option was enough to substantially counteract its distracting effect" is really, really interesting.

If self-handicapping to preserve your self-image with respect to one thing impairs your performance in many situations, then one approach would be to do some very rigorous testing, e.g. if one is concerned about psychometric intelligence, one could take several psychologist-administered WAIS-IV IQ tests on different days.

"Belief in belief in religious faith and self-confidence seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level."

This is also relevant for Caplan's model of rational irrationality in political beliefs.

[anonymous]

Eliezer: Super-votes is kinda like the system of "Cools" vs "upvotes" on everything2 (http://everything2.com/), where depending on your participation (they have a levels system), you are given a certain number of "Cools" and a certain number of "upvotes". Cools give more points to the user and put the article on the front page for a limited amount of time, upvotes just give the user a point or something.

"In most studies on this effect, it's most commonly observed among males. The reasons are too complicated and controversial to be discussed in this post, but are left as an exercise for the reader with a background in evolutionary psychology"
 

Would anyone like to discuss the reasons? Thanks for being ambiguous! Appreciate it!

Last November, Robin described a study where subjects were less overconfident when asked to predict their performance on tasks they would actually be expected to complete. He ended by noting that "It is almost as if we at some level realize that our overconfidence is unrealistic."

I think there's a less perplexing answer: at some level we realize that our performance is not 100% reliable, and we should shift our estimate down by an intuitive standard deviation of sorts. That way, if we under-perform in this specific case, we won't have to deal with the group dynamics of someone else's horrible disappointment because they were counting on us doing our part as well as we said we could.

First, welcome to Less Wrong! Be sure to hit the welcome thread soon.

Doesn't your hypothesis here predict compensation for overconfidence in every situation, and not just for easy tasks?

Yes it does.

...

Is there some implication I'm not getting here?

Um, I don't actually remember now– I thought that one of the results was that people compensated more for overconfidence when the tasks were not too difficult. But I don't see that, looking it over now.

No idea the extent to which EY's approval upped this, but what I can say is that I was less than half through the post before I jumped to the bottom, voted Up, and looked for any other way to indicate approval.

It's immediately surprising, interesting, obvious-in-retrospect-only, and most importantly, relevant to everyday life. Superb.

[anonymous]

The ideal Bayesian [can] never predict in which direction future information will alter his own estimates, and investors in an ideal stock market, [can] never predict in which direction prices will move

I suggest rewording this, it seems like you are making a different claim than the one you intended. An ideal Bayesian can predict in which direction future information will alter his own estimates.

I have been given a coin which I know is either fair or biased (comes up heads 75% of the time). After a sequence of tosses I have arrived at, say, 95% probability that the coin is biased. The probability I assign to the next toss giving 'heads' is:

p(heads) = 0.95 × 0.75 + 0.05 × 0.5 ≈ 0.74

There is a 74% chance that I will alter my estimate upwards after this coin toss.

I predict with 95% confidence that, should I continue to toss the coin long enough, future information will alter my estimates upwards until I reach ~100% confidence that the coin is biased. Naturally, I predict a 5% chance that my estimates would eventually be altered downwards until they approximate 0%, a far greater change.
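
As a quick check of the arithmetic (a sketch I'm adding for illustration), the posterior after each outcome confirms that the expected posterior equals the prior, even though the direction of the update is predictable:

```python
# Sketch of the biased-coin example above: the direction of the next
# update is predictable, yet the expected posterior equals the prior.
p_biased = 0.95            # current credence that the coin is biased
p_h_biased, p_h_fair = 0.75, 0.50

# Probability the next toss is heads (the ~0.74 computed above).
p_heads = p_biased * p_h_biased + (1 - p_biased) * p_h_fair

# Posterior credence in "biased" after each possible outcome.
post_heads = p_biased * p_h_biased / p_heads
post_tails = p_biased * (1 - p_h_biased) / (1 - p_heads)

print(f"P(heads)              = {p_heads:.4f}")     # 0.7375
print(f"posterior after heads = {post_heads:.4f}")  # ~0.9661 (up)
print(f"posterior after tails = {post_tails:.4f}")  # ~0.9048 (down)

# Law of iterated expectations: no predictable drift in the estimate.
expected = p_heads * post_heads + (1 - p_heads) * post_tails
print(f"expected posterior    = {expected:.4f}")    # 0.9500 == prior
```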

The same applies to some stocks in an ideal stock market. For example, some companies may have a limit on their growth potential and yet some chance of going bankrupt. The chance that these stocks could completely lose their value suggests that, for their current price to be what it is, they must be more likely to go up than down.

Can someone suggest a concise replacement for "in which direction" that applies here?

Can someone suggest a concise replacement for "in which direction" that applies here?

Expected future expectation is always the same as the current expectation.

[anonymous]

Thanks Jim!

You're right. Edited to Jim's version, although it sounds kind of convoluted. I'm going to keep an eye out for how real statisticians describe this.

Expected value?

Dues

I wonder if this bias is really a manifestation of holding two contradictory ideas (à la belief in belief). I wonder because, when past me was making this exact mistake, I noticed that it tended to be a case of having a wide range of possible skill levels coupled with a low desire for accuracy.

If I think that my IQ is between 80 and 100, then I can have it both ways. I don't know it for sure, so I can brag, "Oh, my IQ is somewhere below 100," because there is still a chance that my IQ is 100. However, if I am about to be presented with an IQ test, I am tempted to be humble and say 80, because then the test will probably prove me wrong in a positive way. That way I get to seem humble and smart, rather than overconfident and dumb.

Why are we surprised that the subjects were still trying to act in high-status ways when they weren't being watched? This isn't like an experiment where I'm more likely to steal a candy bar if I'm anonymous. My reward for acting high-status when no one is watching is that I get to think of myself as a high-status actor even when other people aren't watching. I always have an audience of at least one person: myself.

Of course, a really self-confident person would still take the inhibiting drug, because they are positive that they are going to ace the test anyway, and doing so while impaired is so much more awesome than while sober.

Excellent post!

Males in the lucky-guesser group chose the performance-inhibiting drug significantly more often than those in the control group

I managed to guess this; my parents got it wrong. I thought that the control group would feel good about being right, and want it to occur more, whereas the unconfident group would feel (nihilistic? apathetic?), and so take the easy high of the drugs.

I confess I thought that the performance-inhibiting drugs were euphoric; I couldn't imagine why anyone would take inhibiting drugs without some beneficial side effects. If this was wrong, I was effectively answering a different question, so I can't really take credit for my guess.