
Simultaneously Right and Wrong

Post author: Yvain 07 March 2009 10:55PM 88 points

Related to: Belief in Belief, Convenient Overconfidence

     "You've no idea of what a poor opinion I have of myself, and how little I deserve it."

      -- W.S. Gilbert

In 1978, Steven Berglas and Edward Jones performed a study on voluntary use of performance inhibiting drugs. They asked subjects to solve certain problems. The control group received simple problems, the experimental group impossible problems. The researchers then told all subjects they'd solved the problems successfully, leaving the controls confident in their own abilities and the experimental group privately aware they'd just made a very lucky guess.

Then they offered the subjects a choice of two drugs to test. One drug supposedly enhanced performance, the other supposedly handicapped it.

There's a cut here in case you want to predict what happened.

Males in the lucky guesser group chose the performance-inhibiting drug significantly more than those in the control group1.

The researchers conjectured that the lucky guessers felt good about acing the first test. They anticipated failing the second whether they chose the helpful or the harmful drug. So they chose the harmful drug as an excuse: "Oh, I would have passed the test, only the drug was making me stupid." As the study points out, this is a win-win situation: if they fail, the drug excuses their failure, and if they succeed it's doubly impressive that they passed even with a handicap.

Since this study, psychologists have applied self-handicapping to interpret common phenomena like procrastination, hypochondria, and minority underachievement; they've also discovered factors that increase or decrease self-handicapping tendencies (for example, you can increase it by making the test more relevant to your self-esteem: just say "this is a proven test of intellectual ability in general").

But some especially interesting studies investigated the effect of privacy on self-handicapping. For example, Hobden conducted an experiment similar to Berglas and Jones', albeit with tapes of performance-enhancing or -handicapping music instead of drugs. The twist: half the subjects' choice of tapes and test scores were public, and half the subjects believed their choice of tapes and test scores were anonymous. What happens when no one but the subject himself will ever know his test score? He self-handicaps just as often as everyone else. And it seems to *work*. The same set of studies showed that subjects who self-handicap on a test are less likely to attribute their failure on the test to their own incompetence.

In order to handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there's no self-esteem to protect. If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results. The only time I would want to explain away the results as based on some external factor is if I'd been going around thinking my real IQ was 100.

But subjects also must have an accurate assessment of their own abilities. Subjects who take an easy pre-test and expect an easy test do not self-handicap. Only subjects who understand their low chances of success can think "I will probably fail this test, so I will need an excuse."2

If this sounds familiar, it's because it's another form of the dragon problem from Belief in Belief. The believer says there is a dragon in his garage, but expects all attempts to detect the dragon's presence to fail. Eliezer writes: "The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse."

Should we say that the subject believes he will get an 80, but believes in believing that he will get a 100? This doesn't quite capture the spirit of the situation. Classic belief in belief seems to involve value judgments and complex belief systems, but self-handicapping seems more like simple overconfidence bias3. Is there any other evidence that overconfidence has a belief-in-belief aspect to it?

Last November, Robin described a study where subjects were less overconfident if asked to predict their performance on tasks they would actually be expected to complete. He ended by noting that "It is almost as if we at some level realize that our overconfidence is unrealistic."

Belief in belief in religious faith and self-confidence seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level. The specifics differ in each case, but perhaps the same general mechanism underlies both. How many other biases use this same mechanism?


1: In most studies on this effect, it's most commonly observed among males. The reasons are too complicated and controversial to be discussed in this post, but are left as an exercise for the reader with a background in evolutionary psychology.

2: Compare the ideal Bayesian, for whom expected future expectation is always the same as the current expectation, and investors in an ideal stock market, who must always expect a stock's price tomorrow to be on average the same as its price today - to this poor creature, who accurately predicts that he will lower his estimate of his intelligence after taking the test, but who doesn't use that prediction to change his pre-test estimates.
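The ideal Bayesian's property in footnote 2 is sometimes called conservation of expected evidence, and it can be checked numerically. The sketch below is illustrative and not from the post: a toy two-hypothesis model of a coin's bias, with made-up priors, showing that the prior-weighted average of the possible post-flip expectations equals the pre-flip expectation.

```python
# Toy check of "expected future expectation equals current expectation".
# The hypotheses and prior weights here are illustrative assumptions.

from fractions import Fraction as F

# Prior over two hypotheses about a coin's bias P(heads)
priors = {F(1, 4): F(1, 2),   # "biased toward tails" hypothesis
          F(3, 4): F(1, 2)}   # "biased toward heads" hypothesis

def posterior(obs_heads):
    """Posterior over the bias after observing one flip."""
    likelihood = lambda bias: bias if obs_heads else 1 - bias
    unnorm = {b: p * likelihood(b) for b, p in priors.items()}
    total = sum(unnorm.values())
    return {b: w / total for b, w in unnorm.items()}

def expected_bias(dist):
    return sum(b * p for b, p in dist.items())

prior_mean = expected_bias(priors)               # current expectation
p_heads = sum(b * p for b, p in priors.items())  # prior predictive P(heads)

# Average the two possible post-flip expectations, weighted by how
# likely each observation is under the prior.
avg_future = (p_heads * expected_bias(posterior(True))
              + (1 - p_heads) * expected_bias(posterior(False)))

assert avg_future == prior_mean  # conservation of expected evidence
```

The equality holds for any priors, which is exactly what the footnote's "poor creature" violates: he predicts in advance that his estimate will go down, yet doesn't lower it now.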

3: I have seen "overconfidence bias" used in two different ways: to mean poor calibration on guesses (ie predictions made with 99% certainty that are only right 70% of the time) and to mean the tendency to overestimate one's own good qualities and chance of success. I am using the latter definition here to remain consistent with the common usage on Overcoming Bias; other people may call this same error "optimism bias".
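As a toy illustration of the first sense of the term (mis-calibration rather than optimism), here is a sketch using made-up data that matches the footnote's numbers: predictions stated at 99% confidence that come true only 70% of the time.

```python
# Hypothetical prediction record: (stated confidence, was it correct?).
# Data is invented to match the footnote's 99%-stated / 70%-right example.
predictions = [(0.99, True)] * 7 + [(0.99, False)] * 3

stated = predictions[0][0]
hit_rate = sum(correct for _, correct in predictions) / len(predictions)

print(f"stated confidence: {stated:.0%}, actual hit rate: {hit_rate:.0%}")
# prints: stated confidence: 99%, actual hit rate: 70%
```

A well-calibrated forecaster's hit rate would sit near the stated confidence; the gap here is overconfidence in the calibration sense, as distinct from the self-overestimation sense used in the post.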

Comments (52)

Comment author: Nebu 09 March 2009 04:26:29PM 28 points [-]

In order to handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there's no self-esteem to protect. If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results. The only time I would want to explain away the results as based on some external factor was if I'd been going around thinking my real IQ was 100.

I used to be pretty good at this videogame called Dance Dance Revolution (or DDR for short). I've won several province-level tournaments (both in my own province and in neighboring ones), and in official internet rankings I placed 10th in North America and 95th worldwide.

People would often ask to play a match against me, and I'd always accept (figuring it was the "polite" thing to do), though I had mixed feelings about it. I very quickly realized it was a losing proposition for me: If I won, nobody noticed or remarked upon it (because I was known to be the "best" in my area), but I figured if I ever lost, people would make a big deal about it.

I often self-handicapped. I claimed that this was to make the match more interesting (and I often won despite the handicap), but sometimes I wondered if perhaps I was also preparing excuses for myself so that if I ever did lose, I could blame the handicaps (and probably do so accurately, since I truly believe I could have beaten them in a "fair" match).

I had the fortune of traveling to Japan and meeting a DDR player named Aaron who had ranked top 3 worldwide. He agreed to play a match with me, and I won, but it was very obvious to both of us that I had only won because of a glitch in the machine (basically, the game had unexpectedly frozen and locked up, something I had never seen before, but when the game unfroze, I had been lucky and anticipated this before Aaron had).

So after the match, I turned to him, pulled out my digital camera and jokingly said "I can't believe I actually beat you. I gotta get a picture of this." But he had a rather serious look on his face and said something like "No, no pictures." I was a bit surprised, but I put away my camera. We didn't talk about it, but I suspected that I understood how he felt. I often felt like my reputation as the best DDR player in my province was constantly under attack. I figured he felt the same way, except world-wide, instead of provincially.

Comment author: Yosarian2 30 December 2012 07:11:23PM 6 points [-]

It's not necessarily an excuse for failure.

If, on some level, you are looking to demonstrate fitness (perhaps as a signaling method to potential mates), then if you visibly handicap yourself and STILL win, you have demonstrated MORE fitness than if you had won normally. If you expect to win even with the self-handicap, then it's not just a matter of making excuses.

I think this is similar to how a chess master playing against a weaker player will often "give them rook odds", starting with only one rook instead of two. They still expect to win, but they know that if they can win even in that circumstance, they have demonstrated what a strong player they are.

Comment author: Nebu 31 December 2012 12:24:20AM 0 points [-]

Coincidentally, I just saw this article which mentions self-handicapping: http://dsc.discovery.com/news/2008/10/09/puppies-play.html

Comment author: conchis 08 March 2009 01:22:36AM 13 points [-]

"If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results."

Really? I think it's pretty common to be (a) not particularly good at something, (b) aware you're not particularly good at it, and (c) nonetheless not want that fact rubbed in your face if rubbing is avoidable. (Not saying this is necessarily a good thing, but I do think it's pretty common.)

Comment author: nolrai 17 February 2010 07:17:24PM 8 points [-]

I really wonder how this sort of result applies to cultures that don't expect everyone to have high self-esteem. Such as, say, Japan.

Comment author: roland 08 March 2009 02:32:16AM 17 points [-]

This reminded me of Carol Dweck's study: http://news-service.stanford.edu/news/2007/february7/dweck-020707.html

It is about having a fixed vs. growth theory of intelligence. If you think that your intelligence is fixed, you will avoid challenging tasks in order to preserve your self-image, whereas people with growth mentality will embrace it in order to improve. Important: never tell a child that it is intelligent.

Comment author: pdf23ds 09 March 2009 04:30:35AM 8 points [-]

I think it's more like "never praise a child for being intelligent". You can tell them they're smart if they are, just don't do it often or put any importance on it.

Comment author: WrongBot 29 June 2010 07:48:09PM 7 points [-]

While it was well-intentioned, this is by far the worst thing my parents did while raising me. Even now that I'm aware of the problem, it's a constant struggle to convince myself to approach difficult problems, even though I find working on them very satisfying. Does anyone know if there's been discussion here (or elsewhere, I suppose) about individual causes of akrasia? Childhood indoctrination into a particular theory of intelligence certainly seems to be one.

Comment author: NancyLebovitz 29 June 2010 08:57:07PM 1 point [-]

Not a direct answer, but your post reminds me of This Is Why I'll Never Be an Adult.

Note that the downward spiral starts with self-congratulation, which seems to be a part of my pattern.

Comment author: WrongBot 29 June 2010 09:02:31PM 0 points [-]

Great link. I follow that pattern almost precisely, unfortunately. I'll have to spend some time analyzing my self-congratulatory habits and see what can be done.

Comment author: NancyLebovitz 29 June 2010 09:09:39PM 2 points [-]

I don't have a cite, but I've read an article (a book? The Now Habit?) which claimed that procrastination is driven by the belief that getting things done is a reflection on your value as a person.

And why is akrasia a common problem among LessWrongians rather than, say, high-energy impulsiveness?

Comment author: mattnewport 29 June 2010 09:11:28PM 2 points [-]

I imagine akrasia is a more natural fit for a tendency to overthink things.

Comment author: GuySrinivasan 08 March 2009 12:18:04AM 13 points [-]

My first reaction is that the 80-IQ guy needs to carry around a mental model of himself as a 100-IQ guy for status purposes, and a mental model of himself as an 80-IQ guy for accuracy purposes. Possibly neither consciously.

(Is this availability bias at work because I have recently read lots of Robin's etc. writings on status?)

If true, I don't think there's any need to say he "believes" his IQ is 100 when it is in fact 80. We could just say he has at least one public persona which he'd like to signal has an IQ of 100, and that sometimes he draws predictions using this model rather than a more correct one, like when he's guaranteed privacy.

Comment author: Yvain 08 March 2009 01:10:57AM 13 points [-]

I agree with your first paragraph, but I don't quite understand your second.

In particular, I don't understand what you mean by there being no need to say he "believes". If upon being asked he would assert that his IQ is 100, and he wouldn't be consciously aware of lying, isn't that enough to say he believes his IQ is 100 on at least one level?

(also, when I say I agree with your first paragraph, I do so on the assumption that we mean the same thing by status. In particular, I would describe the "status" in this case as closer to "self-esteem" than "real position in a social hierarchy". Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?)

Comment author: CronoDAS 08 March 2009 04:25:09AM *  6 points [-]

Yes, it's worth another post - I hadn't heard that theory before.

::runs off to do some Google searches::

Some difficult work with Google revealed that the technical term is the "sociometer" theory - and it's fairly recent (the oldest citation I see refers to 1995), which would help explain why I hadn't heard of it before. It seems consistent with my personal experiences, so I consider it credible.

Comment author: Yvain 08 March 2009 08:15:45PM 4 points [-]

Okay, I'll definitely post on sociometer theory sometime.

Comment author: cousin_it 16 May 2011 12:35:39PM 3 points [-]

Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?

Why did I only stumble across this sentence two years after you wrote it?! It would've come in handy in the meanwhile, you know =) It will definitely come in handy now. Thanks!

Comment author: wedrifid 16 May 2011 12:55:36PM 0 points [-]

Did Yvain end up writing said post? That theory is approximately how I model self-esteem and it serves me well but I haven't seen what a formal theory on the subject looks like.

Comment author: Yvain 16 May 2011 12:59:15PM *  4 points [-]

http://lesswrong.com/lw/1kr/that_other_kind_of_status/ involves that idea; for the formal theory, Google "sociometer".

Comment author: wedrifid 16 May 2011 01:33:18PM 0 points [-]



Comment author: Eliezer_Yudkowsky 08 March 2009 01:29:18AM 3 points [-]

If you've got more to say about it than that one line and you think it's possibly important, I'd call it another post.

Comment author: pwno 08 March 2009 01:53:02AM 0 points [-]

Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?

I wasn't aware, but it makes a lot of sense. Especially because your perception of yourself is a self-fulfilling prophecy.

Imagine a room of 100 people where none of them have any symbols pre-validated to signal for status. Upon interacting over time, I would guess that the high self-esteem people would most likely be perceived as high status.

Comment deleted 08 March 2009 04:04:46AM [-]
Comment author: pjeby 09 March 2009 10:34:09PM *  3 points [-]

Self-esteem is another one of those null concepts like "fear of success". In my own work, for example, I've identified at least 2 (and maybe three) distinct mental processes by which behaviors described as "low self-esteem" can be produced.

One of the two could be thought of as "status-based", but the actual mechanism seems more like comparison of behaviors and traits to valued (or devalued) behavioral examples. (For instance, you get called a crybaby and laughed at -- and thus you learn that crying makes you a baby, and to be a "man" you must be "tough".)

The other mechanism is based on the ability to evoke positive responses from others, and the behaviors one learns in order to evoke those responses. Which I suppose can also be thought of as status-based, too, but it's very different in its operation. Response evocation motivates you to try different behaviors and imprint on ones that work, whereas role-judgment makes you try to conceal your less desirable behaviors and the negative identity associated with them. (Or, it motivates you to imitate and display admired traits and behaviors.)

Anyway, my main point was just to support your comments about evidence and falsifiability: rationalists should avoid throwing around high-level psychological terms like "procrastination" and "self-esteem" that don't define a mechanism -- they're usually far too overloaded and abstract to be useful, ala "phlogiston". If you want to be able to predict (or engineer!) esteem, you need to know more than that it contains a "status-ative principle". ;-)

Comment author: Peterdjones 28 September 2012 11:15:20AM 1 point [-]

Oddly enough, I found that too abstract to follow.

Comment author: Eliezer_Yudkowsky 07 March 2009 11:51:29PM 7 points [-]

Excellent post - it makes me wish that the system gave out a limited number of super-votes, like 1 for every 20 karma, so that I could vote this up twice.

I hope you don't mind, but I did a quick edit to insert "a choice of" before "two drugs to test", because that wasn't clear on my first reading. (Feel free to revert if you prefer your original wording.) Also edited the self-deception tag to self_deception per previous standard.

Comment author: thomblake 08 March 2009 01:19:48AM 11 points [-]

Surely you have enough of a following here that you effectively have super-votes? Just go ahead and tell people you're voting something up, and that should generate at least two or three votes for free.

Also, 'promoting' an article seems to be a good enough option.

Comment author: PaulG 08 March 2009 12:32:03AM *  3 points [-]

The idea of super-votes sounds similar to the system they have at everything2, where users are awarded a certain number of "upvotes" and a certain number of "cools" every day, depending on their level. An upvote/downvote adds/subtracts 1 point to their equivalent of karma for the post; a Cool gives the author a certain number of points, displays "Cooled" on the post, and promotes it to the main page.

(I reposted this as a reply because I was unfamiliar with the posting system when I first wrote it.)

Comment author: Andrew 08 March 2009 06:22:40AM *  11 points [-]

Note that the karma system for Everything2 has changed recently. Specifically, because of abuse, downvoting no longer subtracts karma.

'Cools' add twenty karma now. In the past, they only added three or so. This was changed to reflect the comparative scarcity of cools. Where in the old system, highly ranked users could cool multiple things per day, in the new system everyone is limited to one per day.

Their rationale for these changes is listed here. I hope this information proves a bit useful to other people designing karma systems; at E2, we've been experimenting with karma systems since 1999. It'd be a shame to have that go to waste.

Comment author: Yvain 08 March 2009 01:28:28AM 9 points [-]

Thank you. Since I learned practically everything I know about rationality either from you or from books you recommended, I'm very happy to earn your approval...but also a little amused, since I consciously tried to copy your writing style as much as I could without actually inserting litanies.

Comment author: Eliezer_Yudkowsky 08 March 2009 02:02:35AM 8 points [-]

Heh! I almost wrote in my original comment: "How odd, an Eliezer post on Standard Biases written by Yvain", but worried that it might look like stealing credit, or that you might not like the comparison. I futzed around, deleted, and finally wrote "excellent post" instead. The wish for two upvotes is because my Standard Biases posts are the ones I feel least guilty about writing.

Comment author: CarlShulman 08 March 2009 12:06:10AM 3 points [-]

If self-handicapping to preserve your self-image with respect to one thing impairs your performance in many situations, then one approach would be to do some very rigorous testing, e.g. if one is concerned about psychometric intelligence, one could take several psychologist-administered WAIS-IV IQ tests on different days.

"Belief in belief in religious faith and self-confidence seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level."

This is also relevant for Caplan's model of rational irrationality in political beliefs.

Comment author: infotropism 08 March 2009 12:00:28AM 4 points [-]

Looks like it's related to learned helplessness to me.


Comment author: Yvain 08 March 2009 01:53:36AM 7 points [-]

The relationship discussed in the literature mostly involves them as two competing explanations for underachievement. Learned helplessness is about internalizing the conception of yourself as worthless; self-handicapping is about trying as hard as you can to avoid viewing yourself as worthless. The studies I could find in ten minutes on Google Scholar mostly suggested a current consensus that run-of-the-mill underachievers are sometimes self-handicappers but not learned-helplessness victims - but ten minutes does not a literature review make.

Oh, and thank you for linking to that Wikipedia article. The sentence about how "people performed mental tasks in the presence of distracting noise...if the person could use a switch to turn off the noise, his performance improved, even though he rarely bothered to turn off the noise. Simply being aware of this option was enough to substantially counteract its distracting effect" is really, really interesting.

Comment author: talisman 09 March 2009 04:05:56AM 1 point [-]

No idea the extent to which EY's approval upped this, but what I can say is that I was less than half through the post before I jumped to the bottom, voted Up, and looked for any other way to indicate approval.

It's immediately surprising, interesting, obvious-in-retrospect-only, and most importantly, relevant to everyday life. Superb.

Comment deleted 08 March 2009 04:37:40AM [-]
Comment author: Yvain 08 March 2009 11:10:20AM 2 points [-]

You're right. Edited to Jim's version, although it sounds kind of convoluted. I'm going to keep an eye out for how real statisticians describe this.

Comment author: jimrandomh 08 March 2009 04:57:45AM *  2 points [-]

Can someone suggest a concise replacement for "in which direction" that applies here?

Expected future expectation is always the same as the current expectation.

Comment author: CronoDAS 08 March 2009 04:57:23AM 1 point [-]

Expected value?

Comment author: JJ10DMAN 10 August 2010 11:00:56AM 1 point [-]

Last November, Robin described a study where subjects were less overconfident if asked to predict their performance on tasks they will actually be expected to complete. He ended by noting that "It is almost as if we at some level realize that our overconfidence is unrealistic."

I think there's a less perplexing answer: at some level we realize that our performance is not 100% reliable, and we should shift our estimate down by an intuitive standard deviation of sorts. That way, we can under-perform in this specific case without having to deal with the group dynamics of someone else's horrible disappointment because they were counting on us doing our part as well as we said we could.

Comment author: orthonormal 10 August 2010 11:27:16PM 0 points [-]

First, welcome to Less Wrong! Be sure to hit the welcome thread soon.

Doesn't your hypothesis here predict compensation for overconfidence in every situation, and not just for easy tasks?

Comment author: JJ10DMAN 15 October 2010 01:31:58PM 0 points [-]

Yes it does.


Is there some implication I'm not getting here?

Comment author: orthonormal 17 October 2010 09:52:51PM 0 points [-]

Um, I don't actually remember now– I thought that one of the results was that people compensated more for overconfidence when the tasks were not too difficult. But I don't see that, looking it over now.

Comment author: Dues 08 July 2014 04:58:57AM *  0 points [-]

I wondered if this bias is really a manifestation of holding two contradictory ideas (a la Belief in Belief). I wonder because, when past me was making this exact mistake, I noticed that it tended to be a case of having a wide range of possible skill levels coupled with a low desire for accuracy.

If I think that my IQ is between 80 and 100, then I can have it both ways. I don't know it for sure, so I can brag, "Oh, my IQ is somewhere below 100," because there is still a chance that my IQ is 100. However, if I am about to be presented with an IQ test, I am tempted to be humble and say 80, because the test is probably going to prove me wrong in a positive way. That way I get to seem humble and smart, rather than overconfident and dumb.

Why are we surprised that the subjects were still trying to act in high-status ways when they weren't being watched? This isn't like an experiment where I'm more likely to steal a candy bar if I'm anonymous. My reward for acting high status when no one is watching is that I get to think of myself as a high-status actor even when other people aren't watching. I always have an audience of at least one person: myself.

Comment author: [deleted] 19 November 2013 03:22:39PM *  0 points [-]

Of course, a really self-confident person would still take the inhibiting drug, because they are positive that they are going to ace the test anyway, and doing so while impaired is so much more awesome than while sober.

Comment author: Larks 19 August 2009 07:42:16PM 0 points [-]

Excellent post!

Males in the lucky guesser group chose the performance-inhibiting drug significantly more than those in the control group

I managed to guess this; my parents got it wrong. I thought that the control group would feel good about being right, and want it to occur more, whereas the unconfident group would feel (nihilistic? apathetic?), and so take the easy high of the drugs.

I confess I thought that the performance-inhibiting drugs were euphoric; I couldn't imagine why anyone would take inhibiting drugs without some beneficial side effects. If this was wrong, I was effectively answering a different question, so I can't really take credit for my guess.