People usually have good guesses about the origins of their behavior. If they eat, we believe them when they say it was because they were hungry; if they go to a concert, we believe them when they say they like the music, or want to go out with their friends. We usually assume people's self-reports of their motives are accurate.

Discussions of signaling usually make the opposite assumption: that our stated (and mentally accessible) reasons for actions are false. For example, a person who believes they are donating to charity to "do the right thing" might really be doing it to impress others; a person who buys an expensive watch because "you can really tell the difference in quality" might really want to conspicuously consume wealth.

Signaling theories share the behaviorist perspective that actions do not derive from thoughts, but rather that actions and thoughts are both selected behavior. In this paradigm, predicted reward might lead one to signal, while reinforcement of positive-affect-producing thoughts might create the thought "I did that because I'm a nice person".

Robert Trivers is one of the founders of evolutionary psychology, responsible for ideas like reciprocal altruism and parent-offspring conflict. He also developed a theory of consciousness which provides a plausible explanation for the distinction between selected actions and selected thoughts.

TRIVERS' THEORY OF SELF-DECEPTION

Trivers starts from the same place a lot of evolutionary psychologists start from: small bands of early humans that had grown successful enough that food and safety were less important determinants of reproductive success than social status.

The Invention of Lying may have been a very silly movie, but the core idea - that a good liar has a major advantage in a world of people unaccustomed to lies - is sound. The evolutionary invention of lying led to an "arms race" between better and better liars and more and more sophisticated mental lie detectors.

There's some controversy over exactly how good our mental lie detectors are or can be. There are certainly cases in which it is possible to catch lies reliably: my mother can identify my lies so accurately that I can't even play minor pranks on her anymore. But there's also some evidence that certain people can reliably detect lies from any source at least 80% of the time without any previous training: microexpressions expert Paul Ekman calls them (sigh... I can't believe I have to write this) Truth Wizards, and puts their prevalence at about one in four hundred people.

The psychic unity of mankind should preclude the existence of a miraculous genetic ability like this in only one in four hundred people: if it's possible, it should have achieved fixation. Ekman believes that everyone can be trained to this level of success (and has created the relevant training materials himself) but that his "wizards" achieve it naturally, perhaps because they've had a lot of practice. One can speculate that in an ancestral environment with a limited number of people, more face-to-face interaction, and more opportunities for lying, this sort of skill might have been more common; for what it's worth, a disproportionate number of the "truth wizards" found in the study were Native Americans, though I can't find any information about how traditional their upbringings were or why that should matter.

If our ancestors were good at lie detection - either "truth wizard" good or just the good that comes from interacting with the same group of under two hundred people for one's entire life - then anyone who could beat the lie detectors would get the advantages that accrue from being the only person able to lie plausibly.

Trivers' theory is that the conscious/unconscious distinction is partly based around allowing people to craft narratives that paint them in a favorable light. The conscious mind gets some sanitized access to the output of the unconscious, and uses it along with its own self-serving bias to come up with a socially admirable story about its desires, emotions, and plans. The unconscious then goes and does whatever has the highest expected reward - which may be socially admirable, since social status is a reinforcer - but may not be.

HOMOSEXUALITY: A CASE STUDY

It's almost a truism by now that some of the people who most strongly oppose homosexuality may be gay themselves. The truism is supported by research: the Journal of Abnormal Psychology published a study measuring penile erection in 64 homophobic and nonhomophobic heterosexual men as they watched different types of pornography, and found significantly greater arousal in response to gay pornography among the homophobes. Although somehow this study has gone fifteen years without replication, it provides some support for the folk theory.

Since in many communities openly declaring oneself homosexual is low status or even dangerous, these men have an incentive to lie about their sexuality. Because their facade may not be perfect, they also have an incentive to take extra efforts to signal heterosexuality by, for example, attacking gay people (something which, in theory, a gay person would never do).

Although a few now-outed gays admit to having done this consciously, Trivers' theory offers a model in which this could also occur subconsciously. Homosexual urges never make it into the sanitized version of thought presented to consciousness, but the unconscious is able to deal with them. It objects to homosexuality (motivated by internal reinforcement - reduction of worry about personal orientation), and the conscious mind toes the party line by believing that there's something morally wrong with gay people and "only I have the courage and moral clarity to speak out against it".

This provides a possible evolutionary mechanism for what Freud described as reaction formation, the tendency to hide an impulse by exaggerating its opposite. A person wants to signal to others (and possibly to themselves) that they lack an unacceptable impulse, and so exaggerates the opposite as "proof".

SUMMARY

Trivers' theory has been summed up by calling consciousness "the public relations agency of the brain". It consists of a group of thoughts selected because they paint the thinker in a positive light, and of speech motivated in harmony with those thoughts. This ties together signaling, the many self-promotion biases that have thus far been discovered, and the increasing awareness that consciousness is more of a side office in the mind's organizational structure than it is a decision-maker.

COMMENTS

This.

I don't know if latent homosexuality in homophobes is the best example, but I've definitely seen it in myself. I will sometimes behave in certain ways, for motives I find perfectly virtuous or justified, and it is only by analysing my behaviour post-hoc that I realize it isn't consistent with the motives I thought I had - but it is consistent with much more selfish motives.

I think the example that most shocked me was back when I played an online RPG, and organised an action in a newly-coded environment. I and others on my team noticed an unexpected consequence of the rules that would make it easy for us to win. Awesome! We built our strategy around it, proud of our cleverness, and went forward with the action.

And down came the administrators, furious that we had cheated that way.

I was INCENSED at the accusation. How were we supposed to know this was a bug and not a feature? How dare they presume bad faith on our part? I loudly and vocally defended our actions.

It's only later, as I was re-reading our posts on the private forum where we organised the action (posts that, I realized as I re-read them, the administrators had access to, and had probably read... please kill me now), that I noticed that not only did we discuss said bug, I specifically told everyone not to tell the administrators about it. At the time, my reasoning was that, well, they might decide to tell us not to use it, and we wouldn't want that, right?

But if I'd thought there was a chance that the administrators would disapprove of us using the bug, how could I possibly think it wasn't a bug, and that using it wasn't cheating? If I was acting in good faith, how could I possibly not want to check with the administrators and make sure?

Well, I didn't. I managed to cheat, obviously, blatantly, and had no conscious awareness I was doing so. That's not even quite true; I bet if I'd thought it through, as I did afterwards, I would have realized it. But my subconscious was damn well not going to let me think it through now, was it?

And why would my subconscious not allow me to understand I was cheating? Well, the answer is obvious: so that I could be INCENSED and defend myself vocally, passionately, and with utter sincerity once I did get accused of cheating. Heck, I probably did get away with it in some people's eyes. Those who didn't read the incriminating posts on the private forum, at least.

So basically, now I don't take my motives for granted. I try to consider not only why I think I want to do something, but what motives one could infer from the actual consequences of what I want to do.

It also means I worry much less about other people's motives. If conscious motives were a perfect guide to people's actions, then someone who thinks they truly love their partner while their actions result in abuse might just be an unfortunate klutz with anger issues, who should be pitied and given second chances instead of dumped. But if the subconscious can have selfish motives and cloak them in virtue for the benefit of the conscious mind, then that person can have the best intentions and still be an abuser, and one should very much DTMFA.

On reflection, I think it's highly likely that in the past I've gone out of my way to signal high intelligence (by learning memory tricks, memorizing "deep" quotations, displaying intellectual reading prominently, etc.) because on some level I suspected that I'm not actually very smart and yet I hugely value massive brainpower (alas, my parents praised me for "being smart").

Interestingly (to me, anyway), I think that this has greatly diminished since I got involved with LessWrong. My belief is that interacting with actual extremely smart people made the whole thing seem silly, so I was able to get on with just trying to level up and not making such a big show about it.

That's interesting.

Of course, it makes sense that signaling exceptional intelligence stops seeming like a worthwhile strategy when everyone in the community is perceived as equally or more intelligent, but it's noteworthy and admirable that what replaced it was giving up on signaling altogether and concentrating on actual self-improvement, rather than the far more common (though less useful) tactic of signaling something else that was more reliably high-status in that community.

That's pretty cool. Good for you!

You may have knowledge about this particular case I don't, but unless we know XFrequentist is telling the truth rather than self-deceiving (or we know that there is a high probability of such) we shouldn't give him positive reinforcement.

Agreed (although still appreciated, TOD)! I could easily be wrong.

The evidence I would call on to support my belief is that:

  • I spend more time actually working on stuff than I used to,
  • I get less flustered in situations where others' perception of my intellect could suffer a hit (presentations, meetings, group conversations),
  • in discussion/argument, I feel less concerned whether or not I come off as intelligent,
  • I've observed fewer people telling me that I'm smart.

I can think of alternate explanations for all these observations though. I'll ask folk at our next meetup whether they think this is accurate, and I'll also ask a few people that have known me well for the past few years. The outside view is clearly more reliable here.

because on some level I suspected that I'm not actually very smart

Every smart person has this tendency, really. From the inside, being smart doesn't feel like there's anything different about you. It just feels like intellectual tasks are easier. There's no easy way to feel how hard it is for a not-smart person to learn or do something.

Discussions of signaling usually make the opposite assumption: that our stated (and mentally accessible) reasons for actions are false.

I think it's more accurate to say "often irrelevant" than "false".

I see at least two problems with this case study.

First, what sort of sampling bias is introduced by studying only men who are willing to view such materials? It seems highly implausible to me that this effect is zero.

Second, if true, this theory should generalize to other cases of people who express an exceptionally strong opposition towards some low-status/disreputable behavior that can be practiced covertly, or some low-status beliefs that can be held in secret. Yet it's hard for me to think of any analogous examples that would be the subject of either folk theories or scientific studies.

In fact, this generalization would lead to the conclusion that respectable high-status activists who crusade against various behaviors and attitudes that are nowadays considered disreputable, evil, dangerous, etc., should be suspected that they do it because they themselves engage in such behaviors (or hold such attitudes) covertly. The funny thing is, in places and social circles where homophobia is considered disreputable, this should clearly apply to campaigners against homophobia!

Second, if true, this theory should generalize to other cases of people who express an exceptionally strong opposition towards some low-status/disreputable behavior that can be practiced covertly, or some low-status beliefs that can be held in secret. Yet it's hard for me to think of any analogous examples that would be the subject of either folk theories or scientific studies.

There are a few other scientific results of this type: search the literature under "reaction formation". For example:

Morokoff (1985): Women high in self-reported "sex guilt" have lower self-reported reaction to erotic stimuli but higher physiological arousal.

Dutton & Lake (1976): Whites with no history of prejudice and self-reported egalitarian beliefs were given bogus feedback during a task intended to convince them they were subconsciously prejudiced (falsely told that they had high skin response ratings of fear/anger when shown slides of interracial couples). After they had left the building, they were approached by either a black or white beggar. Whites who had received the false racism feedback gave more to the black beggar (though not to the white beggar) than whites who had not.

Sherman and Garkin (1980): Subjects were asked to solve a difficult riddle in which the trick answer involved sex-roles, such that after failing they felt "implicitly accused of sexism" (couldn't find the exact riddle, but I imagine something like this). Afterwards they were asked to evaluate a sex-discrimination case. People who had previously had to solve the riddle gave harsher verdicts against a man accused of sexual discrimination than those who had not.

I've heard anecdotal theories of a few similar effects - for example, that the loudest and most argumentative religious believers are the ones who secretly doubt their own faith.

Overall I probably shouldn't have included the case study because I don't think Trivers' theory stands or falls on this one point, and it's probably not much more than tangential to the whole idea of a conscious/unconscious divide.

That's extremely interesting - thanks for the references!

I've heard that any emotional response which causes an increase in blood pressure (including anxiety, anger, or disgust) will tend to increase penile circumference (which is what was measured in the homophobia study). This was discussed recently on Reddit (e.g., this comment).

First, what sort of sampling bias is introduced by studying only men who are willing to view such materials? It seems highly implausible to me that this effect is zero.

Would this have an effect on the difference between homophobes and non-homophobes? Intuitively, it should have a uniform effect across the board, so that the comparison of differences is still valid (although what Unnamed mentions in response to the parent undermines this); it's hard to know without checking.
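To make that intuition concrete, here is a toy simulation (my own sketch, not the study's method; all numbers are invented) showing that a selection effect which shifts both groups equally leaves the between-group comparison intact, while the caveat above concerns precisely the case where the shift is not equal:

    # Hypothetical sketch: a volunteer-selection effect that adds the
    # same shift to both groups leaves the group difference unchanged.
    import random

    random.seed(0)

    def sample_group(true_mean, selection_shift, n=64):
        # arousal score = group mean + selection effect + noise
        return [true_mean + selection_shift + random.gauss(0, 1)
                for _ in range(n)]

    homophobes = sample_group(true_mean=1.0, selection_shift=0.5)
    controls = sample_group(true_mean=0.0, selection_shift=0.5)

    mean = lambda xs: sum(xs) / len(xs)
    print(mean(homophobes) - mean(controls))  # stays near the true 1.0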

Silly example from my life. When I was three, I liked a girl named Katy in my Sunday school class. My greatest fear was that someone else would know. So I decided that I would be mean to Katy. I also realized that if I treated her differently, someone might read into that that I liked her. So I started treating all the girls in my Sunday school class horribly. And kept it going (consistency bias) until I was twelve. There were so many times that I wasn't even sure myself if I liked or hated girls, since I always said I hated them, even though I had crushes on most of the ones I knew.

Trivers' theory has been summed up by calling consciousness "the public relations agency of the brain". It consists of a group of thoughts selected because they paint the thinker in a positive light, and of speech motivated in harmony with those thoughts.

I found Modularity and the Social Mind: Are Psychologists Too Self-Ish? to be an excellent article relating to this. It also considerably helps question the concept of unified preferences.

And it also has plenty of other LW-related stuff and intriguing ideas packed into a very small space. It covers (and, to me, clarifies) various ideas, from modularity of mind, to the fact that having inconsistent beliefs need not cause dissonance, to our consciousness not being optimized for having true beliefs and being the PR firm instead of the president, to the fact that any of our beliefs/behaviors that are not subjected to public scrutiny shouldn't be expected to become consistent. Very much recommended.

Abstract: A modular view of the mind implies that there is no unitary “self” and that the mind consists of a set of informationally encapsulated systems, many of which have functions associated with navigating an inherently ambiguous and competitive social world. It is proposed that there are a set of cognitive mechanisms—a social cognitive interface (SCI)—designed for strategic manipulation of others’ representations of one’s traits, abilities, and prospects. Although constrained by plausibility, these mechanisms are not necessarily designed to maximize accuracy or to maintain consistency with other encapsulated representational systems. The modular view provides a useful framework for talking about multiple phenomena previously discussed under the rubric of the self.

Some excerpts:

Taken together, the ideas that certain cognitive systems’ functions might not be designed to generate representations that are the best estimate of what is true along with the tolerance for mutually contradictory representations that modularity affords suggest a conclusion central to our overarching thesis. In particular, these two ideas imply that one cognitive subsystem can maintain a representation that is not the best possible estimate of what is true but can nonetheless be treated as “true” for generating inferences within the encapsulated subsystem. If a more accurate representation about the actual state of the world is represented elsewhere in the cognitive system, this presents no particular difficulty. Hence, there is no particular reason to believe that the mind is designed in such a way to maintain consistency among its various representational systems.

Summarizing, we have suggested the following. First, the mind consists of a collection of specialized systems designed by natural selection and furthermore, that individual systems are informationally encapsulated with respect to at least some types of information. Second, these systems have been selected by virtue of their functional consequences, not by virtue of their ability to represent what is true. Third, the encapsulation of modular systems entails that mutually contradictory representations can be simultaneously present in the same brain with no need for these representations to be reconciled or made mutually consistent.

[...]

We hypothesize that this is a primary function of the SCI: to maintain a store of representations of negotiable facts that can be used for persuasive purposes in one's social world. For this reason, a crucial feature of the SCI is that it is not designed to maximize the accuracy of its representations, an idea consistent with the wealth of data on biases in cognitive processes (Greenwald, 1980; Riess, Rosenfeld, Melburg, & Tedeschi, 1981; Sedikides & Green, 2004). Instead, it is designed to maximize its effect in persuading others. As D. Krebs and Denton (1997) observed, "It is in our interest to induce others to overestimate our value" (p. 36). Humphrey and Dennett (1998) similarly concluded that "selves . . . exist primarily to handle social interactions" (p. 47).

There are, of course, limits to what others will believe. Because humans rely on socially communicated information, they have filtering systems to prevent being misled. Inaccuracy must be restrained. Thus, as a number of authors have pointed out, "Self-presentation is . . . the result of a tradeoff between favorability and plausibility" (Baumeister, 1999a, p. 8; see also D. Krebs & Denton, 1997; Schlenker, 1975; Schlenker & Leary, 1982a; Sperber, 2000a; Tice, Butler, Muraven, & Stillwell, 1995; Van Lange & Sedikides, 1998). The findings by Tice et al. (1995) that people are more modest in their self-presentation to friends than to strangers is interesting in this regard, suggesting that others' knowledge reins in the positive features one can plausibly claim. This selection pressure might have led to an additional feature of the SCI: to maintain the appearance of consistency. This implies that one important design feature of the SCI is to maintain a store of representations that allow consistency in one's speech and behavior that constitute the most favorable and defensible set of negotiable facts that can be used for persuasive purposes.

[...]

On our view, if the brain is construed as a government, the SCI, the entity that others in your social world talk to and the entity that talks back to others in your social world, is more like the press secretary than the president. The press secretary does not make major decisions or necessarily know how they were made, has access to only limited information from both below (sensory) and above (decision-making operations), and is in charge of spin. The press secretary will not always know what motivated various decisions and actions, although the press secretary is often called on to explain them.

[...]

Recall our claim that the existence of mechanisms designed to allow individuals to maintain and signal favorable and defensible representations of their characteristics would have led to selection pressures on perceivers to check for the accuracy of signalers’ communications. This would include systems designed to check communication against what else is known as well as systems to check communication for within-individual consistency (Sperber, 2000a). This in turn would have led to selection to maintain consistency in one’s communicative acts. If the SCI does not have access to the real causes of one’s own behavior (Freud, 1912/1999; Nisbett & Wilson, 1977), then this might induce the construction of a narrative to give causal explanations that are sensible, a task which must be accomplished without necessarily having the benefit of all potentially relevant information (Gazzaniga, 1998). Consistency is important with respect to the information other people possess—inconsistency entails minimal cost as long as the relevant facts cannot be assembled by others.

[...]

Although rarely pointed out, there are an extraordinarily large number of cases in which it is transparent that inconsistent representations are maintained with no effort to compensate in ways outlined in the initial theory (belief change, minimizing importance of discrepant representations, and so on). The most obvious cases are religious ideas, where beliefs thoroughly inconsistent with ontological commitments are deeply held. Indeed, it has been argued that it is precisely this discrepancy that causes these beliefs to be generated and transmitted (Boyer, 1994a, 1994b, 2001; Boyer & Ramble, 2001).

[...]

Returning to the first criterion, acts that are private and unlikely to become publicly known might similarly be relatively immune to the kind of reorganization implied by dissonance-related theories. This idea resonates with Tice’s (1992) suggestion that it is correct to “question whether internalization occurs reliably under private circumstances” (p. 447). Tice and Baumeister (2001) more recently suggested that “public behavior appears capable of changing the inner self” (p. 76), an idea that fits with Shrauger and Schoeneman’s (1979) finding that “individuals’ self-perceptions and their views of others’ perceptions of them are quite congruent” (p. 565), but that these same self-perceptions are not necessarily congruent with others’ actual perceptions. In other words, people try to maintain consistency with the way they think they are perceived (see also Baumeister, 1982; Baumeister & Cairns, 1992).

For example, although Aronson (1992, p. 305) emphasized preservation of one’s sense of self, Aronson, Fried, and Stone (1991) emphasized that it was “not practicing what they are preaching” (p. 1637) that can be expected to induce change. It is crucial to mark the distinction between “preaching” and the “self-concept.” Preaching is a social act, and predicting change as a function of this manipulation entails a commitment beyond preserving the self-concept (Aronson, 1992, 1999; see also Thibodeau & Aronson, 1992). An emphasis on hypocrisy (Aronson et al., 1991; Dickerson, Thibodeau, Aronson, & Miller, 1992; Fried & Aronson, 1995; Stone, Aronson, Crain, & Winslow, 1994; Stone, Wiegand, Cooper, & Aronson, 1997) that turns on inconsistencies in publicly known information (see, e.g., Stone et al., 1997), especially public advocacy (Fried & Aronson, 1995), implies the view that the preservation of concepts surrounding the self is insufficient to induce dissonance effects without the added social element.

The psychic unity of mankind should preclude the existence of a miraculous genetic ability like this in only one in four hundred people: if it's possible, it should have achieved fixation. Ekman believes that everyone can be trained to this level of success (and has created the relevant training materials himself) but that his "wizards" achieve it naturally; perhaps because they've had a lot of practice.

This doesn't follow. Just because it's not a complex genetic adaptation doesn't mean it's environmental. Lie-detection ability might just be an additive-effect quantitative trait like height or IQ, with truth-wizardry being the extreme right tail. This is consistent with evolutionary genetics, as Eliezer's psychic unity point only applies to adaptations with multiple interdependent (and therefore non-additive) genetic parts.
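To put rough numbers on the quantitative-trait point (my own illustration, not the commenter's; the normality assumption is only for the sketch), a prevalence of one in four hundred is simply the tail of a bell curve beyond about +2.8 standard deviations, rare in the way very tall people are rare:

    # Hypothetical sketch: if lie-detection skill were a normally
    # distributed quantitative trait, where would a 1-in-400
    # "truth wizard" sit in the distribution?
    from scipy.stats import norm

    prevalence = 1 / 400
    cutoff = norm.ppf(1 - prevalence)  # z-score bounding the top 1-in-400 tail
    print(f"top 1-in-400 begins around +{cutoff:.2f} standard deviations")
    # ~ +2.81 SD: an extreme of ordinary additive variation,
    # so no fixation argument applies

No single miracle gene is needed; ordinary additive variation produces tails this thin as a matter of course.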

But there's also some evidence that there are certain people who can reliably detect lies from any source at least 80% of the time without any previous training: microexpressions expert Paul Ekman calls them (sigh...I can't believe I have to write this) Truth Wizards, and identifies them at about one in four hundred people.

I have a Scientific American that claims this has turned out to be false. I'll try to find it and post back.

It [the conscious mind] consists of a group of thoughts selected because they paint the thinker in a positive light ...

That sounds pleasant enough that it makes me wish I belonged to Trivers' species.

I cannot remember where, but I'm fairly sure I've read that Ekman's Truth Wizards are more likely to come from a background of childhood domestic violence. Google is failing me, though, so if anyone else can corroborate this (or alternatively let me know if it was spurious bullcrap I saw on Lie To Me), that would be appreciated.

Apparently, most of what one sees on Lie To Me is spurious. At any rate, viewing the show causes people to make more false positive identifications of deception relative to a control group, without being any more accurate at catching real deception:

The Impact of Lie To Me on Viewers' Actual Ability to Detect Deception

You mean, you can't detect lies by standing three inches from someone and squinting up their nostrils?

I'm fairly sure I've read that Ekman's Truth Wizards are more likely to come from a background of childhood domestic violence

I don't know if that's true or in print, but I do remember it being mentioned on Lie To Me, in the context of Torres' background. But at least one Truth Wizard believes it's bunk, and I couldn't find anything on Ekman's blog about the subject one way or another.

See, I haven't actually seen that much of the show, and I've definitely not seen that storyline. I still can't seem to find anything to substantiate it, though, so provisionally chalking it down as spurious bullcrap seems safe.


That's from the TV series, the story of one of the main characters, Ria Torres.

[This comment is no longer endorsed by its author]

Ekman believes that everyone can be trained to this level of success (and has created the relevant training materials himself) but that his "wizards" achieve it naturally; perhaps because they've had a lot of practice.

If I remember it right, it isn't only supposed to be about the amount of practice. It's important that you practice in an environment where you want to spot lies but expect people to tell the truth.

The practice in law enforcement, where the agent assumes that the person they are interrogating is guilty, isn't enough. In contrast, the people in the Secret Service who guard important figures get better practice: for any single person in the crowd, they assume by default that the person is innocent, but still check to see whether they might be guilty. As a result, there are more "wizards" in the Secret Service than in law enforcement.

Does Trivers' theory assert that the unconscious does not buy the flattering lies that the conscious mind tells itself? If so, has the assertion been tested?