by Anna Salamon and Steve Rayhawk (joint authorship)
Related to: Beware identity
Update, 2021: I believe a large majority of the priming studies failed replication, though I haven't looked into it in depth. I still personally do a great many of the "possible strategies" listed at the bottom; and they subjectively seem useful to me; but if you end up believing that, it should not be on the basis of the claimed studies.
A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."
Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure can hijack your self-concept for the medium- to long-term future.
To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing. So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards. Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.
For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself. If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends. If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.
All familiar phenomena, right? You probably already discount other people’s views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas. But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena. And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence. (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)
Consider the following research.
In the classic 1959 study by Festinger and Carlsmith, test subjects were paid to tell others that a tedious experiment had been interesting. Those who were paid $20 to tell the lie continued to believe the experiment boring; those paid a mere $1 to tell the lie were liable later to report the experiment interesting. The theory is that the test subjects remembered calling the experiment interesting, and either:
- Honestly figured they must have found the experiment interesting -- why else would they have said so for only $1? (This interpretation is called self-perception theory.), or
- Didn’t want to think they were the type to lie for just $1, and so deceived themselves into thinking their lie had been true. (This interpretation is one strand within cognitive dissonance theory.)
In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year-old boys not to play with an attractive, battery-operated robot. He also told each boy that such play was “wrong”. Some boys were given big threats, or were kept carefully supervised while they played -- the equivalents of Festinger’s $20 bribe. Others were given mild threats, and left unsupervised -- the equivalent of Festinger’s $1 bribe. Later, instead of asking the boys about their verbal beliefs, Freedman arranged to test their actions. He had an apparently unrelated researcher leave the boys alone with the robot, this time giving them explicit permission to play. The results were as predicted. Boys who’d been given big threats or had been supervised, on the first round, mostly played happily away. Boys who’d been given only the mild threat mostly refrained. Apparently, their brains had looked at their earlier restraint, seen no harsh threat and no experimenter supervision, and figured that not playing with the attractive, battery-operated robot was the way they wanted to act.
One interesting take-away from Freedman’s experiment is that consistency effects change what we do -- they change the “near thinking” beliefs that drive our decisions -- and not just our verbal/propositional claims about our beliefs. A second interesting take-away is that this belief-change happens even if we aren’t thinking much -- Freedman’s subjects were children, and a related “forbidden toy” experiment found a similar effect even in pre-schoolers, who just barely have propositional reasoning at all.
Okay, so how large can such “consistency effects” be? And how obvious are these effects -- now that you know the concept, are you likely to notice when consistency pressures change your beliefs or actions?
In what is perhaps the most unsettling study I’ve heard along these lines, Freedman and Fraser had an ostensible “volunteer” go door-to-door, asking homeowners to put a big, ugly “Drive Safely” sign in their yard. In the control group, homeowners were just asked, straight-off, to put up the sign. Only 19% said yes. With this baseline established, Freedman and Fraser tested out some commitment and consistency effects. First, they chose a similar group of homeowners, and they got a new “volunteer” to ask these new homeowners to put up a tiny, three-inch “Drive Safely” sign; nearly everyone said yes. Two weeks later, the original volunteer came along to ask about the big, badly lettered signs -- and 76% of the group said yes, perhaps moved by their new self-image as people who cared about safe driving. Consistency effects were working.
The unsettling part comes next: Freedman and Fraser wanted to know just how unrelated the consistency prompt could be and still work. So, with a third group of homeowners, they had a “volunteer” for an ostensibly unrelated non-profit ask the homeowners to sign a petition to “keep America beautiful”. The petition was innocuous enough that nearly everyone signed it. And two weeks later, when the original guy came by with the big, ugly signs, nearly half of the homeowners said yes -- a significant boost above the 19% baseline rate. Notice that the “keep America beautiful” petition that prompted these effects was: (a) a tiny and un-memorable choice; (b) on an apparently unrelated issue (“keeping America beautiful” vs. “driving safely”); and (c) made two weeks before the second “volunteer” arrived with the sign request (so we are observing medium-term attitude change from a single, brief interaction).
These consistency effects are reminiscent of Yvain’s large, unnoticed priming effects -- except that they’re based on your actions rather than your sense-perceptions, and the influences last over longer periods of time. Consistency effects make us likely to stick to our past ideas, good or bad. They make it easy to freeze ourselves into our initial postures of disagreement, or agreement. They leave us vulnerable to a variety of sales tactics. They mean that if I’m working on a cause, even a “rationalist” cause, and I say things to try to engage new people, befriend potential donors, or get core group members to collaborate with me, my beliefs are liable to move toward whatever my allies want to hear.
What to do?
Some possible strategies (I’m not recommending these, just putting them out there for consideration):
- 1. Reduce external pressures on your speech and actions, so that you won’t make so many pressured decisions, and your brain won’t cache those pressure-distorted decisions as indicators of your real beliefs or preferences. For example:
- 1a. Avoid petitions, and other socially prompted or incentivized speech. Cialdini takes this route, in part. He writes: “[The Freedman and Fraser study] scares me enough that I am rarely willing to sign a petition anymore, even for a position I support. Such an action has the potential to influence not only my future behavior but also my self-image in ways I may not want.”
- 1b. Tenure, or independent wealth.
- 1c. Anonymity.
- 1d. Leave yourself “social lines of retreat”: avoid making definite claims of a sort that would be embarrassing to retract later. Another tactic here is to tell people in advance that you often change your mind, so that you’ll be under less pressure not to.
- 2. Only say things you don’t mind being consistent with. For example:
- 2a. Hyper-vigilant honesty. Take care never to say anything but what is best supported by the evidence, aloud or to yourself, lest you come to believe it.
- 2b. Positive hypocrisy. Speak and act like the person you wish you were, in hopes that you’ll come to be them. (Apparently this works.)
- 3. Change or weaken your brain’s notion of “consistent”. Your brain has to be using prediction and classification methods in order to generate “consistent” behavior, and these can be hacked.
- 3a. Treat $1 like a gun. Regard the decisions you made under slight monetary or social incentives as like decisions you made at gunpoint -- decisions that say more about the external pressures you were under, or about random dice-rolls in your brain, than about the truth. Take great care not to rationalize your past actions.
- 3b. Build emotional comfort with lying, so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience. Perhaps follow Michael Vassar’s suggestion to lie on purpose in some unimportant contexts.
- 3c. Reframe your past behavior as having occurred in a different context, and as not bearing on today’s decisions. Or add context cues to trick your brain into regarding today's decision as belonging to a different category than past decisions. This is, for example, part of how conversion experiences can help people change their behavior. (For a cheap hack, try traveling.)
- 3d. More specifically, visualize your life as something you just inherited from someone else; ignore sunk words as you would aspire to ignore sunk costs.
- 3e. Re-conceptualize your actions into schemas you don’t mind propagating. If you’ve just had some conversations and come out believing the Green Sky Platform, don’t say “so, I’m a Green Sky-er”. Say “so, I’m someone who changes my opinions based on conversation and reasoning”. If you’ve incurred repeated library fines, don’t say “I’m so disorganized, always and everywhere”. Say “I have a pattern of forgetting library due dates; still, I’ve been getting more organized with other areas of my life, and I’ve changed harder habits many times before.”
- 4. Make a list of the most important consistency pressures on your beliefs, and consciously compensate for them. You might either consciously move in the opposite direction (I know I’ve been hanging out with singularitarians, so I somewhat distrust my singularitarian impressions) or take extra pains to apply rationalist tools to any opinions you’re under consistency pressure to have. Perhaps write public or private critiques of your consistency-reinforced views (though Eliezer notes reasons for caution with this one).
- 5. Build more reliably truth-indicative types of thought. Ultimately, both priming and consistency effects suggest that our baseline sanity level is low; if small interactions can have large, arbitrary effects, our thinking is likely pretty arbitrary to begin with. Some avenues of approach:
- 5a. Improve your general rationality skill, so that your thoughts have something else to be driven by besides your random cached selves. (It wouldn’t surprise me if OB/LW-ers are less vulnerable than average to some kinds of consistency effects. We could test this.)
- 5b. Take your equals’ opinions as seriously as you take the opinions of your ten-minutes-past self. If you often discuss topics with a comparably rational friend, and you two usually end with the same opinion-difference you began with, ask yourself why. An obvious first hypothesis should be “irrational consistency effects”: maybe you’re holding onto particular conclusions, modes of analysis, etc., just because your self-concept says you believe them.
- 5c. Work more often from the raw data; explicitly distrust your beliefs about what you previously saw the evidence as implying. Re-derive the wheel, animated by a core distrust in your past self or cached conclusions. Look for new thoughts.
Andrew, suppose you're under strong social pressure to say X. The very thought of failing to affirm X, or of saying not-X, makes your stomach sink in fear -- you know how your allies would respond to that, and it'd be terrible. You'd be friendless and ostracized. Or you'd hurt the feelings of someone close to you. Or you'd be unable to get people to help you with this thing you really want to do. Or whatever.
Under those circumstances, if you're careful to always meticulously tell "your sincere beliefs", you may well find yourself rationalizing, and telling yourself you "sincerely believe" X, quite apart from the evidence for X. Instead of telling lie X to others only, you'll be liable to tell lie X to others and to yourself at once. You'll guard your mind against "thoughtcrime".
OTOH, if you leave yourself the possibility of saying X while believing Y, silently, inside your own mind, you'll be more able to think through X's truth or falsity honestly, because, when "what if X were true?" flashes through the corner of your mind, "I can't think that; everyone will hate me" won't follow as closely.
Personally I enjoy heresies too much to be worried about being biased against them, but that still leaves the problem of others responding negatively. I'm sure there are situations where lies are the least bad solution, but where possible I'd rather get good at avoiding questions, answering ambiguously or with some related opinion that you actually hold and that you know they will like, and so on. In addition to the point about eroding the moral authority of the rationalist community, I'm somewhat worried about majority-pleasing lies amplifying groupthink.