Another technique: thought quarantine. New ideas should have to endure an observation period and careful testing before they enter your repertoire, no matter how convincing they seem. If you adopt them too quickly, you risk becoming attached to them before you have a chance to notice their flaws.
I've found this to be particularly important with Eliezer's posts on OB and here. Robin Hanson's and most other posts are straightforward: they present data and then an interpretation. Eliezer's writing style also communicates what it feels like to believe his interpretation. The result is that after reading one of Eliezer's posts, my mind acts as though I believed what he was saying during the time I spent reading it. If I don't suspend judgment on Eliezer's ideas for a day or two while I consider them and their counter-arguments, they make themselves at home in my mind with inappropriate ease.
I won't go so far as to accuse Eliezer of practicing the dark arts -- I agree that communicating experiences is worthwhile. I read OB for the quality of the prose as well as the idea content, both of which stand out among blogs. But while this effect hasn't been articulated in comments as far as I know, I suspect that it contributes to the objections Robin Hanson and others have to Eliezer's style.
I have to say, that goes a bit beyond what I intended. But that part where I communicate the experience is really important. I wonder if there's some way to make it a bit less darkish without losing the experiential communication?
It may be time for a good Style vs Content Debate; the first commenter to scream "false dilemma" gets a prize.
We left out one strategy because we didn't have scientific support for it. But introspectively:
3. Change or weaken your brain’s notion of “consistent”. Your brain has to be using prediction and classification methods in order to generate “consistent” behavior, and these can be hacked.
Everybody's writing about removing this effect in the future. But how about in the past?
How much of your present day self concept is delusional (or at least arbitrary)?
I liked this article, though I might change my mind about that. I’m someone who changes my opinions based on conversation and reasoning.
Great post.
Here's some additional reading that supports your argument:
Distract yourself. You're more honest about your actions when you can't exert the mental energy necessary to rationalize them.
And the (subconscious) desire to avoid appearing hypocritical is a huge motivator.
I've noticed this in myself often. I faithfully watched LOST through the third season, explaining to my friends who had lost interest around the first season that it was, in fact, an awesome show. And then I realized it kind of sucked.
This is a great post, especially all of the technique suggestions.
Socially Required Token Disagreement: I'm especially surprised by the "drive safely" study -- and it's especially weird since "keeping America beautiful" would seem to contradict putting a big ugly sign on your front lawn. Maybe the effect wasn't through the person's support for vague feel-good propositions, but through their changed attitude toward following requests from strangers knocking on their door.
2b. Positive hypocrisy. Speak and act like the person you wish you were, in hopes that you’ll come to be them. (Apparently this works.)
This does work. I found that when I noticed I was quiet and didn't talk to people often, I didn't like being that way. I wanted to reach out. It took four years to break the habit, but now my friends know me to be a generally "outspoken and outgoing" person. In other words, I had an image of what I wanted to be (more outgoing) and thought of what an outgoing person would do (talk to the person sitting ...
This is something I've noticed as a factor in my own behavior since I was a child. I never tried isolating myself from social influences to avoid pressure on my personality, though; rather, at an early age, I consciously rejected the idea that I had a distinct "true" personality that existed irrespective of circumstance. My strategies mainly focused on creating my own pressures to conform to, so that I could shape my contextual behavior in directions I wanted.
My first top level post is actually an example of this, since I created it not just to pro...
Whenever you are about to make a decision which you do not care much about, one way or the other, use one of the following algorithms:
Consistency effects seem to me to be the sort of error that more intelligent people might be MORE prone to, and thus particularly important to flag.
BTW, 3E seems to me to be by far the most important of the rationality suggestions given, largely because it actually seems practical.
Could more people please share data on how one of the above techniques, or some other technique for reducing consistency pressures, has actually helped their rationality? Or how such a technique has harmed their rationality, or has just been a waste of time? The techniques list is just a list of guesses, and while I'm planning on using more of them than I have been using... it would be nice to have even anecdotal data on what helps and doesn't help.
For example, many of you write anonymously; what effects do you notice from doing so?
Or what thoughts do you have regarding Michael Vassar's suggestion to practice lying?
How can you tell whether one's self might be getting hijacked or if it's getting rescued from a past hijacking?
E.g. I've been a long-time OB reader but took a couple of months off (part of a broader tactic to free myself of a possible RSS info addiction, and also to build some more connections with local people & issues via Twitter). I brought OB back into my daily reading list last week, read a few of Robin's posts and wondered where Eliezer was at...
Now I find myself here at LW, articulating thoughts to myself as I read and catch up, feeling impell...
In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year old boys not to play with an attractive, battery-operated robot. He also told each boy that such play was “wrong”.
Along the lines of 3a, I consider the word 'wrong', and all its variations, as if they were uttered at gunpoint. From my own observation, the difference between the two is little more than one of degree. Describing (usually internally) such social influences in terms of raw power allows them to be navigated without having my self-concept warped into undesirable patterns or riddled with undesired natural categories.
This reminds me heavily of some studies I've read about pathological cases involving, e.g., split-brain patients or those with right-hemisphere injuries, wherein the patient will rationalize things they have no conscious control over. For instance, the phenomenon of anosognosia as mentioned in this Less Wrong post.
The most parsimonious hypothesis seems, to me at least, that long-term memory uses extremely lossy compression, recording only rough sketches of experience and action, and that causal relationships and motivations are actually reconstructed on the...
I think it's more accurate to say that memory is not for remembering things. Memory is for making predictions of the future, so our brains are not optimized for remembering exact sequences of events, only the historical probability of successive events, at varying levels of abstraction. (E.g. pattern X is followed by pattern Y 80% of the time).
This is pretty easy to see when you add in the fact that emotionally-significant events involving pleasure or pain are also more readily recalled; in a sense, they're given uneven weight in the probability distribution.
This simple probability-driven system is enough to drive most of our actions, while the verbal system is used mainly to rationalize our actions to ourselves and others. The only difference between us and split-brain patients or anosognosics is that we don't rationalize our externalities as much... but we still rationalize our own thoughts and actions, in order to perpetuate the idea that we DO control them. (When in fact, we mostly don't even control our thoughts, let alone our actions.)
Anyway, the prediction basis is why it's so hard to remember if you locked the door this time -- your brain really only cares if you usually lock the door, not whether you did it this time. (Unless there was also something unusual that happened when you locked the door this time.)
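To make the "historical probability of successive events" idea concrete, here's a toy sketch (my own illustration, not taken from any study discussed here). It assumes a hypothetical TransitionMemory class that stores only how often one event follows another, never individual episodes -- which is exactly why "did I lock the door this time?" is unrecoverable while "do I usually lock the door?" is easy.

```python
from collections import defaultdict


class TransitionMemory:
    """Toy memory that records only transition frequencies between events."""

    def __init__(self):
        # counts[context][next_event] = how many times next_event followed context
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, context, next_event):
        self.counts[context][next_event] += 1

    def predict(self, context):
        """Estimate P(next_event | context) from historical frequencies."""
        followers = self.counts[context]
        total = sum(followers.values())
        return {event: n / total for event, n in followers.items()} if total else {}


memory = TransitionMemory()

# 99 ordinary days: leaving the house is followed by locking the door.
for _ in range(99):
    memory.observe("leave house", "lock door")

# One day the door was left unlocked; only the aggregate count changes --
# no record of *which* day it was survives.
memory.observe("leave house", "leave unlocked")

print(memory.predict("leave house"))
# {'lock door': 0.99, 'leave unlocked': 0.01}
```

A system like this can tell you that you usually lock the door, and could even weight emotionally salient events more heavily by counting them extra, but it simply has no slot for "what happened on this particular occasion."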
Oh man. I already knew about this effect when I spent the summer in Atlanta. But I am not very good under social pressure, so when all the panhandling gentlemen recognized my clueless wandering around the streets and started demanding a dollar or bus fare, I felt like I had to give it to them. (I did this until I ran out of cash, and then I just said "Sorry! No cash! I need to catch the bus! Bye!") One guy asked for 80 cents for water from the vending machine, so I offered him my water. To which he promptly replied that he would like my water in ...
For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.
Courtesy reference: A Fable of Science and Politics describes a conflict between two fictional underground factions, who were in disagreement regarding the appearance of the sky, long lost to them.
This post seems to imply that the self-consistency bias is irrational, but it doesn't argue strongly that becoming more self-inconsistent leads to better outcomes. In fact, it hints that the self-consistency bias is strong and natural, which would suggest that it might be beneficial in the EEA.
For example, it may be more beneficial to consistently carry through plans than to switch to the best-appearing alternative at every step.
Another idea: Possibly the tendency to incorporate small decisions into one's self-concept is a way of seeking individuation; being an unusual person may be advantageous in some way.
Agree with and like the post. Two related avenues for application:
Using this effect to accelerate one's own behavior modification by making commitments in the direction of the type of person one wants to become. (e.g. donating even small amounts to SIAI to view oneself as rationally altruistic, speaking in favor of weight loss as a way to achieve weight loss goals, etc.). Obviously this would need to be used cautiously to avoid cementing sub-optimal goals.
Memetics: Applying these techniques on others may help them adopt your goals without your needing to explicitly push them too hard. Again, caution and foresight advisable.
This effect is interesting from the rationalist perspective because it does three separate things: (1) making us believe false things about our past mental states (or even false things about the world); (2) creating a disconnect between why we claim/believe we are saying or doing something, and why we actually are; and (3) changing our behaviors/desires themselves. While (1) and (2) clearly represent decreases in rationality/sanity, what can we say about (3)? Don't we all believe Hume around here?
These consistency effects are reminiscent of Yvain’s large, unnoticed priming effects -- except that they’re based on your actions rather than your sense-perceptions,
If they wanted to show that, they should have had a control group that observed other people taking those actions.
Observing yourself doing something is still observing it, and priming could still account for the results.
I think this is a great post.
Really made me think about how this might apply to me, and I've already decided to make a few changes based on some of your suggestions (mostly in how I phrase things when describing myself).
As social creatures, I wonder if the effect is stronger when these consistency effects rise up in group situations. Does our brain try harder to stay consistent with an identity that makes us part of a group rather than an "individual" identity?
This certainly could explain a few things about how solid political/tribal/religious ...
...The unsettling part comes next; Freedman and Fraser wanted to know how apparently unrelated the consistency prompt could be. So, with a third group of homeowners, they had a “volunteer” for an ostensibly unrelated non-profit ask the homeowners to sign a petition to “keep America beautiful”. The petition was innocuous enough that nearly everyone signed it. And two weeks later, when the original guy came by with the big, ugly signs, nearly half of the homeowners said yes -- a significant boost above the 19% baseline rate. Notice that the “keep America b
I am working on my own time to understand how to apply Value of Information Analysis to the projects I handle for an oil company, particularly regarding geoscientific matters of reservoir characterization.
This is a really good post. I particularly like the suggestion that we don't have to infer and cache conclusions about ourselves when we screw up and don't return a library book. (Of course, other people would be rational to cache a conclusion about us because thinking differently wouldn't be a self-fulfilling prophecy.)
I must be falling to the dark side because I read this and thought "so this is how I can convince people of things: give them a dollar to say they agree with me."
3e sounds a lot like narrative therapy. If you're interested in that method, reading more about it could help.
More evidence that social work and LW have many similar aims, and methods that can be used for both.
Although in this post the authors emphasize the hijacking that happens when we say things, I find it related to the "cognitive costs of doing things" (see http://lesswrong.com/lw/5in/the_cognitive_costs_to_doing_things/). When you do something, you always pay a price. Maybe the list of what to do could include being prepared for, and estimating, the price you are going to pay when you plan to do something.
Also beware of the "beauty bias" (see www.overcomingbias.com): if a handsome/beautiful person tells you something, it seems you are more likely to agree with him/her.
One thing I would point out is that the arguments presented here represent a considerable effort to examine one's own personal psyche, and the common psyche.
While it can be a definite benefit to examine this topic, I advise caution and moderation in the attempt.
I admire the authors' own example in doing the equivalent: "I'm not recommending these, just putting them out there for consideration"
My main point is that examination need not be experimentation; we can form hypotheses for consideration and not be burdened with th...
By listing those "suggestions," you are causing at least one person to try to use them, even though they are in my judgment largely worthless, or at least not worth the time and effort required to adopt them (this judgment means little compared to actual evidence of their relative effectiveness, but since I haven't seen any, it will have to suffice as a prior). I have also seen no plausible argument here that this type of bias actually causes unhappiness, so I care nothing about it.
The cache problem is worst for language because it's usually made entirely of cache. Most words/phrases are understood by example instead of by reading a dictionary or thinking up your own definitions. I'll give an example of a phrase most people have an incorrect cache for. Then I'll try to cause your cache of that phrase to be updated by making you think about something relevant to the phrase which is not in most people's cache of it. It's something which, by definition, should be included but for other reasons will usually not be included.
"Affirmative a...
by Anna Salamon and Steve Rayhawk (joint authorship)
Related to: Beware identity
Update, 2021: I believe a large majority of the priming studies failed replication, though I haven't looked into it in depth. I still personally do a great many of the "possible strategies" listed at the bottom, and they subjectively seem useful to me; but if you end up believing that, it should not be on the basis of the claimed studies.
A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."
Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do, in the absence of obvious outside pressure, can hijack your self-concept for the medium- to long-term future.
To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing. So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards. Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.
For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself. If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends. If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.
All familiar phenomena, right? You probably already discount other people’s views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas. But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena. And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence. (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)
Consider the following research.
In the classic 1959 study by Festinger and Carlsmith, test subjects were paid to tell others that a tedious experiment had been interesting. Those who were paid $20 to tell the lie continued to believe the experiment boring; those paid a mere $1 to tell the lie were liable later to report the experiment interesting. The theory is that the test subjects remembered calling the experiment interesting, and either: (a) wrote those words off as bought, in the $20 case, where the payment gave an obvious outside reason for saying them; or (b) concluded that they must really have found the experiment interesting, in the $1 case, where no obvious outside pressure explained what they had said.
In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year old boys not to play with an attractive, battery-operated robot. He also told each boy that such play was “wrong”. Some boys were given big threats, or were kept carefully supervised while they played -- the equivalents of Festinger’s $20 bribe. Others were given mild threats, and left unsupervised -- the equivalent of Festinger’s $1 bribe. Later, instead of asking the boys about their verbal beliefs, Freedman arranged to test their actions. He had an apparently unrelated researcher leave the boys alone with the robot, this time giving them explicit permission to play. The results were as predicted. Boys who’d been given big threats or had been supervised, on the first round, mostly played happily away. Boys who’d been given only the mild threat mostly refrained. Apparently, their brains had looked at their earlier restraint, seen no harsh threat and no experimenter supervision, and figured that not playing with the attractive, battery-operated robot was the way they wanted to act.
One interesting take-away from Freedman’s experiment is that consistency effects change what we do -- they change the “near thinking” beliefs that drive our decisions -- and not just our verbal/propositional claims about our beliefs. A second interesting take-away is that this belief-change happens even if we aren’t thinking much -- Freedman’s subjects were children, and a related “forbidden toy” experiment found a similar effect even in pre-schoolers, who just barely have propositional reasoning at all.
Okay, so how large can such “consistency effects” be? And how obvious are these effects -- now that you know the concept, are you likely to notice when consistency pressures change your beliefs or actions?
In what is perhaps the most unsettling study I’ve heard along these lines, Freedman and Fraser had an ostensible “volunteer” go door-to-door, asking homeowners to put a big, ugly “Drive Safely” sign in their yard. In the control group, homeowners were just asked, straight-off, to put up the sign. Only 19% said yes. With this baseline established, Freedman and Fraser tested out some commitment and consistency effects. First, they chose a similar group of homeowners, and they got a new “volunteer” to ask these new homeowners to put up a tiny three-inch “Drive Safely” sign; nearly everyone said yes. Two weeks later, the original volunteer came along to ask about the big, badly lettered signs -- and 76% of the group said yes, perhaps moved by their new self-image as people who cared about safe driving. Consistency effects were working.
The unsettling part comes next; Freedman and Fraser wanted to know how apparently unrelated the consistency prompt could be. So, with a third group of homeowners, they had a “volunteer” for an ostensibly unrelated non-profit ask the homeowners to sign a petition to “keep America beautiful”. The petition was innocuous enough that nearly everyone signed it. And two weeks later, when the original guy came by with the big, ugly signs, nearly half of the homeowners said yes -- a significant boost above the 19% baseline rate. Notice that the “keep America beautiful” petition that prompted these effects was: (a) a tiny and un-memorable choice; (b) on an apparently unrelated issue (“keeping America beautiful” vs. “driving safely”); and (c) two weeks before the second “volunteer”’s sign request (so we are observing medium-term attitude change from a single, brief interaction).
These consistency effects are reminiscent of Yvain’s large, unnoticed priming effects -- except that they’re based on your actions rather than your sense-perceptions, and the influences last over longer periods of time. Consistency effects make us likely to stick to our past ideas, good or bad. They make it easy to freeze ourselves into our initial postures of disagreement, or agreement. They leave us vulnerable to a variety of sales tactics. They mean that if I’m working on a cause, even a “rationalist” cause, and I say things to try to engage new people, befriend potential donors, or get core group members to collaborate with me, my beliefs are liable to move toward whatever my allies want to hear.
What to do?
Some possible strategies (I’m not recommending these, just putting them out there for consideration):