This often works for me: think of some smart, critical and rational friend that you have, and then imagine/visualize presenting your argument to them. Or suppose that you just put up your argument on the Internet to be critiqued. Is any part of your reasoning such that you'd actually prefer not to present it in public, knowing that it won't hold up to scrutiny?
For me at least, if I imagine presenting my reasoning to someone else, I suddenly become a lot more conscious about its weak spots. And then I try to remind myself that if I can't think of a reason why those weak spots should hold up to scrutiny, my instinct should be to abandon the argument instead of just hoping that those problems won't come up.
Anecdotal supporting evidence: In my last days as a religious person, I found myself imagining presenting my pro-religion arguments to the author of the Sequences, and literally could not fantasize a scenario where he found them convincing.
"Well," says Lucy, "I suppose it could have played a role in suggesting that I eat a whole chocolate cake, but the reason why I decided to do it was to support the sugar industry. Lots of people have jobs in the sugar industry, and they've been having some trouble lately."
The obvious response to Lucy is "Is buying and eating that whole chocolate cake really the best way to help the sugar industry? How does it compare to other strategies, like directly giving them money?"
The generalization is that when you say you are taking action A in pursuit of goal G, ask what other actions might be more effective at achieving G.
Set your baseline to always assume you are rationalizing, and then make yourself prove that you aren't, rather than vice versa.
Something I do, that I'm surprised I don't see mentioned here, is to just assume that any point I am trying to make, or anything I think, is a rationalization (which is pretty likely).
So instead of having to think "Am I rationalizing?" (which can be hard to figure out), I change my baseline, to ALWAYS assume "I am probably rationalizing. How am I rationalizing?" and go from there. Sort of a quick run-through of what biases and semi-hidden desires could be influencing my decisions or statements at any given time. From there I can either accept or reject these "rationalizations".
This also ends up leading to many disclaimers in conversations, as mentioned in the OP. (e.g. "Well, I can't know for sure what I HAD thought, because by now hindsight bias has taken hold...." or "Well, I'm completely anchored off your estimate now, but...") I see "Conversation Disclaimers" themselves as a major skill. Maybe an exercise could be made out of that?
Quick idea: Have people in pairs have a conversation about a debatable subject. Every sentence they say has to be prefaced with a disclaimer.
(Note: This is my immediate reaction to this post. I'll give it more thought later.)
Our conceptual understanding of 'motivated cognition', and why it's defective as a cognitive algorithm - the "Bottom Line" insight.
"Defective" isn't quite enough; you want a prescription to replace it with. Saying "this is a bad habit" seems less useful than saying "here is a good habit."
There are two obvious prescriptions I see: provide correct rationales for decisions, or do not provide rationales for decisions. Which prescription you shoot for has a radically different impact on what exercises you do, and so should be talked about quite a bit. It may be desirable to try to wipe out rationalization first, and then progress to correct rationales.
One exercise might be asking "who will this convince?" and "whose desires do I want to maximize?". Lucy probably doesn't actually expect Marvin to be swayed by the plight of Big Sugar, and probably doesn't actually suspect that Marvin will believe she's motivated by the plight of Big Sugar, and so that deflection may be the wrong play because it's not credible.
It seems to me that social incentives will swallow most internal incentives here. If I can get more out of others by ra...
"Break down what your parts have to say into parts" would be an interesting counter to rationalization - I think I'll have to call this an immediate $50 award on the grounds that I intend to test the skill itself, never mind how to teach it.
My girlfriend says that a common case of motivated cognition is witnesses picking someone out of a lineup. They want to recognize the criminal, so given five faces they're very likely to pick one even if the real criminal's not there, whereas if people are leafing through a big book of mugshots they're less likely to make a false positive identification.
She suggests a prank-type exercise where there are two plants in the class. Plant A, who wears a hoodie and sunglasses, leaves to go to the bathroom, whereupon Plant B announces that they're pretty sure Plant A is actually $FAMOUS_ACTOR here incognito. Plant A pokes his head in, says he needs to go take a call, and leaves. See who manages to talk themselves into thinking that really is the celebrity.
OK, here's an exercise that could at least help people notice how motivated cognition gives us incorrect beliefs; it's like a mix between calibration and paranoid debating.
Requirements
This exercise requires a set of "interesting questions" with a numerical answer, for example "How many people were killed by guns in New York in 1990-1999?" or "What is the unemployment rate in China?" (the questions should be at least related to political/social issues; no "What's the population of Mozambique?").
It is also best done in a classroom-type place with a big video projector, and a bunch of computers with internet connections. Somebody who knows Excel will also need to prepare a special spreadsheet.
Step one: Crafting Arguments
Students are together in a room, with one computer each (or they can take turns using a computer, timing isn't critical); an organizer goes to each student and gives him a paper with the question, then flips a coin and announces "high!" if it's heads, and "low!" if it's tails.
Each student then has 30 minutes to prepare a set of arguments for why that particular value is high, or low. The result should be one powerp...
We might also need an exercise just for getting people to understand the concept of motivated cognition at all.
"Motivated cognition" in the first place seems like a poor label because most thinking is motivated. It's redundant and arguing against "motivated cognition" at first glance sounds like arguing against any kind of motivated thinking. That's problematic because good thinking is also motivated, i.e. "I'll invent FAI because it would help the world."
One interesting thing I've heard repeated and found to be true about...
how to verify that a skill has been acquired
I suppose that one way to test #2 and #3 is to ask the participants (informally, so it does not appear like a test) how useful or successful they think some other skill learning activity was. This is probably a part of the procedure already. There is a significant pressure to say that "they learned a lot and are now better at it", due to several biases, including motivated cognition. When asked to elaborate, they are likely to put forth a number of arguments. To a trained eye, it should be easy to s...
One of the motivations of motivated cognition is consistency. People want to be predictable and they want to be seen as stable. So I suggest demotivating it. Have people read chapter 3 of Cialdini's Influence. I particularly like the Emerson quote:
A foolish consistency is the hobgoblin of little minds.
Yes! Teach that preferring consistency over accuracy is low-status!
I predict this will have no effect unless coupled with the following technique:
Detachment. You are not your beliefs; you are not your actions. When the opportunity to think arises, within f...
One of the key markers of rationalization I've seen is that rationalizations ignore tradeoffs and other options. This is obviously true only of rationalizations about actions and policies. For instance, "I want to eat the whole cake to help the sugar industry..." never finishes "...and this help to the sugar industry is worth any ill health effects." or "...and this is more efficient than other ways to help the sugar industry."
One activity that might help is to give people a plausible proposition to argue for in their own li...
I notice that you don't mention any existing studies. Is there a reason for this? A fairly cursory search (no backtracking of citations, or significant effort to use synonyms) brings up several relevant articles (mostly paywalled) and a book. It doesn't seem like a super well-studied field, but I don't see why it's being completely ignored. The (semi-)relevant stuff I found:
The Human Brain as an Evolved Rationalization Machine http://www.epjournal.net/wp-content/uploads/EP102934.pdf
Rational processing or rationalization? The effect of disconfirming i...
On the other hand, LW!Hermione failed to reproduce this experiment
I'm not sure how wide of an audience this post is targeting, but the !-notation feels gratuitous here. How about:
On the other hand, Hermione (LW user) failed to reproduce this experiment
The title strikes me as slightly problematic: I think of skills as (positive) ways to achieve outcomes, and framing "avoidance" of some mistake as a skill doesn't feel quite right.
Rationalizing anything: ask participants questions like "why did you have that cake?" or even "yesterday, you hit your kid. Why?". These questions should not be based on reality, and the instructor(s) should probably do the first two or three rounds themselves to get people used to these kinds of questions. Even though the premises are blatantly false, the participant should come up with a reason that sounds as impressive as possible. ("Yes, I hit my child yesterday. Fourteen generations of Smiths have been brought up that w...
I am somewhat surprised that I cannot find honest Devil's advocacy anywhere on the list of techniques for spotting rationalizations. I understand that EY dislikes it, because it's easy to "invent arguments for anything", but how easy is it to invent a good argument against something you deeply believe in? And by "good" I mean an argument that does not appear silly at the first or even second glance (so, no "chocolate cake in the asteroid belt" nonsense). Or maybe this is covered by the "The world is not like X and I be...
Val also used an upside-down W-diagram with the two worlds at the top and the four beliefs at the bottom, to emphasize the idea that the world is there first, and is fixed, and we have only a choice of what to believe within a fixed world, not a choice of which background world to live in.
I don't know what a "W-diagram" is (and a simple Google search didn't help), so I don't see how this works. Perhaps a picture could help explain.
Point taken. Done.
(I had been quietly hoping to see if it's possible to change my account name from "Mercurial" to "Valentine," but I planned on doing this in the ephemeral land of "later." This just works. Thanks for pointing that out!)
A mantra I use: "I want things to be a certain way. But I don't want to believe they are. I want to believe what is true."
A little more explanation: There was a time when I would say to myself things like, "I want to think that people are basically good." or "I want to believe that my life has not been a mistake." But now I realize that what I want is actually for people to be basically good; I don't want to think that, not if it isn't true. I don't want to believe that my life has not been a mistake; I want my life to not be ...
The Retrench or Retreat Game
You're making a decision and want to check if you're rationalizing. Imagine a friend or someone else whose opinion you respect raises a criticism of/question about your stated reason. Which feels easier?
Maybe this is better as a "True Rejection" exercise, but if you find it a lot easier/more comfortable to shift arguments, your surface answer may have been a rationalization.
The original post mentions some techniques for getting people to avoid rationalizing once they've realized they're doing it, but an earlier step is to get them to realize that they're doing it.
The key to this may be that a person who is rationalizing without realizing it is arguing with him/herself without realizing it, since it's easier to recognize (and to accept) that you're arguing than that you're rationalizing. Accordingly, getting people to realize that they're rationalizing would involve getting them to realize that they're the one that they're ar...
EXERCISE
Ask the person to do an unpleasant task (washing dishes while standing on a slippery floor), which will create an unconscious desire to finish the task.
While performing the task, the person answers questions with TRUE or FALSE. The task only ends once the person has given a certain number of TRUE responses, but wrong answers decrease their score for the task.
I don't always have a problem with motivated cognition, but when I do, my brain usually makes it some or all of the way through the following steps:
Shinzen Young's "Do Nothing" Meditation:
http://www.youtube.com/watch?v=cZ6cdIaUZCA (Note that he is "neuroscience aware.")
http://www.shinzen.org/Retreat%20Reading/FiveWays.pdf
http://www.basicmindfulness.org/
(Also the discussion of "Willingness" in http://www.amazon.com/Acceptance-Commitment-Therapy-Second-Edition/dp/1609189620/)
The goal is to raise the baseline of equanimity and mindfulness. You naturally start allowing emotion and inner talk to thunder through you without letting it drive behavior. This is a prerequisite for ...
Whenever you find yourself about to express a belief, ask yourself if you really believe it first. This works for me, but may not work for people not feeling curiosity or for people not already in the habit of being honest with themselves.
...often that thought causes me to not know what I believe.
Of course the hard part is figuring out a way to reliably put people in a situation where they will find themselves rationalizing. Social pressure would work on a lot of people, but probably not so well on LW types.
This seems like it'll be easiest to teach and test if you can artificially create a preference for an objective fact. Can you offer actual prizes? Candy? Have you ever tried a point system and have people reacted well?
Assume you have a set of good prizes (maybe chocolate bars, or tickets good for 10 points) and a set of less-good prizes (Hershey's kisses, or tickets good for 1 point).
Choose a box: Have two actual boxes, labeled "TRUE" and "FALSE". Before the class comes in, the instructor writes a proposition on the blackboard, such a...
I've been reading about the CFAR exercise prizes. Now that there are some great tools I can learn, any tips for mitigating the initial pain of going from any given irrational state to a more rational one? Any stories of how learning these rationality exercises has made an impact on some part of the quality of your lives?
Exercise: Trick people into rationalizing
Engineer a situation where subjects will perform an action because you tricked them into doing it, and then ask them for reasons why they performed that action. Specifics here that actually work are beyond my pay-grade. But I'm reminded of an experiment where a subject was hypnotized and given an umbrella. After the hypnosis, the researcher asked "Why do you have an umbrella?" and the subject, initially looking a bit shocked at the umbrella, answered "I thought it was going to rain".
It would h...
Think about a time when you were badly disappointed. Then think of any opportunities you had beforehand to notice that things weren't going to turn out the way you'd expected. In particular, cases where you came across an argument or evidence that in hindsight should have changed your opinion, but which you dismissed with a clever argument, or by not thinking about it.
Think about what you could have done in the time you had between your missed opportunity and the disappointment. Plans you could have changed to make the best of a bad situation. Mentally contrast the actual outcome with the outcome had you been forewarned, and allow yourself to be very annoyed at your past self for throwing away time.
Instructor writes a decision on the board and asks all participants to give factors to consider before making that decision. Participants are then asked as a group if each factor (for typical possible values of the factor) is 1. necessary and 2. sufficient to determine whether the decision should be taken.
I would ask Lucy if she would still eat the cake if the sugar industry were doing fine. More generally, I would ask if there is any action Lucy is taking that she would not take if the sugar industry were doing fine.
Ask the participants to come up with a belief that they feel is important to them but hasn't been necessary (though they may mistakenly think it sufficient) for making any meaningful decisions.
(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills. The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil. We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt. This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired. See here for details.)
The following awards have been made: $550 to Palladias, $550 to Stefie_K, $50 to lincolnquirk, and $50 to John_Maxwell_IV. See the bottom for details. If you've earned a prize, please PM StephenCole to claim it. (If you strongly believe that one of your suggestions Really Would Have Worked, consider trying it at your local Less Wrong meetup. If it works there, send us some participant comments; this may make us update enough to test it.)
Motivated cognition is the way (all? most?) brains generate false landscapes of justification in the presence of attachments and flinches. It's not enough for the human brain to attach to the sunk cost of a PhD program, so that we are impelled in our actions to stay - no, that attachment can also go off and spin a justificational landscape to convince the other parts of ourselves, even the part that knows about consequentialism and the sunk cost fallacy, to stay in the PhD program.
We're almost certain that the subject matter of "motivated cognition" isn't a single unit, probably more like 3 or 8 units. We're also highly uncertain of where to start teaching it. Where we start will probably end up being determined by where we get the best suggestions for exercises that can teach it - i.e., end up being determined by what we (the community) can figure out how to teach well.
The cognitive patterns that we use to actually combat motivated cognition seem to break out along the following lines:
And also:
Exercises to teach all of these are desired, but I'm setting apart the Rationalization Patterns into a separate SotW, since there are so many that I'm worried 1-4 won't get fair treatment otherwise. This SotW will focus on items 1-3 above; #4 seems like more of a separate unit.
Conceptual understanding / insights / theoretical background:
The core reasons why rationalization doesn't work are given in The Bottom Line and Rationalization. The Bayesian analysis of selective search is given in What Evidence Filtered Evidence? and Conservation of Expected Evidence.
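(For readers who want the formula behind that analysis, the identity at the heart of Conservation of Expected Evidence, written here in my own notation rather than quoted from those posts, is

$$P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E),$$

i.e. your current credence already equals the expectation of your post-evidence credence, so you cannot plan, in advance, for a search to shift your credence in a particular direction; any possible shift toward H must be balanced by a possible shift away from it.)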
For further discussion, see the entire Against Rationalization sequence, also The Meditation on Curiosity (for the Litany of Tarski).
Some key concepts (it'd be nice if some exercise taught a gut-level understanding thereof, although as always the goal is to teach skills rather than concepts):
(We might also need an exercise just for getting people to understand the concept of motivated cognition at all. When Anna and Michael ran their first session on motivated cognition, they found that while most participants immediately recognized the notion of 'rationalization' from examples like Lucy above, several people had no idea what they were talking about - they didn't see why anyone would ever want to use a technique like the Litany of Tarski. Yes, we know you're skeptical, we also couldn't see how that could possibly be true a priori, but sometimes the evidence just punches you in the nose. After some investigation, it seems entirely possible that Alicorn has simply never rationalized, ever. Other cases (not Alicorn's) suggest that some people might have a very low need for verbal justification; even if they feel guilty about breaking their diet, they feel no urge to invent an elaborate excuse - they just break their diet. On the other hand, LW!Hermione failed to reproduce this experiment - she couldn't find anyone who didn't immediately recognize "rationalization" after 10 tries with her friends. We notice we are confused.)
(The upshot is that part of the challenge of constructing a first unit on motivated cognition may be to "Explain to some participants what the heck a 'rationalization' is, when they don't remember any internal experience of that" or might even be "Filter out attendees who don't rationalize in the first place, and have them do a different unit instead." Please don't be fascinated by this problem at the expense of the primary purpose of the unit, though; we're probably going to award at most 1 prize on this subtopic, and more likely 0, and there's an existing thread for further discussion.)
Countering the rationalization impulse / restoring truth-seeking:
The Tarski method: This is the new name of what we were previously calling the Litany of Tarski: "If the sky is blue, I want to believe the sky is blue; if the sky is not blue, I want to believe the sky is not blue; let me not become attached to beliefs I may not want."
Example: Suppose you walk outside on a fall day wearing a short-sleeved shirt, when you feel a slightly chill breath of air on your arms. You wonder if you should go back into the house and get a sweater. But that seems like work; and so your mind quickly notes that the Sun might come out soon and then you wouldn't need the sweater.
Diagram: [the four world-by-belief quadrants of the proposition]
Visualizing all 4 quadrants of this binary proposition - the world is like A and I believe A, the world is like B and I believe A, etc. - should, in principle, emotionally confirm the truth of the proposition: "If it will be cold, I want to believe it's cold; if it's not cold, I want to believe it's not cold; let me not become attached to beliefs I may not want."
Eliezer and Anna, when using this method against the temptation to believe X, visualize only the quadrant "The world is not like X and I believe X" to remind themselves of the consequences; e.g. we would only visualize the "You are cold!" quadrant. Michael Smith (aka "Val", short for Valentine) says that after some practice on this technique as a kata, he was able to visualize all 4 quadrants quickly and that visualizing all 4 seemed to help.
Val also used an upside-down W-diagram with the two worlds at the top and the four beliefs at the bottom, to emphasize the idea that the world is there first, and is fixed, and we have only a choice of what to believe within a fixed world, not a choice of which background world to live in. The Tarski Method embodies a "Start from the world" mental process in which you visualize the world being there first, and your belief coming afterward; a similar "Start from the world" rule is likewise emphasized in the Bayes unit, wherein one starts from a world and asks about the probability of the evidence, rather than starting from the evidence and trying to make it match up with a world.
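For anyone who has trouble picturing this from prose alone, here is a minimal sketch (mine, not CfAR's materials; the function name and wording are illustrative) that enumerates the four quadrants with the world fixed first and the belief chosen second, mirroring the upside-down W layout:

```python
# Minimal sketch of the Tarski quadrants for a binary proposition.
# The outer loop fixes the world before the inner loop picks a belief,
# to echo the "start from the world" ordering described above.

def tarski_quadrants(p, not_p):
    """Return the four world-by-belief quadrants for a binary proposition."""
    quadrants = []
    for world in (p, not_p):        # the world is there first, and is fixed...
        for belief in (p, not_p):   # ...then we choose what to believe within it
            quadrants.append(f"The world is one where {world}, and I believe {belief}.")
    return quadrants


if __name__ == "__main__":
    for quadrant in tarski_quadrants("it is cold outside", "it is not cold outside"):
        print(quadrant)
```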
When we actually tested a unit based on asking people to draw Tarski squares, it didn't work very well - possibly because people didn't seem to understand what it was for, or when they would use it; possibly because it wasn't a group exercise. In any case, we already tried teaching this the obvious way ("Go draw Tarski squares!") and it didn't work. But it still seems worth teaching if someone can invent a better exercise, because it's something that multiple CfAR people actually use to counter the rationalization impulse / restore truthseeking in real life.
Become Curious: Detect non-curiosity and become curious. Anna's main alarm signal is when she notices that she's not curious in the middle of a conversation - that she doesn't have an impulse-to-find-out the answer - and she then tries to make herself curious about the subject of discussion. Besides visualizing the not-X-and-believe-X quadrant of the Tarski diagram, this is also something you may be able to do by brute introspection - remember the feeling of curiosity, and try to call it up. (This is probably in the top 3 most important things I learned from Anna. -- EY)
Take Pride in Your Objectivity: Julia teaches this as a primary counter in her Combat Reflexes unit (how to avoid instantly defending or attacking). Eliezer does this every time he admits he's wrong on the Internet - congratulates himself on being such a great rationalist, in order to apply counter-hedons to the flash of pain that would otherwise be associated.
Visualize a Fixed Probability: This is what Eliezer used as a child to stop being scared of the dark - he would deliberately visualize a murderer standing with a knife behind a door, then visualize his own thoughts having no effect on the fixed probability that any such murderer was actually present. In other words, the notion of a "true probability" that his thoughts couldn't affect, countered the fear of thoughts affecting reality. Visualizing there being a fixed frequency of worlds, or a lawful probability that a Bayesian agent would assign, can help in perceiving the futility of rationalization because you're trying to use arguments to move a lawful probability that is fixed. This is also part of the domain of Lawful Uncertainty, the notion that there are still rules which apply even when we're unsure (not presently part of any unit).
Imagine the Revelation: Anna imagines that the answer is about to be looked up on the Internet, that Omega is about to reveal the answer, etc., to check if her thoughts would change if she was potentially about to be embarrassed right now. This detects belief-alief divergence, but also provides truthseeking impulse.
Knowing the Rules: And finally, if you have sufficient mastery of probability theory or decision theory, you may have a procedure to follow which is lawful enough, and sufficiently well-understood, that rationalization can't influence it much without the mistake being blatant even to you. (In a sense, this is what most of Less Wrong is about - reducing the amount of self-honesty required by increasing the obviousness of mistakes.)
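(As an illustration of the kind of lawful procedure meant here, my example rather than part of the unit: the odds form of Bayes' theorem,

$$\text{posterior odds} \;=\; \text{prior odds} \times \frac{P(E \mid H)}{P(E \mid \neg H)},$$

leaves no step at which wanting the conclusion can enter. If your prior odds on a claim are 1:4 and the new evidence is twice as likely if the claim is true, the posterior odds are 2:4 = 1:2, a probability of 1/3, however much you would prefer the answer to come out higher.)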
Noticing flinches and attachments, and raising them to conscious attention:
A trigger for use of curiosity-restoration or the Tarski Method: Noticing what it feels like for your mind to:
Anna's anti-rationalization makes heavy use of noticing suspect situations where the outside view says she might rationalize - cases where her status is at stake, and so on - and specific keywords like "I believe that" or "No, I really believe that". She wants to try training people to notice likely contexts for rationalization, and to figure out keywords that might indicate rationalization in themselves. (Eliezer has never tried to train himself to notice keywords because he figures his brain will just train itself to avoid the trigger phrase; and he worries about likely-context training because he's seen failure modes where no amount of evidence or sound argument is enough to overcome the suspicion of rationalization once it's been invoked.)
Awards for previous SotW suggestions:
$550 to Palladias for the Monday-Tuesday game, which has been tested ($50) and now adopted ($500) into the Be Specific unit (though it might be moved to some sort of Anticipation unit later on).
$550 to Stefie_K for her suggestion to have the instructor pretend to be someone who really wants you to invest in their company, but is never specific; also $50 to daenrys for the "More Specific!" improv-game suggestion. In combination these inspired the Vague Consultant game ("Hi, I'm a consultant, I'm here to improve your business processes!" "How?" "By consulting with stakeholders!") which has now been adopted into the Be Specific unit.
$50 to lincolnquirk for the "Channel Paul Graham" game, which we tested. We all thought this would work - it was our highest-rated candidate suggestion - but it didn't get positive audience feedback. Congratulations to lincolnquirk on a good suggestion nonetheless.
We haven't yet tested, but definitely intend to at least test, and are hence already awarding $50 to, the following idea:
$50 to John Maxwell IV for the Choose Your Own Adventure suggestion for the Consequentialism unit.
To claim a prize, send a LessWrong private message (so we know it originates from the same LW user account) to StephenCole.