
SotW: Avoid Motivated Cognition

20 Eliezer_Yudkowsky 28 May 2012 03:57PM

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil.  We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt.  This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired.  See here for details.)


The following awards have been made:  $550 to Palladias, $550 to Stefie_K, $50 to lincolnquirk, and $50 to John_Maxwell_IV.  See the bottom for details.  If you've earned a prize, please PM StephenCole to claim it.  (If you strongly believe that one of your suggestions Really Would Have Worked, consider trying it at your local Less Wrong meetup.  If it works there, send us some participant comments; this may make us update enough to test it.)


Lucy and Marvin are walking down the street one day, when they pass a shop showing a large chocolate cake in the window.

"Hm," says Lucy, "I think I'll buy and eat that chocolate cake."

"What, the whole thing?" says Marvin.  "Now?"

"Yes," says Lucy, "I want to support the sugar industry."

There is a slight pause.

"I don't suppose that your liking chocolate cake has anything to do with your decision?" says Marvin.

"Well," says Lucy, "I suppose it could have played a role in suggesting that I eat a whole chocolate cake, but the reason why I decided to do it was to support the sugar industry.  Lots of people have jobs in the sugar industry, and they've been having some trouble lately."


Motivated cognition is the way (all? most?) brains generate false landscapes of justification in the presence of attachments and flinches.  It's not enough for the human brain to attach to the sunk cost of a PhD program, so that we are impelled in our actions to stay - no, that attachment can also go off and spin a justificational landscape to convince the other parts of ourselves, even the part that knows about consequentialism and the sunk cost fallacy, to stay in the PhD program.

We're almost certain that the subject matter of "motivated cognition" isn't a single unit, probably more like 3 or 8 units.  We're also highly uncertain of where to start teaching it.  Where we start will probably end up being determined by where we get the best suggestions for exercises that can teach it - i.e., end up being determined by what we (the community) can figure out how to teach well.

The cognitive patterns that we use to actually combat motivated cognition seem to break out along the following lines:

  1. Our conceptual understanding of 'motivated cognition', and why it's defective as a cognitive algorithm - the "Bottom Line" insight.
  2. Ways to reduce the strength of the rationalization impulse, or restore truth-seeking in the presence of motivation: e.g., Anna's "Become Curious" technique.
  3. Noticing the internal attachment or internal flinch, so that you can invoke the other skills; realizing when you're in a situation that makes you liable to rationalize.
  4. Realigning the internal parts that are trying to persuade each other: belief-alief or goal-urge reconciliation procedures.

And also:

  • Pattern recognition of the many styles of warped justification landscape that rationalization creates - being able to recognize "motivated skepticism" or "rehearsing the evidence" or "motivated uncertainty".
  • Specific counters to rationalization styles, like "Set betting odds" as a counter to motivated uncertainty (a short numerical sketch follows below).

Exercises to teach all of these are desired, but I'm setting apart the Rationalization Patterns into a separate SotW, since there are so many that I'm worried 1-4 won't get fair treatment otherwise.  This SotW will focus on items 1-3 above; #4 seems like more of a separate unit.
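
To make the "Set betting odds" counter concrete: a bet you would actually accept pins down a probability that can be compared against the uncertainty you claim to feel.  The break-even probability for risking an amount R to win an amount W is R / (R + W).  The minimal Python sketch below is purely illustrative; the helper function, the scenario, and the dollar amounts are all invented for this example.

    def implied_probability(amount_risked, amount_to_win):
        """Break-even probability for a bet where you risk `amount_risked`
        and win `amount_to_win` if the claim turns out to be true."""
        return amount_risked / (amount_risked + amount_to_win)

    # "I'm really not sure whether I'll finish the thesis this year" -- yet
    # you'd happily risk $80 against a friend's $20 that you will.
    print(implied_probability(80, 20))   # 0.8: the bet reveals far less uncertainty

If the odds you are willing to set disagree sharply with the uncertainty you profess, that mismatch is a signal that motivated uncertainty may be doing the talking.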


SotW: Be Specific

39 Eliezer_Yudkowsky 03 April 2012 06:11AM

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil.  We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt.  This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired.  See here for details.)


Exercise Prize:  Be Specific

During YCombinator's Startup School 2011, Paul Graham and Harj Taggar did "office hours" onstage.  One pair of entrepreneurs was doing a matchmaking (dating) startup, and Paul and Harj were trying to figure out what their startup did, exactly - for example, what their startup could do that the existing low-tech solution couldn't.  (Video.)

Harj:  Low-tech like, you know, just like word of mouth, telling someone "hey, you should like, meet up with my friend" or "we're getting drinks, why don't you come along?" Like, what can the software do that's specifically better than that?

Entrepreneur:  I think that our software specifically is providing the better connections for people, um...

Paul: Providing the better connections for people...?

Entrepreneur:  I mean, one way you can think about it, I don't know if this is the right answer, but... there's a lot of things that are happening in real life that they're trying to mimic online, maybe that's not the correct way to...  Look at it like this: to give them an online tool to also do this, like they're already doing in real life, maybe they could reach, uh expand their reach through the online website.

This had been happening with most of the startups Paul and Harj were interrogating - they just could not seem to provide a customer use-case - and I couldn't stand it any more, which is why at this point I whispered, audibly enough for a few nearby people to hear, "Be specific!  Be specific!"

A moment later, on stage:

Paul:  Hm.  Not very specific.

I got some strange looks from the people sitting next to me.

I hope this provides some background for my guess that around half of Paul Graham's advantage is based on years of incubator experience, and the other half is unusual rationality skills of the sort that the Center for Applied Rationality is trying to figure out how to teach.  Obviously this is only a very rough conjecture.  But you can see the basis for the hope that - after a fair amount more work - we'll be able to offer a 2-day course for YCombinator entrepreneurs that eliminates 50% of the overhead from their conversations with Paul Graham.

(Also, note how this post starts off with a specific example - an instance of the concrete-abstract writing pattern in which you state the example first and the generalization afterward.  This is one of the most common bits of nonfiction writing advice I dispense:  "Open with the concrete example, not the abstract explanation!")


SotW: Check Consequentialism

38 Eliezer_Yudkowsky 29 March 2012 01:35AM

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil.  We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt.  This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired.  See here for details.)


Exercise Prize:  Check Consequentialism

In philosophy, "consequentialism" is the belief that doing the right thing makes the world a better place, i.e., that actions should be chosen on the basis of their probable outcomes.  It seems like the mental habit of checking consequentialism, asking "What positive future events does this action cause?", would catch numerous cognitive fallacies.

For example, the mental habit of consequentialism counters the sunk cost fallacy.  If a PhD wouldn't really lead to much in the way of desirable job opportunities or a higher income, and the only reason you're still pursuing it is that otherwise all your previous years of work will have been wasted, then when you try to imagine a positive future outcome of spending another two years working toward the PhD, you will encounter a blank screen - you will not be able to state what good future events happen as a result.
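
The same point can be made with a toy calculation (all numbers below are invented for illustration): a consequentialist comparison scores each option only by what happens from here forward, and since the years already spent are the same constant on every branch, adding them in can never change which option comes out on top.

    # Toy sketch: score each option by its future consequences only (numbers invented).
    future_value = {
        "finish_phd": 2 * (-50_000) + 120_000,   # two more lean years, then the payoff
        "leave_now": 150_000,                    # value of switching paths today
    }
    best = max(future_value, key=future_value.get)

    # Sunk costs add the same constant to every option, so they can never flip the choice.
    sunk = -200_000   # past tuition and time, identical whichever branch you take
    with_sunk = {option: value + sunk for option, value in future_value.items()}
    assert max(with_sunk, key=with_sunk.get) == best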

Or consider the problem of living in the should-universe; if you're thinking, I'm not going to talk to my boyfriend about X because he should know it already, you might be able to spot this as an instance of should-universe thinking (planning/choosing/acting/feeling as though within / by-comparison-to an image of an ideal perfect universe) by having done exercises specifically to sensitize you to should-ness.  Or, if you've practiced the more general skill of Checking Consequentialism, you might notice a problem on asking "What happens if I talk / don't talk to my boyfriend?" - providing that you're sufficiently adept to constrain your consequentialist visualization to what actually happens as opposed to what should happen.

Discussion:

The skill of Checking Consequentialism isn't quite as simple as telling people to ask, "What positive result do I get?"  By itself, this mental query is probably going to return any apparent justification - for example, in the sunk-cost-PhD example, asking "What good thing happens as a result?" will just return, "All my years of work won't have been wasted!  That's good!"  Any choice people are tempted by seems good for some reason, and executing a query about "good reasons" will just return this.

The novel part of Checking Consequentialism is the ability to discriminate "consequentialist reasons" from "non-consequentialist reasons" - being able to distinguish that "Because a PhD gets me a 50% higher salary" talks about future positive consequences, while "Because I don't want my years of work to have been wasted" doesn't.
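
As a caricature of that discrimination step (the tagging below is done by hand; nothing about it is automatic), the check can be pictured as a filter over stated reasons: keep only the ones that name a specific future event the action causes, and worry if nothing survives.

    # Toy sketch: hand-tag each stated reason by whether it names a specific
    # future event that the action causes, then see what survives the filter.
    reasons = [
        ("A PhD gets me a 50% higher salary", True),                    # future consequence
        ("I don't want my years of work to have been wasted", False),   # about the past
    ]

    consequentialist = [text for text, about_future in reasons if about_future]
    if consequentialist:
        for reason in consequentialist:
            print("Points at a future outcome:", reason)
    else:
        print("Blank screen: no stated reason names a future consequence.")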
