
Cognitive Bias Mnemonics

5 Terdragon 05 April 2015 03:27AM

How many cognitive biases can you name, off the top of your head?

Try it, before moving on.

Give yourself sixty seconds.

Make a list.

Write them down.

I know that I've read about a number of biases by now, but they don't come to mind very easily. If I want to become wary enough to spot cognitive biases in my own thought, then I'd appreciate being able to quickly summon many examples to mind. This would also make it easier to share examples with others.

I plan to create a set of mnemonics for important biases, to make it easier for myself to remember them (and, as a consequence, to make it easier to spot them and eliminate them). I'll imagine each bias as an item; by visualizing the collection of items, I can remember the biases. If I really want to make sure that I don't forget any, they could be placed along a path in a mind palace.

Example mnemonic: Hindsight bias is an old leather boot. It's an old leather boot because that reminds me of the past, which clues the name of the bias. And anyways, psshh, why is everyone so excited about the idea of footwear? Anyone could have come up with that! It's just like clothes, but for feet! I could have invented it myself, it's so obvious! Hindsight bias: it could happen to you.
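
To make this concrete, here is a minimal sketch of the mind-palace idea in Python. Everything in it is an invented placeholder except the leather boot: each stop on an imagined path pairs a vivid image with the bias it encodes, and walking the path in order (or quizzing in random order) drills the recall.

```python
import random

# Stops along an imagined path. Only the leather boot comes from the post;
# the other images, locations, and bias pairings are placeholders.
palace = [
    # (location along the path, mnemonic image, bias it encodes)
    ("front door", "an old leather boot", "hindsight bias"),
    ("hallway mirror", "a rubber stamp of approval", "confirmation bias"),
    ("kitchen counter", "a ship's anchor", "anchoring"),
]

def walk(palace):
    """Walk the path in order, trying to recall each bias from its image."""
    for location, image, bias in palace:
        input(f"At the {location} you see {image}. Which bias is this? ")
        print(f"  -> {bias}")

def quiz(palace):
    """Once the ordered walk feels easy, drill the stops in random order."""
    for location, image, bias in random.sample(palace, len(palace)):
        input(f"{image} (at the {location})? ")
        print(f"  -> {bias}")

if __name__ == "__main__":
    walk(palace)
```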

Using various lists of cognitive biases, I'm going to perform this exercise myself and make mnemonics to remember them by. I might post these at some point, but if you're interested in the outcome, I recommend making mnemonics for yourself first -- the associations will be more meaningful to you, personally, that way.

But beware that conceptualizing a bias as a mnemonic might not be perfect, just like conceptualizing biases as named ideas might not be perfect -- more on that here.

For the comments: What witty mnemonics can you come up with?

I need some help debugging my approach to informal models and reasoning

0 [deleted] 30 October 2013 10:10PM

I'm having trouble understanding the process I should use when considering how new models might apply to old data, like memories. This comes up primarily when reasoning about qualitative models, like those that come out of developmental psychology, business, or military strategy. These models can be either normative or descriptive, but the big trait they all seem to share is that they were conceptualized with reference to the inside view more than the outside view: they were based on either memories or intuition, so they carry either a lot of implicit internal structure or a lot of bullshit. Re-framing my own experiences to find out whether these models are useful is thus reliant on System One more than System Two. Unfortunately, that puts us in the realm of bias.

My concrete examples of models I am evaluating are (a) digesting the information contained in the "Principles" document (as discussed here) and deciding which situations it might apply in; (b) learning Alfred Adler's "individual psychology" from The Rawness, which also expands on the ideas; and (c) the mighty OODA loop.

When I brought up the OODA loop during a meetup with the Vancouver Rationalists, I ended up making some mistakes regarding the "theories" from which it was derived, though I did add the idea of "clout" to my mental toolkit. The experience also made me wary that my instinctive approach to learning qualitative models like this might have other weaknesses.

I asked at another meetup, "What is the best way to internalize advice from books?" and someone suggested thinking about concrete situations where the idea might have been useful.

As a strategy for evaluating the truth of a model, I can see this backfiring. Because System One is relied on in both structuring and evaluating the model, hindsight bias is likely to be an issue, as is a form of the Forer effect. I could then make erroneous judgements about how effectively the model will predict outcomes, and use the model in ineffective ways (ironically, the author of The Rawness brings this up himself). In most cases I believe this is better than nothing, but I don't think it's good enough either. It does seem possible to be mindful of the actual conceptual points and just wait for relevance, but the reason we reflect is to be primed to see certain patterns again when they come up, so that doesn't seem like enough either.

As a way of evaluating a model's usefulness, I can see this going two ways. On one hand, many long-standing problems persist because of mental ruts, and benefit from re-framing the issue in light of new information. When I read books I often experience a linkage between statements a book makes and goals I have, or situations I want to make sense of (similar to Josh Kaufman's use of McDowell's Reading Grid). On the other hand, this experience has little to do with the model being correct.

Here are three questions I have, although more will likely come up:

  • What are the most common mistakes humans make when figuring out whether a qualitative model applies to their experiences?
  • How can those mistakes be worked around, removed, or compensated for?
  • Can we make statements about when "informal" models (i.e., models not specified in a formal language, or not mappable to mathematical descriptions except via structures like semantic webs) are generally useful to have and when they generally fail?

Request: Interesting Invertible Facts

19 Eliezer_Yudkowsky 08 October 2010 08:02PM

I'm writing the section of the rationality book dealing with hindsight bias, and I'd like to write my own, less racially charged and less America-specific, version of the Hindsight Devalues Science example. In the original, a fact like "Better educated soldiers suffered more adjustment problems than less educated soldiers. (Intellectuals were less prepared for battle stresses than street-smart people.)" is actually an inverted version of the truth, yet it still sounds plausible enough that people will try to explain it even though it's wrong.

I'm looking for facts that are experimentally verified and invertible, i.e., such that I can present five examples stating the opposite of the usual results without people catching on.
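
If you want to test candidate facts before putting them in the book, here is a minimal sketch (in Python, with placeholder statements rather than real findings) of how the demonstration could be run: show each reader either the true version or the inverted version of a finding at random, collect plausibility ratings, and see whether the inversions score just as high.

```python
import random

# Each pair holds a true result and its inversion. These are placeholders,
# not verified findings.
facts = [
    ("True version of finding A.", "Inverted version of finding A."),
    ("True version of finding B.", "Inverted version of finding B."),
]

def run_demo(facts):
    """Show one version of each finding at random and collect 1-7 ratings."""
    ratings = {"true": [], "inverted": []}
    for true_version, inverted_version in facts:
        if random.random() < 0.5:
            shown, label = true_version, "true"
        else:
            shown, label = inverted_version, "inverted"
        rating = int(input(f'"{shown}"  How plausible is this (1-7)? '))
        ratings[label].append(rating)
    return ratings

if __name__ == "__main__":
    print(run_demo(facts))
```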

Divia (today's writing assistant) has suggested facts about marriage and facts about happiness as possible sources of examples, but neither of us can think of a good set of facts offhand and Googling didn't help me much.  Five related facts would be nice, but failing that I'll just take five facts.  My own brain just seems to be very bad at answering this kind of query for some reason; I literally can't think of five things I know.

(Note also that I have a general policy of keeping anything related to religion out of the rationality book - that there be no mention of it whatsoever.)