Biases of Intuitive and Logical Thinkers

27 pwno 13 August 2013 03:50AM

Any intuition-dominant thinker who's struggled with math problems, or logic-dominant thinker who's struggled with small talk, knows how difficult and hopeless the experience feels. For a long time I was an intuition thinker; then I developed a logical thinking style, which soon ended up dominating -- granting me the luxury of experiencing both kinds of struggles. I eventually learned to apply whichever thinking style was better optimized for the problem I was facing. Looking back, I realized why I kept sticking to one extreme.

I hypothesize that one-sided thinkers develop biases and tendencies that prevent them from improving their weaker mode of thinking. These biases cause a positive feedback loop that further skews thinking styles in the same direction.

The reasons why one style might be overdeveloped and the other underdeveloped vary greatly. Genes have a strong influence, but environment also plays a large part. A teacher may have inspired you to love learning science at a young age, causing you to foster a thinking style better for learning science. Or maybe you grew up very physically attractive and found socializing with your peers a lot more rewarding than studying after school, causing you to foster a thinking style better for navigating social situations. Environment can be changed to help develop certain thinking styles, but that should be supplementary to exposing and understanding the biases you already have. Entering an environment that penalizes your thinking style can be uncomfortable, stressful, and frustrating if you're unprepared. (Such a painful experience is part of why these biases cause a positive feedback loop: it makes us avoid environments that require the opposite thinking style.)

Despite genetic predisposition and environmental circumstances, there's room for improvement, and exposing these biases and learning to account for them is a great first step.

Below is a list of a few biases that worsen our ability to solve a certain class of problems and keep us from improving our underdeveloped thinking style.


Intuition-dominant Biases


Overlooking crucial details

Details matter for understanding technical concepts. Overlooking a single word or piece of sentence structure can cause complete misunderstanding -- a common blunder for intuition thinkers.

Intuition is really good at making fairly accurate predictions without complete information, enabling us to navigate the world without having a deep understanding of it. As a result, intuition trains us to experience the feeling we understand something without examining every detail. In most situations, paying close attention to detail is unnecessary and sometimes dangerous. When learning a technical concept, every detail matters and the premature feeling of understanding stops us from examining them.

This bias is one that's more likely to go away once you realize it's there. You often don't know what details you're missing after you've missed them, so merely remembering that you tend to miss important details should prompt you to take closer examinations in the future.

Expecting solutions to sound a certain way

The Internship has a great example of this bias (and a few others) in action. The movie is about two middle-aged unemployed salesmen (intuition thinkers) trying to land an internship with Google. Part of Google's selection process has the two men participate in several technical challenges. One challenge required the men and their team to find a software bug. In a flash of insight, Vince Vaughn's character, Billy, shouts "Maybe the answer is in the question! Maybe it has something to do with the word bug. A fly!" After enthusiastically making several more word associations, he turns to his team and insists they take him seriously.

Why is it believable to the audience that Billy can be so confident about his answer?

Billy's intuition made an association between the challenge question and riddle-like questions he's heard in the past. When Billy used his intuition to find a solution, his confidence in a riddle-like answer grew. Intuition recklessly uses irrelevant associations as reasons for narrowing down the space of possible solutions to technical problems. When associations pop in your mind, it's a good idea to legitimize those associations with supporting reasons.

Not recognizing precise language

Intuition thinkers are multi-channel learners -- all senses, thoughts and emotions are used to construct a complex database of clustered knowledge to predict and understand the world. With robust information-extracting ability, correct grammar/word-usage is, more often than not, unnecessary for meaningful communication.

Communicating technical concepts in a meaningful way requires precise language. Connotation and subtext are stripped away so words and phrases can purely represent meaningful concepts inside a logical framework. Intuition thinkers communicate with imprecise language, gathering meaning from context to compensate. This makes it hard for them to recognize when to turn off their powerful information extractors.

This bias explains part of why so many intuition thinkers dread math "word problems". Introducing words and phrases rich with meaning and connotation sends their intuition running wild. It's hard for them to find correspondences between words in the problem and variables in the theorems and formulas they've learned.

The noise intuition brings makes it hard to think clearly. It's hard for intuition thinkers to tell whether their automatic associations should be taken seriously. Without a reliable way to discern, wrong interpretations of words go undetected. For example, without any physics background, an intuition thinker may read the statement "Matter can have both wave and particle properties at once" and believe they completely understand it. Unrelated associations of what matter, wave, and particle mean blindly take precedence over technical definitions.

The slightest uncertainty about what a sentence means should raise a red flag. Going back and finding correspondence between each word and how it fits into a technical framework will eliminate any uncertainty.

Believing their level of understanding is deeper than what it is

Intuition works on an unconscious level, making intuition thinkers unaware of how they know what they know. Not surprisingly, their best tool to learn what it means to understand is intuition. The concept "understanding" is a collection of associations from experience. You may have learned that part of understanding something means being able to answer questions on a test with memorized factoids, or knowing what to say to convince people you understand, or just knowing more facts than your friends. These are not good methods for gaining a deep understanding of technical concepts.

When intuition thinkers optimize for understanding, they're really optimizing for a fuzzy idea of what they think understanding means. This often leaves them believing they understand a concept when all they've done is memorize some disconnected facts. Not knowing what it feels like to have deeper understanding, they become conditioned to always expect some amount of surprise. Even at what feels like maximum understanding, they have less confidence than logical thinkers do at theirs. This lower confidence disincentivizes intuition thinkers from investing in learning technical concepts, further keeping their logical thinking style underdeveloped.

One way I overcame this tendency was to constantly ask myself "why" questions, like a curious child bothering their parents. The technique helped me uncover what used to be unknown unknowns that made me feel overconfident in my understanding.


Logic-dominant Biases


Ignoring information they cannot immediately fit into a framework

Logical thinkers have and use intuition -- the problem is they don't feed it enough. They tend to ignore valuable intuition-building information when it doesn't immediately fit into a predictive model they deeply understand. While intuition thinkers don't filter out enough noise, logical thinkers filter out too much.

For example, if a logical thinker doesn't have a good framework for understanding human behavior, they're more likely to ignore visual input like body language and fashion, or auditory input like tone of voice and intonation. Human behavior is complicated; no framework to date can make perfectly accurate predictions about it. Intuition can build powerful models despite working with many confounding variables.

Bayesian probability enables logical thinkers to build predictive models from noisy data without having to use intuition. But even then, the first step of making a Bayesian update is data collection.
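As a toy illustration of that first step (my own sketch, not from the post; the prior and likelihoods are invented numbers), a single Bayesian update over a binary hypothesis looks like this:

```python
# Toy Bayesian update: hypothesis H ("the client likes my pitch"),
# evidence E ("they smiled"). All probabilities are made up for illustration.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' rule."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# A smile is twice as likely if they like the pitch (0.8 vs 0.4),
# so a 50% prior moves up, but only modestly -- noisy data, small update.
posterior = bayes_update(prior_h=0.5, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(posterior, 3))  # 0.667
```

The point of the sketch: without collecting the data (noticing the smile at all), there is nothing to plug into the update.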

Combating this tendency requires you to pay attention to input you normally ignore. Supplement your broader attentional scope with a researched framework as a guide. Say you want to learn how storytelling works. Start by grabbing resources that teach storytelling and learn the basics. Out in the real world, pay close attention to sights, sounds, and feelings when someone starts telling a story, and try matching that sensory input to the storytelling elements you've learned about. Once the basics are picked up subconsciously by habit, your conscious attention will be freed up to make new and more subtle observations.

Ignoring their emotions

Emotional input is difficult to factor, especially because you're emotional at the time. Logical thinkers are notorious for ignoring this kind of messy data, consequently starving their intuition of emotional data. Being able to "go with your gut feelings" is a major function of intuition that logical thinkers tend to miss out on.

Your gut can predict if you'll get along long-term with a new SO, or what kind of outfit would give you more confidence in your workplace, or if learning tennis in your free time will make you happier, or whether you prefer eating a cheeseburger over tacos for lunch. Logical thinkers don't have enough data collected about their emotions to know what triggers them. They tend to get bogged down and misled by objective yet trivial details they do manage to factor in. A weak understanding of their own emotions also leads to a weaker understanding of others' emotions. You can become a better empathizer by better understanding yourself.

You could start from scratch and build your own framework, but self-assessment biases will impede productivity. Learning an existing framework is a more realistic solution. You can find resources with some light googling, and I'm sure CFAR teaches some good ones too. You can improve your gut feelings as well. One way is to make sure you're always consciously aware of the circumstances you're in when experiencing an emotion.

Making rules too strict

Logical thinkers build frameworks in order to understand things. When adding a new rule to a framework, there's motivation to make the rule strict: the stricter the rule, the more predictive power, the better the framework. But when the domain you're trying to understand involves multivariable, chaotic phenomena, strict rules are likely to break. The result is something like the current state of macroeconomics: a bunch of logical thinkers preoccupied with elegant models and theories that are useless in practice.

Following rules that are too strict can have bad consequences. Imagine John the salesperson is learning how to make better first impressions and has built a rough framework so far. John has a rule that smiling always helps make people feel welcome the first time they meet him. One day he makes a business trip to Russia to meet with a prospective client. The moment he meets his Russian client, he flashes a big smile and continues to smile despite negative reactions. After a few hours of talking, his client reveals she felt he wasn't trustworthy at first and almost called off the meeting. It turns out that in Russia, smiling at strangers is a sign of insincerity. John's strict rule didn't account for cultural differences, blinding him to his client's reactions and putting him in a risky situation.

The desire to hold onto strict rules can make logical thinkers susceptible to confirmation bias too. If John made an exception to his smiling rule, he'd feel less confident about his knowledge of making first impressions, subsequently making him feel bad. He may also have to amend some other rule that relates to the smiling rule, which would further hurt his framework and his feelings.

When feeling the urge to add a new rule, take note of the circumstances in which the evidence for the rule was found. Add exceptions that limit the rule's predictive power to similar circumstances. Another option is to entertain multiple conflicting rules simultaneously, shifting weight from one to the other as you gather more evidence.
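The weight-shifting option can be pictured as a simple multiplicative-weights loop (my own toy sketch; the two rules and the observations are invented, not from the post):

```python
# Hold two conflicting rules at once; after each observation, boost the
# weight of whichever rule predicted correctly and renormalize.
# Rules and data are invented for illustration.

def update_weights(weights, correct_rule, boost=2.0):
    """Multiply the weight of the rule that was right, then renormalize."""
    weights = dict(weights)
    weights[correct_rule] *= boost
    total = sum(weights.values())
    return {rule: w / total for rule, w in weights.items()}

weights = {"smiling always helps": 0.5, "smiling depends on culture": 0.5}
for observation in ["smiling depends on culture"] * 3:  # three trips abroad
    weights = update_weights(weights, observation)
print(weights)  # culture-dependent rule now carries ~0.89 of the weight
```

Neither rule is ever deleted outright; the losing rule just carries less and less weight, which leaves room for it to recover if later evidence favors it.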


Intuitions Aren't Shared That Way

31 lukeprog 29 November 2012 06:19AM

Part of the sequence: Rationality and Philosophy

Consider these two versions of the famous trolley problem:

Stranger: A train, its brakes failed, is rushing toward five people. The only way to save the five people is to throw the switch sitting next to you, which will turn the train onto a side track, thereby preventing it from killing the five people. However, there is a stranger standing on the side track with his back turned, and if you proceed to throw the switch, the five people will be saved, but the person on the side track will be killed.

Child: A train, its brakes failed, is rushing toward five people. The only way to save the five people is to throw the switch sitting next to you, which will turn the train onto a side track, thereby preventing it from killing the five people. However, there is a 12-year-old boy standing on the side track with his back turned, and if you proceed to throw the switch, the five people will be saved, but the boy on the side track will be killed.

Here it is: a standard-form philosophical thought experiment. In standard analytic philosophy, the next step is to engage in conceptual analysis — a process in which we use our intuitions as evidence for one theory over another. For example, if your intuitions say that it is "morally right" to throw the switch in both cases above, then these intuitions may be counted as evidence for consequentialism, for moral realism, for agent neutrality, and so on.

Alexander (2012) explains:

Philosophical intuitions play an important role in contemporary philosophy. Philosophical intuitions provide data to be explained by our philosophical theories [and] evidence that may be adduced in arguments for their truth... In this way, the role... of intuitional evidence in philosophy is similar to the role... of perceptual evidence in science...

Is knowledge simply justified true belief? Is a belief justified just in case it is caused by a reliable cognitive mechanism? Does a name refer to whatever object uniquely or best satisfies the description associated with it? Is a person morally responsible for an action only if she could have acted otherwise? Is an action morally right just in case it provides the greatest benefit for the greatest number of people all else being equal? When confronted with these kinds of questions, philosophers often appeal to philosophical intuitions about real or imagined cases...

...there is widespread agreement about the role that [intuitions] play in contemporary philosophical practice... We advance philosophical theories on the basis of their ability to explain our philosophical intuitions, and appeal to them as evidence that those theories are true...

In particular, notice that philosophers do not appeal to their intuitions as merely an exercise in autobiography. Philosophers are not merely trying to map the contours of their own idiosyncratic concepts. That could be interesting, but it wouldn't be worth decades of publicly-funded philosophical research. Instead, philosophers appeal to their intuitions as evidence for what is true in general about a concept, or true about the world.


Are Deontological Moral Judgments Rationalizations?

37 lukeprog 16 August 2011 04:40PM

In 2007, Chris Matthews of Hardball interviewed David O'steen, executive director of a pro-life organization. Matthews asked:

I have always wondered something about the pro-life movement. If you believe that killing [a fetus] is murder, why don't you bring murder charges or seek a murder penalty against a woman who has an abortion? Why do you let her off, if you really believe it's murder?1

O'steen replied that "we have never sought criminal penalties against a woman," which isn't an answer but a restatement of the very fact the question was about. When pressed, he added that we don't know "how she's been forced into this." When pressed again, O'steen abandoned these responses and tried to give a consequentialist answer. He claimed that implementing "civil penalties" and taking away the "financial incentives" of abortion doctors would more successfully "protect unborn children."

But this still doesn't answer the question. If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why don't opponents of abortion want to change the laws so that abortion is considered murder, and a woman who has an abortion can be charged with paying a doctor to commit murder? Psychologist Robert Kurzban cites this as a classic case of moral rationalization.2

Pro-life demonstrators in Illinois were asked a similar question: "If [abortion] was illegal, should there be a penalty for the women who get abortions illegally?" None of them (on the video) thought that women who had illegal abortions should be punished as murderers, an ample demonstration of moral rationalization. And I'm sure we can all think of examples where it looks like someone has settled on an intuitive moral judgment and then invented rationalizations later.3

More controversially, some have suggested that rule-based deontological moral judgments generally tend to be rationalizations. Perhaps we can even dissolve the debate between deontological intuitions and utilitarian intuitions if we can map the cognitive algorithms that produce them.

Long-time deontologists and utilitarians may already be up in arms to fight another war between Blues and Greens, but these are empirical questions. What do the scientific studies suggest?


Rationalist Judo, or Using the Availability Heuristic to Win

21 jschulter 15 July 2011 08:39AM

During the sessions at the 2011 rationality minicamp, we learned that some of our biases can be used constructively, rather than just tolerated and avoided.

For example, in an excellent article discussing intuitions and the way they are formed, psychologist Robin Hogarth recommends that "if people want to shape their intuitions, [they should] make conscious efforts to inhabit environments that expose them to the experiences and information that form the intuitions that they want."

Another example: Carl Shulman remarked that due to the availability heuristic we anticipate car crashes with frequencies determined by how many people we know of or have heard about who have gotten into one. So if you don't fear car crashes but you want to acquire a more accurate level of concern about driving, you could seek out news or footage of car crashes. Video footage may work best, because experiential data unconsciously inform our intuitions more effectively than, say, written data.

This fact may lie behind many effective strategies for getting your brain to do what you want it to do:

  • Establishing 'pull motivation' works best with strong visualization, and is reinforced upon experiencing the completion of the task.
  • Rejection therapy, which many of us minicampers found helpful and effective. The point is to ask people for things they will probably deny you, which trains your body to realize that nothing bad happens when you are rejected. After a time, this improves social confidence.
  • As looking glass self theory states,1 we are shaped by how others see us. This is largely due to the experience of having people react to us in certain ways.

In The Mystery of the Haunted Rationalist we see someone whose stated beliefs don't match their anticipations. Now we can actually use the brain's machinery to get it to do what we want: alieve that ghosts aren't real or dangerous. One method would be for our ghost-stricken friend to get people to tell her detailed stories about pleasant nights they spent in haunted houses (complete with spooky details) where nothing bad happened. Alternatively, she could read some books or watch some videos with similar content. Best of all would be if she spent a month living in a 'haunted' house, perhaps after doing some of the other things to soothe her nerves. There are many who will attest that eventually one 'gets used to' the scary noises and frightening atmosphere of an old house, and ceases to be scared when sleeping in similar houses.

I attribute the effectiveness of these tactics mostly to successful persuasion of the non-conscious brain using experiential data.

So, it seems we have a (potentially very powerful) new technique to add to our rationalist arsenal. To summarize:

  1. Find something you want to alieve.
  2. Determine what experiences that alief should cause you to anticipate.
  3. Have those experiences, by proxy if necessary, artificial or not.
  4. Test whether you now anticipate what you want to.
  5. If the test reveals progress, but not enough, repeat.

Examples:

  • Want to alieve that boxing is dangerous2? Watch some footage of boxers being punched painfully in the face, and ask a good boxer to win a fight against you in a painful but non-damaging manner. Now are you reluctant to box someone you have a good chance of beating?
  • Want to alieve that driving is dangerous? Watch footage of lots of car crashes, see Red Asphalt, and take a class from professional stunt drivers on how to crash safely. Now are you more reluctant to drive?
  • Want to alieve that flying is not very dangerous? Get a pilot's view of a flight, and pay attention to how boring it is. Sit next to a pilot while they undergo a very realistic flight simulation that covers many possible accidents, and watch them successfully navigate each scenario. Now are you more willing to fly?
  • Want to alieve snakes are generally not dangerous? Watch videos of safe snake interactions. Watch a pet store employee deal with a snake safely. Play with a snake under supervision without incident. Now do you exhibit less fear when encountering a snake?
  • Want to alieve you are part of the Less Wrong community? Interact with other community members as though you are one, attend meetups, make friends in the community. Now do you empathize more strongly with contributors on Less Wrong than with those elsewhere on the internet?

It can be annoying when our unconsciously moderated aliefs don't match our rationality-influenced beliefs, but luckily our aliefs can be trained.

 

1 Thanks to Hugh Ristik for talking about this at minicamp.

2 Credit for this example goes to Brandon Reinhart.

Special thanks to Luke for all the help

When Intuitions Are Useful

14 lukeprog 09 May 2011 07:40PM

Part of the sequence: Rationality and Philosophy

In this series, I have examined how intuitions work so that I can clarify how rationalists1 should and shouldn't use their intuitions2 when solving philosophical problems. Understanding the cognitive algorithms that generate our intuitions can dissolve traditional philosophical problems. As Brian Talbot puts it:

...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy. This has the potential to resolve some problems due to conflicting intuitions, since some of the conflicting intuitions may be shown to be unreliable and not to be taken seriously; it also has the potential to free some domains of philosophy from the burden of having to conform to our intuitions, a burden that has been too heavy to bear in many cases...3

Knowing how intuitions work can also tell us something about how we can train them to make them render more accurate judgments.4

 

Problems with intuition

In most philosophy, intuitions play the role that observations do in science: they support and undermine various theories.5 Conceptual analyses are rejected when intuitive counterexamples are presented. Moral theories are rejected when they lead to intuitively revolting results. Theories of mind and language and metaphysics rise and fall depending on how well they can be made to fit our intuitions, even in bizarre science fiction hypothetical scenarios.6

But why trust our intuitions? Our intuitions often turn out to contradict each other,7 or they are contradicted by empirical evidence,8 or they vary between people and between groups of people.9 Compared to scientific methods, the philosopher's use of intuitions as his primary tool doesn't seem to have been very productive.10 Also, we can't calibrate our intuitions, because wherever we have a non-intuition standard against which to calibrate our intuitions, we don't need to use intuition in the first place.11 Moreover, philosophers have typically known very little about where their intuitions come from, and why they should trust them in the first place!12

Defenders of intuitionist philosophy reply that we can't do philosophy without intuitions.13 Others point out that we have similar worries about the reliability of sense perception.14 But these replies do not solve the problem. As Talbot says,3 these responses "give us reasons to want to trust intuitions... but no evidence that they are particularly reliable or useful."

The way forward is not to give a priori arguments for or against the use of intuitions. The way forward is to explore what cognitive science can tell us about how our intuitions work (as we've been doing) so that we have some idea about when they work and when they don't.


Intuition and Unconscious Learning

32 lukeprog 06 May 2011 06:47PM

Part of the sequence: Rationality and Philosophy

We have already examined two sources of our intuitions: the attribute substitution heuristics and our evolved psychology. Today we look at a third source of our intuitions: unconscious learning.

 

Unconscious learning

The 'learning perspective' on intuition is compatible with the heuristics and biases literature and with evolutionary psychology, but adds a deeper understanding of what is going on 'under the hood.' The learning perspective says that many intuitions rely on representations that reflect the entirety of experiences stored in long-term memory. Such intuitions merely reproduce statistical regularities in long-term memory.1

An example will help explain:

Assume you run into a man at the 20th anniversary party of your high school class graduation. You immediately sense a feeling of dislike. To avoid getting into a conversation, you signal and shout some words to a couple of old friends sitting at a distant table. While you are walking toward them, you try to remember the man’s name, which pops into your mind after some time; and suddenly, you also remember that it was he who always did nasty things to you such as taking your secret letters and showing them to the rest of the class. You applaud the product of your intuition (the immediate feeling) that has helped you to make the right decision (avoiding interaction). Recall of prior experiences was not necessary to make this decision. The decision was solely based on a feeling, which reflected prior knowledge without awareness.2

Learning perspective theorists would suggest that your feeling of dislike - your intuition that you shouldn't talk to the man - came from something like an (unconscious) regularities analysis of your experiences with that man that were stored in long-term memory, and those experiences turned out to be mostly negative. As such, your intuition can make use of rapid parallel processing to draw on the whole sum of experiences in long-term memory, rather than using a slower, sequential-processing judgment algorithm.
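One crude way to picture this "regularities analysis" (my own toy model, not the theorists'): the gut feeling behaves like an aggregate of the valences of all stored experiences with the person, surfaced without recalling any individual episode:

```python
# Toy model of intuition as a statistical summary over long-term memory.
# Each stored experience with a person has a valence in [-1, 1]; the
# "gut feeling" is just their average. Data are invented for illustration.

def gut_feeling(experience_valences):
    """Aggregate stored valences into a single like/dislike signal."""
    return sum(experience_valences) / len(experience_valences)

classmate = [-0.8, -0.6, -0.9, 0.2, -0.7]  # mostly negative memories
feeling = gut_feeling(classmate)
print("avoid" if feeling < 0 else "approach")  # avoid
```

The decision ("avoid") uses only the summary statistic; none of the individual memories need to be retrieved, which mirrors how the name and the nasty letters only came back to mind later.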

It is difficult to track the source of any particular intuition (though we can try3), but there is evidence to suggest that unconscious learning is a common source of our intuitions.


Your Evolved Intuitions

15 lukeprog 05 May 2011 04:21PM

Part of the sequence: Rationality and Philosophy

We have already examined one source of our intuitions: attribute substitution heuristics. Today we examine a second source of our intuitions: biological evolution.

 

Evolutionary psychology

Evolutionary psychology1 has been covered on Less Wrong many times before, but let's review anyway.

Lions walk on four legs and hunt for food. Skunks defend themselves with a spray. Spiders make webs. Each species is shaped by its own selection pressures, and so each species' psychology differs from that of other species.

Certain evolved psychological mechanisms in humans are part of what makes us like each other and not like lions, skunks, and spiders.

These mechanisms evolved to solve specific adaptive problems. It is not an accident that people around the world prefer calorie-rich foods,2 that women around the world prefer men with resources,3 that men around the world prefer women with signs of fertility,4 or that most of us inherently fear snakes and spiders but not cars and electrical outlets.5

As an example of evolutionary psychology at work, consider the 'hunter-gatherer hypothesis' that men evolved psychological mechanisms to aid in hunting, while women evolved psychological mechanisms to aid in gathering.6 This hypothesis leads to a list of bold predictions. If the hypothesis is correct, then:

  1. Men in modern tribal societies should spend a lot of time hunting, and women more time gathering.
  2. Humans should show a greater tendency toward strong male coalitions than similar species in which males do not hunt much, because strong male coalitions are required to hunt big game.
  3. Because meat from most game comes in quantities larger than a single hunter can consume, and because hunting success is highly variable (one week may be a success, but perhaps not the next week), humans should exhibit food sharing and reciprocal altruism.
  4. We should expect to see a sexual division of labor, due to the different traits conducive for hunting vs. gathering.
  5. Men should exploit status gains to be had from 'showing off' large hunting successes.
  6. Men should have superior cognitive ability to navigate across large distances and perform 3D mental rotation tasks required for throwing spears and similar hunting acts. Women should have superior cognitive ability with spatial location memory and object arrays.

And as it turns out, all these predictions are correct.7 (And no, evolutionary psychologists do not only offer 'postdictions' or 'just so' stories. Besides, probability theory does not have separate categories for 'predictions' and 'postdictions'.)


How You Make Judgments: The Elephant and its Rider

42 lukeprog 15 April 2011 01:02AM

Part of the sequence: Rationality and Philosophy

Whether you're doing science or philosophy, flirting or playing music, the first and most important tool you are using is your mind. To use your tool well, it will help to know how it works. Today we explore how your mind makes judgments.

From Plato to Freud, many have remarked that humans seem to have more than one mind.1 Today, detailed 'dual-process' models are being tested by psychologists and neuroscientists:

Since the 1970s dual-process theories have been developed [to explain] various aspects of human psychology... Typically, one of the processes is characterized as fast, effortless, automatic, nonconscious, inflexible, heavily contextualized, and undemanding of working memory, and the other as slow, effortful, controlled, conscious, flexible, decontextualized, and demanding of working memory.2

Dual-process theories for reasoning,3 learning and memory,4 decision-making,5 belief,6 and social cognition7 are now widely accepted to be correct to some degree,8 with researchers currently working out the details.9 Dual-process theories even seem to be appropriate for some nonhuman primates.10

Naturally, some have wondered if there might be a "grand unifying dual-process theory that can incorporate them all."11 We might call such theories dual-system theories of mind,12 and several have been proposed.13 Such unified theories face problems, though. 'Type 1' (fast, nonconscious) processes probably involve many nonconscious architectures,14 and brain imaging studies show a wide variety of brain systems at work at different times when subjects engage in 'type 2' (slow, conscious) processes.15

Still, perhaps there is a sense in which one 'mind' relies mostly on type 1 processes, and a second 'mind' relies mostly on type 2 processes. One suggestion is that Mind 1 is evolutionarily old and thus shared with other animals, while Mind 2 is recently evolved and particularly developed in humans. (But not fully unique to humans, because some animals do seem to exhibit a distinction between stimulus-controlled and higher-order controlled behavior.16) But this theory faces problems. A standard motivation for dual-process theories of reasoning is the conflict between cognitive biases (from type 1 processes) and logical reasoning (type 2 processes).17 For example, logic and belief bias often conflict.18 But both logic and belief bias can be located in the pre-frontal cortex, an evolutionarily new system.19 So either Mind 1 is not entirely old, or Mind 2 is not entirely composed of type 2 processes.

We won't try to untangle these mysteries here. Instead, we'll focus on one of the most successful dual-process theories: Kahneman and Frederick's dual-process theory of judgment.20

Consider Representative Data Sets

6 Vladimir_Nesov 06 May 2009 01:49AM

In this article, I consider the standard biases in drawing factual conclusions that are not related to emotional reactions, and describe a simple model summarizing what goes wrong with reasoning in these cases, which in turn suggests a way of systematically avoiding this kind of problem.

The following model describes, for the purposes of this article, the process of getting from a question to a (potentially biased) answer. First, you ask yourself a question. Second, in the context of the question, a data set is presented to your mind, either directly, by you looking at explicit statements of fact, or indirectly, by associated facts becoming salient to your attention, triggered by the explicit data items or by the question itself. Third, by considering the data set, you construct an intuitive model of some phenomenon, which allows you to see its properties. Finally, you pronounce the answer, which is read off as one of the properties of the model you've just constructed.

This description is meant to provide mental paintbrush handles: to refer to the things you can see introspectively, and the things you could operate on consciously if you chose to.

Most of the biases in the considered class may be seen as particular ways in which you pay attention to the wrong data set, one not representative of the phenomenon you are modeling to arrive at the answer you seek. As a result, the intuitive model becomes systematically wrong, and the answer read off from it is biased. Below I review the specific biases, identifying the way things go wrong in each particular case, and then summarize the classes of reasoning mistakes that play major roles in these biases, along with the corresponding ways of avoiding those mistakes.
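The claim that an unrepresentative data set yields a systematically wrong model can be illustrated numerically. The sketch below (hypothetical numbers, standard-library Python) compares an estimate built from a uniform sample of a population against one built from an "availability-weighted" sample, in which vivid outcomes are recalled ten times as readily as ordinary ones:

```python
import random

random.seed(0)

# Hypothetical population: 1000 outcomes, mostly small values plus a few
# large, vivid ones (the kind that tend to come to mind first).
population = [1.0] * 950 + [100.0] * 50
true_mean = sum(population) / len(population)  # (950 + 5000) / 1000 = 5.95

# Representative sample: drawn uniformly from the whole population.
representative = random.sample(population, 100)

# "Availability" sample: vivid outcomes are ten times as likely to be recalled.
weights = [10 if x == 100.0 else 1 for x in population]
available = random.choices(population, weights=weights, k=100)

mean = lambda xs: sum(xs) / len(xs)
print(f"true mean:            {true_mean:.2f}")
print(f"representative guess: {mean(representative):.2f}")
print(f"availability guess:   {mean(available):.2f}")  # systematically too high
```

The representative sample's mean hovers near the true value, while the availability-weighted estimate is inflated not by random noise but by the sampling procedure itself, which is the sense in which the resulting model is *systematically* wrong.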
