
Reductionist research strategies and their biases

16 PhilGoetz 06 February 2015 04:11AM

I read an extract of (Wimsatt 1980) [1] which includes a list of common biases in reductionist research. I suppose most of us are reductionists most of the time, so these may be worth looking at.

This is not an attack on reductionism! If you think reductionism is too sacred for such treatment, you've got a bigger problem than anything on this list.

Here's Wimsatt's list, with some additions from the parts of his 2007 book Re-engineering Philosophy for Limited Beings that I can see on Google Books. His lists often lack specific examples, so I came up with my own examples and inserted them in [brackets].

continue reading »

Biases of Intuitive and Logical Thinkers

27 pwno 13 August 2013 03:50AM

Any intuition-dominant thinker who's struggled with math problems, or logic-dominant thinker who's struggled with small talk, knows how difficult and hopeless the experience feels. For a long time I was an intuition thinker; then I developed a logical thinking style, and soon it ended up dominating -- granting me the luxury of experiencing both kinds of struggles. I eventually learned to apply the thinking style better optimized for the problem I was facing. Looking back, I realized why I kept sticking to one extreme.

I hypothesize that one-sided thinkers develop biases and tendencies that prevent them from improving their weaker mode of thinking. These biases cause a positive feedback loop that further skews thinking styles in the same direction.

The reasons why one style might be overdeveloped and the other underdeveloped vary greatly. Genes have a strong influence, but environment also plays a large part. A teacher may have inspired you to love learning science at a young age, causing you to foster a thinking style better for learning science. Or maybe you grew up very physically attractive and found socializing with your peers a lot more rewarding than studying after school, causing you to foster a thinking style better for navigating social situations. Environment can be changed to help develop certain thinking styles, but that should be supplementary to exposing and understanding the biases you already have. Entering an environment that penalizes your thinking style can be uncomfortable, stressful and frustrating if you're not prepared. (Such a painful experience is part of why these biases cause a positive feedback loop: it makes us avoid environments that require the opposite thinking style.)

Despite genetic predisposition and environmental circumstances, there's room for improvement, and exposing these biases and learning to account for them is a great first step.

Below is a list of a few biases that worsen our ability to solve a certain class of problems and keep us from improving our underdeveloped thinking style.


Intuition-dominant Biases


Overlooking crucial details

Details matter when you're trying to understand technical concepts. Overlooking a single word or a sentence's structure can cause complete misunderstanding -- a common blunder for intuition thinkers.

Intuition is really good at making fairly accurate predictions without complete information, enabling us to navigate the world without having a deep understanding of it. As a result, intuition trains us to experience the feeling of understanding something without examining every detail. In most situations, paying close attention to detail is unnecessary and sometimes dangerous. When learning a technical concept, though, every detail matters, and the premature feeling of understanding stops us from examining them.

This bias is one that's more likely to go away once you realize it's there. You often don't know which details you've missed after you've missed them, so merely remembering that you tend to miss important details should prompt you to examine things more closely in the future.

Expecting solutions to sound a certain way

The Internship has a great example of this bias (and a few others) in action. The movie is about two middle-aged unemployed salesmen (intuition thinkers) trying to land an internship with Google. Part of Google's selection process has the two men participate in several technical challenges. One challenge required the men and their team to find a software bug. In a flash of insight, Vince Vaughn's character, Billy, shouts "Maybe the answer is in the question! Maybe it has something to do with the word bug. A fly!" After enthusiastically making several more word associations, he turns to his team and insists they take him seriously.

Why is it believable to the audience that Billy can be so confident about his answer?

Billy's intuition made an association between the challenge question and riddle-like questions he's heard in the past. When Billy used his intuition to find a solution, his confidence in a riddle-like answer grew. Intuition recklessly uses irrelevant associations as reasons for narrowing down the space of possible solutions to technical problems. When associations pop into your mind, it's a good idea to legitimize those associations with supporting reasons.

Not recognizing precise language

Intuition thinkers are multi-channel learners -- all senses, thoughts and emotions are used to construct a complex database of clustered knowledge to predict and understand the world. With such robust information-extracting ability, correct grammar and word usage are, more often than not, unnecessary for meaningful communication.

Communicating technical concepts in a meaningful way requires precise language. Connotation and subtext are stripped away so words and phrases can purely represent meaningful concepts inside a logical framework. Intuition thinkers communicate with imprecise language, gathering meaning from context to compensate. This makes it hard for them to recognize when to turn off their powerful information extractors.

This bias explains part of why so many intuition thinkers dread math "word problems". Introducing words and phrases rich with meaning and connotation sends their intuition running wild. It's hard for them to find correspondences between words in the problem and variables in the theorems and formulas they've learned.

The noise intuition brings makes it hard to think clearly. It's hard for intuition thinkers to tell whether their automatic associations should be taken seriously, and without a reliable way to discern, wrong interpretations of words go undetected. For example, without any physics background, an intuition thinker may read the statement "Matter can have both wave and particle properties at once" and believe they completely understand it. Unrelated associations with the words matter, wave and particle blindly take precedence over the technical definitions.

The slightest uncertainty about what a sentence means should raise a red flag. Going back and finding correspondence between each word and how it fits into a technical framework will eliminate any uncertainty.

Believing their level of understanding is deeper than it is

Intuition works on an unconscious level, making intuition thinkers unaware of how they know what they know. Not surprisingly, their best tool to learn what it means to understand is intuition. The concept "understanding" is a collection of associations from experience. You may have learned that part of understanding something means being able to answer questions on a test with memorized factoids, or knowing what to say to convince people you understand, or just knowing more facts than your friends. These are not good methods for gaining a deep understanding of technical concepts.

When intuition thinkers optimize for understanding, they're really optimizing for a fuzzy idea of what they think understanding means. This often leaves them believing they understand a concept when all they've done is memorize some disconnected facts. Not knowing what it feels like to have a deeper understanding, they become conditioned to always expect some amount of surprise. Even at their maximum felt level of understanding, they have less confidence than logical thinkers do at theirs. This lower confidence disincentivizes intuition thinkers from investing in learning technical concepts, further keeping their logical thinking style underdeveloped.

One way I overcame this tendency was to constantly ask myself "why" questions, like a curious child bothering their parents. The technique helped me uncover what used to be unknown unknowns that made me feel overconfident in my understanding.


Logic-dominant Biases


Ignoring information they cannot immediately fit into a framework

Logical thinkers have and use intuition -- the problem is they don't feed it enough. They tend to ignore valuable intuition-building information if it doesn't immediately fit into a predictive model they deeply understand. While intuition thinkers don't filter out enough noise, logical thinkers filter out too much.

For example, if a logical thinker doesn't have a good framework for understanding human behavior, they're more likely to ignore visual input like body language and fashion, or auditory input like tone of voice and intonation. Human behavior is complicated; there's no framework to date that can make perfectly accurate predictions about it. Intuition can build powerful models despite working with many confounding variables.

Bayesian probability enables logical thinkers to build predictive models from noisy data without having to use intuition. But even then, the first step of making a Bayesian update is data collection.
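As a minimal illustration (my own sketch, not from the post), here is the kind of update meant here: a Beta-Binomial model fed with invented observations. The point is simply that the model is only as good as the data you bother to collect.

```python
# A toy Beta-Binomial update: belief about how often a social cue
# (say, a certain tone of voice) predicts that someone is annoyed.
# All numbers are invented for illustration.
from scipy import stats

# Prior: Beta(1, 1), i.e. "no idea yet".
alpha, beta = 1.0, 1.0

# Observations collected in the real world: 7 times the cue preceded
# annoyance, 3 times it didn't.
hits, misses = 7, 3

# Posterior after the update.
alpha_post, beta_post = alpha + hits, beta + misses
posterior_mean = alpha_post / (alpha_post + beta_post)
ci_low, ci_high = stats.beta.ppf([0.05, 0.95], alpha_post, beta_post)

print(f"Posterior mean: {posterior_mean:.2f}")
print(f"90% credible interval: ({ci_low:.2f}, {ci_high:.2f})")
```

Without the ten observations, the posterior is just the flat prior: no data collection, no update.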

Combatting this tendency requires paying attention to input you normally ignore. Supplement your broader attentional scope with a researched framework as a guide. Say you want to learn how storytelling works. Start by grabbing resources that teach storytelling and learn the basics. Out in the real world, pay close attention to sights, sounds, and feelings when someone starts telling a story, and try mapping that sensory input onto the storytelling elements you've learned about. Once the basics are picked up subconsciously by habit, your conscious attention will be freed up to make new and more subtle observations.

Ignoring their emotions

Emotional input is difficult to factor, especially because you're emotional at the time. Logical thinkers are notorious for ignoring this kind of messy data, consequently starving their intuition of emotional data. Being able to "go with your gut feelings" is a major function of intuition that logical thinkers tend to miss out on.

Your gut can predict whether you'll get along long-term with a new SO, or what kind of outfit would give you more confidence in your workplace, or whether learning tennis in your free time will make you happier, or whether you prefer eating a cheeseburger over tacos for lunch. Logical thinkers don't have enough data collected about their emotions to know what triggers them. They tend to get bogged down and misled by objective yet trivial details they do manage to factor in. A weak understanding of their own emotions also leads to a weaker understanding of others' emotions. You can become a better empathizer by better understanding yourself.

You could start from scratch and build your own framework, but self-assessment biases will impede productivity. Learning an existing framework is a more realistic solution. You can find resources with some light googling and I'm sure CFAR teaches some good ones too. You can improve your gut feelings too. One way is making sure you're always consciously aware of the circumstances you're in when experiencing an emotion.

Making rules too strict

Logical thinkers build frameworks in order to understand things. When adding a new rule to a framework, there's motivation to make the rule strict: the stricter the rule, the more predictive power, and the better the framework. But when the domain you're trying to understand involves multivariable, chaotic phenomena, strict rules are likely to break. The result is something like the current state of macroeconomics: a bunch of logical thinkers preoccupied with elegant models and theories that stay elegant only by being useless in practice.

Following rules that are too strict can have bad consequences. Imagine John the salesperson is learning how to make better first impressions and has built a rough framework so far. John has a rule that smiling always helps make people feel welcomed the first time they meet him. One day he makes a business trip to Russia to meet with a prospective client. The moment he meets his Russian client, he flashes a big smile and continues to smile despite negative reactions. After a few hours of talking, his client reveals she felt he wasn't trustworthy at first and almost called off the meeting. It turns out that in Russia smiling at strangers is a sign of insincerity. John's strict rule didn't account for cultural differences, preventing him from updating on his client's reaction and putting him in a risky situation.

The desire to hold onto strict rules can make logical thinkers susceptible to confirmation bias too. If John made an exception to his smiling rule, he'd feel less confident about his knowledge of making first impressions, subsequently making him feel bad. He may also have to amend some other rule that relates to the smiling rule, which would further hurt his framework and his feelings.

When feeling the urge to add a new rule, take note of the circumstances in which the evidence for the rule was found. Add exceptions that limit the rule's predictive power to similar circumstances. Another option is to entertain multiple conflicting rules simultaneously, shifting weight from one to the other as you gather more evidence, as in the sketch below.
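As a toy sketch of that last suggestion (my own illustration with invented numbers, not something from the post), here is how weight can shift between two conflicting rules as observations come in:

```python
# Rule A: "smiling at a first meeting helps 90% of the time".
# Rule B: "smiling helps only 50% of the time".
# We hold both and let the evidence move the weights.
p_a, p_b = 0.5, 0.5          # prior weights on the two rules
lik_a, lik_b = 0.9, 0.5      # P(smile helped | rule) under each rule

observations = [True, True, False, False, False]  # did smiling help? (invented)

for helped in observations:
    like_a = lik_a if helped else 1 - lik_a
    like_b = lik_b if helped else 1 - lik_b
    p_a, p_b = p_a * like_a, p_b * like_b
    total = p_a + p_b
    p_a, p_b = p_a / total, p_b / total              # renormalize
    print(f"helped={helped}:  weight(A)={p_a:.2f}  weight(B)={p_b:.2f}")
```

Neither rule has to be discarded outright; a run of failures simply moves most of the weight onto the weaker claim.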

continue reading »

How to Measure Anything

51 lukeprog 07 August 2013 04:05AM

Douglas Hubbard’s How to Measure Anything is one of my favorite how-to books. I hope this summary inspires you to buy the book; it’s worth it.

The book opens:

Anything can be measured. If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before. And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods.

The sciences have many established measurement methods, so Hubbard’s book focuses on the measurement of “business intangibles” that are important for decision-making but tricky to measure: things like management effectiveness, the “flexibility” to create new products, the risk of bankruptcy, and public image.

 

Basic Ideas

A measurement is an observation that quantitatively reduces uncertainty. Measurements might not yield precise, certain judgments, but they do reduce your uncertainty.

To be measured, the object of measurement must be described clearly, in terms of observables. A good way to clarify a vague object of measurement like “IT security” is to ask “What is IT security, and why do you care?” Such probing can reveal that “IT security” means things like a reduction in unauthorized intrusions and malware attacks, which the IT department cares about because these things result in lost productivity, fraud losses, and legal liabilities.

Uncertainty is the lack of certainty: the true outcome/state/value is not known.

Risk is a state of uncertainty in which some of the possibilities involve a loss.

Much pessimism about measurement comes from a lack of experience making measurements. Hubbard, who is far more experienced with measurement than his readers, says:

  1. Your problem is not as unique as you think.
  2. You have more data than you think.
  3. You need less data than you think.
  4. An adequate amount of new data is more accessible than you think.


Applied Information Economics

Hubbard calls his method “Applied Information Economics” (AIE). It consists of 5 steps:

  1. Define a decision problem and the relevant variables. (Start with the decision you need to make, then figure out which variables would make your decision easier if you had better estimates of their values.)
  2. Determine what you know. (Quantify your uncertainty about those variables in terms of ranges and probabilities.)
  3. Pick a variable, and compute the value of additional information for that variable. (Repeat until you find a variable with reasonably high information value. If no remaining variables have enough information value to justify the cost of measuring them, skip to step 5.)
  4. Apply the relevant measurement instrument(s) to the high-information-value variable. (Then go back to step 3.)
  5. Make a decision and act on it. (When you’ve done as much uncertainty reduction as is economically justified, it’s time to act!)

These steps are elaborated below.
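As a toy illustration of steps 2 and 3 (my own sketch with made-up numbers, not Hubbard's code), here is one standard way to put a ceiling on what a measurement is worth: estimate the expected value of perfect information (EVPI) for a single uncertain variable by Monte Carlo.

```python
# Decision (step 1): launch a project or not.
# Payoff = annual_savings * 5 years - cost. Cost is known; savings are not.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
cost = 400_000.0

# Step 2: quantify uncertainty. Suppose our calibrated 90% CI for annual
# savings is 50k..150k; model it as a lognormal matching that interval.
lo, hi = 50_000.0, 150_000.0
mu = (np.log(lo) + np.log(hi)) / 2
sigma = (np.log(hi) - np.log(lo)) / (2 * 1.645)   # 90% CI spans +/- 1.645 sd
savings = rng.lognormal(mu, sigma, N)

payoff = 5 * savings - cost                        # payoff if we launch

# Decision with current information: launch iff expected payoff > 0.
ev_now = max(payoff.mean(), 0.0)

# Step 3: with perfect information we'd launch only when payoff > 0,
# so the expected value with perfect information is mean(max(payoff, 0)).
ev_perfect = np.maximum(payoff, 0.0).mean()
evpi = ev_perfect - ev_now

print(f"EV with current info: {ev_now:,.0f}")
print(f"EV with perfect info: {ev_perfect:,.0f}")
print(f"EVPI (upper bound on what a measurement is worth): {evpi:,.0f}")
```

If the EVPI comes out small relative to the cost of measuring, step 3 says to move on to another variable or go straight to step 5.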

continue reading »

Exploring the Idea Space Efficiently

22 Elithrion 08 April 2012 04:28AM

Simon is writing a calculus textbook. Since there are a lot of textbooks on the market, he wants to make his distinctive by including a lot of original examples. To do this, he decides to first check what sorts of examples are in some of the other books, and then make sure to avoid those. Unfortunately, after skimming through several other books, he finds himself completely unable to think of original examples—his mind keeps returning to the examples he's just read instead of coming up with new ones.

What he's experiencing here is another aspect of priming or anchoring. The way it appears to happen in my brain is that it decides to anchor on the examples it's already seen and explore the idea-space from there, moving from an idea only to ideas that are closely related to it (similar to a depth-first search).

At first, this search strategy might not seem so bad—in fact, it's ideal if there is one best solution and the closer you get to it the better. For example, if you were shooting arrows at a target, all you'd need to consider is how close to the center you can hit. Where we run into problems, however, is trying to come up with multiple solutions (such as multiple examples of the applications of calculus), or trying to come up with the best solution when there are many plausible solutions. In these cases, our brain's default search algorithm will often grab the first idea it can think of and try to refine it, even if what we really need is a completely different idea.
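To make the analogy concrete, here is a toy sketch (my own, not the author's) comparing anchored local refinement with random restarts over an invented "idea space": the local searcher polishes the idea nearest its anchor, while restarts find better ideas elsewhere.

```python
# Each integer is an "idea"; quality() is an invented score with many
# separated peaks, so a local searcher that starts near one peak never
# sees the others.
import random

random.seed(0)
IDEA_SPACE = range(1000)

def quality(idea):
    return 50 - (idea % 100 - 50) ** 2 / 50 + (idea // 100) * 3

def local_refinement(start, steps=200):
    """Hill-climb from the anchor, moving only to neighboring ideas."""
    best = start
    for _ in range(steps):
        candidate = best + random.choice([-1, 1])
        if candidate in IDEA_SPACE and quality(candidate) > quality(best):
            best = candidate
    return best

def random_restarts(n=20):
    """Sample distant starting points instead of refining one anchor."""
    return max(random.choices(list(IDEA_SPACE), k=n), key=quality)

anchor = 7   # the "example we just read", i.e. the priming anchor
print("anchored local search:", quality(local_refinement(anchor)))
print("random restarts:      ", quality(random_restarts()))
```

The local search reliably gets stuck on the peak nearest the anchor; deliberately jumping to unrelated starting points is what finds the genuinely different examples.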

continue reading »

Fictional Bias

0 thomblake 02 April 2012 02:10AM

As rationalists, we are trained to maintain constant vigilance against common errors in our own thinking.  Still, we must be especially careful of biases that are unusually common amongst our kind.

Consider the following scenario: Frodo Baggins is buying pants.  Which of these is he most likely to buy:

(a) 32/30

(b) 48/32

(c) 30/20

continue reading »

I Was Not Almost Wrong But I Was Almost Right: Close-Call Counterfactuals and Bias

54 Kaj_Sotala 08 March 2012 05:39AM

Abstract: "Close-call counterfactuals", claims of what could have almost happened but didn't, can be used to either defend a belief or to attack it. People have a tendency to reject counterfactuals as improbable when those counterfactuals threaten a belief (the "I was not almost wrong" defense), but to embrace counterfactuals that support a belief (the "I was almost right" defense). This behavior is the strongest in people who score high on a test for need for closure and simplicity. Exploring counterfactual worlds can be used to reduce overconfidence, but it can also lead to logically incoherent answers, especially in people who score low on a test for need for closure and simplicity.

”I was not almost wrong”

Dr. Zany, the Nefarious Scientist, has a theory which he intends to use to achieve his goal of world domination. ”As you know, I have long been a student of human nature”, he tells his assistant, AS-01. (Dr. Zany has always wanted to have an intelligent robot as his assistant. Unfortunately, for some reason all the robots he has built have only been interested in eradicating the color blue from the universe. And blue is his favorite color. So for now, he has resorted to just hiring a human assistant and referring to her with a robot-like name.)

”During my studies, I have discovered the following. Whenever my archnemesis, Captain Anvil, shows up at a scene, the media will very quickly show up to make a report about it, and they prefer to send the report live. While this is going on, the whole city – including the police forces! - will be captivated by the report about Captain Anvil, and neglect to pay attention to anything else. This happened once, and a bank was robbed on the other side of the city while nobody was paying any attention. Thus, I know how to commit the perfect crime – I simply need to create a diversion that attracts Captain Anvil, and then nobody will notice me. History tells us that this is the inevitable outcome of Captain Anvil showing up!”

But to Dr. Zany's annoyance, AS-01 is always doubting him. Dr. Zany has often considered turning her into a brain-in-a-vat as punishment, but she makes the best tuna sandwiches Dr. Zany has ever tasted. He's forced to tolerate her impudence, or he'll lose that culinary pleasure.

”But Dr. Zany”, AS-01 says. ”Suppose that some TV reporter had happened to be on her way to where Captain Anvil was, and on her route she saw the bank robbery. Then part of the media attention would have been diverted, and the police would have heard about the robbery. That might happen to you, too!”

Dr. Zany's favorite belief is now being threatened. It might not be inevitable that Captain Anvil showing up will actually let criminals elsewhere act unhindered! AS-01 has presented a plausible-sounding counterfactual, ”if a TV reporter had seen the robbery, then the city's attention would have been diverted to the other crime scene”. Although the historical record does not show that Dr. Zany's theory would have been wrong, the counterfactual suggests that he might be almost wrong.

There are now three tactics that Dr. Zany can use to defend his belief (warrantedly or not):

1. Challenge the mutability of the antecedent. Since AS-01's counterfactual is of the form ”if A, then B”, Dr. Zany could question the plausibility of A.

”Baloney!” exclaims Dr. Zany. ”No TV reporter could ever have wandered past, let alone seen the robbery!”

That seems a little hard to believe, however.

2. Challenge the causal principles linking the antecedent to the consequent. Dr. Zany is not logically required to accept the ”then” in ”if A, then B”. There are always unstated background assumptions that he can question.

”Humbug!” shouts Dr. Zany. ”Yes, a reporter could have seen the robbery and alerted the media, but given the choice of covering such a minor incident and continuing to report on Captain Anvil, they would not have cared about the bank robbery!”

3. Concede the counterfactual, but insist that it does not matter for the overall theory.

”Inconceivable!” yelps Dr. Zany. ”Even if the city's attention would have been diverted to the robbery, the robbers would have escaped by then! So Captain Anvil's presence would have allowed them to succeed regardless!”


Empirical work suggests that it's not only Dr. Zany who wants to stick to his beliefs. Let us for a moment turn our attention away from supervillains, and look at professional historians and analysts of world politics. In order to make sense of something as complicated as world history, experts resort to various simplifying strategies. For instance, one explanatory schema is called neorealist balancing. Neorealist balancing claims that ”when one state threatens to become too powerful, other states coalesce against it, thereby preserving the balance of power”. Among other things, it implies that Hitler's failure was predetermined by a fundamental law of world politics.

continue reading »

Using degrees of freedom to change the past for fun and profit

41 CarlShulman 07 March 2012 02:51AM

Follow-up to: Follow-up on ESP study: "We don't publish replications", Feed the Spinoff Heuristic!

Related to: Parapsychology: the control group for science, Dealing with the high quantity of scientific error in medicine

Using the same method as in Study 1, we asked 20 University of Pennsylvania undergraduates to listen to either “When I’m Sixty-Four” by The Beatles or “Kalimba.” Then, in an ostensibly unrelated task, they indicated their birth date (mm/dd/yyyy) and their father’s age. We used father’s age to control for variation in baseline age across participants. An ANCOVA revealed the predicted effect: According to their birth dates, people were nearly a year-and-a-half younger after listening to “When I’m Sixty-Four” (adjusted M = 20.1 years) rather than to “Kalimba” (adjusted M = 21.5 years), F(1, 17) = 4.92, p = .040

That's from "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant," which runs simulations of a version of Shalizi's "neutral model of inquiry," with random (null) experimental results, augmented with a handful of choices in the setup and analysis of an experiment. Even before accounting for publication bias, these few choices produced a desired result "significant at the 5% level" 60.7% of the time, and at the 1% level 21.5% at the time.

I found it because of another paper claiming time-defying effects, during a search through all of the papers on Google Scholar citing Daryl Bem's precognition paper, which I discussed in a past post about the problems of publication bias and selection over the course of a study. For Bem's experiments, Richard Wiseman established a registry for replication attempts, so that the methods and tests of the registered studies could be set prior to seeing the data (in addition to avoiding the file drawer).

Now a number of purported replications have been completed, with several available as preprints online, including a large "straight replication" carefully following the methods in Bem's paper, with some interesting findings discussed below. The picture does not look good for psi, and is a good reminder of the sheer cumulative power of applying a biased filter to many small choices.

continue reading »

How to Fix Science

50 lukeprog 07 March 2012 02:51AM

Like The Cognitive Science of Rationality, this is a post for beginners. Send the link to your friends!

Science is broken. We know why, and we know how to fix it. What we lack is the will to change things.

 

In 2005, several analyses suggested that most published results in medicine are false. A 2008 review showed that perhaps 80% of academic journal articles mistake "statistical significance" for "significance" in the colloquial meaning of the word, an elementary error every introductory statistics textbook warns against. This year, a detailed investigation showed that half of published neuroscience papers contain one particular simple statistical mistake.

Also this year, a respected senior psychologist published in a leading journal a study claiming to show evidence of precognition. The editors explained that the paper was accepted because it was written clearly and followed the usual standards for experimental design and statistical methods.

Science writer Jonah Lehrer asks: "Is there something wrong with the scientific method?"

Yes, there is.

This shouldn't be a surprise. What we currently call "science" isn't the best method for uncovering nature's secrets; it's just the first set of methods we've collected that wasn't totally useless like personal anecdote and authority generally are.

As time passes we learn new things about how to do science better. The Ancient Greeks practiced some science, but few scientists tested hypotheses against mathematical models before Ibn al-Haytham's 11th-century Book of Optics (which also contained hints of Occam's razor and positivism). Around the same time, Al-Biruni emphasized the importance of repeated trials for reducing the effect of accidents and errors. Galileo brought mathematics to greater prominence in scientific method, Bacon described eliminative induction, Newton demonstrated the power of consilience (unification), Peirce clarified the roles of deduction, induction, and abduction, and Popper emphasized the importance of falsification. We've also discovered the usefulness of peer review, control groups, blind and double-blind studies, plus a variety of statistical methods, and added these to "the" scientific method.

In many ways, the best science done today is better than ever — but it still has problems, and most science is done poorly. The good news is that we know what these problems are and we know multiple ways to fix them. What we lack is the will to change things.

This post won't list all the problems with science, nor will it list all the promising solutions for any of these problems. (Here's one I left out.) Below, I only describe a few of the basics.

continue reading »

Ambiguity in cognitive bias names; a refresher

25 nerfhammer 21 February 2012 04:37AM

This came up on the NYC list; I thought I would adapt it here.

Cognitive biases have names. That's what makes them memetic: it's easier to think about something that has a name. Though I think the benefits outweigh the costs, there is also the risk of a Little Albert: a concept living on after the original research has been found to be much more ambiguous than first realized.

There are many errors that are possible with respect to named ideas, and despite being studied scientifically, cognitive biases are no exception. There is no body that does for cognitive biases what the Académie Française does for French.

Let's describe some. Here they are:

  • different people in different fields will "discover" virtually the same bias but not be aware of each other, and assign it different names. For example, see the Curse of Knowledge (which I think George Loewenstein came up with) vs. the Historian's Fallacy by David Hackett Fischer, presentist bias, creeping determinism, and probably many others, not all of them scientific. Sometimes researchers in seemingly closely related subfields are remarkably insular to each other.
  • researchers will use one term predominantly while an offshoot will decide they don't like the name and use a different one. For example the Fundamental Attribution Error has also been called the overattribution effect, the correspondence bias, the attribution bias, and the actor-observer effect. In this case the older term still predominates, and is used in intro textbooks without asterisks. Of the naming errors this is one of the least harmful, since everyone agrees what the FAE is, some just prefer a different name for it.
  • an author will decide he doesn't like the names of some biases and will invent idiosyncratic names of his own. Jonathan Baron has a good textbook on cognitive bias, but he uses names of his own invention half the time.
  • the same term will sometimes have different polysemous meanings. For example, the "Zeigarnik Effect" has been used to refer to a memory bias involving superior recall for unfinished tasks, and the term has also been used to refer to an attentional bias in which unfinished tasks tend to intrude on consciousness; almost, but not quite exactly, the same thing. The term "confirmation bias" has several different but related meanings, for example, to seek out confirming information, to notice confirming information, to ask confirming questions, etc., which are not all quite exactly the same thing. The different meanings may have completely different contexts, boundary conditions, etc., leading to confusion. Furthermore, some of the senses may be at least partially disproven but not others; for example, the tendency to ask confirming questions has turned out to be more complicated than once thought. You might never know from reading about the attentional Zeigarnik effect that there is also a memory Zeigarnik effect that is conceptually somewhat different. I recall seeing even prominent researchers occasionally making mistakes of this category. Of all the naming ambiguities, I think this is the most dangerous.
  • an offshoot of researchers may knowingly use the same term with a conflicting definition. For example "heuristic" in "Heuristics and Biases" versus "Fast and Frugal Heuristics", the latter of which was an intentional reaction to the former. In this case those involved know there is a disagreement in meaning, but those unfamiliar to the topic might be confused.[This is a point of contention which I'm willing to yield on]
  • the same term may be redefined by researchers who may not be aware of each other. There has been more than one paper trying to introduce a bias called "the disconfirmation effect". But this only happens for really obscure biases.
  • a bias may have different components which do not have names of their own and/or a bias may overlap partially but not completely with another bias. For instance, hindsight bias has different components one of which has some overlap with the curse of knowledge. 
  • the same bias term will be used as a rough category of experimental effect and also as a singular bias. For example, the term "an actor-observer bias" could refer to any difference in actors and observers, whereas "the actor-observer bias" refers to the Fundamental Attribution Error specifically; the same is true of "an" vs. "the" attribution bias, also referring to the FAE. This could confuse only those who are unfamiliar with the terminology.
  • sometimes authors have tried to enforce strict, distinct meanings for the subterms "bias" vs. "effect" vs. "neglect" vs. "error" or "fallacy"; other times, perhaps more often, these terms are used only by convention. For example the conjunction fallacy vs. the conjunction error, correspondence bias vs. the fundamental attribution error, base rate neglect vs. base rate error. Sometimes the originators of a bias try to use the terminology precisely while later authors citing it aren't as careful. Sometimes even the originators of a bias do not try to choose a subterm carefully. You might suspect what permutation of a term catches on is based on whichever has a better ring to it.

Is risk aversion really irrational?

42 kilobug 31 January 2012 08:34PM
Disclaimer: this started as a comment to Risk aversion vs. concave utility function, but it grew way too big, so I turned it into a full-blown article. I posted it to Main since I believe it to be useful enough, and since it replies to an article in Main.

Abstract

When you have to choose between two options, one with a certain (or almost certain) outcome, and another which involves more risk, there is always a cost to the gamble, even if, in terms of utilons (paperclips, money, ...), the gamble has a higher expected value: between the time when you make your decision and the time you learn whether your gamble failed or succeeded (between the time you bought your lottery ticket and the time the winning number is called), you have less precise information about the world than if you had taken the "safe" option. That uncertainty may force you to make suboptimal choices during that period of doubt, meaning that "risk aversion" is not totally irrational.

Even shorter: knowledge has value since it allows you to optimize; taking a risk temporarily lowers your knowledge, and this is a cost.

Where does risk aversion come from?

In his (or her?) article, dvasya gave one possible reason for it: risk aversion comes from a concave utility function. Take food, for example. When you're really hungry and haven't eaten for days, a bit of food has a very high value. But when you've just eaten and have some stock of food at home, food has low value. Many other things follow, more or less strongly, a non-linear utility function.

But if you adjust the bets for utility, then, if you're a perfect utility maximizer, you should choose the highest expected utility, regardless of the risk involved. Between being sure of getting 10 utilons and having a 0.1 chance of getting 101 utilons (and a 0.9 chance of getting nothing), you should choose to take the bet. Or you're not rational, says dvasya.

My first objection is that we aren't perfect utility maximizers. We run on limited (and flawed) hardware. We have limited power for making computations. The first problem with taking a risk is that it makes all further computations much harder. You buy a lottery ticket, and until you know whether you won or not, every time you decide what to do you'll have to ponder things like "if I win the lottery, then I'll buy a new house, so is it really worth it to fix that broken door now?" Asking yourself all those questions means you're less Free to Optimize, and you'll use your limited hardware to ponder those issues, leading to stress, fatigue and less efficient decision-making.

For us humans with limited and buggy hardware, those problems are significant, and they are the main reason why I am personally (slightly) risk-averse. I don't like uncertainty: it makes planning harder, and it makes me waste precious computing power pondering what to do. But that doesn't seem to apply to a perfect utility maximizer with infinite computing power. So it seems to be a consequence of biases, if not a bias in itself. Is it really?

The double-bet of Clippy

So, let's take Clippy. Clippy is a pet paperclip optimizer, using the utility function proposed by dvasya: u = sqrt(p), where p is the number of paperclips in the room he lives in. In addition to being cute and loving paperclips, our Clippy has lots of computing power, so much that he has no trouble tracking probabilities. Now, we'll offer Clippy some bets, and see what he should do.

Timeless double-bet

At the beginning, we put 9 paperclips in the room, so Clippy has 3 utilons. He purrs a bit to show us he's happy with those 9 paperclips, looks at us with his lovely eyes, and hopes we'll give him more.

But we offer him a bet: either we give him 7 paperclips, or we flip a coin. If the coin comes up heads, we give him 18 paperclips. If it comes up tails, we give him nothing.

If Clippy doesn't take the bet, he gets 16 paperclips in total, so u=4. If Clippy takes the bet, he ends up with 9 paperclips (u=3) with p=0.5 or 9+18=27 paperclips (u=5.20) with p=0.5. His expected utility is u=4.10, so he should take the bet.

Now, regardless of whether he took the first bet (called B1 from now on), we offer him a second bet (B2): this time, he has to pay us 9 paperclips to enter. Then we roll a 10-sided die. If it comes up 1 or 2, we give him a jackpot of 100 paperclips; otherwise nothing. Clippy can be in three states when offered the second deal:

  1. He didn't take B1. Then he has 16 clips. If he doesn't take B2, he'll stay with 16 clips, and u=4. If he takes B2, he'll have 7 clips with p=0.8 and 107 clips with p=0.2, for an expected utility of u=4.19.
  2. He did take B1, and lost it. He has 9 clips. If he doesn't take B2, he'll stay with 9 clips, and u=3. If he takes B2, he'll have 0 clips with p=0.8 and 100 clips with p=0.2, for an expected utility of u=2.
  3. He did take B1, and won it. He has 27 clips. If he doesn't take B2, he'll stay with 27 clips, and u=5.20. If he takes B2, he'll have 18 clips with p=0.8 and 118 clips with p=0.2, for an expected utility of u=5.57.

So, if Clippy didn't take the first bet, or if he took it and won, he should take the second bet. If he took the first bet and lost it, he can't afford to take the second bet, since he'd be risking a very bad outcome: no more paperclips, not even a single tiny one!

And the devil "time" comes in...

Now, let's make things a bit more complicated, and more realistic. Before, we ran things fully sequentially: first we resolved B1, and then we offered and resolved B2. But let's change B1 a tiny bit. We don't flip the coin and give the clips to Clippy right away. Clippy tells us whether he takes B1 or not, but we'll wait one day before giving him the clips if he didn't take the bet, or before flipping the coin and then giving him the clips if he did take the bet.

The utility function of Clippy doesn't involve time, and we'll assume it doesn't change whether he gets the clips tomorrow or today. So for him, the new B1 is exactly like the old B1.

But now, we offer him B2 after Clippy made his choice in B1 (taking the bet or not) but before flipping the coin for B1, if he did take the bet.

Now, for Clippy, there are only two situations: he took B1 or he didn't. If he didn't take B1, we are in the same situation as before, with an expected utility of u=4.19 if he takes B2.

If he did take B1, we have to consider 4 possibilities:

  1. He loses both bets. Then he ends up with no paperclips (9+0-9), and is very unhappy. He has u=0 utilons. That arises with p=0.4.
  2. He wins B1 and loses B2. Then he ends up with 9+18-9 = 18 paperclips, so u=4.24 with p=0.4.
  3. He loses B1 and wins B2. Then he ends up with 9-9+100 = 100 paperclips, so u=10 with p = 0.1.
  4. He wins both bets. Then he gets 9+18-9+100 = 118 paperclips, so u=10.86 with p=0.1.

In the end, if he takes B2, he ends up with an expected utility of u=3.78.

So, if Clippy takes B1, he then shouldn't take B2: since he doesn't know whether he won or lost B1, he can't afford the risk of taking B2.

But should he take B1 in the first place? If, when offered B1, he knows he'll be offered B2 later on, then he should refuse B1 and take B2, for an expected utility of 4.19. If, when offered B1, he doesn't know about B2, then taking B1 seems the more rational choice. But once he has taken B1, until he knows whether he won or not, he cannot afford to take B2.

The Python code

For people interested in these issues, here is a simple Python script I used to fine-tune the numerical parameters of the double-bet example, so that my numbers lead to the problem I was pointing at. Feel free to play with it ;)
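The script itself isn't reproduced here; below is a minimal reconstruction of what such a script might look like, assuming u = sqrt(p) and the bet parameters described above.

```python
from math import sqrt

def u(paperclips):
    return sqrt(paperclips)

START = 9        # paperclips already in the room
B1_SAFE = 7      # B1: take 7 clips for sure...
B1_WIN = 18      # ...or flip a coin for 18 or nothing
B2_COST = 9      # B2: pay 9 clips to roll a 10-sided die
B2_WIN = 100     # 1 or 2 (p = 0.2) pays 100 clips
P_B2 = 0.2

def eu_b2(clips):
    """Expected utility of taking B2 from a known stock of clips."""
    return (1 - P_B2) * u(clips - B2_COST) + P_B2 * u(clips - B2_COST + B2_WIN)

# --- Timeless version: B1 is resolved before B2 is offered ---
print("no B1:  ", u(START + B1_SAFE), "-> take B2:", eu_b2(START + B1_SAFE))
print("B1 lost:", u(START), "-> take B2:", eu_b2(START))
print("B1 won: ", u(START + B1_WIN), "-> take B2:", eu_b2(START + B1_WIN))

# --- Delayed version: B2 must be decided before the B1 coin is flipped ---
eu_b1_only = 0.5 * u(START) + 0.5 * u(START + B1_WIN)
eu_b1_and_b2 = 0.5 * eu_b2(START) + 0.5 * eu_b2(START + B1_WIN)
print("took B1, skip B2:", eu_b1_only)              # ~4.10
print("took B1, take B2:", eu_b1_and_b2)            # ~3.78
print("skip B1, take B2:", eu_b2(START + B1_SAFE))  # ~4.19
```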

A hunter-gatherer tale

If you didn't like my Clippy, despite him being cute, and purring of happiness when he sees paperclips, let's shift to another tale.

Daneel is a young hunter-gatherer. He's smart, but his father committed a crime when he was still a baby, and was exiled from the tribe. Daneel doesn't know much about the crime - no one speaks about it, and he doesn't dare to bring the topic by himself. He has a low social status in the tribe because of that story. Nonetheless, he's attracted to Dors, the daughter of the chief. And he knows Dors likes him back, for she always smiles at him when she sees him, never makes fun of him, and gave him a nice knife after his coming-of-age ceremony.

According to the laws of the tribe, Dors can choose her husband freely, and the husband will become the new chief. But Dors also has to choose a husband who is accepted by the rest of the tribe: if the tribe doesn't accept his leadership, they could revolt, or fail to obey, and that could lead to disaster for the whole tribe. Daneel knows he has to raise his status in the tribe if he wants Dors to be able to choose him.

So Daneel wanders further and further into the forest. He wants to find something new to show the tribe his usefulness. That day, going a bit further than usual, he finds a place which is more humid than the forest the tribe usually wanders in. It has a new kind of tree he has never seen before. Lots of them. And they carry a yellow-red fruit which looks yummy. "I could tell the others about that place, and bring them a few fruits. But then, what if the fruit makes them sick? They'll blame me, I'll lose all my chances... they may even banish me. But I can do better. I'll eat one of the fruits myself. If tomorrow I'm not sick, then I'll bring fruits to the tribe, and show them where I found them. They'll praise me for it. And maybe Dors will then be able to take me more seriously... and if I get sick, well, everyone gets sick every now and then, just one fruit shouldn't kill me, it won't be a big deal." So Daneel makes his utility calculation (I told you he was smart!) and finds a positive outcome. So he takes the risk, picks one fruit, and eats it. Sweet, a bit acid but not too much. Nice!

Now Daneel goes back to the tribe. On the way back he gets a rabbit, a few roots and plants for the shaman - an average day. But then he sees the tribe gathered around the central totem. In the middle of the tribe, Dors with... no... not him... Eto! Eto is the strongest lad of Daneel's age. He wants Dors too. And he's strong, and very skilled with the bow. The other hunters like him; he's a real man. And Eto's father died proudly, defending the tribe's stock of dried meat against hungry wolves two winters ago. But no! Not that! Eto is asking Dors to marry him. In public. Dors can refuse, but if she does so with no reason, she'll alienate half of the tribe against her, and she can't afford that. Eto is way too popular.

"Hey, Daneel ! You want Dors ? Challenge Eto ! He's strong and good with the bow, but in unarmed combat, you can defeat him, I know it.", whispers Hari, one of the few friends of Daneel.

Daneel starts thinking faster than he ever has. "OK, I can challenge Eto to unarmed combat. If I lose, I'll be wounded; Eto won't be nice to me. But he won't kill or cripple me, that would make half of the tribe hate him. If I lose, it'll confirm I'm physically weak, but I'll also win prestige for daring to defy the strong Eto, so it shouldn't change much. And if I win, Dors will be able to refuse Eto, since he lost a fight against someone weaker than him, and that's a huge win. So I should take that gamble... but then, there is the fruit. If the fruit gets me sick, on top of my wounds from Eto, I may die. Even if I win! And if I lose, get beaten, and then get sick... they'll probably let me die. They won't take care of a fatherless lad who loses a fight and then gets sick. Too weak to be worth it. So... should I take the gamble? If only Eto had waited just one more day... Or if only I knew whether I'll get sick or not..."

The key: information loss

Until Clippy knows? If only Daneel knew? That's the key to risk aversion, and why a perfect utility maximizer, if he has a concave utility function in at least some respects, should still have some risk aversion: risk comes with information loss. That's the difference between the timeless double-bet and the one with one day of delay for Clippy. Or the problem Daneel got stuck in.

If you take a bet, then until you know the outcome of your bet, you'll have less information about the state of the world, and especially about the part that directly concerns you, than if you had chosen the safe option (a situation with lower variance). Having less information means you're less free to optimize.

Even a perfect utility maximizer can't know what bets he'll be offered and what decisions he'll have to make, unless he's omniscient (and then he wouldn't take bets or risks, but would simply know the future - probability only reflects lack of information). So he has to consider the loss of information that comes with taking a bet.

In real life, the most common case of this is the non-linearity of bad effects: you can lose 0.5 L of blood without too many side effects (drink lots of water, sleep well, and the next day you're OK - that's what happens when you donate blood), but if you lose 2 L, you'll likely die. Or if you lose some money, you'll be in trouble, but if you lose the same amount again, you may end up being kicked out of your house because you can't pay the rent - and that will be more than twice as bad as the initial loss.

So when you've taken a bet that risks a bad outcome, you can't afford to take another bet (even one with, in absolute terms, a higher expected gain) until you know whether you won or lost the first bet - because losing them both means death, or being kicked out of your house, or the ultimate pain of not having any paperclips.

Taking a bet always has a cost: it costs you part of your ability to predict, and therefore to optimize.

A possible solution

A possible solution to that problem would be to consider all the decisions you may have to make during the period when you don't know whether you lost or won your first bet, weight them by the probability of being offered those decisions, and compare their possible outcomes if you take the first bet and if you don't. But how do you compute "their possible outcomes"? That requires considering all the possible bets you could be offered during the time required for the resolution of your second bet, and their possible outcomes. So you need to... stack overflow: maximum recursion depth exceeded.

Since taking a bet will affect your ability to evaluate possible outcomes in the future, you have a "strange loop to the meta-level", an infinite recursion. Your decision algorithm has to consider the impact the decision will have on future instances of your decision algorithm.

I don't know whether there is a mathematical solution to that infinite recursion that manages to make it converge (as there is in some cases). But the problem looks really hard, and may not even be computable.

Just factoring in an average "risk aversion" that penalizes outcomes which involve risk (and the longer you have to wait to know whether you won or lost, the higher the penalty) sounds more like a way to fix that problem than like a bias.
