
Is Sunk Cost Fallacy a Fallacy?

19 gwern 04 February 2012 04:33AM

I just finished the first draft of my essay, "Are Sunk Costs Fallacies?"; there is still material I need to go through, but the bulk of the material is now there. The formatting is too gnarly to post here, so I ask everyone's forgiveness in clicking through.

To summarize:

  1. sunk costs are probably issues in big organizations
    • but maybe not ones that can be helped
  2. sunk costs are not issues in animals
  3. they appear to be in children & adults
    • but many apparent problems can be explained as part of a learning strategy
  4. there are few clear indications sunk costs are genuine problems
  5. much of what we call 'sunk cost' looks like simple carelessness & thoughtlessness

(If any of that seems unlikely or absurd to you, click through. I've worked very hard to provide multiple citations where possible, and fulltext for practically everything.)

I started this a while ago, but Luke/SIAI paid for much of the work; that motivation, plus academic library access, made the essay more comprehensive than it would otherwise have been and got it finished months in advance.

 

The Substitution Principle

69 Kaj_Sotala 28 January 2012 04:20AM

Partial re-interpretation of: The Curse of Identity
Also related to: Humans Are Not Automatically Strategic, The Affect Heuristic, The Planning Fallacy, The Availability Heuristic, The Conjunction Fallacy, Urges vs. Goals, Your Inner Google, signaling, etc...

What are the best careers for making a lot of money?

Maybe you've thought about this question a lot, and have researched it enough to have a well-formed opinion. But chances are that even if you haven't, some sort of an answer popped into your mind right away. Doctors make a lot of money, maybe, or lawyers, or bankers. Rock stars, perhaps.

You probably realize that this is a difficult question. First, there's the question of who we're talking about: one person's strengths and weaknesses might suit them for one career path, while another career is better for someone else. Second, the question is not clearly defined. Is a career with a small chance of making it rich and a large chance of remaining poor a better option than a career with a large chance of a comfortable income but no chance of great wealth? Third, whoever is asking this question is probably doing so because they are thinking about what to do with their lives, so you don't want to answer on the basis of which careers pay well today, but on the basis of which ones will do so in the near future. That requires tricky technological and social forecasting. And so on.
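To see how much the second complication matters, here is a minimal sketch with purely invented payoffs (the numbers, the two "careers", and the use of a logarithmic utility are all my own, chosen only for illustration): the ranking of the two options flips depending on whether you compare expected income or expected utility of income.

    # Invented, purely illustrative numbers; not real career data.
    import math

    # Career A: small chance of making it rich, large chance of remaining poor.
    career_a = [(0.05, 5_000_000), (0.95, 20_000)]
    # Career B: near-certain comfortable income, no chance of great wealth.
    career_b = [(1.00, 120_000)]

    def expected(outcomes, utility=lambda x: x):
        """Expected utility over (probability, income) pairs."""
        return sum(p * utility(x) for p, x in outcomes)

    print(expected(career_a), expected(career_b))                       # expected income: A wins
    print(expected(career_a, math.log), expected(career_b, math.log))   # log utility: B wins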

Yet, despite all of these uncertainties, some sort of an answer probably came to your mind as soon as you heard the question. And if you hadn't considered the question before, your answer probably didn't take any of the above complications into account. It's as if your brain, while generating an answer, never even considered them.

The thing is, it probably didn't.

Daniel Kahneman, in Thinking, Fast and Slow, extensively discusses what I call the Substitution Principle:

If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. (Kahneman, p. 97)

System 1, if you recall, is the quick, dirty and parallel part of our brains that renders instant judgements, without thinking about them in too much detail. In this case, the actual question that was asked was "what are the best careers for making a lot of money". The question that was actually answered was "what careers have I come to associate with wealth".

Here are some other examples of substitution that Kahneman gives:

continue reading »

1001 PredictionBook Nights

51 gwern 08 October 2011 04:04PM

I explain what I've learned from creating and judging thousands of predictions on personal and real-world matters: the challenges of maintenance, the limitations of prediction markets, the interesting applications to my other essays, skepticism about pundits and unreflective persons' opinions, my own biases like optimism & planning fallacy, 3 very useful heuristics/approaches, and the costs of these activities in general.

Plus an extremely geeky parody of Fate/Stay Night.

This essay exists as a large section of my page on prediction markets on gwern.net: http://www.gwern.net/Prediction%20markets#1001-predictionbook-nights

Prospect Theory: A Framework for Understanding Cognitive Biases

66 Yvain 10 July 2011 05:20AM

Related to: Shane Legg on Prospect Theory and Computational Finance

This post is on prospect theory partly because it fits the theme of replacing simple utility functions with complicated reward functions, but mostly because somehow Less Wrong doesn't have any posts on prospect theory yet and that needs to change.

Kahneman and Tversky, the first researchers to identify and rigorously study cognitive biases, proved that a simple version of expected utility theory did not accurately describe human behavior. Their response was to develop prospect theory, a model of how people really make decisions. Although the math is less elegant than that of expected utility, and the shapes of the curves have to be experimentally derived, it is worth a look because it successfully predicts many of the standard biases.

[Figure: the prospect theory value function (left) and probability weighting function (right). Source: Wikipedia]

A prospect theory agent tasked with a decision first sets it within a frame with a convenient zero point, allowing em to classify the results of the decision as either losses or gains. Ey then computes a subjective expected utility, where the subjective expected utility equals the subjective value times the subjective probability. The subjective value is calculated from the real value using a value function similar to the one on the left-hand graph, and the subjective probability is calculated from the real probability using a weighting function similar to the one on the right-hand graph.
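The description above relies on the figure for the shapes of the value and weighting functions. As a rough stand-in, here is a minimal Python sketch using the standard functional forms and median parameter estimates from Tversky and Kahneman's 1992 cumulative prospect theory paper (alpha = beta = 0.88, lambda = 2.25, gamma = 0.61); the code and the example gamble are mine, not the post's, and are meant only to illustrate "subjective value times subjective probability".

    # Illustrative sketch; functional forms and parameters follow
    # Tversky & Kahneman (1992), everything else is made up.

    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        """S-shaped value function: concave for gains, convex and steeper for losses."""
        return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

    def weight(p, gamma=0.61):
        """Inverse-S probability weighting: overweights small p, underweights large p."""
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    def subjective_utility(outcomes):
        """Sum of subjective value times subjective probability over
        (probability, gain-or-loss-from-the-zero-point) pairs."""
        return sum(weight(p) * value(x) for p, x in outcomes)

    # A 1% chance of winning $1000 versus a sure $10 (the gamble's expected value):
    print(subjective_utility([(0.01, 1000)]))   # ~24: the small probability is overweighted
    print(subjective_utility([(1.00, 10)]))     # ~7.6: so the gamble is preferred
                                                # (risk seeking for low-probability gains)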

continue reading »

Fun and Games with Cognitive Biases

62 Cosmos 18 February 2011 08:38PM

You may have heard about IARPA's Sirius Program, which is a proposal to develop serious games that would teach intelligence analysts to recognize and correct their cognitive biases.  The intelligence community has a long history of interest in debiasing, and even produced a rationality handbook based on internal CIA publications from the 70's and 80's.  Creating games which would systematically improve our thinking skills has enormous potential, and I would highly encourage the LW community to consider this as a potential way forward to encourage rationality more broadly.

While developing these particular games will require thought and programming, the proposal did inspire the NYC LW community to play a game of our own.  Using a list of cognitive biases, we broke up into groups of no more than four, and spent five minutes discussing each bias with regard to three questions:

  1. How do we recognize it?
  2. How do we correct it?
  3. How do we use its existence to help us win?

The Sirius Program specifically targets Confirmation Bias, Fundamental Attribution Error, Bias Blind Spot, Anchoring Bias, Representativeness Bias, and Projection Bias.  To this list, I also decided to add the Planning Fallacy, the Availability Heuristic, Hindsight Bias, the Halo Effect, Confabulation, and the Overconfidence Effect.  We did this Pomodoro-style: six rounds of five minutes, a quick break, another six rounds, and then a longer break followed by a group discussion of the exercise.

Results of this exercise are posted below the fold.  I encourage you to try the exercise for yourself before looking at our answers.

continue reading »

Make your training useful

93 AnnaSalamon 12 February 2011 02:14AM

As Tom slips on the ice puddle, his arm automatically pulls back to slap the ground.  He’s been taking Jiu-Jitsu for only a month, but, already, he’s practiced falling hundreds of times.  Tom’s training keeps him from getting hurt.

By contrast, Sandra is in her second year of university mathematics.  She got an “A” in calculus and in several more advanced courses, and she can easily recite that “derivatives” are “rates of change”.  But when she goes on her afternoon walk and stares at the local businesses, she doesn’t see derivatives.

For many of us, rationality is more like Sandra’s calculus than Tom’s martial arts.  You may think “overconfidence” when you hear an explicit probability (“It’s 99% likely I’ll make it to Boston on Tuesday”).  But when no probability is mentioned -- or, worse, when you act on a belief without noticing that belief at all -- your training has little impact.

Learn error patterns ahead of time

If you want to notice errors while you’re making them, think ahead of time about what your errors might look like. List the circumstances in which to watch out, and the alternative action to try in each.

Here's an example of what your lists might look like.  A bunch of visiting fellows generated this list at one of our rationality trainings last summer; I’m including their list here (with some edits) because I found the specific suggestions useful, and because you may be able to use it as a model for your own lists.

continue reading »

The Trolley Problem: Dodging moral questions

13 Desrtopa 05 December 2010 04:58AM

The trolley problem is one of the more famous thought experiments in moral philosophy, and studies by psychologists and anthropologists suggest that the response distributions to its major permutations remain roughly the same across human cultures. Most people will permit pulling the lever to redirect the trolley so that it will kill one person rather than five, but will balk at pushing one fat person in front of the trolley to save the five if that is the only available way of stopping it.

However, in informal settings, where the dilemma is posed by a peer rather than a teacher or researcher, it has been my observation that there is another major category which accounts for a significant proportion of respondents' answers. Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright. They will attack the improbability of the premise, attempt to invent third options, or appeal to their emotional state in the provided scenario ("I would be too panicked to do anything"), or some combination of the above, in order to opt out of answering the question on its own terms.

continue reading »

So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning

48 Matt_Simpson 14 July 2010 04:51PM

Related to: The Conjunction Fallacy, Conjunction Controversy

The heuristics and biases research program in psychology has discovered many different ways that humans fail to reason correctly under uncertainty.  In experiment after experiment, they show that we use heuristics to approximate probabilities rather than making the appropriate calculation, and that these heuristics are systematically biased. However, a tweak in the experimental protocols seems to remove the biases altogether and casts doubt on whether we are actually using heuristics. Instead, it appears that the errors are simply an artifact of how our brains internally store information about uncertainty. Theoretical considerations support this view.

EDIT: The view presented here is controversial in the heuristics and biases literature; see Unnamed's comment on this post below.

EDIT 2: The author no longer holds the views presented in this post. See this comment.

A common example of the failure of humans to reason correctly under uncertainty is the conjunction fallacy. Consider the following question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

What is the probability that Linda is:

(a) a bank teller

(b) a bank teller and active in the feminist movement

In a replication by Gigerenzer (1993), 91% of subjects ranked (b) as more probable than (a), saying that it is more likely that Linda is active in the feminist movement AND a bank teller than that Linda is simply a bank teller. The conjunction rule of probability states that the probability of two things being true is less than or equal to the probability of one of those things being true. Formally, P(A & B) ≤ P(A). So this experiment shows that people violate the conjunction rule, and thus fail to reason correctly under uncertainty. The representative heuristic has been proposed as an explanation for this phenomenon. To use this heuristic, you evaluate the probability of a hypothesis by comparing how "alike" it is to the data. Someone using the representative heuristic looks at the Linda question and sees that Linda's characteristics resemble those of a feminist bank teller much more closely than those of a mere bank teller, and so concludes that Linda is more likely to be a feminist bank teller than a bank teller.
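As a quick check of the conjunction rule itself, the inequality follows directly from the product rule, because a conditional probability can never exceed 1:

    \[
      P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A),
      \qquad \text{since } 0 \le P(B \mid A) \le 1.
    \]

However strongly the description suggests feminism, "feminist bank teller" cannot be more probable than "bank teller", because every feminist bank teller is also a bank teller.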

This is the standard story, but are people really using the representative heuristic in the Linda problem? Consider the following rewording of the question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

There are 100 people who fit the description above. How many of them are:

(a) bank tellers

(b) bank tellers and active in the feminist movement

Notice that the question is now strictly in terms of frequencies. Under this version, only 22% of subjects ranked (b) as more probable than (a) (Gigerenzer, 1993). The only thing that changed is the question being asked; the description of Linda (and the 100 people) remains unchanged, so the representativeness of the description for the two groups should remain unchanged. Thus people are not using the representative heuristic - at least not in general.

continue reading »

Your intuitions are not magic

65 Kaj_Sotala 10 June 2010 12:11AM

This article is an attempt to summarize basic material, and thus probably won't have anything new for the hard core posting crowd. If you're new and this article got you curious, we recommend the Sequences.

People who know a little bit of statistics - enough to use statistical techniques, not enough to understand why or how they work - often end up horribly misusing them. Statistical tests are complicated mathematical techniques, and to work, they tend to make numerous assumptions. The problem is that if those assumptions are not valid, most statistical tests do not cleanly fail and produce obviously false results. Neither do they require you to carry out impossible mathematical operations, like dividing by zero. Instead, they simply produce results that do not tell you what you think they tell you. As a formal system, pure math exists only inside our heads. We can try to apply it to the real world, but if we are misapplying it, nothing in the system itself will tell us that we're making a mistake.

Examples of misapplied statistics have been discussed here before. Cyan discussed a "test" that could only produce one outcome. PhilGoetz critiqued a statistical method which implicitly assumed that taking a healthy dose of vitamins had a comparable effect as taking a toxic dose.

Even a very simple statistical technique, like taking the correlation between two variables, might be misleading if you forget about the assumptions it's making. When someone says "correlation", they are most commonly talking about Pearson's correlation coefficient, which seeks to gauge whether there's a linear relationship between two variables. In other words, if X increases, does Y also tend to increase (or decrease)? However, as with vitamin dosages and their effects on health, two variables might have a non-linear relationship. Increasing X might increase Y up to a certain point, after which increasing X would decrease Y. Simply calculating Pearson's correlation on two such variables might cause someone to get a low correlation, and therefore conclude that there's no relationship, or only a weak one, between the two. (See also Anscombe's quartet.)
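As a minimal illustration of that failure mode (invented data, and assuming NumPy is available), Pearson's correlation computed on a perfect but inverted-U-shaped relationship comes out near zero:

    # Invented data: Y is completely determined by X, but the relationship is an inverted U.
    import numpy as np

    x = np.linspace(-5, 5, 101)    # e.g. "dose"
    y = -(x ** 2)                  # benefit rises, then falls: a non-linear relationship

    r = np.corrcoef(x, y)[0, 1]    # Pearson's correlation coefficient
    print(round(r, 3))             # ~0: "no linear relationship", despite y being
                                   # a deterministic function of x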

The lesson here, then, is that not understanding how your analytical tools work will get you incorrect results when you try to analyze something. A person who doesn't stop to consider the assumptions of the techniques she's using is, in effect, thinking that her techniques are magical: no matter how she might use them, they will always produce the right results. Of course, that assumption makes about as much sense as assuming that your hammer is magical and can be used to repair anything. Even if you had a broken window, you could fix it by hitting it with your magic hammer. But I'm not only talking about statistics here, for the same principle can be applied in a more general manner.

continue reading »

Are these cognitive biases, biases?

35 Kaj_Sotala 23 December 2009 05:27PM

Continuing my special report on people who don't think human reasoning is all that bad, I'll now briefly present some studies which claim that phenomena other researchers have considered signs of faulty reasoning are nothing of the sort. I found these in Gigerenzer (2004), which I in turn found when I went looking for further work done on the Take the Best algorithm.

Before we get to the list - what is Gigerenzer's exact claim when he lists these previous studies? Well, he's saying that minds aren't actually biased, but may make judgments that seem biased in certain environments.

Table 4.1 Twelve examples of phenomena that were first interpreted as "cognitive illusions" but later revalued as reasonable judgments given the environmental structure. [...]

The general argument is that an unbiased mind plus environmental structure (such as unsystematic error, unequal sample sizes, skewed distributions) is sufficient to produce the phenomenon. Note that other factors can also contribute to some of the phenomena. The moral is not that people would never err, but that in order to understand good and bad judgments, one needs to analyze the structure of the problem or of the natural environment.
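To make one of those mechanisms concrete, here is a minimal simulation (my own, with invented numbers) of how purely unsystematic error plus selection of extreme cases produces regression toward the mean, a pattern that is easy to misread as bias or as a real change in performance:

    # Unsystematic (zero-mean) measurement error is enough to make top scorers
    # look like they got worse, with no bias anywhere in the model.
    import random

    random.seed(0)
    N = 10_000
    skill = [random.gauss(0, 1) for _ in range(N)]     # stable underlying ability
    test1 = [s + random.gauss(0, 1) for s in skill]    # noisy measurement, round 1
    test2 = [s + random.gauss(0, 1) for s in skill]    # noisy measurement, round 2

    def mean(xs):
        xs = list(xs)
        return sum(xs) / len(xs)

    # Pick the top 10% of scorers on the first test...
    cutoff = sorted(test1)[int(0.9 * N)]
    top = [i for i in range(N) if test1[i] >= cutoff]

    print(mean(test1[i] for i in top))   # high, partly through lucky noise
    print(mean(test2[i] for i in top))   # noticeably lower: the luck does not repeat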

On to the actual examples. Of the twelve examples referenced, I've included three for now.

continue reading »
