While I disagree with some views of the Fast and Frugal crowd—in my opinion they make a few too many lemons into lemonade—it also seems to me that they tend to develop the most psychologically realistic models of any school of decision theory. Most experiments present the subjects with options, and the subject chooses an option, and that’s the experimental result. The frugalists realized that in real life, you have to generate your options, and they studied how subjects did that.

    Likewise, although many experiments present evidence on a silver platter, in real life you have to gather evidence, which may be costly, and at some point decide that you have enough evidence to stop and choose. When you’re buying a house, you don’t get exactly ten houses to choose from, and you aren’t led on a guided tour of all of them before you’re allowed to decide anything. You look at one house, and another, and compare them to each other; you adjust your aspirations—reconsider how much you really need to be close to your workplace and how much you’re really willing to pay; you decide which house to look at next; and at some point you decide that you’ve seen enough houses, and choose.
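    As a toy illustration (not a model from the Fast and Frugal literature—just a sketch with made-up names and numbers), the search-and-stop process might look something like this in Python:

```python
import random

def house_search(candidates, initial_aspiration, max_viewings=20):
    """Toy sequential search with an adjustable aspiration level.

    `candidates` yields (house, quality) pairs one at a time; we never see
    the whole list up front. After each disappointing viewing the aspiration
    level relaxes a little, and the search stops as soon as a house clears
    it -- or when patience runs out.
    """
    aspiration = initial_aspiration
    best = None
    for seen, (house, quality) in enumerate(candidates, start=1):
        if best is None or quality > best[1]:
            best = (house, quality)
        if quality >= aspiration:
            return house              # good enough: stop searching and choose
        aspiration *= 0.95            # lower our standards slightly
        if seen >= max_viewings:
            break                     # give up and take the best seen so far
    return best[0] if best else None

# Illustrative only: house names and quality scores are invented.
houses = ((f"house_{i}", random.gauss(0.5, 0.2)) for i in range(100))
print(house_search(houses, initial_aspiration=0.9))
```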

    Gilovich’s distinction between motivated skepticism and motivated credulity highlights how conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe. A motivated skeptic asks if the evidence compels them to accept the conclusion; a motivated credulist asks if the evidence allows them to accept the conclusion.

    I suggest that an analogous bias in psychologically realistic search is motivated stopping and motivated continuation: when we have a hidden motive for choosing the “best” current option, we have a hidden motive to stop, and choose, and reject consideration of any more options. When we have a hidden motive to reject the current best option, we have a hidden motive to suspend judgment pending additional evidence, to generate more options—to find something, anything, to do instead of coming to a conclusion.

    A major historical scandal in statistics was R. A. Fisher, an eminent founder of the field, insisting that no causal link had been established between smoking and lung cancer. “Correlation is not causation,” he testified to Congress. Perhaps smokers had a gene which both predisposed them to smoke and predisposed them to lung cancer.

    Or maybe Fisher’s being employed as a consultant for tobacco firms gave him a hidden motive to decide that the evidence already gathered was insufficient to come to a conclusion, and it was better to keep looking. Fisher was also a smoker himself, and died of colon cancer in 1962.1

    Like many other forms of motivated skepticism, motivated continuation can try to disguise itself as virtuous rationality. Who can argue against gathering more evidence?2

    I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have. You can always change your mind later.3

    As for motivated stopping, it appears in every place a third alternative is feared, and wherever you have an argument whose obvious counterargument you would rather not see, and in other places as well. It appears when you pursue a course of action that makes you feel good just for acting, and so you’d rather not investigate how well your plan really worked, for fear of destroying the warm glow of moral satisfaction you paid good money to purchase.4 It appears wherever your beliefs and anticipations get out of sync, so you have a reason to fear any new evidence gathered.5

    The moral is that the decision to terminate a search procedure (temporarily or permanently) is, like the search procedure itself, subject to bias and hidden motives. You should suspect motivated stopping when you close off search, after coming to a comfortable conclusion, and yet there’s a lot of fast cheap evidence you haven’t gathered yet—there are websites you could visit, there are counter-counter arguments you could consider, or you haven’t closed your eyes for five minutes by the clock trying to think of a better option. You should suspect motivated continuation when some evidence is leaning in a way you don’t like, but you decide that more evidence is needed—expensive evidence that you know you can’t gather anytime soon, as opposed to something you’re going to look up on Google in thirty minutes—before you’ll have to do anything uncomfortable.

    1Ad hominem note: Fisher was a frequentist. Bayesians are more reasonable about inferring probable causality; see Judea Pearl’s Causality: Models, Reasoning, and Inference.

    2Compare Robin Hanson, “Cut Medicine In Half,” Overcoming Bias (blog), September 10, 2007, http://www.overcomingbias.com/2007/09/cut-medicine-in.html.

    3Apparent contradiction resolved as follows: Spending one hour discussing the problem, with your mind carefully cleared of all conclusions, is different from waiting ten years on another $20 million study.

    4See “‘Can’t Say No’ Spending.” http://lesswrong.com/lw/kb/cant_say_no_spending.

    5See “Belief in Belief” in Map and Territory.



    Eliezer, are you familiar with Russell and Wefald's book "Do the Right Thing"?

    It's fairly old (1991), but it's a good example of how people in AI view limited rationality.

    Maybe you could exploit this, if the question you're gathering evidence for is important enough to warrant all that costly searching. Spending hours digging through obscure journals is not something most people do for fun, but if you can come up with a pet theory which needs reinforcing, most people would rather do the evidence-gathering than be forced to give it up.

    'Motivated stopping'? What springs to my mind is psi tests. If you regard psi tests as a possibly infinite series, then where you cut off testing and start analysing can produce any result you want.

    'Lucky streaks' can occur at any time in a string of random numbers.

    That's why in psi testing you must calculate the exact number of tests required to show an effect of the size you expect and do precisely that number of tests, no more and no less. And you are not allowed to throw away the tests that resulted in average or negative results either.
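    Here is a quick simulation sketch of why that rule matters. It's a toy illustration, not any real psi protocol: the "guesses" are coin flips with no effect present, the 1.96 z-threshold and trial counts are arbitrary choices. Peeking after every trial and stopping at the first "significant" result declares an effect far more often than the nominal 5% rate of a fixed-sample test.

```python
import math
import random

def z_stat(hits, n, p0=0.5):
    """Normal-approximation z statistic for an observed hit rate vs chance."""
    return (hits - n * p0) / math.sqrt(n * p0 * (1 - p0))

def optional_stopping(max_trials, peek_from=20):
    """Peek after every trial; stop the moment the result looks 'significant'."""
    hits = 0
    for n in range(1, max_trials + 1):
        hits += random.random() < 0.5          # pure chance: no real effect
        if n >= peek_from and abs(z_stat(hits, n)) > 1.96:
            return True                        # 'effect found'
    return False

def fixed_n(n_trials):
    """Decide the sample size in advance and test exactly once."""
    hits = sum(random.random() < 0.5 for _ in range(n_trials))
    return abs(z_stat(hits, n_trials)) > 1.96

runs = 2000
print("optional stopping:", sum(optional_stopping(500) for _ in range(runs)) / runs)
print("fixed n = 500:    ", sum(fixed_n(500) for _ in range(runs)) / runs)
```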

    My favourite example of motivated stopping is Lazzarini's experimental "verification" of the Buffon needle formula.

    (Drop toothpicks at random on a plane ruled with evenly spaced parallel lines. The average number of line-crossings per toothpick is related to pi. Lazzarini did the experiment and got pi to 6 decimal places. It seems clear that he did this by doing trials in batches whose size made it likely that he'd get an estimate equivalent to pi = 355/113, which happens to be very close, and then did one batch at a time until he happened to hit it on the nose.

    Completely off-topic, here's a beautiful derivation of the formula: Expectations are additive, so the expected number of line-crossings is proportional to the length of the toothpick and doesn't depend on what shape it actually is. So consider a circular "toothpick" whose diameter equals the spacing between the lines. No matter how you drop this, you get 2 crossings. Therefore the constant of proportionality is 2/pi. Therefore the expected number of crossings for any toothpick of length L, in units where the line-spacing is 1, is 2L/pi. If L<1 then this is also the probability of getting a crossing at all, since you can't get more than one.)
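    If anyone wants to check the 2L/pi formula numerically, here's a quick Monte Carlo sketch. It's a generic simulation, not a reconstruction of Lazzarini's procedure; the 5/6 length-to-spacing ratio is the one usually reported for his needle, and the number of drops is arbitrary.

```python
import math
import random

def buffon_crossing_rate(length, drops=200_000):
    """Monte Carlo estimate of P(a needle of given length < 1 crosses a line),
    with parallel lines spaced 1 apart."""
    crossings = 0
    for _ in range(drops):
        y = random.uniform(0.0, 0.5)              # centre's distance to nearest line
        theta = random.uniform(0.0, math.pi / 2)  # acute angle to the lines
        if (length / 2) * math.sin(theta) >= y:
            crossings += 1
    return crossings / drops

L = 5 / 6   # the length-to-spacing ratio usually reported for Lazzarini's needle
print("simulated:", buffon_crossing_rate(L))
print("2L/pi:    ", 2 * L / math.pi)
```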

    To put it differently, motivated stopping is a problem in pi tests just like it is in psi tests. :-)

    I find this article relevant to the whole series of Amanda Knox posts/comments.