Consider Representative Data Sets

6 Vladimir_Nesov 06 May 2009 01:49AM

In this article I consider the standard biases in drawing factual conclusions that are unrelated to emotional reactions, describe a simple model summarizing what goes wrong with the reasoning in these cases, and show how that model suggests a way of systematically avoiding this kind of problem.

The following model describes, for the purposes of this article, the process of getting from a question to a (potentially biased) answer. First, you ask yourself a question. Second, in the context of the question, a data set is presented to your mind, either directly, by you looking at explicit statements of fact, or indirectly, by associated facts becoming salient to your attention, triggered by the explicit data items or by the question. Third, as a result of considering the data set, you construct an intuitive model of some phenomenon, which allows you to see its properties. And finally, you pronounce the answer, which is read off as one of the properties of the model you've just constructed.

This description is meant to provide mental paintbrush handles: to refer to the things you can see introspectively, and to the things you can operate on consciously if you choose to.

Most of the biases in the considered class may be seen as particular ways in which you pay attention to the wrong data set, one not representative of the phenomenon you model to get to the answer you seek. As a result, the intuitive model becomes systematically wrong, and the answer read off from it is biased. Below, I review the specific biases to identify the way things go wrong in each particular case, and then summarize the classes of reasoning mistakes that play major roles in these biases, along with the corresponding ways of avoiding them.

continue reading »

Allais Hack -- Transform Your Decisions!

18 MBlume 03 May 2009 10:37PM

The Allais Paradox, though not actually a paradox, is a classic experiment showing that decisions made by humans do not demonstrate consistent preferences. If you actually want to accomplish something, rather than simply feel good about your decisions, this is rather disturbing.

When something like the Allais Paradox is presented all in one go, it's fairly easy to see that the two cases are equivalent, and ensure that your decisions are consistent. But if I clone you right now, present one of you with gamble 1, and one of you with gamble 2, you might not fare so well. The question is how to consistently advance your own preferences even when you're only looking at one side of the problem.

Obviously, one solution is to actually construct a utility function over money, and apply it rigorously to all decisions. Logarithmic in your total net worth is usually a good place to start. Next you can assign a number of utilons to each year you live, a negative number to each day you are sick, a number for each sunrise you witness...

I would humbly suggest that a less drastic strategy might be to familiarize yourself with the ways in which you can transform a decision which should make no difference unto decision theory, and actually get in the habit of applying these transformations to decisions you make in real life.

So, let us say that I present you with Allais Gamble #2: choose between A: 34% chance of winning $24,000, and 66% chance of winning nothing, and B: 33% chance of winning $27,000, and 67% chance of winning nothing.
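To make the comparison concrete, here is a minimal sketch in Python of the utility-function approach described above. The starting net worth is an arbitrary assumption (log utility needs a positive baseline), and the gamble numbers are just those stated:

    import math

    def log_utility(wealth):
        """Logarithmic utility in total net worth."""
        return math.log(wealth)

    def expected_utility(gamble, net_worth):
        """Expected utility of a gamble given as (probability, payoff) pairs."""
        return sum(p * log_utility(net_worth + payoff) for p, payoff in gamble)

    # Allais Gamble #2; net_worth = 50_000 is a made-up figure for illustration.
    gamble_a = [(0.34, 24_000), (0.66, 0)]
    gamble_b = [(0.33, 27_000), (0.67, 0)]

    for name, gamble in (("A", gamble_a), ("B", gamble_b)):
        print(name, expected_utility(gamble, net_worth=50_000))

Whichever option wins, the same function must also be applied to Gamble #1; a consistent agent cannot prefer A in one framing and B in the other.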

Before snapping to a judgment, try some of the following transforms:

continue reading »

Fire and Motion

4 rwallace 29 April 2009 04:06PM

Related to: Extreme Rationality: It's Not That Great

On the recent topics of "rationality is all very well but how do we translate understanding into winning?" and "isn't akrasia the most common limiting factor?", one of the best (non-recent) articles on practical rationality that I've come across is:

http://www.joelonsoftware.com/articles/fog0000000339.html

Interestingly, it uses a different kind of martial art as a metaphor. I conjecture it to be the sort of metaphor that just works well for humans.

(Most of Spolsky's posts are good reading even if you're not a programmer. I'm not in the New York real estate market but I still enjoyed his posts on that topic. He's just that good a writer.)

Sunk Cost Fallacy

30 Z_M_Davis 12 April 2009 05:30PM

Related to: Just Lose Hope Already, The Allais Paradox, Cached Selves

In economics we have this concept of sunk costs, referring to costs that have already been incurred, but which cannot be recouped. Sunk cost fallacy refers to the fallacy of honoring sunk costs, which decision-theoretically should just be ignored. The canonical example goes something like this: you have purchased a nonrefundable movie ticket in advance. (For the nitpickers in the audience, I will also specify that the ticket is nontransferable and that you weren't planning on meeting anyone.) When the night of the show comes, you notice that you don't actually feel like going out, and would actually enjoy yourself more at home. Do you go to the movie anyway?

A lot of people say yes, to avoid wasting the ticket. But on further consideration, it would seem that these people are simply getting it wrong. The ticket is a sunk cost: it's already paid for, and you can't do anything with it but go to the movie. But we've stipulated that you don't want to go to the movie. The theater owners don't care whether you go; they already have their money. The other theater-goers, insofar as they can be said to have a preference, would actually rather you stayed home, making the theater marginally less crowded. If you go to the movie to satisfy your intuition about not wasting the ticket, you're not actually helping anyone. Of course, you're entitled to your values, if not your beliefs. If you really do place terminal value on using something because you've paid for it, well, fine, I guess. But we should all try to notice exactly what it is we're doing, in case it turns out to not be what we want. Please, think it through.
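The decision-theoretic point fits in a few lines of Python. This is a sketch with invented utility numbers; what matters is that the sunk ticket price appears in neither branch of the comparison:

    ticket_price = 10   # sunk: already paid and nonrefundable
    utility_go = 3      # assumed: you don't really feel like going out
    utility_stay = 5    # assumed: a night at home sounds better

    # The forward-looking comparison never consults ticket_price.
    decision = "go to the movie" if utility_go > utility_stay else "stay home"
    print(decision)  # "stay home", no matter what the ticket cost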

Dearest reader, if you're now about to scrap your intuition against wasting things, I implore you: don't! The moral of the parable of the movie ticket is not that waste is okay; it's that you should implement your waste-reduction interventions at a time when they can actually help. If you can anticipate your enthusiasm waning on the night of the show, don't purchase the nonrefundable ticket in the first place!

continue reading »

How Much Thought

37 jimrandomh 12 April 2009 04:56AM

We have many built-in heuristics, and most of them are trouble. The absurdity heuristic makes us reject reasonable things out of hand, so we should take the time to fully understand things that seem absurd at first. Some of our beliefs are not reasoned, but inherited; we should sniff those out and discard them. We repeat cached thoughts, so we should clear and rethink them. The affect heuristic is a tricky one; to work around it, we have to take the outside view. Everything we see and do primes us, so for really important decisions, we should never leave our rooms. We fail to attribute agency to things which should have it, like opinions, so if less drastic means don't work, we should modify English to make ourselves do so.

All of these articles bear the same message, the same message that can be easily found in the subtext of every book, treatise and example of rationality. Think more. Look for the third alternative. Challenge your deeply held beliefs. Drive through semantic stop signs. Prepare a line of retreat. If you don't understand, you should make an extraordinary effort. When you do find cause to change your beliefs, complete a checklist, run a script and follow a ritual. Recheck your answers, because thinking helps; more thought is always better.

The problem is, there's only a limited amount of time in each day. To spend more time thinking about something, we must spend less time on something else. The more we think about each topic, the fewer topics we have time to think about at all. Rationalism gives us a long list of extra things to think about, and angles to think about them from, without guidance on where or how much to apply them. This can make us overthink some things and disastrously underthink others. Our worst mistakes are not those where our thoughts went astray, but those we failed to think about at all. The time between when we learn rationality techniques and when we learn where to apply them is the valley.

continue reading »

Secret Identities vs. Groupthink

19 Swimmy 09 April 2009 08:26PM

From Marginal Revolution:

A new meta-analysis (pdf) of 72 studies, involving 4,795 groups and over 17,000 individuals has shown that groups tend to spend most of their time discussing the information shared by members, which is therefore redundant, rather than discussing information known only to one or a minority of members. This is important because those groups that do share unique information tend to make better decisions.

Another important factor is how much group members talk to each other. Ironically, Jessica Mesmer-Magnus and Leslie DeChurch found that groups that talked more tended to share less unique information.

A result that shouldn't surprise this group. I've noticed obvious attempts to avoid this tendency on Less Wrong (for instance, Yvain's avoiding further Christian-bashing). We've had at least one post asking specifically for information that was unique. And I don't know about the rest of you, but I've already had plenty of new food for thought on Less Wrong.

But are we tapping the full potential? Each of us has, or should have, a secret identity. The nice thing about those identities is that they give us access to unique knowledge. We've been asked (though I can't find the link) to avoid large posts applying learned rationality techniques to controversial topics, for fear of killing minds, which seems reasonable to me. Is there a better way to allow discipline-specific knowledge to be shared among Less Wrong readers without setting off our politicosensors? It seems beneficial not only for improved rationality training, but also to enhance our secret identities. For instance, I, as an economist-in-training, would like to know not just what an anthropologist can tell me, but what a Bayesian-trained anthropologist can tell me.

Never Leave Your Room

66 Yvain 18 March 2009 12:30AM

Related to: Priming and Contamination

Psychologists define "priming" as the ability of a stimulus to activate the brain in such a way as to affect responses to later stimuli. If that doesn't sound sufficiently ominous, feel free to re-word it as "any random thing that happens to you can hijack your judgment and personality for the next few minutes."

For example, let's say you walk into a room and notice a briefcase in the corner. Your brain is now the proud owner of the activated concept "briefcase". It is "primed" to think about briefcases, and by extension about offices, business, competition, and ambition. For the next few minutes, you will shift ever so slightly towards perceiving all social interactions as competitive, and towards behaving competitively yourself. These slight shifts will be large enough to be measured by, for example, how much money you offer during the Ultimatum Game. If that sounds too much like some sort of weird New Age sympathetic magic to believe, all I can say is Kay, Wheeler, Bargh, and Ross, 2004.1

We've been discussing the costs and benefits of Santa Claus recently. Well, here's one benefit: show Dutch children an image of St. Nicholas' hat, and they'll be more likely to share candy with others. Why? The researchers hypothesize that the hat activates the concept of St. Nicholas, and St. Nicholas activates an idealized concept of sharing and giving. The child is now primed to view sharing positively. Of course, the same effect can be used for evil. In the same study, kids shown the Toys 'R' Us logo refused to share their precious candy with anyone.

But this effect is limited to a few psych laboratories, right? It hasn't done anything like, you know, determine the outcome of a bunch of major elections?

continue reading »

Tarski Statements as Rationalist Exercise

11 Vladimir_Nesov 17 March 2009 07:47PM

Related to: Dissolving the Question, The Second Law of Thermodynamics, and Engines of Cognition, The Meditation on Curiosity.

The sentence "snow is white" is true if, and only if, snow is white.

-- A. Tarski

Several days ago I spent a couple of hours trying to teach my 15-year-old brother how to properly construct Tarski statements. It's quite nontrivial to get right. Learning to place facts and representations in separate mental buckets is one of the fundamental tools for a rationalist. In our model of the world, information propagates from object to object, from mind to mind. To ascertain the validity of your belief, you need to research the whole network of factors that led you to attain the belief. The simplest relation is between a fact and its representation, idealized to represent correctness or incorrectness only, without yet worrying about probabilities. The same object or the same property can be interpreted to mean different things in different relations and contexts, indicating the truth of one statement or another, and it's important not to conflate those.
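For reference, the pattern being practiced is Tarski's T-schema. One standard way to write it in LaTeX notation, where the corner quotes turn a sentence into a name for that sentence:

    \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi

The left-hand side mentions the sentence (the representation bucket); the right-hand side uses it (the fact bucket). Conflating the two sides is exactly the mistake the exercise trains you to notice.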

Let's say you are watching news on TV and the next item is an interview with a sasquatch. The sasquatch answers the questions about his family in decent English, with a slight British accent.

What do you actually observe, and how should you interpret the data? Did you "see a sasquatch"? Did you learn facts about the sasquatch's family? Is there a fact of the matter as to whether the sasquatch's daughter is 5 years old, as opposed to 4 or 6?

continue reading »

The Least Convenient Possible World

165 Yvain 14 March 2009 02:11AM

Related to: Is That Your True Rejection?

"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments.  But if you’re interested in producing truth, you will fix your opponents’ arguments for them.  To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

   -- Black Belt Bayesian, via Rationality Quotes 13

Yesterday John Maxwell's post wondered how much the average person would do to save ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related question as part of an investigation of the Trolley Problems:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

I don't want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:

It wouldn't be moral. After all, people often reject organs from random donors. The traveller would probably be a genetic mismatch for your patients, and the transplantees would have to spend the rest of their lives on immunosuppressants, only to die within a few years when the drugs failed.

On the one hand, I have to give my friend credit: his answer is biologically accurate, and beyond a doubt the technically correct answer to the question I asked. On the other hand, I don't have to give him very much credit: he completely missed the point and lost a valuable opportunity to examine the nature of morality.

So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"

He mumbled something about counterfactuals and refused to answer. But I learned something very important from him, and that is to always ask this question of myself. Sometimes the least convenient possible world is the only place where I can figure out my true motivations, or which step to take next. I offer three examples:

continue reading »

The Mistake Script

12 jimrandomh 09 March 2009 05:35PM

Here on Less Wrong, we have hopefully developed our ability to spot mistaken arguments. Suppose you're reading an article and you encounter a fallacy. What do you do? Consider the following script:

  1. Reread the argument to determine whether it's really an error. (If not, resume reading.)
  2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
  3. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don't.

This script seems intuitively correct, and many people follow a close approximation of it. However, following this script is very bad, because the judgement in step (3) is tainted: you are more likely to continue reading the article if you agree with its conclusion than if you don't. If you disagreed with the article, then you were also more likely to have spotted the mistake in the first place. These two biases can cause you to unknowingly avoid reading anything you disagree with, which makes you strongly resist changing your beliefs. Long articles almost always include some bad arguments, even when their conclusion is correct. We can greatly improve this script with an explicit countermeasure:

  1. Reread the argument to determine whether it's really an error. (If not, resume reading.)
  2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
  3. Decide whether you agree with the article's conclusion. If you are sure you do, stop reading. If you aren't sure what the conclusion is or aren't sure you agree with it, continue.
  4. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don't.
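As a sketch only, here is the control flow of the improved script in Python. Each boolean stands in for a judgment call the reader makes, not a computable test, and the names are hypothetical:

    def improved_script(really_an_error, relevant, sure_you_agree, worth_reading):
        """sure_you_agree is True only when you are *sure* you agree;
        being unsure of the conclusion, or of your agreement, counts as False."""
        if not really_an_error:    # step 1: false alarm
            return "resume reading"
        if not relevant:           # step 2: error doesn't touch the point
            return "resume reading"
        if sure_you_agree:         # step 3: the countermeasure; reading on
            return "stop reading"  # would only echo what you already believe
        if worth_reading:          # step 4: the original judgment call
            return "resume reading"
        return "stop reading"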

This extra step protects us from confirmation bias and the "echo chamber" effect. We might try adding more steps, to reduce bias even further:

continue reading »
