
Teaching a short class on Bayes' Theorem?

8 Tesseract 07 December 2011 09:45PM

At my college, there's a week before Spring Semester each year in which anyone who wants to can teach a class on any subject, and students go to whatever ones they feel like. I'm thinking about teaching a class on Bayes' Theorem. It would be informal, one to two hours long, and focused mostly on non-obvious applications of it (epistemology, the representativeness heuristic, etc.).

At the moment, I'm thinking about how to design the class, so I'd appreciate any suggestions as to what content I should cover, the best format, clear ways to explain it, cool things related to Bayes' Theorem, good links, and so forth.

Ideas for heuristics and biases research topic?

3 Tesseract 25 September 2011 06:20PM

Hey Less Wrong,

I'm currently taking a cognitive psychology class, and will be designing and conducting a research project in the field — and I'd like to do it on human judgment, specifically heuristics and biases. I'm currently doing preliminary research to come up with a more specific topic to base my project on, and I figured Less Wrong would be the place to come to find questions about flawed human judgment. So: any ideas?

(I'll probably be using these ideas mostly as guidelines for forming my research question, since I doubt it would be academically honest to take them outright. The study will probably take the form of a questionnaire or online survey, but experimental manipulation is certainly possible and it might be possible to make use of other psych department resources.)

Judgment Under Uncertainty summaries, Part 1: Representativeness

29 Tesseract 15 August 2011 12:05AM

Judgment Under Uncertainty: Heuristics and Biases is one of the foundational works on the flaws of human reasoning, and as such gets cited a lot on Less Wrong — but it's also rather long and esoteric, which makes it inaccessible to most Less Wrong users. Over the next few months, I'm going to distill the essence of the studies that make up the collection, in an attempt to convey the many interesting bits without forcing you to slog through the 500 or so pages of the volume itself. This post summarizes sections I (Introduction) and II (Representativeness).

By way of background: Judgment Under Uncertainty is a collection of 35 scientific papers and articles on how people make decisions with limited information, edited by Daniel Kahneman, Amos Tversky, and Paul Slovic. Kahneman and Tversky are the most recognizable figures in the area and the names most associated with the book, but only 12 of the studies are their work. It was first published in 1982 (my version is from 1986), and most studies were performed in the '70s — so note that this is not up-to-date research, and I can't say for sure what the current scientific consensus on the topic is. Judgment Under Uncertainty focuses on the divergence of human intuition from optimal reasoning, so it uses a lot of statistics and probability to define what's optimal. The details are actually pretty fascinating if you have the time and inclination (and it's also something of an education in study design and statistics), and this series of posts by no means replaces the book, but I intend to provide something of a shorthand version.

That said, on to the summaries! Title of the chapter/paper in quotes, sections organized as in the book and in bold. (Incomplete preview here, if you want to follow along.)

Introduction

"Judgment Under Uncertainty: Heuristics and Biases", Tversky and Kahneman (1974)

This is the most important paper in the book, and it's short and publicly available (PDF), so I'd encourage you to just go read it now. It reviews the representativeness and availability heuristics and the various errors in reasoning they produce, and introduces the idea of anchoring. Since it reviews material that also appears elsewhere in Judgment Under Uncertainty, it overlaps with what I'm going to cover in this and the other posts. As it's already a boiled-down version of the heuristics literature, I won't attempt to summarize it here.

Representativeness

"Belief in the law of small numbers", Tversky and Kahneman, 1971 (PDF)

People expect that samples will have much less variability and be much more representative of the population than they actually are. This manifests in expecting that two random samples will be very similar to each other and that large observations in one direction will be canceled out by large observations in the other rather than just being diluted. Tversky and Kahneman call this the "law of small numbers" — the belief that the law of large numbers applies to small samples as well.

One consequence of this in science is that failing to account for variability means that studies will be way underpowered. Tversky and Kahneman surveyed psychologists on the probability that a significant result from an experiment on 20 subjects would be confirmed by a replication using 10 subjects — most estimated around 85%, when it was actually around 48%. (Incidentally, a study they cite reviewing published results in psychology estimates that the power was .18 for small effects and .48 for effects of medium size.) The gist of this is that one might very well find a real significant result, attempt to replicate it using a smaller sample on the belief that the small sample will be very representative of the population, and miss entirely due to lack of statistical power. Worse, when given a hypothetical case of a student who ran such a replication and got a nonsignificant result, many of those surveyed suggested he should try to find an explanation for the difference between the two groups — when it was due entirely to random variation.
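For the curious, here is a minimal sketch of where a figure like .48 comes from. It is my own reconstruction, not the paper's calculation: it assumes the original study reported z ≈ 2.23 with 20 subjects (the figure used in the paper's survey question), treats that observed effect as the true effect, and approximates the 10-subject replication with a one-tailed z-test.

```python
from scipy.stats import norm

# Assumptions (mine): original result z = 2.23 from n = 20 (two-tailed p < .05),
# the observed effect is taken as the true effect, and the replication with
# n = 10 is tested one-tailed at alpha = .05 using a z-test approximation.
z_original, n_original, n_replication, alpha = 2.23, 20, 10, 0.05

effect_per_obs = z_original / n_original ** 0.5      # ~0.50 SDs per observation
expected_z = effect_per_obs * n_replication ** 0.5   # ~1.58 expected z in the replication
z_crit = norm.ppf(1 - alpha)                         # ~1.645, the one-tailed cutoff

power = 1 - norm.cdf(z_crit - expected_z)
print(round(power, 2))  # ~0.47, close to the ~.48 in the text and far from the ~85% guesses
```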


"Subjective probability: A judgment of representativeness", Kahneman and Tversky, 1972 (PDF)

People judge the likelihood of events based on representativeness rather than actual probability. Representativeness is a bit hard to pin down, but involves reflecting the characteristics of the population and the process that generated it — so the likelihood of six children having the gender order B G B B B B is judged lower than that of the order G B G B B G (because the former doesn't reflect the roughly equal proportion of boys and girls in the population), and likewise for B B B G G G versus G B B G B G (because the former doesn't reflect the randomness of gender determination).
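(A quick arithmetic check, mine rather than the paper's: under an even sex ratio and independent births, every specific order of six children is exactly equally probable, so the judged differences have no basis in the actual probabilities.)

```python
# Assuming p(boy) = 0.5 and independent births, any specific birth order
# of six children has the same probability, however "representative" it looks.
p_any_specific_order = 0.5 ** 6
print(p_any_specific_order)  # 0.015625, for B G B B B B and G B G B B G alike
```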

People also completely ignore the effect of sample size on the probability of an outcome (e.g. the likelihood of the proportion of male babies being between .55 and .65 for N births), because it doesn't affect the representativeness of that outcome. Repeat: sample size has no effect at all on people's estimates. People expect the probability of the example I gave to be around 15% whether it's N=10 or N=1000, when it's actually ~20% for N=10 and essentially zero for N=1000. (The graphs on pages 42-43 of the PDF can get this across better than I can — the black line is the predicted probability for all sample sizes, and the bars are the real probability for each.)
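Here is a rough sketch of the real calculation (my own, assuming an even sex ratio and independent births): the chance that the proportion of boys lands between .55 and .65 is about 20% for N=10 but under a tenth of a percent for N=1000, because large samples hug the true proportion.

```python
from scipy.stats import binom

# Assumptions (mine): p(boy) = 0.5, independent births. Probability that the
# proportion of boys falls between 0.55 and 0.65 for N births.
for n in (10, 1000):
    prob = sum(binom.pmf(k, n, 0.5) for k in range(n + 1) if 0.55 <= k / n <= 0.65)
    print(n, round(prob, 4))
# N=10   -> ~0.2051 (only 6 boys out of 10 qualifies)
# N=1000 -> ~0.0009 (a 1000-birth sample almost never strays that far from 50%)
# People's estimates sit around 15% for both.
```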


"On the psychology of prediction", Kahneman and Tversky, 1973

Judging by representativeness makes people completely ignore base rates (i.e. prior probabilities). Subjects were asked, on the basis of a personality sketch, to judge either how similar someone was to the typical student in a graduate program or how likely they were to be a student in that program; the two sets of judgments were nearly identical (correlation of .97), and neither showed any regard whatsoever for the judged prior probability of a graduate student being in a given area (correlation of -.65). That would be permissible if subjects thought the sketches were such strong evidence that they overwhelmed existing information, but when asked, subjects expected predictions based on personality sketches to be accurate only 23% of the time. In a followup, Kahneman and Tversky manipulated beliefs about how predictive the evidence was (telling one group that such predictions were accurate 55% of the time and the other 27%) and found that while subjects were slightly less confident in the low-predictiveness group (though they were still 56% sure of being right), they ignored base rates just as completely in both conditions. In this and in several other experiments in this chapter, people fail to be regressive in their predictions — that is, the weight that they assign to prior probability versus new evidence is unaffected by the expected accuracy of the new evidence.

An interesting specific point with regard to new information replacing rather than supplementing prior probabilities: while people can make judgments about base rates in the abstract, completely useless specific information can cause this ability to disappear. e.g.: If asked for the probability that an individual randomly selected from a group of 70 engineers and 30 lawyers is a lawyer, they'll say 30%, but if given utterly useless information about a specific person —

Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.

— they'll go back to 50-50.
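In odds form, Bayes' theorem makes the normative answer explicit: evidence that fits engineers and lawyers equally well has a likelihood ratio of about 1 and should leave the 30% prior untouched. A minimal sketch (my own illustration, not from the chapter):

```python
# Posterior odds = prior odds * likelihood ratio. The "Dick" description is
# equally probable for engineers and lawyers, so the likelihood ratio is ~1.
prior_lawyer = 0.30        # 30 lawyers out of 100 in the sampled group
likelihood_ratio = 1.0     # P(description | lawyer) / P(description | engineer)

prior_odds = prior_lawyer / (1 - prior_lawyer)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 2))  # 0.3, not the 50-50 that subjects report
```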

The rest of the chapter contains several other experiments in which people egregiously ignore base rates and assign far too much predictive validity to unreliable evidence.

People make predictions (e.g. future GPA) more confidently when input (e.g. test scores) is highly consistent, but highly consistent data tends to result from highly intercorrelated variables, and you can predict more accurately given independent variables than intercorrelated ones — so high consistency increases confidence while decreasing accuracy. What's more, people predict extreme outcomes (dazzling success, abject failure) much more confidently than they predict middling ones, but they're also more likely to be wrong when predicting extreme outcomes (because intuitive predictions aren't nearly regressive enough), so people are most confident when they're most likely to be wrong. Kahneman and Tversky call this "the illusion of validity".
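A small simulation (my own construction, not from the chapter) makes the trade-off concrete: two predictors that share their error agree with each other almost perfectly, yet averaging them cancels nothing, while two predictors with independent errors disagree more but average to a more accurate prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_score = rng.normal(size=n)   # the quantity being predicted (e.g. future GPA)

# Intercorrelated inputs: both carry the *same* error (an extreme case).
shared_error = rng.normal(size=n)
a1, a2 = true_score + shared_error, true_score + shared_error

# Independent inputs: each carries its *own* error of the same size.
b1 = true_score + rng.normal(size=n)
b2 = true_score + rng.normal(size=n)

for label, (x1, x2) in [("correlated", (a1, a2)), ("independent", (b1, b2))]:
    disagreement = np.mean(np.abs(x1 - x2))                  # consistency of the inputs
    prediction = (x1 + x2) / 2
    rmse = np.sqrt(np.mean((prediction - true_score) ** 2))  # accuracy of the prediction
    print(f"{label:12s} disagreement={disagreement:.2f}  prediction error={rmse:.2f}")
# correlated   disagreement=0.00  prediction error=1.00  (consistent, less accurate)
# independent  disagreement=1.13  prediction error=0.71  (less consistent, more accurate)
```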

There's a bit about regression to the mean, but I intend to cover that in a separate post.


"Studies of representativeness", Maya Bar-Hillel

This paper attempts to determine what specific features cause a sample to be judged more or less representative, rather than relying on the black-box approach of asking subjects to assess representativeness themselves. It's pretty esoteric and difficult to summarize, so I won't get into it. There's a flowchart summarizing the findings. 


"Judgments of and by representativeness", Tversky and Kahneman

The first section of this chapter breaks down representativeness judgment into four cases:

1. "M is a class and X is a value of a variable defined in this class." e.g. A representative value for the age of first marriage.

2. "M is a class and X is an instance of that class." e.g. Robins are representative birds.

3. "M is a class and X is a subset of M." e.g. Psychology students are representative of all students.

4. "M is a (causal) system and X is a (possible) consequence." e.g. An act being representative of a person.

The second section is an examination of the effect of the representativeness heuristic on the evaluation of compound probabilities. This experiment has been written about on Less Wrong before, so I'll be brief: given two possible outcomes, one of which is highly representative (in sense 4) and one of which is highly non-representative, subjects rank their conjunction as being more probable than the non-representative outcome alone, even though a compound probability can be no greater than either of its components. (For example, "Reagan will provide federal support for unwed mothers and cut support to local governments" was rated more probable than "Reagan will provide federal support for unwed mothers.") Statistical training doesn't help.
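The arithmetic behind that rule: P(A and B) = P(A) × P(B given A), and since P(B given A) can't exceed 1, the conjunction can't exceed P(A). A toy check with invented numbers:

```python
# Invented numbers, purely for illustration of the conjunction rule.
p_a = 0.10            # P(support for unwed mothers)
p_b_given_a = 0.80    # P(cuts to local governments, given the above)
p_a_and_b = p_a * p_b_given_a
print(round(p_a_and_b, 2), p_a_and_b <= p_a)  # 0.08 True: the conjunction can never win
```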

~

This brings us up to page 100, and the end of the Representativeness section. Next post: "Causality and attribution".

Background material for reading Judgment Under Uncertainty?

2 Tesseract 25 June 2011 02:24AM

After seeing it constantly referenced in the Sequences and elsewhere, I've picked up Kahneman and Tversky's book/collection of papers Judgment Under Uncertainty: Heuristics and Biases. I was wondering if anyone here who's read it or knows the subject would recommend any prefatory material so that it makes more sense/is more meaningful.

Personal background: [Personal information deleted] From that and from this site, I'm passingly familiar with e.g. the representativeness heuristic and Bayesian probability, but I've never had to use them much in any academic setting.

Any advice before I delve into it?

Calculus textbook recommendation?

3 Tesseract 29 May 2011 06:20AM

I'm seeking suggestions for a calculus textbook that I could use to teach myself the subject.

Details:

[Personal details removed.] I know that most calculus textbooks are designed to be taught to classes, so I was wondering if anyone knew of a textbook specifically designed for autodidacts, or one that would be particularly useful for the purpose. (If you just know of a good general textbook, I'd be grateful to hear that as well.)

Thanks to anyone who gives a suggestion.

EDIT: Chose the Martin Gardner-edited version of Calculus Made Easy, accompanied by Khan Academy lessons.

Link: "When Science Goes Psychic"

3 Tesseract 08 January 2011 09:00AM

A major psychology journal is planning to publish a study that claims to present strong evidence for precognition. Naturally, this immediately stirred up a firestorm. There are a lot of scientific-process and philosophy-of-science issues involved, including replicability, peer review, Bayesian statistics, and degrees of scrutiny. The Flying Spaghetti Monster makes a guest appearance.

Original New York Times article on the study here.

And the Times asked a number of academics (including Douglas Hofstadter) to comment on the controversy. The discussion is here.

I, for one, defy the data.

Meta: Cleaning the front page

48 Tesseract 20 December 2010 04:45AM

All the meetup announcements get promoted, so the front page ends up full of 'em: half of it right now (5/10) is meetup announcements, and with the addition of the quote threads only 30% of the front page is currently 'content'. While meetup announcements are all well and good, it seems counterproductive to have them up there after the meetup date, as is the case with four out of the current five -- it just clutters up the front page even more without providing any benefit.

If post promotion is reversible, it would seem to be a simple step for one of the moderators to depromote each meetup announcement once it's taken place.

(Apologies if this is the wrong place to put an organizational suggestion; I didn't find any obvious better place.)

The Truth about Scotsmen, or: Dissolving Fallacies

27 Tesseract 05 December 2010 09:57PM

One unfortunate feature I’ve noticed in arguments between logically well-trained people and the untrained is a tendency for members of the former group to point out logical errors as if they were counterarguments. This is almost totally ineffective either in changing the mind of your opponent or in convincing neutral observers. There are two main reasons for this failure.

1. Pointing out fallacies is not the same thing as urging someone to reconsider their viewpoint.

Fallacies are problematic because they’re errors in the line of reasoning that one uses to arrive at or support a conclusion. In the same way that taking the wrong route to the movie theater is bad because you won’t get there, committing a fallacy is bad because you’ll be led to the wrong conclusions.

But all that isn't inherent in the word 'fallacy': the vast majority of human beings don't understand the statement "that's a fallacy" as "you seem to have been misled by this particular logical error – you should reevaluate your thought process and see if you arrive at the same conclusions without it." Rather, most people will regard it as an enemy attack, with the result that they will either reject the existence of the fallacy or simply ignore it. If, by some chance, they do acknowledge the error, they'll usually interpret it as "your argument for that conclusion is wrong – you should argue for the same conclusion in a different way."

If you’re actually trying to convince someone (as opposed to, say, arguing to appease the goddess Eris) by showing them that the chain of logic they base their current belief on is unsound, you have to say so explicitly. Otherwise saying “fallacy” is about as effective as just telling them that they’re wrong.

2. Pointing out the obvious logical errors that fallacies characterize often obscures the deeper errors that generate the fallacies.

Take as an example the No True Scotsman fallacy. In the canonical example, the Scotsman, having seen a report of a crime, claims that no Scotsman would do such a thing. When presented with evidence of just such a Scottish criminal, he qualifies his claim, saying that no true Scotsman would do such a thing.

The obvious response to such a statement is “Ah, but you’re committing the No True Scotsman fallacy! By excluding any Scotsman who would do such a thing from your reference class, you’re making your statement tautologically true!”

While this is a valid argument, it’s not an effective one. The Scotsman, rather than changing his beliefs about the inherent goodness of all Scots, is likely to just look at you sulkily. That’s because all you’ve done is deprive him of evidence for his belief, not make him disbelieve it – wiped out one of his squadrons, so to speak, rather than making him switch sides in the war. If you were actually trying to make him change his mind, you’d have to have a better model of how it works.

No one is legitimately entranced by a fallacy like No True Scotsman – it’s used strictly as rationalization, not as a faulty but appealing reason to create a belief. Therefore the reason for his belief must lie deeper. In this case, you can find it by looking at what counts for him as evidence. To the Scotsman, the crime committed by the Englishman is an indictment of the English national character, not just the action of an individual. Likewise, a similar crime committed by a Scotsman would be evidence against the goodness of the Scottish character. Since he already believes deeply in the goodness of the Scottish character, he has only two choices: acknowledge that he was wrong about a deeply felt belief, or decide that the criminal was not really Scottish.

The error at the deepest level is that the Scotsman possesses an unreasoned belief in the superiority of Scottish character, but it would be impractical at best to argue that point. The intermediate and more important error is that he views national character as monolithic – if Scottish character is better than English character, it must be better across all individuals – and therefore counts the actions of one individual as non-negligible evidence against the goodness of Scotland. If you're trying to convince him that yes, that criminal really can be a Scotsman, the best way to do so would not be to tell him that he's committing a fallacy, but to argue directly against the underlying rationale connecting the individual's crime and his nationalism. If national character is determined by, say, the ratio of good men to bad men in each nation, then bad men can exist in both England and Scotland without impinging on Scotland's superiority – and suddenly there's no reason for the fallacy at all. You've disproved his belief and changed his mind, without the word 'fallacy' once passing your lips.