There are discussions about this everywhere, Cthulhu eat us first.
Ideas for heuristics and biases research topic?
Hey Less Wrong,
I'm currently taking a cognitive psychology class, and will be designing and conducting a research project in the field — and I'd like to do it on human judgment, specifically heuristics and biases. I'm currently doing preliminary research to come up with a more specific topic to base my project on, and I figured Less Wrong would be the place to come to find questions about flawed human judgment. So: any ideas?
(I'll probably be using these ideas mostly as guidelines for forming my research question, since I doubt it would be academically honest to take them outright. The study will probably take the form of a questionnaire or online survey, but experimental manipulation is certainly possible and it might be possible to make use of other psych department resources.)
Is that supposed to be the Lovecraftian variation on 'God help us'?
To love truth for truth's sake is the principal part of human perfection in this world, and the seed-plot of all other virtues.
Locke
If you want to live in a nicer world, you need good, unbiased science to tell you about the actual wellsprings of human behavior. You do not need a viewpoint that sounds comforting but is wrong, because that could lead you to create ineffective interventions. The question is not what sounds good to us but what actually causes humans to do the things they do.
Douglas Kenrick
Evolution is no threat to religion. Natural selection, explaining and predicting evolution, is a threat to religion.
Indeed, one can usefully define any belief system as quasi-religious if it finds natural selection threatening. If that belief system piously proclaims its admiration for Darwin while evasively burying his ideas, attributing to him common descent rather than the explanation of common descent, then that belief system is religious, or serves the same functions and has the same problems as religion.
The trouble is that natural selection implies not the lovely harmonious nature of the environmentalists and Gaea worshipers, but a ruthless and bloody nature, red in tooth and claw, that is apt to be markedly improved by a bit of clear cutting, a few extinctions, and a couple of genocides, and of course converting the swamps into sharply differentiated dry land with few trees, and lakes with decent fishing, by massive bulldozing. And a few more genocides. Recall Darwin's cheerful comments about extinction and genocide. It is all progress. Well, if not all progress, on average it will be progress.
The idea that destroying the environment will make the remaining species "better" by making sure that only the "fittest" survive betrays a near-total misunderstanding of evolution. Evolution is just the name we give to the fact that organisms (or, more precisely, genes) which survive and reproduce effectively in a given set of conditions become more frequent over time. If you clear-cut the forest, you're not eliminating "weak" species and making room for the "strong" — you're getting rid of species that were well-adapted to the forest and increasing the numbers of whatever organisms can survive in the resulting waste.
Seconded. Heck, even the Catholic Church says there is no conflict.
Today, the Church's unofficial position is an example of theistic evolution, also known as evolutionary creation, stating that faith and scientific findings regarding human evolution are not in conflict, though humans are regarded as a special creation, and that the existence of God is required to explain both monogenism and the spiritual component of human origins. Moreover, the Church teaches that the process of evolution is a planned and purpose-driven natural process, actively guided by God.
I think that if you understand how evolution works on a really intuitive level — how blind it is — it's very difficult to believe both in human evolution and a guiding divinity. "Genes which promote their own replication become more common over time" is not a principle which admits of purpose. Vaguer understandings of evolution's actual mechanism probably contribute to the apparent reasonableness of "theistic evolution".
I don't think you're correct. Rare is the top-level post that beats 100 karma; I can do that with ten or so insightful comments that take much less time to compose.
100 upvotes for a top-level post is 1000 karma, not 100 — upvotes for top-level posts are worth ten times more karma than upvotes for discussion and comments. This makes posts disproportionate sources of karma, even given the greater effort involved in writing them.
Personally I'd prefer if the limit was only on downvotes. Sometimes I see a really good conversation and want to upvote 5 comments in quick succession.
Sometimes I see a really bad series of comments by the same person and want to downvote 5 times in quick succession.
Both of these suggestions would be incredibly overbearing solutions to a relatively minor problem.
Judgment Under Uncertainty summaries, Part 1: Representativeness
Judgment Under Uncertainty: Heuristics and Biases is one of the foundational works on the flaws of human reasoning, and as such gets cited a lot on Less Wrong — but it's also rather long and esoteric, which makes it inaccessible to most Less Wrong users. Over the next few months, I'm going to attempt to distill the essence of the studies that make up the collection, in an attempt to convey the many interesting bits without forcing you to slog through the 500 or so pages of the volume itself. This post summarizes sections I (Introduction) and II (Representativeness).
By way of background: Judgment Under Uncertainty is a collection of 35 scientific papers and articles on how people make decisions with limited information, edited by Daniel Kahneman, Amos Tversky, and Paul Slovic. Kahneman and Tversky are the most recognizable figures in the area and the names most associated with the book, but only 12 of the studies are their work. It was first published in 1982 (my version is from 1986), and most studies were performed in the '70s — so note that this is not up-to-date research, and I can't say for sure what the current scientific consensus on the topic is. Judgment Under Uncertainty focuses on the divergence of human intuition from optimal reasoning, so it uses a lot of statistics and probability to define what's optimal. The details are actually pretty fascinating if you have the time and inclination (and it's also something of an education in study design and statistics), and this series of posts by no means replaces the book, but I intend to provide something of a shorthand version.
That said, on to the summaries! Title of the chapter/paper in quotes, sections organized as in the book and in bold. (Incomplete preview here, if you want to follow along.)
Introduction
"Judgment Under Uncertainty: Heuristics and Biases", Tversky and Kahneman (1974)
This is the most important paper in the book, and it's short and publicly available (PDF), so I'd encourage you to just go read it now. It reviews the representativeness and availability heuristics and the various errors in reasoning they produce, and introduces the idea of anchoring. Since it reviews some of the material contained in Judgment Under Uncertainty, there's overlap between the material it covers and the material I'm going to cover in this and the other posts. As it's already a boiled-down version of the heuristics literature, I won't attempt to summarize it here.
Representativeness
"Belief in the law of small numbers", Tversky and Kahneman, 1971 (PDF)
People expect that samples will have much less variability and be much more representative of the population than they actually are. This manifests in expecting that two random samples will be very similar to each other and that large observations in one direction will be canceled out by large observations in the other rather than just being diluted. Tversky and Kahneman call this the "law of small numbers" — the belief that the law of large numbers applies to small samples as well.
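The gap between intuition and reality here is easy to see by simulation. The sketch below (my own illustration, using made-up IQ-like scores rather than anything from the paper) draws repeated samples of 10 and of 1,000 from the same population; the small-sample means swing around far more than the "law of small numbers" would predict:

```python
import random
import statistics

random.seed(1)

# A large "population" of IQ-like scores: mean 100, sd 15 (illustrative values)
pop = [random.gauss(100, 15) for _ in range(100_000)]

# Means of repeated small samples bounce around; means of large samples barely move
for n in (10, 1000):
    means = [statistics.mean(random.sample(pop, n)) for _ in range(5)]
    print(n, [round(m, 1) for m in means])
```

The standard error of the mean shrinks only as the square root of the sample size, which is exactly the scaling intuition fails to apply.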
One consequence of this in science is that failing to account for variability means that studies will be badly underpowered. Tversky and Kahneman surveyed psychologists on the probability that a significant result from an experiment on 20 subjects would be confirmed by a replication using 10 subjects — most estimated around 85%, when it was actually around 48%. (Incidentally, a study they cite reviewing published results in psychology estimates that the power was .18 for small effects and .48 for effects of medium size.) The gist of this is that one might very well find a real significant result, attempt to replicate it using a smaller sample on the belief that the small sample will be very representative of the population, and miss entirely due to lack of statistical power. Worse, when given a hypothetical case of a student who ran such a replication and got a nonsignificant result, many of those surveyed suggested he should try to find an explanation for the difference between the two groups — when it was due entirely to random variation.
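The power arithmetic can be sketched with a quick Monte Carlo. This is my own simplification, not Tversky and Kahneman's exact setup: I assume a true effect of 0.5 standard deviations and a one-tailed z-test with known variance.

```python
import math
import random

random.seed(0)

ALPHA_Z = 1.645  # one-tailed 5% critical value for a z-test

def significant(sample):
    # One-sample z-test against a mean of zero, sd known to be 1 (a simplification)
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return z > ALPHA_Z

def replication_rate(true_d=0.5, n_orig=20, n_rep=10, trials=20_000):
    # P(replication with n_rep is significant | original with n_orig was significant)
    hits = reps = 0
    for _ in range(trials):
        if significant([random.gauss(true_d, 1) for _ in range(n_orig)]):
            hits += 1
            if significant([random.gauss(true_d, 1) for _ in range(n_rep)]):
                reps += 1
    return reps / hits

print(round(replication_rate(), 2))  # roughly 0.5, nowhere near the ~85% the psychologists guessed
```

The replication fails about half the time even though the effect is perfectly real, purely because ten subjects give so little power.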
"Subjective probability: A judgment of representativeness", Kahneman and Tversky, 1972 (PDF)
People judge the likelihood of events based on representativeness rather than actual probability. Representativeness is a bit hard to pin down, but involves reflecting the characteristics of the population and the process that generated it — so the likelihood of six children having the gender order B G B B B B is judged less than of them having the order G B G B B G (because it doesn't reflect the proportion of boys in the population) and likewise for B B B G G G versus G B B G B G (because it doesn't reflect the randomness of gender determination).
People also completely ignore the effect of sample size on the probability of an outcome (e.g. the likelihood of the proportion of male babies being between .55 and .65 for N births), because it doesn't affect the representativeness of that outcome. To repeat: in subjects' judgments, sample size has no effect at all. People expect the probability in the example above to be around 15% whether N=10 or N=1000, when it's actually ~20% for N=10 and essentially zero for N=1000. (The graphs on pages 42-43 of the PDF can get this across better than I can — the black line is the predicted probability for all sample sizes, and the bars are the real probability for each.)
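Since these are exact binomial calculations, they're easy to check. The snippet below (my own, using the .55-.65 band from the example and assuming boys and girls are equally likely) computes them directly:

```python
from math import comb

def p_proportion_between(n, lo=0.55, hi=0.65, p=0.5):
    """Exact binomial P(lo <= boys/n <= hi) for n births."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if lo <= k / n <= hi)

print(p_proportion_between(10))    # exactly 210/1024, about 0.205: only k = 6 qualifies
print(p_proportion_between(1000))  # about 0.0009: essentially zero
```

With a thousand births the proportion of boys is pinned very close to one half, so a deviation as large as .55 almost never happens; with ten births it happens a fifth of the time.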
"On the psychology of prediction", Kahneman and Tversky, 1973
Judging by representativeness makes people completely ignore base rates (i.e. prior probabilities). Subjects asked to judge (on the basis of a personality sketch) either how similar someone was to the typical student in a graduate program or how likely they were to be a student in that program produced identical results (correlation of .97), with no regard whatsoever for the judged prior probability of a graduate student being in a given area (correlation of -.65) — which would be permissible if they thought the sketches were such strong evidence that they overwhelmed existing information, but when asked, subjects expected predictions based on personality sketches to be accurate only 23% of the time. In a followup, Kahneman and Tversky manipulated beliefs about how predictive the evidence was (telling one group that such predictions were accurate 55% of the time and the other 27%) and found that while subjects were slightly less confident in the low-predictiveness group (though they were still 56% sure of being right), they ignored base rates just as completely in either condition. In this and in several other experiments in this chapter, people fail to be regressive in their predictions — that is, the weight that they assign to prior probability versus new evidence is unaffected by the expected accuracy of the new evidence.
An interesting specific point with regard to new information replacing rather than supplementing prior probabilities: while people can make judgments about base rates in the abstract, completely useless specific information can cause this ability to disappear. e.g.: If asked for the probability that an individual randomly selected from a group of 70 engineers and 30 lawyers is a lawyer, they'll say 30%, but if given utterly useless information about a specific person —
Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.
— they'll go back to 50-50.
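In Bayesian terms, worthless evidence has a likelihood ratio of 1 and should leave the base rate untouched. A minimal sketch of the correct calculation (the 4.0 likelihood ratio in the second call is a hypothetical value for comparison, not from the study):

```python
def posterior(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The "Dick" sketch fits engineers and lawyers equally well, so its
# likelihood ratio is 1 and the base rate should stand:
print(posterior(0.30, 1.0))  # ~0.30, not the 50-50 subjects report
# Genuinely diagnostic evidence (say, 4x likelier for a lawyer) does move it:
print(posterior(0.30, 4.0))
```

Subjects behave as if receiving *any* individuating information, however empty, licenses throwing the prior away.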
The rest of the chapter contains several other experiments in which people egregiously ignore base rates and assign far too much predictive validity to unreliable evidence.
People make predictions (e.g. future GPA) more confidently when input (e.g. test scores) is highly consistent, but highly consistent data tends to result from highly intercorrelated variables, and you can predict more accurately given independent variables than intercorrelated ones — so high consistency increases confidence while decreasing accuracy. What's more, people predict extreme outcomes (dazzling success, abject failure) much more confidently than they predict middling ones, but they're also more likely to be wrong when predicting extreme outcomes (because intuitive predictions aren't nearly regressive enough), so people are most confident when they're most likely to be wrong. Kahneman and Tversky call this "the illusion of validity".
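The point about intercorrelated predictors follows from a standard multiple-regression identity, not anything specific to the chapter: for two standardized predictors that each correlate r with the criterion and rho with each other, the squared multiple correlation is 2r²/(1 + rho). The validities below are illustrative numbers of my own:

```python
def r_squared(r, rho):
    # Squared multiple correlation for two standardized predictors that each
    # correlate r with the criterion and rho with each other
    return 2 * r**2 / (1 + rho)

# Each predictor alone correlates 0.4 with the outcome:
print(round(r_squared(0.4, 0.0), 2))  # 0.32 when the predictors are independent
print(round(r_squared(0.4, 0.8), 2))  # 0.18 when they are highly intercorrelated
```

Highly consistent inputs usually mean a high rho, so exactly the cases that feel most trustworthy carry the least total information.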
There's a bit about regression to the mean, but I intend to cover that in a separate post.
"Studies of representativeness", Maya Bar-Hillel
This paper attempts to determine what specific features cause a sample to be judged more or less representative, rather than relying on the black-box approach of asking subjects to assess representativeness themselves. It's pretty esoteric and difficult to summarize, so I won't get into it. There's a flowchart summarizing the findings.
"Judgments of and by representativeness", Tversky and Kahneman
The first section of this chapter breaks down representativeness judgment into four cases:
1. "M is a class and X is a value of a variable defined in this class." e.g. A representative value for the age of first marriage.
2. "M is a class and X is an instance of that class." e.g. Robins are representative birds.
3. "M is a class and X is a subset of M." e.g. Psychology students are representative of all students.
4. "M is a (causal) system and X is a (possible) consequence." e.g. An act being representative of a person.
The second section is an examination of the effect of the representativeness heuristic on the evaluation of compound probabilities. This experiment has been written about on Less Wrong before, so I'll be brief: given two possible outcomes, one of which is highly representative (in sense 4) and one of which is highly non-representative, subjects rank their conjunction as being more probable than the non-representative outcome alone, even though any compound probability must be less than either of its components. (For example, "Reagan will provide federal support for unwed mothers and cut support to local governments" was rated more probable than "Reagan will provide federal support for unwed mothers.") Statistical training doesn't help.
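The conjunction rule itself is just arithmetic. With hypothetical numbers for the Reagan example (the probabilities below are made up for illustration):

```python
# Hypothetical probabilities for the two Reagan outcomes
p_support = 0.4            # P(supports unwed mothers): unrepresentative, judged unlikely
p_cut_given_support = 0.9  # P(cuts local funding | supports): can be as high as you like
p_both = p_support * p_cut_given_support

assert p_both <= p_support  # a conjunction can never beat its least likely component
print(p_both)
```

No matter how representative the added detail is, multiplying by a conditional probability can only shrink the total, which is exactly what subjects' rankings violate.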
~
This brings us up to page 100, and the end of the Representativeness section. Next post: "Causality and attribution".
Rationality drugs. Many nootropics can increase cognitive capacity, which according to Stanovich's picture of the cognitive science of rationality, should help with performance on some rationality measures. However, good performance on many rationality measures requires not just cognitive capacity but also cognitive reflectiveness: the disposition to choose to think carefully about something and avoid bias. So: Are there drugs that increase cognitive reflectiveness / "need for cognition"?
Debiasing. I'm developing a huge, fully-referenced table of (1) thinking errors, (2) the normative models they violate, (3) their suspected causes, (4) rationality skills that can meliorate them, and (5) rationality exercises that can be used to develop those rationality skills. Filling out the whole thing is of course taking a while, and any help would be appreciated. A few places where I know there's literature but I haven't had time to summarize it yet include: how to debias framing effects, how to debias base rate neglect, and how to debias confirmation bias. (But I have, for example, already summarized everything on how to debias the planning fallacy.)
Ah, I think you misunderstood me (on reflection, I wasn't very clear) — I'm doing an experiment, not a research project in the sense of looking over the existing literature.
(For the record, I decided on conducting something along the lines of the studies mentioned in this post to look at how distraction influences retention of false information.)