
Beware of WEIRD psychological samples

38 Post author: ciphergoth 13 September 2009 11:28AM

Most of the research on cognitive biases and other psychological phenomena that we draw on here is based on samples of students at US universities. To what extent are we uncovering human universals, and to what extent facts about these WEIRD (Western, Educated, Industrialized, Rich, and Democratic) sample sources? A paper in press at Behavioral and Brain Sciences reviews the evidence from studies that reach outside this group and highlights the many instances in which US students are outliers in crucial studies in behavioural economics.

Epiphenom: How normal is WEIRD?

Henrich, J., Heine, S. J., & Norenzayan, A. (in press). The Weirdest people in the world? (PDF) Behavioral and Brain Sciences.

Broad claims about human psychology and behavior based on narrow samples from Western societies are regularly published in leading journals. Are such species-generalizing claims justified? This review suggests not only that substantial variability in experimental results emerges across populations in basic domains, but that standard subjects are in fact rather unusual compared with the rest of the species - frequent outliers. The domains reviewed include visual perception, fairness, categorization, spatial cognition, memory, moral reasoning and self-concepts. This review (1) indicates caution in addressing questions of human nature based on this thin slice of humanity, and (2) suggests that understanding human psychology will require tapping broader subject pools. We close by proposing ways to address these challenges.

Comments (28)

Comment author: taw 13 September 2009 03:50:26PM 20 points [-]

It's not necessarily a bad thing that 67% of studies are done on psychology undergrads. 90% or so of medical studies are done on mice. If you find something big, you should go outside your cheapest testing group (be it mice or psychology undergrads), but if something fails to produce interesting results even on them, you just saved yourself a lot of money and effort - and failures will be much more common than interesting finds.

Comment author: gjm 13 September 2009 12:56:56PM 7 points [-]

Original paper and Epiphenom post both say "Western, Educated, Industrialized, Rich, and Democratic" where you have "White, Educated, Intelligent, Rich, and Democratic". ciphergoth, if the change was deliberate then I'd be interested to know why; if not, I'd be interested in any speculations you have about why :-).

Comment author: ciphergoth 13 September 2009 01:35:30PM 1 point [-]

Fascinating. Fixed - thanks!

Comment author: gjm 14 September 2009 01:10:15AM 4 points [-]

Possible partial explanation: the paper uses WEIRD as a description of societies; if you were thinking of individuals then "Western" and "Industrialized" would be odd words to use, "White" and "Intelligent" less so. Possible diagnostic question: what meaning of "Democratic" was foremost in your mind? (I initially read it as a US-centric description of political stance.)

Comment author: ciphergoth 14 September 2009 06:17:26PM 1 point [-]

I had in mind an image of a preppy-looking US student doing a psychology test, so yes, I was imagining individuals rather than societies. I read Democratic as a description of the society but the reading of it as an individual political leaning did cross my mind.

Comment author: Pzeffan 15 September 2009 10:40:26PM 5 points [-]

Isn't it possible that some people just want to make a name for themselves so badly that they will purposefully search for an opposing or radical alternative solution? Everything is so competitive in the western world that it wouldn't surprise me if the dynamics of problem solving are misused for a popularity contest rather than getting to the core of a real problem. I often wonder if researchers latch on to an interesting subject and inadvertently shift their focus from something of real value to simply having something important-sounding to say... just so it appears as if they are being original and creative.

There are many pressures to perform at any level of the spotlight, and those aspiring to join the club, such as US university students, would naturally be a little more vulnerable to having their intent seeded with a burning desire to impress. Likewise, if one were to calculate how many grad students are doing research in the humanities, social sciences, psychology, etc., it immediately becomes obvious that the chance of standing out in the crowd would greatly increase were one to produce some fresh content. I just worry that the content we are being fed is lacking in substance.

Comment author: Larks 13 September 2009 05:12:35PM 11 points [-]

Suggesting there's a market for repeating experiments (cheaply, as well) in rural India? This looks like it'd yield some easy research opportunities.

Comment author: taw 13 September 2009 07:02:03PM 11 points [-]

There are also language problems here - most psychological "experiments" consist of giving people questionnaires, followed by mining the resulting data. And questionnaires are very language- and culture-dependent.

I know it's just anecdotal evidence, but I know someone who tried to translate a standard English questionnaire (mindfulness or some such) into Polish for their MSc thesis, testing both versions on students who majored in English and so were supposedly fluent in both languages. All the usual controls were used, like randomizing the order of the questionnaires and using multiple independent translations. In spite of all that, the correlations between answers to the same exact question in Polish and English were less than impressive (much lower than the usual test-retest correlation), and for many questions every single translation yielded a correlation not statistically distinguishable from zero.
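The comparison described above - per-item correlation between the two language versions, benchmarked against what a simple retest would give - can be sketched roughly as follows. This is a minimal illustration on simulated data, not the actual thesis analysis: the sample size, noise levels, and the assumption that a poor translation both weakens and dilutes the measured trait are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson(x, y):
    """Pearson correlation between two answer vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical setup: 40 bilingual students answer the same item in
# English and in Polish (Likert scale 1-5). A faithful translation
# should correlate with the English item about as well as a retest.
true_attitude = rng.normal(0, 1, 40)
english = np.clip(np.round(3 + true_attitude + rng.normal(0, 0.5, 40)), 1, 5)
retest  = np.clip(np.round(3 + true_attitude + rng.normal(0, 0.5, 40)), 1, 5)
# A poor translation partly measures something else, adding noise:
polish  = np.clip(np.round(3 + 0.4 * true_attitude + rng.normal(0, 1.0, 40)), 1, 5)

r_retest = pearson(english, retest)        # test-retest benchmark
r_translation = pearson(english, polish)   # cross-language correlation

print(f"test-retest r = {r_retest:.2f}")
print(f"English-Polish r = {r_translation:.2f}")
```

Under these assumptions the cross-language correlation comes out well below the test-retest benchmark, which is the pattern the comment reports.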

I think these problems would have made a far better thesis than the one that actually got written, but as a rule failures don't get written up or published.

Comment author: billswift 15 September 2009 03:41:51AM 4 points [-]

Even worse than language difficulties, I would think, would be large differences in cultural framing of questions. Every culture brings a different set of background issues to the types of questions asked in many psychological studies. The problem has mostly been solved for IQ type tests, but, even without considering the amount of work involved in developing the cross-cultural IQ tests, framing would be a bigger problem for personality and other "softer" tests. (I have, but have only leafed through, Jensen's "Bias in Mental Testing"; I can already tell it's going to take a lot of work, and it's a bit dated, so I've been putting it off since it's only a peripheral interest.)

Comment author: Johnicholas 15 September 2009 12:19:07PM 1 point [-]

What evidence do you have that "the problem has mostly been solved for IQ type tests"?

Sorry, that sounded challenging, and it isn't meant to be. Would you please point me to any books, papers, and so on?

Comment author: billswift 15 September 2009 06:48:49PM 4 points [-]

"Arthur Jensen Replies to Stephen Jay Gould: THE DEBUNKING OF SCIENTIFIC FOSSILS AND STRAW PERSONS", http://www.debunker.com/texts/jensen.html, is a good place to start. It's a detailed criticism of Gould's "The Mismeasure of Man" by one of the best psychometricians around. It's got a good bibliography, but is rather dated, being from 1982. No matter what you may think of his politics, Steve Sailer also has a lot of good, and more recent, information in his essays on IQ, especially on international comparisons, on his website, www.isteve.com. Richard Lynn's books are supposed to be very good also, but I haven't read them (too many interests, too little time and money).

Comment author: Johnicholas 15 September 2009 07:18:20PM *  -1 points [-]

Right now, my best source for "answers to Arthur Jensen" is Cosma Shalizi. My understanding is that performance on IQ tests is mostly related to culture - even though that was (to some extent) Gould's position.

Comment author: Douglas_Knight 16 September 2009 03:28:09AM 2 points [-]

performance on IQ tests is mostly related to culture

Shalizi simply doesn't say that.

There are two things you could mean by it. One is that some cultures make you smart. The other is that the IQ test mostly screens for culture and not useful abilities. It is certainly true that culture affects the difference between performance on Raven's matrices and other tests. In particular, the Flynn effect is stronger for Raven's matrices than other tests. Also, sub-Saharan Africans do dramatically worse on RM than on other estimates, where they're closer to African-Americans (who do slightly worse on RM than on common tests). In applying this information to the two possibilities about culture, you'd have to decide which testing approach you liked better, which would depend on what you're trying to measure. "g" is not the correct answer to this question.

Comment author: Douglas_Knight 16 September 2009 03:04:34AM 0 points [-]

The very title "debunking of scientific fossils and straw persons" makes it sound like it has limited use. Johnicholas asked for positive statements, but a debunking is purely negative. Just because Gould lied about X doesn't make his position wrong.

I suspected from your first comment that all you meant was that people who attempt to prove cultural bias in IQ tests have failed. That is certainly true, with some surprising findings, like that the American black-white gap is larger on questions that are, on the face of them, more culturally neutral. But relying on an opposition you don't trust to do the research is a highly biased search strategy. It is not a great political victory to say that Raven's matrices are culturally biased, so few say it, but that doesn't make it false.

Comment author: Alicorn 13 September 2009 05:34:49PM 3 points [-]

Yes, but then you have to send the researchers to India. (Unless you also recruit Indian psychologists who already live there to do your replications.)

Comment author: Larks 13 September 2009 11:21:49PM 3 points [-]

Gap year students! They can dig some irrigation ditches while they're there.

Critically, they don't need to devise their own experiments; they're effectively doing the leg-work for more senior researchers back in the UK/US, and also making use of the language & cultural skills they've learnt for their gap year/volunteering. Also, the data they gather can be used both to judge the hypothesis the test was originally investigating, and reveal differences between cultures and nations.

Comment author: Alicorn 13 September 2009 11:37:46PM 3 points [-]

Ooh, this could be a scholarship thing. "Study Abroad And Do Replication Studies Fund". Give 'em a grand apiece, no essay required, I bet it would work.

Comment author: gwern 14 September 2009 10:05:06AM 2 points [-]

I'd be worried about trusting the students. It's like giving them a test and your answer key, and telling them 'hey, we did our best in getting the right answers, but please work through all the problems again and see whether we made any mistakes'. This sort of thing only works if you don't get too much garbage in your replications.

The students might be honest enough to actually do all the work professionally, but I'm not sure I'd trust American students (a summer/semester isn't that long, and if they're in India, there are things to do there that could fill a lifetime; the temptation to just fudge up some data and go do all those awesome things would be tremendous), much less Indian ones.

Comment author: Douglas_Knight 14 September 2009 11:49:59AM 4 points [-]

This sort of thing only works if you don't get too much garbage in your replications.

You have way too much trust in the professors. Just a few students naive enough to do what they're supposed to would be an improvement on the status quo.

Comment author: gwern 08 September 2011 10:50:08PM 4 points [-]

The problem is, we already have replications being done by Indian and Chinese scientists and... they're not very good. Here's one: "Local Literature Bias in Genetic Epidemiology: An Empirical Evaluation of the Chinese Literature", 2005:

"We targeted 13 gene-disease associations, each already assessed by meta-analyses, including at least 15 non-Chinese studies. We searched the Chinese Journal Full-Text Database for additional Chinese studies on the same topics. We identified 161 Chinese studies on 12 of these gene-disease associations; only 20 were PubMed-indexed (seven English full-text). Many studies (14–35 per topic) were available for six topics, covering diseases common in China. With one exception, the first Chinese study appeared with a time lag (2–21 y) after the first non-Chinese study on the topic. Chinese studies showed significantly more prominent genetic effects than non-Chinese studies, and 48% were statistically significant per se, despite their smaller sample size (median sample size 146 versus 268, p < 0.001). The largest genetic effects were often seen in PubMed-indexed Chinese studies (65% statistically significant per se). Non-Chinese studies of Asian-descent populations (27% significant per se) also tended to show somewhat more prominent genetic effects than studies of non-Asian descent (17% significant per se)."
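The headline contrast in the quoted abstract - 48% of Chinese studies significant despite smaller samples, versus 17% of non-Asian-descent studies - is the kind of gap a two-proportion z-test makes concrete. A quick sketch, using the percentages and the 161-study count from the quote; the count of 200 for the non-Asian-descent group is an illustrative assumption, since the abstract doesn't give it:

```python
from math import sqrt, erf

def two_proportion_z(k1, n1, k2, n2):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# From the quote: 161 Chinese studies, 48% significant.
# 200 non-Asian-descent studies (assumed count), 17% significant.
z, p = two_proportion_z(round(0.48 * 161), 161, round(0.17 * 200), 200)
print(f"z = {z:.2f}, p = {p:.4g}")
```

Under that assumed denominator the gap is many standard errors wide, which is why the paper can call the Chinese literature's effect sizes "significantly more prominent" rather than a sampling fluke.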

Comment author: Larks 14 September 2009 07:24:56PM 2 points [-]

The huge amount of data that could be gathered should allow for checking; data that is both different from what westerners would expect, and consistent over several independent students, is likely to be accurate. Or at least, not inaccurate because of lazy students.

Comment author: Mitchell_Porter 14 September 2009 12:51:30PM 10 points [-]

This may be the place to make an observation which is still growing in me, so I can only state it in a very preliminary way for now. The great historical precursor is to be found in the psychoanalytic subculture which sprung up after Freud, with all its competing schools. Two facts stand out: these people believed that they understood the human mind, and their theories shaped their interactions with each other. (As when one school's rejection of the theories of another was itself explained psychoanalytically.)

There are new conceptions of human nature springing up from genetics, neuroscience, and cognitive science, and these conceptions are spreading into the culture at large. The most prominent vector for the spread of these ideas is the mass media. But enthusiast online communities like this one are going to be far more demonstrative of the social and psychological effects which result from taking these new ideas utterly to heart.

Two other examples come to mind. There is a sub-blogosphere focused on a particular conception of male and female psychology, centered on the blogger Roissy, which owes a lot to evolutionary psychology. And there is another sub-blogosphere focused on a new racial politics, centered on the blogger Steve Sailer, which owes a lot to human genomics. Together with the bias/rationality focus found here and at Overcoming Bias, these blog communities are not just an exercise in trying to assimilate new discoveries and live their implications, they are themselves little sociological case studies in the impact of science on human subjectivity, individually and collectively.

Now beyond sounding generic warnings about the lesson of history, that people have repeatedly thought that they had things figured out, when they didn't; and reminding everyone of the skeptical abyss which exists beneath almost all assertions of what is so; I do not really have a way to inoculate you against the errors that come from embracing your favorite paradigm, whatever it is. This post gave me an opportunity to sound the alarm only because it exposes just one of the ways whereby that which is taken to be new knowledge, hereabouts, may not be knowledge at all. I suppose one principle is to keep an eye on whatever part of the culture you think epitomizes the old beliefs, the old way of thinking that has been superseded, because if anyone will escape whatever pathologies accompany the embrace of the new, if anyone knows things that you cannot believe to be true because of what you "know", it's going to be Them, the Opposition, whoever they may be. And specifically with respect to evolutionary psychology, just to throw an opposite perspective into the ring, I'm going to mention Jeremy Griffith, a totally obscure Australian thinker who puts the biohistorical perspective on human cognition and human values to a completely different use than anyone else. He has his own problems as a thinker, but perhaps he can be a corrective to some of the excesses of the ev-psych outlook.

In the end, though, I guess we have no choice but to endure whatever downsides accompany the outlooks we choose, if we really do insist on holding those outlooks. So, pointless best wishes to us all, as we suffer the travails of inevitable cause and effect. :-)

Comment author: HughRistik 14 September 2009 10:36:09PM *  7 points [-]

There is a sub-blogosphere focused on a particular conception of male and female psychology, centered on the blogger Roissy, which owes a lot to evolutionary psychology.

I'm a big fan of evolutionary psychology, including practical applications of it. Roissy makes a good start attempting to apply it, but he falls prey to major ideological errors, overgeneralization, and oversimplification. I see no evidence that he has read more than a few popular books on the subject. He has made the discovery that even naive applications of evolutionary psychology can be incredibly powerful in the practical world, then falls into the naive realist pit and assumes that his theories are true just because they work better than the conventional alternatives. Furthermore, he fails at ethics really, really badly. I'm being kinda vague, but I'll go into further detail upon request.

Evolutionary psychology is great. Applied evolutionary psychology is great. Roissy just isn't doing it right.

Comment author: bogus 27 September 2009 08:43:18PM *  3 points [-]

Given the emerging influence of 'game' bloggers such as Roissy and their often disappointing interaction with so-called "men's rights" activism (see e.g. [1] and resulting comments, [2]), I think it would be useful if you did take the time to write an extended critique of them. Are you still affiliated with feministcritics.org?

Comment author: HughRistik 28 September 2009 06:55:46PM 2 points [-]

I am indeed planning such an extended critique. I'm just deciding whether it would make sense to post it here, or FC.org, or somewhere else entirely.

And yes, I'm still one of the bloggers there, though I am sort of on hiatus.

Comment author: DonGeddis 17 September 2009 01:52:14AM -1 points [-]

It's hard to discuss the subject without the debate becoming emotional, but let me just say that Roissy's goals are to be an entertaining writer, to succeed at picking up women, and to debunk false commonsense notions of dating through real-life experience.

He's not trying to submit a peer-reviewed paper on evo psych to a rationality audience. To judge him on that basis is to kind of miss the point.

(Ethics is a whole separate question. But then, Stalin was an atheist too, wasn't he?)

Comment author: David_Gerard 07 June 2011 09:56:18AM *  -1 points [-]

then falls into the naive realist pit and assumes that his theories are true just because they work better than the conventional alternatives.

Upvoted for this. Now to get it down to 140 characters ...

Edit: Posted. Suggestions for how to cut it down enough to add credit welcomed.

Comment author: MichaelBishop 14 September 2009 01:46:54PM 0 points [-]

Economists Steve Levitt and John List have a nice paper about generalizing from social/cognitive science laboratory experiments to the real world. They even write down a model. http://pricetheory.uchicago.edu/levitt/Papers/jep%20revision%20Levitt%20&%20List.pdf