Related: Social Psychology & Priming: Art Wears Off

I recommend reading the piece, but below are some excerpts and commentary.

Power of Suggestion

By Tom Bartlett

...

Along with personal upheaval, including a lengthy child-custody battle, [Yale social psychologist John Bargh] has coped with what amounts to an assault on his life's work, the research that pushed him into prominence, the studies that Malcolm Gladwell called "fascinating" and Daniel Kahneman deemed "classic."

What was once widely praised is now being pilloried in some quarters as emblematic of the shoddiness and shallowness of social psychology. When Bargh responded to one such salvo with a couple of sarcastic blog posts, he was ridiculed as going on a "one-man rampage." He took the posts down and regrets writing them, but his frustration and sadness at how he's been treated remain.

Psychology may be simultaneously at the highest and lowest point in its history. Right now its niftiest findings are routinely simplified and repackaged for a mass audience; if you wish to publish a best seller sans bloodsucking or light bondage, you would be well advised to match a few dozen psychological papers with relatable anecdotes and a grabby, one-word title. That isn't true across the board. ... But a social psychologist with a sexy theory has star potential. In the last decade or so, researchers have made astonishing discoveries about the role of consciousness, the reasons for human behavior, the motivations for why we do what we do. This stuff is anything but incremental.

At the same time, psychology has been beset with scandal and doubt. Formerly high-flying researchers like Diederik Stapel, Marc Hauser, and Dirk Smeesters saw their careers implode after allegations that they had cooked their results and managed to slip them past the supposedly watchful eyes of peer reviewers.

Psychology isn't the only field with fakers, but it has its share. Plus there's the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures, making a fluke look like a breakthrough. Fairly or not, social psychologists are perceived to be less rigorous in their methods, generally not replicating their own or one another's work, instead pressing on toward the next headline-making outcome.
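(An aside from me, not from Bartlett: the file-drawer mechanism is easy to see in a toy simulation. All the numbers below are made up for illustration; the point is only that when underpowered studies of a tiny effect are run in bulk and only the "successes" leave the file drawer, the published record shows an effect several times larger than the real one.)

```python
# Toy file-drawer simulation (illustrative numbers only, not from the article).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.1   # assumed tiny true effect, in standard-deviation units
n_per_group = 20    # small samples
n_studies = 1000    # many labs quietly trying the same idea

published = []
for _ in range(n_studies):
    primed = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(primed, control)
    if p < 0.05 and t > 0:          # only the "successes" get written up
        published.append(primed.mean() - control.mean())

print(f"true effect:                {true_effect}")
print(f"studies that got published: {len(published)} of {n_studies}")
print(f"mean published effect:      {np.mean(published):.2f}")  # several times the truth
```

Back to Bartlett: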

Much of the criticism has been directed at priming. The definitions get dicey here because the term can refer to a range of phenomena, some of which are grounded in decades of solid evidence—like the "anchoring effect," which happens, for instance, when a store lists a competitor's inflated price next to its own to make you think you're getting a bargain. That works. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient.

A small group of skeptical psychologists—let's call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs.

What have they found? Mostly that they can't get those results. The studies don't check out. Something is wrong. And because he is undoubtedly the biggest name in the field, the Replicators have paid special attention to John Bargh and the study that started it all.

... When the walking times of the two groups were compared, the Florida-knits-alone subjects walked, on average, more slowly than the control group. Words on a page made them act old.

It's a cute finding. But the more you think about it, the more serious it starts to seem. What if we are constantly being influenced by subtle, unnoticed cues? If "Florida" makes you sluggish, could "cheetah" make you fleet of foot? Forget walking speeds. Is our environment making us meaner or more creative or stupider without our realizing it? We like to think we're steering the ship of self, but what if we're actually getting blown about by ghostly gusts?

Steve Sailer comments on this:

Advertisers, from John Wanamaker onward, sure as heck hope they are blowing you about by ghostly gusts.

Not only advertisers, the industry in which Sailer worked, but our own little community probably loves any result that confirms such a picture. We need to be careful about that. Bartlett continues:

John Bargh and his co-authors, Mark Chen and Lara Burrows, performed that experiment in 1990 or 1991. They didn't publish it until 1996. Why sit on such a fascinating result? For starters, they wanted to do it again, which they did. They also wanted to perform similar experiments with different cues. One of those other experiments tested subjects to see if they were more hostile when primed with an African-American face. They were. (The subjects were not African-American.) In the other experiment, the subjects were primed with rude words to see if that would make them more likely to interrupt a conversation. It did.

The researchers waited to publish until other labs had found the same type of results. They knew their finding would be controversial. They knew many people wouldn't believe it. They were willing to stick their necks out, but they didn't want to be the only ones.

Since that study was published in the Journal of Personality and Social Psychology, it has been cited more than 2,000 times. Though other researchers did similar work at around the same time, and even before, it was that paper that sparked the priming era. Its authors knew, even before it was published, that the paper was likely to catch fire. They wrote: "The implications for many social psychological phenomena ... would appear to be considerable." Translation: This is a huge deal.

...

The last year has been tough for Bargh. Professionally, the nadir probably came in January, when a failed replication of the famous elderly-walking study was published in the journal PLoS ONE. It was not the first failed replication, but this one stung. In the experiment, the researchers had tried to mirror Bargh's methods with an important exception: Rather than stopwatches, they used automatic timing devices with infrared sensors to eliminate any potential bias. The words didn't make subjects act old. They tried the experiment again with stopwatches and added a twist: They told those operating the stopwatches which subjects were expected to walk slowly. Then it worked. The title of their paper tells the story: "Behavioral Priming: It's All in the Mind, but Whose Mind?"

The paper annoyed Bargh. He thought the researchers didn't faithfully follow his methods section, despite their claims that they did. But what really set him off was a blog post that explained the results. The post, on the blog Not Exactly Rocket Science, compared what happened in the experiment to the notorious case of Clever Hans, the horse that could supposedly count. It was thought that Hans was a whiz with figures, stomping a hoof in response to mathematical queries. In reality, the horse was picking up on body language from its handler. Bargh was the deluded horse handler in this scenario. That didn't sit well with him. If the PLoS ONE paper is correct, the significance of his experiment largely dissipates. What's more, he looks like a fool, tricked by a fairly obvious flaw in the setup.

...

Pashler, a professor of psychology at the University of California at San Diego, is the most prolific of the Replicators. He started trying priming experiments about four years ago because, he says, "I wanted to see these effects for myself." That's a diplomatic way of saying he thought they were fishy. He's tried more than a dozen so far, including the elderly-walking study. He's never been able to achieve the same results. Not once.

This fall, Daniel Kahneman, the Nobel Prize-winning psychologist, sent an e-mail to a small group of psychologists, including Bargh, warning of a "train wreck looming" in the field because of doubts surrounding priming research. He was blunt: "I believe that you should collectively do something about this mess. To deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating," he wrote.

Strongly worded e-mails from Nobel laureates tend to get noticed, and this one did. He sent it after conversations with Bargh about the relentless attacks on priming research. Kahneman cast himself as a mediator, a sort of senior statesman, endeavoring to bring together believers and skeptics. He does have a dog in the fight, though: Kahneman believes in these effects and has written admiringly of Bargh, including in his best seller Thinking, Fast and Slow.

On the heels of that message from on high, an e-mail dialogue began between the two camps. The vibe was more conciliatory than what you hear when researchers are speaking off the cuff and off the record. There was talk of the type of collaboration that Kahneman had floated, researchers from opposing sides combining their efforts in the name of truth. It was very civil, and it didn't lead anywhere.

In one of those e-mails, Pashler issued a challenge masquerading as a gentle query: "Would you be able to suggest one or two goal priming effects that you think are especially strong and robust, even if they are not particularly well-known?" In other words, put up or shut up. Point me to the stuff you're certain of and I'll try to replicate it. This was intended to counter the charge that he and others were cherry-picking the weakest work and then doing a victory dance after demolishing it. He didn't get the straightforward answer he wanted. "Some suggestions emerged but none were pointing to a concrete example," he says.

One possible explanation for why these studies continually and bewilderingly fail to replicate is that they have hidden moderators, sensitive conditions that make them a challenge to pull off. Pashler argues that the studies never suggest that. He wrote in that same e-mail: "So from our reading of the literature, it is not clear why the results should be subtle or fragile."

Bargh contends that we know more about these effects than we did in the 1990s, that they're more complicated than researchers had originally assumed. That's not a problem, it's progress. And if you aren't familiar with the literature in social psychology, with the numerous experiments that have modified and sharpened those early conclusions, you're unlikely to successfully replicate them. Then you will trot out your failure as evidence that the study is bogus when really what you've proved is that you're no good at social psychology.

Pashler can't quite disguise his disdain for such a defense. "That doesn't make sense to me," he says. "You published it. That must mean you think it is a repeatable piece of work. Why can't we do it just the way you did it?"

That's how David Shanks sees things. He, too, has been trying to replicate well-known priming studies, and he, too, has been unable to do so. In a forthcoming paper, Shanks, a professor of psychology at University College London, recounts his and his several co-authors' attempts to replicate one of the most intriguing effects, the so-called professor prime. In the study, one group was told to imagine a professor's life and then list the traits it brought to mind. Another group was told to do the same except with a soccer hooligan rather than a professor.

The groups were then asked questions selected from the board game Trivial Pursuit, questions like "Who painted 'Guernica'?" and "What is the capital of Bangladesh?" (Picasso and Dhaka, for those playing at home.) Their scores were then tallied. The subjects who imagined the professor scored above a control group that wasn't primed. The subjects who imagined soccer hooligans scored below the professor group and below the control. Thinking about a professor makes you smart while thinking about a hooligan makes you dumb. The study has been replicated a number of times, including once on Dutch television.

Shanks can't get the result. And, boy, has he tried. Not once or twice, but nine times.

The skepticism about priming, says Shanks, isn't limited to those who have committed themselves to reperforming these experiments. It's not only the Replicators. "I think more people in academic psychology than you would imagine appreciate the historical implausibility of these findings, and it's just that those are the opinions that they have over the water fountain," he says. "They're not the opinions that get into the journalism."

Like all the skeptics I spoke with, Shanks believes the worst is yet to come for priming, predicting that "over the next two or three years you're going to see an avalanche of failed replications published." The avalanche may come sooner than that. There are failed replications in press at the moment and many more that have been completed (Shanks's paper on the professor prime is in press at PLoS ONE). A couple of researchers I spoke with didn't want to talk about their results until they had been peer reviewed, but their preliminary results are not encouraging.

Ap Dijksterhuis is the author of the professor-prime paper. At first, Dijksterhuis, a professor of psychology at Radboud University Nijmegen, in the Netherlands, wasn't sure he wanted to be interviewed for this article. That study is ancient news—it was published in 1998, and he's moved away from studying unconscious processes in the last couple of years, in part because he wanted to move on to new research on happiness and in part because of the rancor and suspicion that now accompany such work. He's tired of it.

The outing of Diederik Stapel made the atmosphere worse. Stapel was a social psychologist at Tilburg University, also in the Netherlands, who was found to have committed scientific misconduct in scores of papers. The scope and the depth of the fraud were jaw-dropping, and it changed the conversation. "It wasn't about research practices that could have been better. It was about fraud," Dijksterhuis says of the Stapel scandal. "I think that's playing in the background. It now almost feels as if people who do find significant data are making mistakes, are doing bad research, and maybe even doing fraudulent things."

Here is a link to the Wikipedia article on the misconduct mentioned above. I recall some of the drama that unfolded around the outing and around the papers themselves... looking at the kinds of results Stapel wanted to fake, or thought would advance his career, reminds me of some older examples of scientific misconduct.

In the e-mail discussion spurred by Kahneman's call to action, Dijksterhuis laid out a number of possible explanations for why skeptics were coming up empty when they attempted priming studies. Cultural differences, for example. Studying prejudice in the Netherlands is different from studying it in the United States. Certain subjects are not susceptible to certain primes, particularly a subject who is unusually self-aware. In an interview, he offered another, less charitable possibility. "It could be that they are bad experimenters," he says. "They may turn out failures to replicate that have been shown by 15 or 20 people already. It basically shows that it's something with them, and it's something going on in their labs."

Joseph Cesario is somewhere between a believer and a skeptic, though these days he's leaning more skeptic. Cesario is a social psychologist at Michigan State University, and he's successfully replicated Bargh's elderly-walking study, discovering in the course of the experiment that the attitude of a subject toward the elderly determined whether the effect worked or not. If you hate old people, you won't slow down. He is sympathetic to the argument that moderators exist that make these studies hard to replicate, lots of little monkey wrenches ready to ruin the works. But that argument only goes so far. "At some point, it becomes excuse-making," he says. "We have to have some threshold where we say that it doesn't exist. It can't be the case that some small group of people keep hitting on the right moderators over and over again."

Cesario has been trying to replicate a recent finding of Bargh's. In that study, published last year in the journal Emotion, Bargh and his co-author, Idit Shalev, asked subjects about their personal hygiene habits—how often they showered and bathed, for how long, how warm they liked the water. They also had subjects take a standard test to determine their degree of social isolation, whether they were lonely or not. What they found is that lonely people took longer and warmer baths and showers, perhaps substituting the warmth of the water for the warmth of regular human interaction.

That isn't priming, exactly, though it is a related unconscious phenomenon often called embodied cognition. As in the elderly-walking study, the subjects didn't realize what they were doing, didn't know they were bathing longer because they were lonely. Can warm water alleviate feelings of isolation? This was a result with real-world applications, and reporters jumped on it. "Wash the loneliness away with a long, hot bath," read an NBC News headline.

But I like the feeling of insight I get when thinking about cool applications of embodied cognition! (;_:)

Bargh's study had 92 subjects. So far Cesario has run more than 2,500 through the same experiment. He's found absolutely no relationship between bathing and loneliness. Zero. "It's very worrisome if you have people thinking they can take a shower and they can cure their depression," he says. And he says Bargh's data are troublesome. "Extremely small samples, extremely large effects—that's a red flag," he says. "It's not a red flag for people publishing those studies, but it should be."
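
(Another aside of mine, not part of the article: the "small samples, large effects" red flag has a simple arithmetic core. Treating the analysis, purely for illustration, as an ordinary correlation, a sample of 92 subjects cannot produce a significant result unless the observed correlation is already fairly large, whereas Cesario's 2,500 subjects could detect a far smaller one. A rough sketch using the standard t-distribution threshold:)

```python
# Smallest correlation that can reach two-sided p < .05 for a given sample size
# (standard textbook formula; the two sample sizes are the ones quoted above).
import numpy as np
from scipy import stats

def critical_r(n, alpha=0.05):
    """Smallest |r| that is significant at the given alpha for sample size n."""
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return t_crit / np.sqrt(n - 2 + t_crit ** 2)

for n in (92, 2500):
    print(f"n = {n:4d}: a significant correlation must be at least r = {critical_r(n):.3f}")
```

So any significant finding from the 92-subject study is automatically an eye-catching effect, and if the true correlation is small, only the lucky, inflated samples ever look publishable. Back to Bartlett: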

Even though he is, in a sense, taking aim at Bargh, Cesario thinks it's a shame that the debate over priming has become so personal, as if it's a referendum on one man. "He has the most eye-catching findings. He always has," Cesario says. "To the extent that some of his effects don't replicate, because he's identified as priming, it casts doubt on the entire body of research. He is priming."

I'll admit that took me a few seconds too long to parse. (~_^)

That has been the narrative. Bargh's research is crumbling under scrutiny and, along with it, perhaps priming as a whole. Maybe the most exciting aspect of social psychology over the last couple of decades, these almost magical experiments in which people are prompted to be smarter or slower without them even knowing it, will end up as an embarrassing footnote rather than a landmark achievement.

Well yes, dear journalist, that has been the narrative you've just presented to us readers.

Then along comes Gary Latham.

How entertaining a plot twist! Or maybe a journalist is spinning a story out of a confusing process in which academia tries to take account of a confusing array of new evidence. Of course, that's me telling a story right there. Agggh, bad brain, bad!

Latham, an organizational psychologist in the management school at the University of Toronto, thought the research Bargh and others did was crap. That's the word he used. He told one of his graduate students, Amanda Shantz, that if she tried to apply Bargh's principles it would be a win-win. If it failed, they could publish a useful takedown. If it succeeded ... well, that would be interesting.

They performed a pilot study, which involved showing subjects a photo of a woman winning a race before the subjects took part in a brainstorming task. As Bargh's research would predict, the photo made them perform better at the brainstorming task. Or seemed to. Latham performed the experiment again in cooperation with another lab. This time the study involved employees in a university fund-raising call center. They were divided into three groups. Each group was given a fact sheet that would be visible while they made phone calls. In the upper left-hand corner of the fact sheet was either a photo of a woman winning a race, a generic photo of employees at a call center, or no photo. Again, consistent with Bargh, the subjects who were primed raised more money. Those with the photo of call-center employees raised the most, while those with the race-winner photo came in second, both outpacing the photo-less control. This was true even though, when questioned afterward, the subjects said they had been too busy to notice the photos.

Latham didn't want Bargh to be right. "I couldn't have been more skeptical or more disbelieving when I started the research," he says. "I nearly fell off my chair when my data" supported Bargh's findings.

That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives. Are there photos that would make people be safer at work? Are there photos that undermine performance? How should we be fine-tuning the images that surround us? "It's almost scary in lots of ways that these primes in these environments can affect us without us being aware," he says. Latham hasn't stopped there. He's continued to try experiments using Bargh's ideas, and those results have only strengthened his confidence in priming. "I've got two more that are just mind-blowing," he says. "And I know John Bargh doesn't know about them, but he'll be a happy guy when he sees them."

Latham doesn't know why others have had trouble. He only knows what he's found, and he's certain about his own data. In the end, Latham thinks Bargh will be vindicated as a pioneer in understanding unconscious motivations. "I'm like a converted Christian," he says. "I started out as a devout atheist, and now I'm a believer."

Following his come-to-Jesus transformation, Latham sent an e-mail to Bargh to let him know about the call-center experiment. When I brought this up with Bargh, his face brightened slightly for the first time in our conversation. "You can imagine how that helped me," he says. He had been feeling isolated, under siege, worried that his legacy was becoming a cautionary tale. "You feel like you're on an island," he says.

Though Latham is now a believer, he remains the exception. With more failed replications in the pipeline, Dijksterhuis believes that Kahneman's looming-train-wreck letter, though well meaning, may become a self-fulfilling prophecy, helping to sink the field rather than save it. Perhaps the perception has already become so negative that further replications, regardless of what they find, won't matter much. For his part, Bargh is trying to take the long view. "We have to think about 50 or 100 years from now—are people going to believe the same theories?" he says. "Maybe it's not true. Let's see if it is or isn't."

It's admirable that he's come around to the latter attitude after the early angry blog posts prompted by what he was going through. That wasn't sarcasm: scientists are only human, after all, and there are easier things to do than this.

Comments (12)

Thank you for putting this together! I've heard of the controversy over the idea of priming before, but I didn't know the details until now.

To quote some of the bolded text:

Pashler can't quite disguise his disdain for such a defense. "That doesn't make sense to me," he says. "You published it. That must mean you think it is a repeatable piece of work. Why can't we do it just the way you did it?"

That doesn't seem right at all. Suppose Bargh's claim were completely true: the field has come a long way since the experiments were first run, and researchers now know more about the conditions that provoke these responses. It would then follow that Bargh believed it was a repeatable piece of work when he published it, but that he now holds different beliefs about the circumstances required to replicate the results.

On the other hand, Bargh's reply seems even shoddier. If he really bought into what he was saying and wanted to conduct good science, the appropriate response would be to describe an experiment that he believes would replicate the results, let other scientists review it to check that it properly controls for confounding factors, and, if it holds up, let some of the many scientists interested in replicating his results actually run it.

Shmi:

It's sad to see how Bargh seems to be attached to his original research and is not willing to consider that some of his early experiments may have been poorly designed, regardless of whether the effect is real or not. His Wikipedia page mentions nothing about the controversy.

Shmi:

Well written, too. Hopefully it will last.

satt:

We now have an edit war!

Yesterday afternoon the IP address 87.69.248.247 came along and took back out the criticism added to the start of the entry, returning later to delete the remaining criticism in the main body. The guy who originally added the criticism understandably put it back, telling 87.69.248.247, "If you wish to dispute the content or its relevance, please discuss."

This moved 87.69.248.247 to re-delete the re-added material, and remark, "I have erased slander blogs aimed to destroy this person's reputation if anyone wants to read them he can find it online not in his personal WIKI page [...] There is current controversy around the quality of this replication and this was not the most well known paper of Bargh as what mentioned".

Who will win this exciting contest? Find out in the next episode of Wikipedia!

That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives.

He seems to have skipped right over the part where he wonders why he and Bargh see one thing and other people see something different. Do people update far more strongly on evidence if it comes from their own lab?

Also, yay priming! (I don't want this comment to sound negative about priming as such)

Do people update far more strongly on evidence if it comes from their own lab?

This isn't a completely unreasonable thing to do. For one thing, you have much more knowledge about the methodology of experiments conducted in your lab.

You also know your own results aren't fraudulent.

You know that you, personally, are not being deliberately, knowingly fraudulent. That says nothing about your assistants, nor about the possibility of subconscious biases ("I really want this to be true", "I KNOW this is true, I just have to prove it to those fools at the academy!", "Who has the last laugh NOW, Mr. Bond?!").

Do people update far more strongly on evidence if it comes from their own lab?

This strikes me as a special case of the principle that people update more strongly on evidence they directly observe.

OK, so maybe there's an effect, but it's kinda small and much of the time doesn't reach statistical significance, or whatever threshold researchers use to classify their studies as a discrete yay or nay. And variation in experimental setups tends to swamp between-trial variation.
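
(A quick numerical sketch of that point, with made-up numbers: if the average effect is small and the lab-to-lab variation in how the setup realizes it is comparable to or larger than that average, honest labs will routinely reach opposite conclusions.)

```python
# Toy "hidden moderator" simulation: each lab's realized effect is drawn around
# a small average, so some labs replicate and most don't (numbers are invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mean_effect = 0.1    # assumed small average effect, in SD units
setup_sd = 0.3       # assumed lab-to-lab variation, larger than the average
n_per_group, n_labs = 30, 12

for lab in range(1, n_labs + 1):
    lab_effect = rng.normal(mean_effect, setup_sd)   # this lab's realized effect
    primed = rng.normal(lab_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(primed, control)
    verdict = "replicates" if (p < 0.05 and t > 0) else "fails to replicate"
    print(f"lab {lab:2d}: realized effect {lab_effect:+.2f}, p = {p:.3f} -> {verdict}")
```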