Thank you for putting this together! I've heard of the controversy over the idea of priming before, but I didn't know the details until now.
To quote some of the bolded text:
Pashler can't quite disguise his disdain for such a defense. "That doesn't make sense to me," he says. "You published it. That must mean you think it is a repeatable piece of work. Why can't we do it just the way you did it?"
That doesn't seem right at all. If Bargh's statement were completely true (that the state of the field has come a long way since the experiments were first conducted, and that researchers now know more about the conditions that provoke those responses), then it would follow that Bargh believed it was a repeatable piece of work when he published it but now holds different beliefs about the circumstances required to replicate those results.
On the other hand, Bargh's reply seems even shoddier. If he really bought into what he was saying and wanted to conduct good science, the appropriate response would be to describe an experiment that he believes would replicate the results, let other scientists review it to check that it properly controls for confounding factors, and, if it holds up, let some of the many scientists interested in replicating his results perform that experiment.
It's sad to see how Bargh seems to be attached to his original research and is not willing to consider that some of his early experiments may have been poorly designed, regardless of whether the effect is real or not. His Wikipedia page mentions nothing about the controversy.
Looks like someone has now added it: http://en.wikipedia.org/w/index.php?title=John_Bargh&diff=537069679&oldid=527782415
We now have an edit war!
Yesterday afternoon the IP address 87.69.248.247 came along and removed the criticism that had been added to the start of the entry, returning later to delete the remaining criticism in the main body. The guy who originally added the criticism understandably put it back, telling 87.69.248.247, "If you wish to dispute the content or its relevance, please discuss."
This moved 87.69.248.247 to re-delete the re-added material, and remark, "I have erased slander blogs aimed to destroy this person's reputation if anyone wants to read them he can find it online not in his personal WIKI page [...] There is current controversy around the quality of this replication and this was not the most well known paper of Bargh as what mentioned".
Who will win this exciting contest? Find out in the next episode of Wikipedia!
That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives.
He seems to have skipped right over the part where he wonders why he and Bargh see one thing and other people see something different. Do people update far more strongly on evidence if it comes from their own lab?
Also, yay priming! (I don't want this comment to sound negative about priming as such)
Do people update far more strongly on evidence if it comes from their own lab?
This isn't a completely unreasonable thing to do. For one thing, you have much more knowledge about the methodology of experiments conducted in your lab.
You know that you, personally, are not being deliberately, knowingly fraudulent. That says nothing about your assistants, nor about the possibility of subconscious biases ("I really want this to be true", "I KNOW this is true, I just have to prove it to those fools at the academy!", and "Who has the last laugh NOW, Mr. Bond?!").
Do people update far more strongly on evidence if it comes from their own lab?
This strikes me as a special case of the principle that people update more strongly on evidence they directly observe.
OK, so maybe there's an effect, but it's kinda small and much of the time doesn't reach statistical significance, or whatever threshold researchers use to classify their studies as a discrete yay or nay. And variation in experimental setups tends to swamp between-trial variation.
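To make that concrete, here's a minimal sketch with my own toy numbers (nothing from the article): simulate a small true effect plus lab-to-lab variation in the setup, and count how often an ordinary two-sample t-test at p < 0.05 registers a "yay". The effect size, lab spread, and sample sizes below are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2    # small standardized effect (assumed)
lab_sd = 0.3         # spread of each lab's realized effect due to its setup (assumed)
n_per_group = 30     # participants per condition
n_labs = 1000

significant = 0
for _ in range(n_labs):
    # Each lab's setup shifts the effect it actually produces.
    lab_effect = rng.normal(true_effect, lab_sd)
    control = rng.normal(0.0, 1.0, n_per_group)
    primed = rng.normal(lab_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(primed, control)
    significant += (p < 0.05)

print(f"Labs reaching p < 0.05: {significant / n_labs:.0%}")
```

With numbers like these, most labs miss significance, and the lab-to-lab spread is larger than the average effect itself, so replications will disagree with each other a lot even if the effect is real.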
Related: Social Psychology & Priming: Art Wears Off
I recommend reading the piece, but below are some excerpts and commentary.
Steve Sailer comments on this:
Not only advertisers, the industry he worked in, but indeed our little community probably loves any results confirming such a picture. We need to be careful about that. Bartlett continues:
Here is a link to the wiki article on the mentioned misconduct. I recall some of the drama that unfolded around the outing and the papers themselves... looking at the kinds of results Stapel wanted to fake or thought would advance his career reminds me of some other older examples of scientific misconduct.
But I like the feeling of insight I get when thinking about cool applications of embodied cognition! (;_:)
I'll admit that took me a few seconds too long to parse. (~_^)
Well yes, dear journalist, that has been the narrative you've just presented to us readers.
How entertaining a plot twist! Or maybe a journalist is writing a story out of a confusing process in which academia tries to take account of a confusing array of new evidence. Of course, that's me telling a story right there. Agggh, bad brain, bad!
Admirable that he's come to the latter attitude after the early angry blog posts prompted by what he was going through. That wasn't sarcasm; scientists are only human after all, and there are easier things to do than this.