An article in the NYT about everyone's favourite messy science. You know, the one we sometimes rely on to provide a throwaway line as we pontificate wisely about biases? ;)

A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found. Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.

The psychologist, Diederik Stapel, of Tilburg University, committed academic fraud in “several dozen” published papers, many accepted in respected journals and reported in the news media, according to a report released on Monday by the three Dutch institutions where he has worked ...

In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged. ...

In a prolific career, Dr. Stapel published papers on the effect of power on hypocrisy, on racial stereotyping and on how advertisements affect how people view themselves. Many of his findings appeared in newspapers around the world, including The New York Times, which reported in December on his study about advertising and identity.

In a statement posted Monday on Tilburg University’s Web site, Dr. Stapel apologized to his colleagues. “I have failed as a scientist and researcher,” it read, in part. “I feel ashamed for it and have great regret.” ...

Dr. Stapel has published about 150 papers, many of which, like the advertising study, seem devised to make a splash in the media. The study published in Science this year claimed that white people became more likely to “stereotype and discriminate” against black people when they were in a messy environment, versus an organized one. Another study, published in 2009, claimed that people judged job applicants as more competent if they had a male voice. The investigating committee did not post a list of papers that it had found fraudulent. ...

In a survey of more than 2,000 American psychologists scheduled to be published this year, Leslie John of Harvard Business School and two colleagues found that 70 percent had acknowledged, anonymously, to cutting some corners in reporting data. About a third said they had reported an unexpected finding as predicted from the start, and about 1 percent admitted to falsifying data.

Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.

...

found that the more reluctant that scientists were to share their data, the more likely that evidence contradicted their reported findings.

...

“We know the general tendency of humans to draw the conclusions they want to draw — there’s a different threshold,” said Joseph P. Simmons, a psychologist at the University of Pennsylvania’s Wharton School. “With findings we want to see, we ask, ‘Can I believe this?’ With those we don’t, we ask, ‘Must I believe this?’”

But reviewers working for psychology journals rarely take this into account in any rigorous way. Neither do they typically ask to see the original data. While many psychologists shade and spin, Dr. Stapel went ahead and drew any conclusion he wanted.
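As an aside, the statistical sloppiness Wicherts and Bakker counted, reported p-values that don't match the reported test statistic and degrees of freedom, is mechanically checkable. Here's a minimal sketch of that kind of consistency check (invented numbers, and not their actual procedure, just the general idea):

```python
from scipy import stats

# Hypothetical reported results: (t statistic, degrees of freedom, reported p-value)
reported = [
    (2.10, 28, 0.04),   # consistent: recomputed p is also just under .05
    (1.70, 40, 0.03),   # inconsistent: recomputed p is about .097, not significant
]

for t, df, p_reported in reported:
    p_recomputed = 2 * stats.t.sf(abs(t), df)   # two-sided p from t and df
    # Flag errors that flip the verdict at the conventional .05 threshold
    flips_verdict = (p_reported < 0.05) != (p_recomputed < 0.05)
    print(f"t({df}) = {t}: reported p = {p_reported}, "
          f"recomputed p = {p_recomputed:.3f}, changes the finding: {flips_verdict}")
```

Checks along these lines, applied to every reported test in a sample of papers, are how you end up with error rates like the ones quoted above.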

In any case this was brought to my attention by a recent blog entry on iSteve.

Telling people what they want to hear

Steve Sailer thinks that what gets distorted the most in this way is a matter of supply and demand. Which is obviously good signalling for him, but it is also eminently plausible. One can't help but wonder, especially, about the interesting connections between some of the "findings" of psychology from a certain period and place and the obsessions and neuroses (heh) specific to that society.

[-][anonymous]12y110

found that the more reluctant that scientists were to share their data, the more likely that evidence contradicted their reported findings.

See also: climategate.

The article discusses the refusal of the University of East Anglia to release “CRUTEM” data under the Freedom of Information Act – on this occasion the UK Information Commissioner ruled that the university would have to comply with requests to share its data:

Quote:

As a first comment on the University’s defence – in keeping with similar refusals of other requests, rather than focusing on their best line of argument, the practice of the UEA is to use a laundry list of exemptions – more or less throwing spitballs against the wall to see if any of them stuck. Many of the spitballs seem pretty strained, to say the least. In his ruling, the ICO picked each spitball off the wall and, in the process, established or confirmed a number of precedents that will hopefully encourage fewer spitballs in the future.

They attempted to use the following exemptions:

s 6 – “information already publicly available”
s 12(5)(a) – would have an adverse effect on “international relations, defence, national security or public safety”
s 12(5)(c) – would have an adverse effect on “intellectual property rights”
s 12(5)(f) – would have an adverse effect on the interests of the person who provided the information

As if it wasn’t already clear that this kind of behaviour (exhibited on a consistent basis) constitutes scientific misconduct, Wicherts and Bakker’s findings provide evidence that it should be so regarded.

Another interesting nugget I read about on Steve Sailer’s site – again referring to the research of Wicherts – concerned “stereotype threat”.

Quotes:

“In 1995, two Stanford psychologists, Claude Steele and Joshua Aronson, demonstrated that African-American college students did worse on tests of academic ability when they were exposed beforehand to suggestions that they were being judged according to their race. Steele and Aronson hypothesized that this effect, which they labeled stereotype threat, might explain part of the persistent achievement gap between white and black students. In the years since, this idea has spread throughout the social sciences.” [...]

A researcher, who doesn’t want his name or any potentially identifying information mentioned, for unfortunately obvious career reasons, recently attended a presentation at a scientific conference. Here is his summary of what he heard:

“One talk presented a meta-analysis of stereotype threat. The presenter was able to find a ton of unpublished studies. The overall conclusion is that stereotype threat does not exist. The unpublished and published studies were compared on many indices of quality, including sample size, and the only variable predicting publication was whether a significant effect of stereotype threat was found. …

“This is quite embarrassing for psychology as a science.” [...]

“A meta-analysis of 55 published and unpublished studies of this effect shows clear signs of publication bias.”
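To make “clear signs of publication bias” a bit more concrete: one standard check in a meta-analysis is a funnel-plot asymmetry test such as Egger's regression, which regresses each study's standardized effect on its precision; an intercept far from zero suggests that small, noisy studies only reach print when they happen to show large effects. A rough sketch with invented numbers (not the actual 55 studies):

```python
import numpy as np
import statsmodels.api as sm

# Invented effect sizes (Cohen's d) and standard errors for a handful of studies.
# The small, high-SE studies show suspiciously large effects, as expected under publication bias.
d  = np.array([0.45, 0.60, 0.15, 0.10, 0.70, 0.05, 0.55])
se = np.array([0.30, 0.35, 0.10, 0.08, 0.40, 0.07, 0.32])

# Egger's regression: standardized effect (d / se) on precision (1 / se).
# An intercept significantly different from zero indicates funnel-plot asymmetry.
fit = sm.OLS(d / se, sm.add_constant(1.0 / se)).fit()
print(f"Egger intercept = {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")
```

The stronger test, when you can actually get hold of the file drawer, is the one the talk describes: directly comparing published against unpublished studies.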

Interestingly, there are many credulous references to "stereotype threat" on LessWrong, but seemingly no skeptical postures (until now).

I have been aware of the case against stereotype threat for some time, but wouldn't want to post it under my regular handle (linked to my real name and high-karma). People who try to approach these topics in an even-handed way often lose their jobs or suffer other serious consequences. Mostly I just try to let the discussion die down, since those who might present non-PC evidence are laboring under differential burdens and constraints.

See also: climategate.

Interesting. So, say, NASA, which shares its data liberally, has gotten its conclusions right, while CRU, which resisted doing so, has gotten them wrong?

[-][anonymous]12y10

So, say, NASA, which shares its data liberally, has gotten its conclusions right, while CRU, which resisted doing so, has gotten them wrong?

Don't try to put words in my mouth, thanks.

Wicherts and Bakker's finding that failure to share data corresponds to failure to speak the truth is evidence in favour of the idea that the CRU's consistent refusal to share data is related to their being pseudo-scientists.

The fact that their line on global warming coincides with James Hansen’s group’s line has two parsimonious explanations: they have both discovered the truth; or they are willing to manipulate the data to prove whatever they want to prove.

Either way, the evidence in Konkvistador’s post (presuming it is news to anyone) should reduce one’s confidence in the scientific competence of the CRU.

Not really putting words in your mouth, just trying to make sense of what you said in context of the post. It turned out to be pretty, well, normal. You're mentioning reasons that will allow you to say global warming isn't happening, not trying to evaluate claims of global warming, in general, using this heuristic.

[-][anonymous]12y-20

To explain the comment about “putting words in my mouth”: my comment relating the activities of the CRU to the subject of Konkvistador’s post was to the effect that the CRU is unwilling to share data, even when it is legally obliged to do so; there is debate regarding whether this is acceptable scientific practice; and that here we have concrete evidence that failure to share data is related to bad science; therefore, in light of this finding everyone (AGW-credulist or skeptic) should take a dimmer view of the CRU’s opacity.

From the credulist point of view, it might appear that the CRU has problems with quality control, which they are trying to shield from view – perhaps their work on paleoclimatology should be handed over to the Met Office, for example. This is compatible with the idea that they are right about AGW in general but CRUTEM is a mess, or even that the problems with CRUTEM are not too severe but they should be doing better.

From the skeptic (or fence-sitting) view, this is more evidence that the CRU are pseudo-scientists in general. The fact that NASA agree in general with the CRU on AGW, and NASA happen to share data, doesn’t exonerate the CRU; there are plenty of ways to lie and mislead that do not involve failing to share data, so the fact that NASA is more transparent is no guarantee that their and the University of East Anglia’s conclusions on AGW are sound in general.

Your comment implied that I had claimed that the CRU’s conclusions were necessarily wrong, because they don’t share their data; and that NASA’s conclusions in general are necessarily right, because they do share data.

This is a non-sequitur on both counts. What makes this objectionable is that you supplied no reasoning beyond a mere statement, as though these conclusions followed trivially from what I had said. This is a rhetorical technique designed to score points, rather than something I would expect from a valuable debating partner – a suitable description of this style of commenting is “putting words into someone’s mouth”, and I think that the best way of dealing with it is to refer to it directly, so as to dissociate oneself from the non-sequitur.

The hyperbole of my original reply was shorthand for "is evidence for," and I'm sorry if my doing that derailed the topic a bit by miscommunicating. The purpose of my reply was to get a better idea of whether you were assessing claims of global warming using the tool referred to in the post ("the more reluctant that scientists were to share their data, the more likely that evidence contradicted their reported findings."), or whether you were making a related but not-covered-by-the-post argument about how CRU wasn't doing science. Your replies indicated that you were doing the latter.

The CRU has been exonerated of manipulating data or hiding information that challenged the consensus on global warming.

If they had, though, it would certainly be understandable. As with any issue that deals with a lot of motivated cognition, climate skeptics will seize on any data that will support their disbelief out of the sea of all the data that confronts it, and not revise their confidence back down if the data is retracted or shown to be false. A single study contradicting the consensus at low P-value would be no problem in a rational world, but it's a social liability in our own. But no bigger a liability than being found hiding information. Damned if you do and damned if you don't.

Noticeably more damned if you do, insofar as actually being found hiding information also damages your credibility among the rest of the population.

[-][anonymous]12y00

The CRU has been exonerated of manipulating data or hiding information that challenged the consensus on global warming.

By whom? is the important question. Having read some of the incriminating emails and the infamous harryreadme.txt I certainly don’t exonerate them.

If they had, though, it would certainly be understandable. As with any issue that deals with a lot of motivated cognition, climate skeptics will seize on any data that will support their disbelief out of the sea of all the data that confronts it, and not revise their confidence back down if the data is retracted or shown to be false.

If this is the case, I wonder why legitimate scientists never caught on to that idea in the past – defeat the skeptics by hiding data and the details of scientific practices from them. This seems 180 degrees from reality.

Creationists are often mentioned in this context – transparent scientific practices have failed to persuade them. However, this is simply because creationists are in possession of a memeplex that renders them immune to reason; hiding data and scientific information from the public would only embolden them, besides giving more rational people reason to doubt the veracity of Darwinism.

Even if there were any substance to the idea that transparency in science empowers skeptics, that is vastly outweighed by the hazards involved in permitting these people to recommend massive social and economic policy changes, without their being subject to scrutiny from outsiders (NB: peer review is not incorruptible, as the climategate emails have revealed). They can hardly claim to be minding their own business!

By whom? is the important question

By the House of Commons Science and Technology Committee, the Independent Climate Change Review, the International Science Assessment Panel, Pennsylvania State University, the United States Environmental Protection Agency, and the United States Department of Commerce, as stated in the article I already linked to.

If this is the case, I wonder why legitimate scientists never caught on to that idea in the past – defeat the skeptics by hiding data and the details of scientific practices from them. This seems 180 degrees from reality.

Science is rarely so closely connected to social policy; whether public officials accept that evolution is true mainly determines whether kids get taught about evolution. Whether public officials accept anthropogenic climate change determines whether we attempt to do anything about it.

I doubt that legitimate scientists in any field have ever systematically refused transparency in order to protect their favored theories, and it's certainly not the case with climate change research, but it's not as if there's been a historical shortage of scientists who don't play their cards straight. I'm simply saying that this is a case where there's a particularly obvious motive.

I doubt that legitimate scientists in any field have ever systematically refused transparency in order to protect their favored theories,

So what you're saying is that no true scientist has ever refused transparency in order to protect favored theories.

No, when I said legitimate scientists in any field, I meant the body of scientists in a legitimate field. There have certainly been scientists who have earned their degrees legitimately who systematically refused transparency, but I do not think there has been any legitimate field of science where the practitioners in general had a systematic tendency to refuse transparency. I apologize if my wording was unclear.

A researcher, who doesn’t want his name or any potentially identifying information mentioned, for unfortunately obvious career reasons, recently attended a presentation at a scientific conference. Here is his summary of what he heard:

“One talk presented a meta-analysis of stereotype threat. The presenter was able to find a ton of unpublished studies. The overall conclusion is that stereotype threat does not exist. The unpublished and published studies were compared on many indices of quality, including sample size, and the only variable predicting publication was whether a significant effect of stereotype threat was found. …

This is interesting, but not exactly what I'd call public evidence.

[-][anonymous]12y20

Wicherts's papers are publicly available. Steve Sailer's link to the abstract of the talk in question is broken, the document having been moved to here.

I don't see anything on his website about the meta-analysis, except for a line on his CV saying that the paper is under review. That means all we have to go by is the one-paragraph abstract from his 2009 talk, and the report of one person who saw that talk. And the abstract, though critical of stereotype threat research, doesn't actually claim that stereotype threat does not exist.

[-][anonymous]12y00

Point taken; that particular criticism of stereotype threat is absent from the papers available on his site.

Blip's comment may shed light on reasons why the paper is yet unpublished, although YMMV.

And the abstract, though critical of stereotype threat research, doesn't actually claim that stereotype threat does not exist.

I'll quote the abstract to clarify matters:

Numerous laboratory experiments have been conducted to show that African Americans’ cognitive test performance suffers under stereotype threat, i.e., the fear of confirming negative stereotypes concerning one’s group. A meta-analysis of 55 published and unpublished studies of this effect shows clear signs of publication bias. The effect varies widely across studies, and is generally small. Although elite university undergraduates may underperform on cognitive tests due to stereotype threat, this effect does not generalize to non-adapted standardized tests, high-stakes settings, and less academically gifted test-takers. Stereotype threat cannot explain the difference in mean cognitive test performance between African Americans and European Americans.

Edit: in the absence of the most helpful Wicherts paper, here is another paper, referred to in a longer Steve Sailer article, discussing the misinterpretation of stereotype threat findings (particularly in pop science and the media, e.g. by Malcolm Gladwell).

Quote:

C. M. Steele and J. Aronson (1995) showed that making race salient when taking a difficult test affected the performance of high-ability African American students, a phenomenon they termed stereotype threat. The authors document that this research is widely misinterpreted in both popular and scholarly publications as showing that eliminating stereotype threat eliminates the African American–White difference in test performance. In fact, scores were statistically adjusted for differences in students’ prior SAT performance, and thus, Steele and Aronson’s findings actually showed that absent stereotype threat, the two groups differ to the degree that would be expected based on differences in prior SAT scores. The authors caution against interpreting the Steele and Aronson experiment as evidence that stereotype threat is the primary cause of African American–White differences in test performance.

The reference to "high-stakes settings" in the Wicherts abstract concerns the obvious problem that stereotype threat is rarely if ever tested in real settings (e.g. college admissions) because this would be unethical. But if the test is meaningless and the subject is smart enough to figure out what the experimenter is hoping to prove, the potential source of bias is obvious.

[-][anonymous]12y100

Also from the original NYT article, a quote that irked me:

“We have the technology to share data and publish our initial hypotheses, and now’s the time,” Dr. Schooler said. “It would clean up the field’s act in a very big way.”

Ya think?!

This may be controversial and a bit OT, but I am willing to flat out state that, even given the idealistic view of its functioning and structure, the sheer inefficiency of academia in implementing improvements means it fails in many different ways to play a good role in society's truth-seeking mechanism (which, to make matters depressing, I'm not even sure we have one any more). And while it does still provide crucial advancements, and thus could be argued to be working great or at least good enough, I want to ask LW two questions that you've all probably heard in a different context:

Compared to what? At what cost?

In my opinion all those perfectly good brains in it are, to a shocking extent, being wasted. And I say this as someone who loves the atmosphere and people there.

which, to make matters depressing, I'm not even sure we have one any more [w.r.t. "society's truth-seeking mechanism"]

If society no longer had a mechanism for finding truth, would you still expect there to be regular technological and scientific discoveries like there are today?

[-][anonymous]12y20

That is an excellent point. I clearly overstated, since obviously society as a whole still finds new bits of truth, or rather decent enough new models of reality (~science) together with ways to change reality to make it do what we want (~technology).

When I spoke of a truth-finding process I was thinking about a designed, optimized process for finding truth; the scientific revolution is supposedly a case where such a process was established and began complementing the truth-seeking mechanisms available in most previous societies, to great effect.

Society obviously doesn't need to know, say, the scientific method, or to have good institutions, in order to advance technologically, since people will sometimes adopt improvements others have stumbled upon no matter what, and eccentric individuals would do their own research even without any backing. Also, currently we arguably have information technology feeding on itself, since it makes thinking about itself easier. You can make progress that way.

So how much of the modern slowdown in some fields is due to there being less low-hanging fruit, and how much is due to the truth-seeking processes that are supposed to be going on there breaking down?

which, to make matters depressing, I'm not even sure we have one any more

What I should have said is that I suspect there is much more slowdown due to the processes breaking down than we would like to believe.

[-][anonymous]12y00

Do most Less Wrongers actually believe there is such thing as a single scientific method?

[This comment is no longer endorsed by its author]

The antiquated publishing system is holding back self-improvement in academia. The journals are the incentivizers for quality research, but they're not interested in quality: They're interested in their impact factor. You'd think by now we'd be using science to create a collaborative system that self-improves and works towards incentivizing academia to find valuable truths.

We're stuck with a Ferrari with wagon wheels.

I'm suspicious that the solution is so simple. If academic recursive self-improvement were as straightforward as you imply, wouldn't somebody somewhere be making a killing off of it?

Assume a perfectly spherical, frictionless economy of ideas ...

They have to topple the current system, which does make a killing, and I don't see how a system controlled by academia would make a killing. I would think it would be more analogous to free and open source software than a business venture.

I don't think that this is as big a deal as people are making it out to be. There's very little in the system which protects against fraud. Whenever there's some form of fraud that shows up people make a big deal about it, but the bottom line is that fraud is rare enough that it is likely that protections against fraud would be a poor use of limited resources.

I think people like Andrew Wakefield make the effects of fraud more obvious. And, I agree, fraud on that level is probably rare, but what about smaller acts of fraud? For instance, I don't think it's that unlikely that many scientists, while under pressure and deadlines, fudge their results. And not because they want to deceive, no, they already "know" what the results should be, so they're not doing anything that's really all that wrong.

Interestingly though, we've found that science works despite significant bias, poor research, and so on. So, I wouldn't be surprised if there was a significant amount of fraud, and yet we still were able to do science.

Basically, I don't think the question should be "is this really a big deal?" but "how much better would science be if this were fixed?".

[-][anonymous]12y100

Basically, I don't think the question should be "is this really a big deal?" but "how much better would science be if this were fixed?".

Opportunity cost is a big deal when it comes to top quality human minds, considering they are so rare.

I've read a few articles on this issue, and the problem seems pretty alarming. From what I understand, there are only a small handful of journals that accept null results (i.e. findings that X is not true); in fact, I think there might be only one (JASN).

The vast majority of journals reject (or at least discourage) non-positive results except in the case of famous researchers or contentious issues, which means studies that show negative results tend to not get published. In fact, most researchers don't even attempt to publish - they start over, or give up and move to a different project. If the study was critical to their career, they may even move to another field entirely.

This meta-study examines studies of publication bias and reporting bias (unfavorable results omitted from conclusions). It comes to the conclusion that studies that show positive results are significantly more likely to be published than studies that don't.

If the academic culture is discouraging studies that show negative results via the publication process, doesn't that seem to imply there is, at the very least, a major inefficiency in our process of learning new things?
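To put a toy number on that inefficiency (made-up parameters, nobody's real data): if only statistically significant positive results get published, the literature overstates a small true effect even when every individual researcher is perfectly honest.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n = 0.1, 30                   # small true effect, 30 subjects per group
published = []

for _ in range(10_000):               # 10,000 hypothetical studies of the same effect
    treatment = rng.normal(true_d, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    result = stats.ttest_ind(treatment, control)
    if result.pvalue < 0.05 and result.statistic > 0:   # the "file drawer" filter
        pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
        published.append((treatment.mean() - control.mean()) / pooled_sd)

print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}, "
      f"share of studies that get published = {len(published) / 10_000:.1%}")
```

In this toy setup only a few percent of the studies clear the filter, and the ones that do report effects several times the true size, with no fraud anywhere.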

I'm no scientist, but I do have to do a lot of troubleshooting for my job, and while knowing what works is most important, knowing what doesn't takes a close second.

If journals encouraged negative results as much as positive results, I'd imagine we'd see major new scientific breakthroughs twice as often as we currently do - and that's kind of a big deal, I think. Right now the huge academic pressure is not to produce valid results, but to produce positive results. That's a major problem, in my opinion, precisely because human beings are very vulnerable to such pressure. There are a whole slew of biases that arise because of that type of pressure, and any researcher not very cognizant of their vulnerability is another potential Stapel.

It seems inexplicably strange to me that journals publish papers from researchers who don't make all of their data public, except where privacy of participants could be infringed upon.