Jonah Lehrer wrote about the (surprising?) power of publication bias.

http://m.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all

Cosma Shalizi (I think) said something, or pointed to something, about the null model of science - what science would look like if there were no actual effects, just statistical anomalies that look good at first. I can't find the reference, though.

 



Here's the Shalizi link: The Neutral Model of Inquiry. Good stuff, I remember enjoying it a lot. Choice quote: "...the first published p-value for any phenomenon is uniformly distributed between 0 and 0.05."
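That quote falls straight out of the neutral model: if no studied effect is real, every test statistic is pure noise, so p-values are uniform on (0, 1), and a journal that only prints results with p < 0.05 sees "first published" p-values uniform on (0, 0.05). Here's a minimal simulation sketch of that argument (my own illustration, not from Shalizi's post, assuming a simple two-group t-test and a hard 0.05 publication filter):

```python
# Minimal sketch (my own, hypothetical setup): when the null is always true,
# p-values are uniform on (0, 1), so conditioning on p < 0.05 leaves them
# uniform on (0, 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

published = []
while len(published) < 2_000:
    a = rng.normal(size=30)          # two groups with no true difference
    b = rng.normal(size=30)
    p = stats.ttest_ind(a, b).pvalue
    if p < 0.05:                     # the publication filter
        published.append(p)

published = np.array(published)
print(published.mean())                                     # ~0.025
print(np.histogram(published, bins=5, range=(0, 0.05))[0])  # roughly flat counts
```

The uniformity comes entirely from conditioning on significance; if there were a real effect, the published p-values would pile up near zero instead of spreading evenly across the interval.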

Thank you very much!


I love that he called it "The Neutral Model".

(To the OP: I think it's a reference to Motoo Kimura's theory that at the molecular level, most evolutionary change is neutral rather than adaptive.)

I wish I could vote this up several times because of, well, confirmation bias. It's seemed to me that evolutionary psych makes a lot of stew from very little meat, and it looks as though there's even less meat than I thought-- the article says that there's much less evidence of a female preference for male symmetry than was previously believed.

Meanwhile, I wonder whether some of the fading of results isn't just publication bias but also biological details changing faster than we realize. Drugs that work for schizophrenia might stop working as well because people are eating different additives, or some such.

A simulation hypothesis is fun, of course-- we're being toyed with and/or the program is slightly unstable.

the article says that there's much less evidence of a female preference for male symmetry than was previously believed.

I totally predicted that one. Hmph. Is there a discussion post somewhere for people to post predictions? Ideally it'd be near the top of Top, if people voted it up enough. I like the idea of prediction markets but they seem cumbersome and many sorts of predictions need to be made super precise before you can bet on them, even if they wouldn't have to be that precise to be socially acknowledged as sticking their necks out.

I don't know of a best place on LW, but Predictionbook.com is a handy site for publishing your predictions. IIRC, gwern is an aficionado.

Thanks! I'd wrongly assumed that all such sites were based on some sort of economic transaction.

I've read that prediction markets that use money tend to violate gambling laws.

This article was previously discussed here.


Is it possible that there is too much science today?

I mean, in the raw-numbers sense of number of professional scientists and number of papers published. You could, conceivably, increase the volume of "science" without increasing its accuracy. How do we know we're not doing that?

You could, conceivably, increase the volume of "science" without increasing its accuracy. How do we know we're not doing that?

To me it seems pretty obvious that we are doing that, and have been for many decades. But I suppose spelling out an argument for this conclusion suitable for a general audience would require bridging some significant inferential distances.

I would say it's possible, but not in a way that's easily findable or fixable. Among all the scientific research projects that haven't found anything useful yet, how do you tell which will go nowhere and waste time and money, and which will lead to small but useful discoveries?

It's probably easier to usefully think about this in terms of specific fields rather than science in general. I could easily imagine that for example, there are way more people with anthropology degrees than useful anthropology going on.

I'm a bit late - the dangers of not checking Less Wrong over the weekend. :/ But in rebuttal: The Decline Effect Is Stupid

Jonah Lehrer is the Decline Effect. ... The trouble for the Earth is he writes for The New Yorker. ... If they didn't, I, and those who are real scientists, wouldn't have to explain why the Decline Effect doesn't exist

I read that article on The Last Psychiatrist... from the way he described the article and the way Johnicolas did, I never would have guessed they were the same.

Guess I need to read the original.

At my first reading, I agreed with Alone's interpretation in 'The decline effect is stupid'. The article seems to describe anti-science, spooky, the-world-is-"connected"-and-affected-by-our-perception metaphysics.

For example, this doesn't sound like it wants to describe publication bias:

It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.

And certainly not this:

The next year, the size of the effect shrank another thirty per cent. When other labs repeated Schooler’s experiments, they got a similar spread of data, with a distinct downward trend. “This was profoundly frustrating,” he says. “It was as if nature gave me this great result and then tried to take it back.” In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli.

[Consider: what work is "cosmic" doing in that last sentence?]

The article nods at scientific explanations, but then says they're not sufficient to explain what's going on. What is the article trying to imply? That something can be true at first, for a while, and then the truth value wears off? Because the scientist was getting too successful, the people were too confident, the cosmos was feeling weary of being consistent? This idea tugs at familiar grooves -- it's the superstition we're all programmed with.

But the article is somewhat long, and as I meander through, I consider that perhaps it intends that there should be a scientific explanation for "the effect" after all. Maybe the language and supernatural insinuations within the article are playfully meant as bait to goad scientists into thinking about it and dissolving it. (If it reflects a "real" trend, what is the scientific explanation then?).

I appreciate other things that Dr. Lehrer has written -- he seems to have a scientific worldview through and through -- so this latter interpretation is the one I finally settle on.

Good article, but scary.

One possible explanation is that for any given thing being investigated, there's some chance the measured effect size is initially larger than it should be, and some chance it's initially smaller than it should be. If by chance the effect size starts out too small, the investigation of the thing never takes off. If by chance it starts out too large, a ton more studies are done and regression to the mean happens.

Alternatively, it's possible that lots of initial studies have flawed methodology. Then, as more studies are done on a topic, the methodology slowly becomes more refined and the effect slowly goes away.
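Here's a rough numerical sketch of the first explanation (my own illustration, with made-up numbers): give every topic the same modest true effect, let only the topics whose initial estimate happens to come out large attract follow-up studies, and watch the replications regress back toward the truth:

```python
# Rough sketch (hypothetical numbers): selection on a noisy initial estimate
# plus regression to the mean is enough to produce a "decline effect".
import numpy as np

rng = np.random.default_rng(1)
n_topics = 100_000
true_effect = 0.2      # same modest true effect for every topic
noise = 0.3            # sampling error of a single study

initial = true_effect + noise * rng.normal(size=n_topics)
followed_up = initial > 0.5          # only striking first results get replicated
replication = true_effect + noise * rng.normal(size=followed_up.sum())

print(initial[followed_up].mean())   # ~0.65: inflated by the selection step
print(replication.mean())            # ~0.20: "declines" back toward the truth
```

No truth has to wear off here; the drop is just what conditioning on an unusually lucky first result looks like.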