CarlShulman comments on Follow-up on ESP study: "We don't publish replications" - Less Wrong

Post author: CarlShulman 12 July 2011 08:48PM


Comment author: CarlShulman 11 July 2011 10:10:43PM 8 points

Yes, this is a standard incentives problem, but it's one to keep in mind when parsing the literature.

Comment author: jsalvatier 12 July 2011 06:00:05PM 6 points

What rules of thumb do you use to 'keep this in mind'? I generally try to never put anything in my brain that just has one or two studies behind it. I've been thinking of that more as 'it's easy to make a mistake in a study' and 'maybe this author has some bias that I am unaware of', but perhaps this cuts in the opposite direction.

Comment author: CarlShulman 12 July 2011 07:23:08PM 5 points

Actually, even with many studies and a meta-analysis, you can still get blindsided by publication bias. There are plenty of psi meta-analyses showing positive effects (with studies that were not pre-registered, and are probably very selected), and many more in medicine and elsewhere.
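The mechanism here can be made concrete with a small simulation (illustrative only; the effect size, sample size, and publication filter are made up). If studies of a true null effect are only "published" when they cross the usual significance threshold, the published literature shows a positive effect even though the underlying effect is zero:

```python
import random
import statistics

random.seed(0)

def simulate_study(true_effect=0.0, n=30):
    """Run one hypothetical study: return its estimated effect and standard error."""
    samples = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.mean(samples)
    se = statistics.stdev(samples) / n ** 0.5
    return mean, se

# Simulate many studies of a null effect, but only "publish" those whose
# estimate exceeds ~1.96 standard errors (roughly the p < .05 cutoff).
all_effects, published_effects = [], []
for _ in range(2000):
    mean, se = simulate_study(true_effect=0.0)
    all_effects.append(mean)
    if mean / se > 1.96:
        published_effects.append(mean)

print(f"mean effect, all studies:      {statistics.mean(all_effects):+.3f}")
print(f"mean effect, 'published' only: {statistics.mean(published_effects):+.3f}")
```

A meta-analysis that pools only the published studies inherits this bias, which is why pre-registration matters.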

Comment author: jimmy 12 July 2011 06:50:43PM 5 points

If it's something I trust an idiot to draw the right conclusion on with good data, I'll look for meta-analyses, p<<0.05, or do a quick and dirty meta-analysis myself if the number of studies is sufficiently small. If it's something I'm surprised has even been tested, I'll give one study more weight. If it's something that I'd expect to be tested a lot, I'd give it less. If the data I'm looking for is orthogonal to the data they're being published for, it probably doesn't suffer from selection bias, so I'll take it at face value. If the study's result is 'convenient' in some way for the source that showed it to me, I'll be more skeptical of selection bias and misinterpretation.
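For a small number of studies, a "quick and dirty meta-analysis" usually means inverse-variance pooling: weight each study's effect estimate by 1/SE². A minimal sketch, with entirely hypothetical (effect size, standard error) pairs:

```python
import math

# Hypothetical (effect size, standard error) pairs from a handful of studies.
studies = [(0.30, 0.15), (0.10, 0.12), (0.25, 0.20), (0.05, 0.10)]

# Fixed-effect inverse-variance pooling: each study is weighted by 1/SE^2,
# so more precise studies count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled / pooled_se

print(f"pooled effect = {pooled:.3f} ± {pooled_se:.3f} (z = {z:.2f})")
```

This is the fixed-effect model; it assumes the studies estimate the same underlying effect and, as the thread notes, does nothing to correct for which studies got published in the first place.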

If it's a topic where I see very easy to make methodological flaws or interpretation errors, then I'll try to actually dig in and look for them and see if there's a new obvious set of conclusions to draw.

Separately from determining how strong the evidence is, I'll 'put it in my brain' even when there's only a study or two, provided it's testing a hypothesis I already suspected was true, or it makes too much sense in hindsight (i.e. high priors). Otherwise I'll put it in my brain with a 'probably untrue but something to watch out for' tag.
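The 'high priors' heuristic is just a Bayes update. With made-up but conventional numbers (a study with 80% power and a 5% false-positive rate), one positive study moves a plausible hypothesis a lot and an implausible one very little:

```python
def posterior(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | one significant positive study), by Bayes' rule.

    power: P(significant result | hypothesis true)   -- assumed 80%
    alpha: P(significant result | hypothesis false)  -- assumed 5%
    """
    return (power * prior) / (power * prior + alpha * (1 - prior))

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>4}: posterior after one positive study = {posterior(prior):.2f}")
```

With a 50% prior the single study is close to decisive; with a 1% prior it leaves the hypothesis 'probably untrue but something to watch out for'.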

Comment author: Benquo 12 July 2011 05:03:55AM 7 points

How much money do you think it would take to give replications a journal with status on par with the new-studies-only ones?

Or alternately, how much advocacy of what sort? Is there someone in particular to convince?

Comment author: ChristianKl 19 July 2011 09:37:10PM 4 points

It's not something you can simply buy with money. It's about getting scientists to cite the papers in the replications journal.

Comment author: Benquo 19 July 2011 10:21:47PM 0 points

What about influencing high-status actors (e.g. prominent universities)? I don't know what the main influence points are for an academic journal, and I don't know what things it's considered acceptable for a university to accept money for, but it seems common to endow a professorship or a (quasi-academic) program.

Probably this method would cost many millions of dollars, but it would be interesting to know the order of magnitude required.