someonewrongonthenet comments on This is why we can't have social science - Less Wrong

36 Post author: Costanza 13 July 2014 09:04PM


Comment author: Viliam_Bur 14 July 2014 06:33:14AM, 24 points

If the first experiment was wrong, the second experiment will end up wrong too.

I guess the context is important here. If the first experiment was wrong, and the second experiment is wrong, will you publish the failure of the second experiment? Will you also publish your suspicion that the first experiment was wrong? How likely are people to believe that your results disprove the first experiment, if you did something else?

Here is what the selection bias will do otherwise:

20 people will try 20 "second experiments" with p = 0.05. 19 of them will fail; one will succeed and publish the results of their successful second experiment. Then, using the same strategy, 20 people will try 20 "third experiments", and again one of them will succeed... Ten years later, you could have a dozen experiments examining and confirming the theory from a dozen different angles, so the theory seems completely solid.
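The selection process above can be sketched with a small Monte Carlo simulation. This is a minimal illustration, not a model from the comment: it assumes every follow-up studies a true-null effect, so a "success" is pure chance at the p = 0.05 threshold. The function names are mine.

```python
import random

random.seed(42)

ALPHA = 0.05  # the p = 0.05 threshold from the comment

def null_experiment_is_significant():
    # Under the null hypothesis, p-values are uniform on [0, 1],
    # so a study clears p < ALPHA with probability ALPHA.
    return random.random() < ALPHA

def published_followups(n_labs=20):
    # Each of 20 labs runs one follow-up; only "successes" get published,
    # the 19 failures stay in the file drawer.
    return sum(null_experiment_is_significant() for _ in range(n_labs))

# Repeat the 20-lab scenario many times to see the average yield.
trials = 10_000
total_published = sum(published_followups() for _ in range(trials))
print(total_published / trials)  # on average ~1 publication per 20 null attempts
```

Each "generation" of 20 attempts reliably yields about one published confirmation, so after a few generations the literature contains only successes, exactly as described above.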

It's kind of like how some of the landmark studies on priming failed to replicate, but there are so many follow-up studies that priming explains really well that it seems a bit silly to throw out the notion just because of that.

Is there a chance that the process I described was responsible for this?

Comment author: someonewrongonthenet 28 August 2014 06:13:21PM, -1 points

I guess the context is important here. If the first experiment was wrong, and the second experiment is wrong, will you publish the failure of the second experiment? Will you also publish your suspicion that the first experiment was wrong? How likely are people to believe that your results disprove the first experiment, if you did something else?

In practice, individual scientists like to be able to say "my work causes updates". If you do something that rests on someone else's work and the experiment doesn't come out, you have an incentive to say "Someonewrongonthenet's hypothesis X implies A and B. Someonewrongonthenet showed A [citation], but I tried B and it failed, which means X isn't completely right."

Cue further investigation which eventually tosses out X. Whether or not A was a false positive is less important than whether or not X is right.

Is there a chance that the process I described was responsible for this?

Yes, that's possible. I'm not sure direct replication actually solves that issue, though - you'd just shift over to favoring false negatives instead of false positives. The existing mechanism that works against this is the incentive to overturn other people's work.