gwern comments on Should effective altruists care about the US gov't shutdown and can we do anything? - Less Wrong

-2 Post author: Ishaan 01 October 2013 08:24PM


Comment author: gwern 04 October 2013 04:49:03PM *  0 points [-]

Even granting your distinction, the exact same argument still applies: just substitute in an additional rate - say, a 10% chance of going from replication to whatever you choose to define as 'success'. You cannot say that an 11% replication rate and then a 1.1% success rate is optimal - or suboptimal - without doing more intellectual work!
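
A minimal sketch of this point, with every number invented purely for illustration: a portfolio of long-shot, high-payoff hypotheses can have a far lower replication rate than a portfolio of safe bets and still be the better use of the same experimental budget.

```python
# Two hypothetical research strategies with identical methods (same alpha and
# power), differing only in the kinds of hypotheses pursued. All numbers made up.

def evaluate(n_experiments, p_true, payoff_per_success, cost_per_experiment,
             alpha=0.05, power=0.8, p_success_given_true=0.10):
    """Crude expected value of a portfolio of experiments."""
    p_positive = p_true * power + (1 - p_true) * alpha   # published "findings"
    ppv = p_true * power / p_positive                     # ~ replication rate
    expected_successes = n_experiments * p_positive * ppv * p_success_given_true
    net_value = expected_successes * payoff_per_success - n_experiments * cost_per_experiment
    return ppv, net_value

safe = evaluate(1000, p_true=0.50, payoff_per_success=1e6, cost_per_experiment=1e4)
long_shot = evaluate(1000, p_true=0.01, payoff_per_success=2e8, cost_per_experiment=1e4)

print(f"safe:      replication rate ~{safe[0]:.0%}, net value {safe[1]:,.0f}")            # ~94%, 30,000,000
print(f"long-shot: replication rate ~{long_shot[0]:.0%}, net value {long_shot[1]:,.0f}")  # ~14%, 150,000,000
```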

Comment author: Lumifer 04 October 2013 05:02:07PM *  2 points [-]

No, I don't think so. An 11% replication rate means that 89% of the published results are junk and external observers have no problems seeing that. Which implies that if those who published it were a bit more honest/critical/responsible, they should have been able to do a better job of controlling for the effects which led them to think there's statistical significance when in fact there's none.

If the prior odds are 1:10,000 you have no business publishing results at 0.05 confidence level.
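
To make the arithmetic behind this explicit (a sketch; the 80% power figure is an assumption added for illustration): treating "significant at p < 0.05" as the only datum, the likelihood ratio it supplies is at most power/alpha, which is nowhere near enough to overcome prior odds of 1:10,000.

```python
prior_odds = 1 / 10_000
alpha = 0.05      # chance of a "significant" result when the hypothesis is false
power = 0.80      # assumed chance of a "significant" result when it is true

likelihood_ratio = power / alpha                 # = 16
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"P(hypothesis true | p < 0.05) ~ {posterior_prob:.2%}")   # ~0.16%
```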

Comment author: gwern 04 October 2013 05:39:46PM 0 points [-]

An 11% replication rate means that 89% of the published results are junk and external observers have no problems seeing that.

Yes, so? As Edison said, I have discovered 999 ways to not build a lightbulb.

Which implies that if those who published it were a bit more honest/critical/responsible, they should have been able to do a better job of controlling for the effects which led them to think there's statistical significance when in fact there's none.

Huh? No. As I already said, you cannot go from replication rate to judgment of the honesty, competency, or insight of researchers without additional information. Most obviously, it's going to be massively influenced by the prior odds of the hypotheses.
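
A rough illustration of that dependence (the alphas and powers below are assumed, not taken from anywhere): holding the methods - and the researchers - completely fixed, the expected replication rate of published positive findings swings from about 76% down to about 6% purely as a function of how long-shot the field's hypotheses are.

```python
def expected_replication_rate(p_true, alpha=0.05, power=0.8,
                              rep_alpha=0.05, rep_power=0.8):
    # Fraction of published positives that are actually true...
    p_positive = p_true * power + (1 - p_true) * alpha
    ppv = p_true * power / p_positive
    # ...and the chance that a replication attempt comes back positive.
    return ppv * rep_power + (1 - ppv) * rep_alpha

for p_true in (0.5, 0.1, 0.01, 0.001):
    print(f"base rate of true hypotheses {p_true:>6.1%} -> "
          f"expected replication rate {expected_replication_rate(p_true):.0%}")
```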

If the prior odds are 1:10,000 you have no business publishing results at 0.05 confidence level.

No one has any business publishing at an arbitrary confidence level; the level should be chosen with respect to some even half-assed decision analysis. 1:10,000 or 1:1,000, doesn't matter.
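
As a sketch of what even a half-assed decision analysis looks like here (the costs, base rate, and crude power model are all invented for illustration), one can simply pick the threshold that minimizes expected loss - and the answer moves with the costs, not with any magic number:

```python
from statistics import NormalDist

N = NormalDist()

def power(alpha, effect_z=3.0):
    # Crude one-sided power for a test statistic centered at effect_z under H1.
    return 1 - N.cdf(N.inv_cdf(1 - alpha) - effect_z)

def expected_loss(alpha, p_true, cost_false_pos, cost_false_neg):
    return ((1 - p_true) * alpha * cost_false_pos
            + p_true * (1 - power(alpha)) * cost_false_neg)

candidates = (0.05, 0.01, 0.001, 1e-4, 1e-5)
# Cheap-to-chase false positives vs. costly misses, and the reverse:
for fp_cost, fn_cost in ((1, 100), (1000, 10)):
    best = min(candidates, key=lambda a: expected_loss(a, 0.01, fp_cost, fn_cost))
    print(f"costs (FP={fp_cost}, FN={fn_cost}): best threshold of the candidates = {best}")
```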

Comment author: Lumifer 04 October 2013 06:23:43PM 1 point [-]

As Edison said, I have discovered 999 ways to not build a lightbulb.

You're still ignoring the difference between a failed experiment and a failed replication.

Edison did not publish 999 papers each of them claiming that this is the way to build the lightbulb (at p=0.05).

you cannot go from replication rate to judgment of the honesty, competency, or insight of researchers without additional information. Most obviously, it's going to be massively influenced by the prior odds of the hypotheses.

And what exactly prevents the researchers from considering the prior odds when they are trying to figure out whether their results are really statistically significant?

I disagree with you -- if a researcher consistently publishes research that cannot be replicated I will call him a bad researcher.

Comment author: gwern 04 October 2013 06:45:08PM 0 points [-]

You're still ignoring the difference between a failed experiment and a failed replication. Edison did not publish 999 papers each of them claiming that this is the way to build the lightbulb (at p=0.05).

So? What does this have to do with my point about optimizing return from experimentation?

And what exactly prevents the researchers from considering the prior odds when they are trying to figure out whether their results are really statistically significant?

Nothing. But no one does that, because pointing out that a normal experiment has resulted in a posterior probability of <5% is not helpful - that could be said of all experiments - and running a single experiment so high-powered that it could single-handedly overcome the prior probability is ludicrously wasteful. You don't run a $50m clinical trial enrolling 50,000 people just because some drug looks interesting.
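
Putting illustrative numbers on both halves of that (the 1:10,000 prior is Lumifer's figure; the powers, effect size, and normal-approximation sample-size formula are assumptions added here): a p < 0.05 result leaves the posterior at a small fraction of a percent, and the threshold a single experiment would need to reach even odds pushes the required sample size up roughly threefold in this toy setup - a bill that would have to be paid for every merely interesting-looking hypothesis, nearly all of which are false.

```python
from statistics import NormalDist

N = NormalDist()
prior_odds = 1 / 10_000

def posterior(alpha, power):
    odds = prior_odds * (power / alpha)   # "significant vs. not" likelihood ratio
    return odds / (1 + odds)

print(f"posterior after p<0.05 at 80% power:  {posterior(0.05, 0.8):.3%}")   # ~0.160%
print(f"posterior after p<0.05 at 100% power: {posterior(0.05, 1.0):.3%}")   # ~0.200%

alpha_needed = 0.8 * prior_odds   # threshold giving 1:1 posterior odds at 80% power
print(f"threshold needed for even odds in one experiment: {alpha_needed:.0e}")

def n_per_group(alpha, power=0.8, d=0.2):
    # Two-sample normal approximation, two-sided test, standardized effect d.
    return 2 * ((N.inv_cdf(1 - alpha / 2) + N.inv_cdf(power)) / d) ** 2

print(f"n per group at alpha = 0.05:  {n_per_group(0.05):,.0f}")
print(f"n per group at alpha = 8e-5:  {n_per_group(alpha_needed):,.0f}")
```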

I disagree with you -- if a researcher consistently publishes research that cannot be replicated I will call him a bad researcher.

Too bad. You should get over that.

Comment author: Lumifer 04 October 2013 07:17:32PM *  1 point [-]

I think our disagreement comes (at least partially) from different views on what publishing research means.

I see your position as looking at publishing as something like "We did A, B, and C. We got the results X and Y. Take it for what it is. The end."

I'm looking at publishing more like this: "We did multiple experiments which did not give us the magical 0.05 number so we won't tell you about them. But hey, try #39 succeeded and we can publish it: we did A39, B39, and C39 and got the results X39 and Y39. The results are significant so we believe them to be meaningful and reflective of actual reality. Please give our drug to your patients."

The realities of scientific publishing are unfortunate (and yes, I know of efforts to ameliorate the problem in medical research). If people published all their research ("We did 50 runs with the following parameters, all failed, sure #39 showed statistical significance but we don't believe it") I would have zero problems with it. But that's not how the world currently works.

P.S. By the way, here is some research which failed replication (via this)

Comment author: gwern 04 October 2013 08:09:32PM 0 points [-]

The realities of scientific publishing are unfortunate (and yes, I know of efforts to ameliorate the problem in medical research). If people published all their research ("We did 50 runs with the following parameters, all failed, sure #39 showed statistical significance but we don't believe it") I would have zero problems with it. But that's not how the world currently works.

That would be a better world. But in this world, it would still be true that there is no universal, absolute, optimal percentage of experiments failing to replicate, and the optimal percentage is set by decision-theoretic/economic concerns.

Comment author: Lumifer 04 October 2013 08:21:54PM 1 point [-]

Experiments that fail to replicate at percentages greater than those expected from published confidence values (say, posterior probabilities) are evidence that the published confidence values are wrong.

A research process that consistently produces wrong confidence values has serious problems.
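
One way to make that comparison concrete (a sketch; the replication studies' assumed power of 0.8 and false-positive rate of 0.05 are illustrative): an observed replication rate implies an average probability that the published findings were true, which can then be set against whatever confidence the papers claimed.

```python
rep_power, rep_alpha = 0.80, 0.05   # assumed properties of the replication attempts

def implied_p_true(observed_replication_rate):
    # observed = p_true * rep_power + (1 - p_true) * rep_alpha, solved for p_true
    return (observed_replication_rate - rep_alpha) / (rep_power - rep_alpha)

for observed in (0.11, 0.36, 0.61):   # 11% is the rate discussed upthread; the others are illustrative
    print(f"observed replication rate {observed:.0%} -> "
          f"implied P(published finding was true) ~ {implied_p_true(observed):.0%}")
```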

Comment author: gwern 04 October 2013 10:47:48PM 0 points [-]

Experiments that fail to replicate at percentages greater than those expected from published confidence values (say, posterior probabilities) are evidence that the published confidence values are wrong.

How would you know? People do not produce posterior probabilities or credible intervals; they produce confidence intervals and p-values.

Comment author: Lumifer 07 October 2013 02:57:57PM 1 point [-]

I don't see how this point helps you.

Either the p-values in the papers are worthless in the sense of not reflecting the probability that the observed effect is real -- in which case the issue in the parent post stands.

Or the p-values, while not perfect, do reflect the probability the effect is real -- in which case they are falsified by the replication rates, and the issue in the parent post stands.