gwern comments on Should effective altruists care about the US gov't shutdown and can we do anything? - Less Wrong

Post author: Ishaan, 01 October 2013 08:24PM



You are viewing a single comment's thread.

Comment author: ChristianKl, 02 October 2013 04:49:29PM, 1 point

> Outside view

You might be, but I'm not, really.

> But we are making progress in biology at a rapid rate. For example, the use of genetic markers to figure out how to treat different cancers was first proposed in the early 1990s and is now a highly successful clinical method.

That's a crude method of measuring success.

The cost of developing new drugs rises exponentially, in line with Eroom's law. Big Pharma constantly lays off people.

A problem like obesity grows worse over the years instead of improving. Diabetes likewise gets worse.

Even if you say that science isn't about solving real-world issues but about knowledge, I also think that replication rates of 11% in the case of breakthrough cancer research indicate that the field is not good at finding out what's going on.

Comment author: gwern, 04 October 2013 02:55:11PM, 0 points

> Even if you say that science isn't about solving real-world issues but about knowledge, I also think that replication rates of 11% in the case of breakthrough cancer research indicate that the field is not good at finding out what's going on.

I don't think a flat replication rate of 11% tells us anything without recourse to additional considerations. It's sort of like an Umeshism: if your experiments are not routinely failing, you aren't really experimenting. The best we can say is that 0% and 100% are both suboptimal...

For example, if I were told that anti-aging research was seeing an 11% replication rate for its 'stopping aging' treatments, I would regard this as shockingly too high - a collective crime on par with the Nazis - and, if anyone asked me, would tell them that we need to spend far, far more on anti-aging research, because we clearly are not trying nearly enough crazy ideas. And if someone told me the clinical trials for curing baldness were replicating at 89%, I would be a little uneasy and wonder what side effects we were exposing all these people to.

(Heck, you can't even tell much about the quality of the research from just a flat replication rate. If the prior odds are 1 in 10,000, then 11% looks pretty damn good. If the prior odds are 1 in 5, pretty damn bad.)
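To make the dependence on prior odds concrete, here is a minimal sketch of the standard true-positive-rate calculation, assuming a 0.05 significance threshold and 80% power (both values are illustrative assumptions, not figures from the thread):

```python
# What fraction of "significant" published results are true (and so should
# survive replication), as a function of the prior odds that a tested
# hypothesis is correct? alpha and power below are assumed values.

def true_positive_fraction(prior_odds, alpha=0.05, power=0.80):
    """Fraction of p < alpha results that reflect real effects."""
    p_true = prior_odds / (1 + prior_odds)   # odds -> probability
    true_pos = p_true * power                # real effects detected
    false_pos = (1 - p_true) * alpha         # nulls slipping under alpha
    return true_pos / (true_pos + false_pos)

for odds in (1 / 10_000, 1 / 100, 1 / 5):
    print(f"prior odds {odds:.4f}: {true_positive_fraction(odds):.1%} "
          "of positives should replicate")
```

Under these assumptions, 1:10,000 prior odds predict that only about 0.2% of positives replicate, so an observed 11% would indeed look pretty damn good; 1:5 prior odds predict about 76%, making 11% look pretty damn bad.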

What I would accept as a useful invocation of an 11% rate is, say, an economic analysis of the benefits showing that this represents over-investment (for example, falling pharmacorp share prices), or surprise among planners/scientists/CEOs/bureaucrats who had held more optimistic assumptions (and so investment is likely being wasted). That sort of thing.

Comment author: Lumifer, 04 October 2013 04:09:10PM, 1 point

The replication rate of experiments is quite different from their success rate.

An 11% success rate is often shockingly high. An 11% replication rate means the researchers are sloppy, value publishing over confidence in the results, and likely spend way too much time throwing spaghetti at the wall...

Comment author: gwern, 04 October 2013 04:49:03PM, 0 points

Even granting your distinction, the exact same argument still applies: just substitute in an additional rate of, say, a 10% chance of going from replication to whatever you choose to define as 'success'. You cannot say that an 11% replication rate and then a 1.1% success rate is optimal - or suboptimal - without doing more intellectual work!
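A toy calculation of the 'more intellectual work' being pointed at: whether the same 1.1% success rate is a bargain or a disaster depends entirely on the costs and payoffs, and every number below is made up for illustration:

```python
# Toy numbers only: neither the cost per trial nor the payoff per success
# appears anywhere in the thread.

replication_rate = 0.11
success_given_replication = 0.10     # the hypothetical extra 10% hurdle
success_rate = replication_rate * success_given_replication   # = 0.011

def net_value(success_rate, cost_per_trial, payoff_per_success):
    """Expected value of running one more trial."""
    return success_rate * payoff_per_success - cost_per_trial

print(net_value(success_rate, cost_per_trial=1, payoff_per_success=1_000))  # +10.0: run more
print(net_value(success_rate, cost_per_trial=1, payoff_per_success=50))     # -0.45: stop
```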

Comment author: Lumifer, 04 October 2013 05:02:07PM, 2 points

No, I don't think so. An 11% replication rate means that 89% of the published results are junk and external observers have no problem seeing that. Which implies that if those who published them were a bit more honest/critical/responsible, they should have been able to do a better job of controlling for the effects which led them to think there's statistical significance when in fact there's none.

If the prior odds are 1:10,000, you have no business publishing results at a 0.05 significance level.
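A quick sketch of the arithmetic behind this claim, assuming a deliberately generous Bayes factor of 20 for a just-significant result (the exact value is an assumption; standard bounds for p ≈ 0.05 are considerably lower):

```python
# Update 1:10,000 prior odds by an assumed Bayes factor of 20.
prior_odds = 1 / 10_000
bayes_factor = 20
posterior_odds = prior_odds * bayes_factor
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability the effect is real: {posterior_prob:.2%}")  # ~0.20%
```

Even on generous assumptions, a p < 0.05 result leaves the hypothesis almost certainly false.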

Comment author: gwern, 04 October 2013 05:39:46PM, 0 points

> An 11% replication rate means that 89% of the published results are junk and external observers have no problem seeing that.

Yes, so? As Edison said, "I have discovered 999 ways not to build a lightbulb."

> Which implies that if those who published them were a bit more honest/critical/responsible, they should have been able to do a better job of controlling for the effects which led them to think there's statistical significance when in fact there's none.

Huh? No. As I already said, you cannot go from a replication rate to a judgment of the honesty, competency, or insight of researchers without additional information. Most obviously, it's going to be massively influenced by the prior odds of the hypotheses.

> If the prior odds are 1:10,000, you have no business publishing results at a 0.05 significance level.

No one has any business publishing at an arbitrary significance level; the threshold should be chosen with respect to some decision analysis, even a half-assed one. 1:10,000 or 1:1,000 - it doesn't matter.

Comment author: Lumifer, 04 October 2013 06:23:43PM, 1 point

> As Edison said, "I have discovered 999 ways not to build a lightbulb."

You're still ignoring the difference between a failed experiment and a failed replication.

Edison did not publish 999 papers, each claiming that this is the way to build a lightbulb (at p = 0.05).

> you cannot go from a replication rate to a judgment of the honesty, competency, or insight of researchers without additional information. Most obviously, it's going to be massively influenced by the prior odds of the hypotheses.

And what exactly prevents the researchers from considering the prior odds when they are trying to figure out whether their results are really statistically significant?

I disagree with you -- if a researcher consistently publishes research that cannot be replicated, I will call him a bad researcher.

Comment author: gwern, 04 October 2013 06:45:08PM, 0 points

> You're still ignoring the difference between a failed experiment and a failed replication. Edison did not publish 999 papers, each claiming that this is the way to build a lightbulb (at p = 0.05).

So? What does this have to do with my point about optimizing return from experimentation?

> And what exactly prevents the researchers from considering the prior odds when they are trying to figure out whether their results are really statistically significant?

Nothing. But no one does that, because pointing out that a normal experiment has resulted in a posterior probability of <5% is not helpful (that could be said of all experiments), and running a single experiment so high-powered that it could single-handedly overcome the prior probability is ludicrously wasteful. You don't run a $50m clinical trial enrolling 50,000 people just because some drug looks interesting.
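A rough sketch of the waste being described, using the crude point-alternative approximation BF ≈ exp(z²/2) (an illustrative assumption, not a claim from the thread):

```python
import math

# Assumed target: move 1:10,000 prior odds to 20:1 posterior odds (~95%).
prior_odds = 1 / 10_000
target_posterior_odds = 20
required_bf = target_posterior_odds / prior_odds
print(f"required Bayes factor: {required_bf:,.0f}")        # 200,000

# Crude point-alternative approximation: BF ~ exp(z^2 / 2).
z = math.sqrt(2 * math.log(required_bf))
print(f"needed z-score: {z:.1f} (vs. 1.96 for p = 0.05)")  # ~4.9

# For a fixed effect size, required sample size scales with z^2:
print(f"~{(z / 1.96) ** 2:.1f}x the subjects of a trial powered at p = 0.05")
```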

> I disagree with you -- if a researcher consistently publishes research that cannot be replicated, I will call him a bad researcher.

Too bad. You should get over that.

Comment author: Lumifer, 04 October 2013 07:17:32PM, 1 point

I think our disagreement comes (at least partially) from different views on what publishing research means.

I see your position as treating publishing as something like: "We did A, B, and C. We got the results X and Y. Take it for what it is. The end."

I look at publishing more like this: "We did multiple experiments which did not give us the magical 0.05 number, so we won't tell you about them. But hey, try #39 succeeded and we can publish it: we did A39, B39, and C39 and got the results X39 and Y39. The results are significant, so we believe them to be meaningful and reflective of actual reality. Please give our drug to your patients."

The realities of scientific publishing are unfortunate (and yes, I know of efforts to ameliorate the problem in medical research). If people published all their research ("We did 50 runs with the following parameters; all failed; sure, #39 showed statistical significance, but we don't believe it"), I would have zero problems with it. But that's not how the world currently works.

P.S. By the way, here is some research which failed replication (via this)

Comment author: gwern, 04 October 2013 08:09:32PM, 0 points

> The realities of scientific publishing are unfortunate (and yes, I know of efforts to ameliorate the problem in medical research). If people published all their research ("We did 50 runs with the following parameters; all failed; sure, #39 showed statistical significance, but we don't believe it"), I would have zero problems with it. But that's not how the world currently works.

That would be a better world. But in this world, it would still be true that there is no universal, absolute, optimal percentage of experiments failing to replicate, and the optimal percentage is set by decision-theoretic/economic concerns.
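One toy model of that closing claim, in which every modeling choice is an illustrative assumption: treat each experiment as a bet on a hypothesis with prior probability p, and value a confirmed long shot by its surprisal:

```python
import math

# If a confirmed finding is worth its surprisal, -log2(p) bits, the expected
# yield per experiment is p * -log2(p), which peaks at an interior p. So the
# optimal failure rate is strictly between 0% and 100%.

def expected_yield(p):
    return p * -math.log2(p)   # chance it's true x information if true

best_p = max((expected_yield(p), p) for p in (i / 1000 for i in range(1, 1000)))[1]
print(f"yield-maximizing prior: {best_p:.2f} -> "
      f"~{1 - best_p:.0%} of experiments 'failing' is optimal here")
```

Change the payoff model and the optimum moves, which is the point: the right replication rate is an output of the economics, not a universal constant.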