Even if you say that science isn't about solving real-world issues but about knowledge, I still think that a replication rate of 11% for breakthrough cancer research indicates that the field is not good at finding out what's going on.
I don't think a flat replication rate of 11% tells us anything without recourse to additional considerations. It's sort of like a Umeshism: if your experiments are not routinely failing, you aren't really experimenting. The best we can say is that 0% and 100% are both suboptimal...
For example, if I were told that anti-aging research was having an 11% replication rate for its 'stopping aging' treatments, I would regard this as shockingly high and a collective crime on par with the Nazis, and if anyone asked me, I would tell them that we need to spend far, far more on anti-aging research, because we clearly are not trying nearly enough crazy ideas. And if someone told me the clinical trials for curing balding were replicating at 89%, I would be a little uneasy and wonder what side effects we were exposing all these people to.
(Heck, you can't even tell much about the quality of the research from just a flat replication rate. If the prior odds are 1 in 10,000, then 11% looks pretty damn good. If the prior odds are 1 in 5, pretty damn bad.)
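To make the prior-odds point concrete, here is a toy calculation (my own sketch, not anything the original comment specifies; the power and false-positive numbers are illustrative assumptions) of the replication rate you'd *expect* under different priors, if only positive results get published and each is replicated once:

```python
# Toy model (illustrative assumptions): each tested hypothesis is true with
# prior probability `prior`; tests have statistical power 0.8 and a 0.05
# false-positive rate; only positive results are published, and each is
# replicated once with the same power.

def expected_replication_rate(prior, power=0.8, alpha=0.05):
    # Fraction of published positives that are true (positive predictive value).
    ppv = prior * power / (prior * power + (1 - prior) * alpha)
    # A true finding replicates with probability `power`; a false positive
    # "replicates" only by chance, with probability `alpha`.
    return ppv * power + (1 - ppv) * alpha

print(round(expected_replication_rate(1 / 10_000), 3))  # ~0.051
print(round(expected_replication_rate(1 / 5), 3))       # ~0.65
```

Under these assumptions, 1-in-10,000 prior odds predict a baseline replication rate around 5%, so an observed 11% would be beating expectations; 1-in-5 prior odds predict around 65%, so 11% would be dismal. Same flat number, opposite verdicts.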
What I would accept as a useful invocation of an 11% rate is, say, an economic analysis of the benefits showing that this represents over-investment (for example, falling pharma share prices), or surprise from planners/scientists/CEOs/bureaucrats who had held more optimistic assumptions (and so investment is likely being wasted). That sort of thing.
The replication rate of experiments is quite different from their success rate.
An 11% success rate is often shockingly high. An 11% replication rate means the researchers are sloppy, value publishing over confidence in the results, and are likely throwing way too much spaghetti at the wall...
For those who haven't heard, the NIH and NSF are no longer processing grants, leading to many negative downstream effects.
I've been directing my attention elsewhere lately and don't have anything informative to say about this. However, my uninformed intuition is that people who care about effective altruism (research in general, infrastructure development, X-risk mitigation, life extension... basically everything, actually) or have transhumanist leanings should be very concerned.
The consequences have already been pretty disastrous. To provide just one immediate example, the article says that the Centers for Disease Control and Prevention have shut down. I think that this is almost certain to directly cause a nontrivial number of deaths. Each additional day that this continues could have a huge negative impact down the line, perhaps delaying some key future discoveries by years. This event *might* be a small window of opportunity to prevent a lot of harm very cheaply.
So the questions are:
1) Can we do anything to remedy the situation?
2) If so, is it worth doing? (Opportunity costs, etc.)