"You can't pick winners in drug development" rhymes with a cluster of memes that are popular in the zeitgeist today:
Once you clarify any of these claims down to a specific proposition, some of them turn out to be true. But there is a general sense that you can get social approval for saying things whose upshot is "Thinking: it's not that great after all!"
Evidence in support of first-principles reasoning generally resorts to cherry-picking, IME. In contrast, when I look through what methodology I can find on breakthrough thinkers in biographies and autobiographies, I find something less like 'a flash of inside-view brilliance' and more like 'tried something over and over again in the presence of feedback loops, and kept trying to find simple models that would explain most, or the core, of the data' (to account for noise in the data-gathering process). Once a simple model was found, it was tested and extended to establish its domain of validity. These thinkers themselves often point out multiple false starts where elegant inside-view models were developed but eventually had to be abandoned. We don't see as many of those looking back, since people rarely record them unless the abandonment was noisy. Scott points to several in the history of depression models, IIRC.
Which I suppose is to say that I don't think you can pick winners using first principles reasoning even though first principles reasoning is how we move forward. Like an exploratory/confirmatory thing.
I do agree that 'thinking isn't so great' serves much more as an excuse to avoid the 99% perspiration than as a claim about the 1% inspiration. 'Thinking isn't so great' can be helpful when it points people towards the idea that 'summon sapience' includes more than symbolic analytic techniques. Presence is expensive, especially at first, so people try to avoid it.
> …is about to present its results from two trials.
Might have been better to wait for the result?
And...yep, 33% objective response rates, which is great. https://www.google.com/amp/s/immuno-oncologynews.com/2018/04/20/dynavax-immunotherapy-and-keytruda-fight-head-and-neck-cancer-trial-shows/%3famp
Fascinating and also excellent news.
Is there a mechanism for betting on individual drugs more precisely than just picking the stock for the companies which hold them? A prediction market for drugs?
I don't believe so (at least I've never heard of a public one; sometimes large companies have internal prediction markets).
Related through the prediction of which drugs will succeed and which won't: are you familiar with Roger M. Stein? He does financial engineering research at MIT, and has done some work on different ways to fund drug research. In particular, he has suggested a fund for securities made up of pharmaceutical IP, which would work by re-securitizing a batch of drugs after each stage of trials (as I understand it).
The pitch for the fund in a TED talk is here.
The list of publications from his website is here: http://www.rogermstein.com/publication-list/
I think the relevant papers are “Commercializing biomedical research through securitization techniques,” “Can Financial Engineering Cure Cancer?,” and “Financing drug discovery for orphan diseases.”
My knowledge of finance is not good, so I am hoping someone can verify whether this passes the sniff test.
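For intuition, the re-securitization idea can be sketched as a toy portfolio calculation: a fund holding many drug programs has a high chance of at least one approval even when any single program will probably fail. All the numbers below (stage success rates, costs, payoffs, fund size) are invented for illustration and are not taken from Stein's papers:

```python
import math

# Toy model of a diversified drug "megafund" (illustrative numbers only).
stage_success = [0.6, 0.35, 0.6]        # assumed Phase I/II/III success rates
p_approval = math.prod(stage_success)   # chance one program reaches market (~0.126)

cost_per_program = 200e6    # assumed all-in development cost, dollars
payoff_if_approved = 2e9    # assumed value of an approved drug, dollars

n_programs = 40             # the fund diversifies across many programs
p_at_least_one = 1 - (1 - p_approval) ** n_programs

expected_value = n_programs * (p_approval * payoff_if_approved - cost_per_program)

print(f"P(single program approved) = {p_approval:.3f}")
print(f"P(fund has >= 1 approval)  = {p_at_least_one:.3f}")
print(f"Expected net value of fund = ${expected_value / 1e9:.2f}B")
```

The point of the structure, as I understand the pitch, is that the diversified pool has predictable enough aggregate behavior to support debt tranches, not just equity-style risk.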
Epistemic Status: Moderate
Way back in 2015 I was writing about the connection between cancer remissions and the immune response to infection. To recap the facts:
At the time, I predicted that if only there were a delivery mechanism that could more effectively isolate inflammatory cytokines to the tumor site, it might work safely for more than just special cases like isolated limb perfusion; and that there might be some delivery mechanism that made a bacterial therapy like Coley’s toxins work.
The heuristic here was that when I went looking for the biggest responses (remissions, complete tumor regressions) in the toughest cases (metastatic cancers, sarcomas which don’t respond to chemotherapy), many of them seemed to involve this picture of acute, intense activation of the innate immune response.
It turns out that two new therapies with very good results pretty much support this perspective.
CpG oligodeoxynucleotides, short DNA sequences containing a motif characteristic of bacterial DNA, are the active ingredient in Coley’s toxins; they are the component of bacterial lysate that triggers the immunostimulatory effects.
Today, SD-101, a CpG oligodeoxynucleotide drug produced by the biotech company Dynavax, is about to present its results from two trials.
This January, Stanford scientists reported that SD-101 combined with another immunotherapy — but no traditional chemotherapy — eradicated both implanted and spontaneous tumors when injected into mice, both at the injection site and elsewhere.
We’ll have to see the results of the human trials, but this looks promising.
Another drug, NKTR-214, is an engineered version of the inflammatory cytokine IL-2, designed to localize more effectively to tumors. The IL-2 core is attached to a chain of polyethylene glycols, which release slowly in the body, preferentially activating the tumor-killing receptors for IL-2 and resulting in 500x higher concentrations in tumors than a similar quantity of IL-2 alone. This is the tumor-localizing property that could make inflammatory cytokines safe.
In patients with advanced or metastatic solid tumors, previously treated with PD-1 inhibitors, NKTR-214 resulted in 23% of patients experiencing partial tumor regression.
While a partial response still doesn’t mean much chance of recovery, it’s notable: _any_ treatment for advanced cancers with more than a 20% response rate is remarkable. (Chemotherapy usually produces partial response rates in the 2-20% range for metastatic cancers, depending on cancer type and drug regimen.)
It’s early days yet, but I continue to think that immunostimulants have a lot of potential in cancer treatment.
Moreover, I think this is a little bit of evidence against the frequently heard claim that it’s impossible to “pick winners” in biotech.
The conventional wisdom is that you can’t know ahead of time which drugs that seem to work in preclinical studies (in vitro or in mice) will succeed in humans.
Most preclinical drug candidates _do_ fail, it’s true. And there are a lot of reasons to expect this: mouse models are not perfect proxies for human diseases, experimental error and outright fraud often make early results unreplicable, and we don’t understand all the complexities of biochemistry that might make a proposed mechanism fail.
But the probability distribution over drug candidates can’t be uniform, or it would have been impossible to ever develop effective drugs! The search space of possibly bioactive molecules is too large, and the cost of experiments too high, to get successes if drugs were tested truly at random. We would never have gotten chemotherapy that way.
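A back-of-envelope calculation makes the point concrete. The specific numbers below are loose assumptions (the ~10^60 figure is a commonly cited rough estimate of drug-like chemical space; the others are invented), but the conclusion is insensitive to them by many orders of magnitude:

```python
# Why drug discovery cannot be uniform random search over chemical space.
# Numbers are illustrative assumptions, not measured values.

searchable_molecules = 1e60   # rough commonly cited size of drug-like chemical space
effective_drugs = 1e4         # generous guess: molecules that would work for one disease

p_uniform_hit = effective_drugs / searchable_molecules   # per-molecule hit rate

trials_per_year = 1e4         # assumed screening throughput of one program
expected_years = 1 / (p_uniform_hit * trials_per_year)

print(f"P(random molecule works)     = {p_uniform_hit:.0e}")
print(f"Expected years to first hit  = {expected_years:.0e}")
```

Under a uniform prior the expected wait is astronomically long, so the fact that we have effective drugs at all means candidate selection concentrates probability mass enormously before anything reaches a trial.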
I think it’s likely that using the simple heuristic of “big effects in tough cases point to a real mechanism somewhere nearby” gets you better-than-chance predictions of what will work in human trials.
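One way to frame that heuristic is as a Bayesian update: dramatic responses in refractory cancers are much more likely if a real mechanism is present than if the early data is noise. All probabilities below are made up purely for illustration:

```python
# The "big effects in tough cases" heuristic as a Bayesian update.
# All numbers are invented for illustration.

p_real = 0.10             # assumed base rate: preclinical candidates that work in humans

# Assumed likelihoods of seeing dramatic early responses
# (e.g. remissions in chemotherapy-resistant metastatic cancers):
p_big_given_real = 0.50   # real mechanisms often produce big effects
p_big_given_noise = 0.05  # noise, fraud, and model artifacts rarely look this dramatic

# Bayes' rule: P(real | big effect in tough cases)
posterior = (p_big_given_real * p_real) / (
    p_big_given_real * p_real + p_big_given_noise * (1 - p_real)
)
print(f"P(works in humans | big effect in tough cases) = {posterior:.2f}")
```

Even with these modest assumptions the posterior rises well above the base rate, which is all "better-than-chance prediction" requires.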