I've definitely seen this in the academic literature. And it's extra annoying if the study used a small sample; the p-values are going to be large simply because the study didn't collect much evidence.
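To make that concrete (a minimal sketch with made-up numbers, using scipy's Fisher exact test): even a doubled improvement rate can come back "not significant" when each group has only ten patients.

```python
from scipy.stats import fisher_exact

# Hypothetical small study: 8/10 improved on treatment vs 4/10 on control,
# i.e. a doubled improvement rate.
table = [[8, 2],   # treatment: improved, not improved
         [4, 6]]   # control:   improved, not improved
_, p = fisher_exact(table)
print(f"p = {p:.3f}")  # roughly 0.17, well above the usual 0.05 cutoff
```

Run the same apparent effect with ten times the sample and p drops far below 0.05; the large p-value here reflects the tiny sample, not the absence of an effect.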
OTOH, chemotherapy isn't a very good example because there are other factors at work:
I think the fact that chemotherapy isn't a very good example demonstrates a broader problem with this post: that maybe in general your beliefs will be more accurate if you stick with the null hypothesis until you have significant evidence otherwise. Doing so often protects you from confirmation bias, bias towards doing something, and the more general failure to imagine alternative possibilities. Sure, there are some cases where, on the inside view, you should update before the studies come in, but there are also plenty of cases where your inside view is just wrong.
I like this, thanks for posting. I've noticed there's a contrarian thrill in declaring, "Actually there's no evidence for that" / "Actually that doesn't count as evidence."
Academics love it when some application of math/statistics allows them to say the opposite of what people expect. There's this sense that anything that contradicts "common sense" must be the enlightened way of thinking, rising above the "common," "ignorant" thinking of the masses (aka non-coastal America).
It's hard to tell, since while common sense is sometimes wrong, it's right more often than not. An idea being common sense shouldn't count against it, even though, as the article said, it's not conclusive.
Upvoted, but weighing in the other direction: Average Joe also updates on things he shouldn't, like marketing. I expect the doctor to have moved forward some in resistance to BS (though in practice, not as much as he would if he were consistently applying his education).
Unfortunately, the problem described here is all too common. Many 'experts' give advice as if their lack of knowledge were proof. That's just not the way the world works, but we have many examples of it that are probably salient to most people, though I don't wish to get into them.
Where this post is lacking is that it won't convince anyone who doesn't already agree with it, and it doesn't offer any real way to deal with the problem (not that it should; solving that would be quite an accomplishment).
Thus, this is simply another thing to keep in mind: experts use terms in ways that are literally meaningless to the rest of the populace, because the expert usage is actually wrong. If you are in these fields, push back on it.
I just thought of this in the context of this study on hydroxychloroquine in which 14/15 patients on the drug improved vs 13/15 patients treated with something else. To the average Joe, HCQ curing 14/15 people is an amazing positive result, and it's heartening to know that other antivirals are almost as good. To the galaxy-brained journalist, there's p>0.05 and so "the new study casts doubt on hydroxychloroquine effectiveness... a prime example of why Trump shouldn't be endorsing... actually isn't any more effective."
Well, we can say that 27/30 (90%) patients improved. With a very high level of confidence, we can say that this disease is less fatal than Ebola (which would have killed 26 or so).
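Running the numbers (a quick sketch; scipy 1.7+ has both pieces, and a Fisher exact test is one standard way to handle a 2x2 table this small):

```python
from scipy.stats import binomtest, fisher_exact

# 14/15 improved on HCQ vs 13/15 on the comparison treatment
_, p = fisher_exact([[14, 1], [13, 2]])
print(f"Fisher exact p = {p:.2f}")  # nowhere near 0.05; the arms are indistinguishable here

# Exact 95% confidence interval for the pooled improvement rate, 27/30
ci = binomtest(27, 30).proportion_ci(confidence_level=0.95)
print(f"95% CI for improvement rate: [{ci.low:.2f}, {ci.high:.2f}]")
```

The large p-value doesn't mean HCQ does nothing; it means a 15-per-arm study has almost no power to distinguish two treatments that both look pretty good.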
Upon seeing the title, I guessed this piece was going to argue that people are often right without evidence. Instead the OP argued against believing something without evidence.
In the intro class, he took one step backwards. At that point he's in the Valley of Bad Rationality: education made him worse than where he started.
But is the doctor worse or better for it (even assuming that this story, second-hand or third-hand or more, is accurate)? And how do we know?
In general, it's good to check your intuitions against evidence where possible (so, seek out experiments and treat experimentally validated hypotheses as much stronger than intuitions).
The valley being described here is the idea that you should just discard your intuitions in favor of the null hypothesis, not just when experiments have failed to reject the null hypothesis (though even here, they could just be underpowered!), but when experiments haven't been done at all!
It's a generalized form of an isolated demand for rigor, where whatever gets defined as a null hypothesis gets a free pass, but anything else has to prove itself to a high standard. And that leads to really poor performance in domains where evidence is hard to come by (quickly enough), relative to trusting intuitive priors and weak evidence when that's all that's available.
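A quick simulation of that failure mode (a sketch, assuming a real 0.5 SD effect and 15 subjects per group): when evidence arrives slowly and in small batches, the "null gets a free pass" rule keeps you wrong most of the time even when the effect is real.

```python
import numpy as np
from scipy.stats import ttest_ind

# Assumed setup: a real but modest effect (0.5 SD), n = 15 per group.
rng = np.random.default_rng(0)
n, effect, trials = 15, 0.5, 10_000

hits = sum(
    ttest_ind(rng.normal(0.0, 1.0, n), rng.normal(effect, 1.0, n)).pvalue < 0.05
    for _ in range(trials)
)

# Power comes out around 25%: roughly three out of four such experiments
# "find no evidence" for an effect that is real by construction.
print(f"fraction reaching p < 0.05: {hits / trials:.2f}")
```

Someone who refuses to update until p < 0.05 will, in this regime, keep the null most of the time; someone leaning on a sensible prior plus the weak evidence will usually do better.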
Correct: favoring hypothesis H or NOT-H simply because you label one of them the "null hypothesis" is bad in both directions, and equally bad when you don't have evidence either way.
In this case, intuition favors "more chemo should kill more cancer cells", and intuition counts as some evidence. The doctor ignores intuition (which is the only evidence we have here) and favors the opposite hypothesis because it's labeled "null hypothesis".
I was suggesting that there might be multiple ways of assigning the label of "null hypothesis":
1. X is good, more X is good. (Intuition favors "more chemo should kill more cancer cells.")
2. X has a cost; we go as far as the standards say, and stop there. (Chemo kills cells. This works on your cells, and on cancer cells. Maybe chemo isn't like shooting someone - the patient isn't that likely to die as a result - but just as you wouldn't shoot someone to improve their health unless it was absolutely necessary, and no more than necessary, chemo should be treated the same way.) "Do no harm." (This may implicitly distinguish between action and inaction.)
From the review article Hematopoietic Stem-Cell Transplantation by Edward A. Copelan, M.D.,
The chemotherapy used to treat cancers acts primarily on proliferating cells. Normal and malignant stem cells, however, are quiescent and therefore insensitive to therapy.
This is why HCTs are used to treat cancer, and also why indefinite chemotherapy would not be expected to help patients. It continues to damage healthy tissue, while not killing the lingering malignant quiescent stem cells.
If indefinite HCT was beneficial for treatment of cancer, we should expect that some evidence would be available for it. Absence of evidence is evidence of absence.
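The standard Bayesian way to cash that out is conservation of expected evidence:

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)$$

P(H) is a weighted average of the two posteriors, so if finding evidence E (say, a published trial showing benefit) would raise your credence in H, then failing to find E must lower it, in proportion to how strongly you expected E if H were true.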
My take is that the doctor was trying to coordinate care for his patient, build rapport with the family, and trying to resist being drawn into a rabbit hole discussion by one ill-informed and overconfident family member.
Ultimately, that family member concluded that the outcome pointed to the doctor's lack of reasoning faculties, rather than to his own insensitivity to social context and lack of basic research.
This blog post and the episode it describes was from back in 2008, but the research article I linked above had been out for two years at that point. Information was available.
Another piece of common sense is "doctor knows best." If the poster of the original linked blog post was going to interrogate the doctor, and certainly if the doctor was going to be held up as an example of bad rationality in your blog post, he should have at least done some research first.
This is why HCTs are used to treat cancer, and also why indefinite chemotherapy would not be expected to help patients. It continues to damage healthy tissue, while not killing the lingering malignant quiescent stem cells.
The original article's point is that "additional chemo might get rid of the last little bits of cancer that are too small to show up on scans". So even granting that chemo doesn't touch quiescent normal and malignant stem cells, there is still the question of whether there are lingering proliferating cells that aren't showing up on scans.
Of course, this doesn't mean the chemo is worth it. From the original article: "more chemo means a higher chance that the cancer won’t reappear, but also means a higher chance of serious side effects, and that we were going there to get his opinion on whether in this case the pros outweighed the cons or vice-versa".
If indefinite HCT was beneficial for treatment of cancer, we should expect that some evidence would be available for it. Absence of evidence is evidence of absence.
True. But 1) how strong is this evidence of absence? Perhaps it's weak since the standards for journals to publish stuff are pretty high. And 2) the author of the original article was clear that he didn't know how beneficial treatment was or what the risks were, and thus was seeking the expertise of the doctor.
My take is that the doctor was trying to coordinate care for his patient, build rapport with the family, and trying to resist being drawn into a rabbit hole discussion by one ill-informed and overconfident family member.
From the original article: "And good people, maybe I’m being unfair and underestimating this guy, but I swear to you that this fancy oncologist in this very prestigious institution didn’t seem to understand the difference between these two types of 'no evidence'" and "The most generous possible interpretation of what went on, but which would require me to attribute to him a thought process that he did not express at all, is that he understands the difference between the two types of 'no evidence' but has come to believe that doctors’ interpretations of imperfect evidence will systematically lead them to over-treat and so has adopted a rule of 'do nothing unless there is strong evidence that you should do something' as a second-best optimum."
If the doctor was trying to do what you said, I have a moderately strong expectation that the author would have picked up on that. Instead, as indicated in the quotes above, the author picked up a different vibe.
Another piece of common sense is "doctor knows best."
I think that is way too charitable. My model is that they usually meet some level of competency, but a) are often painfully unaware of research that has been done over the past ~10 years, and b) lack some basic reasoning ability. I can dig up links/excerpts if you are interested, but here are some things that come to mind:
Before responding, I think this is an opportunity for a productive and charitable back and forth, which I'd like to have with you! This also might be challenging, because there are already a bunch of threads to this argument. So I'll respond to a couple pieces of what you've said, and feel free to only respond to part of what I've said.
The original article seemed to have three points, or question-clusters.
Here's my interpretation and critique of the author's implied answer to the medical question. I didn't do this in my top-level comment, but let's charitably grant that he was fully aware of the quiescent stem cell issue.
When the OP initiated this conversation, the cancer had been in remission for a while. So what's the "common-sense" criterion for a stopping point?
The doctor has one, and it also makes "common sense." If you can't see it, and you've been fighting it past the point of not being able to see it for a while, it's probably not there. We know that chemo is harming the body and quality of life of the patient, and will continue to do so until the treatment is stopped. We can also resume the chemo if the cancer re-emerges.
I think my charitable interpretation of the doctor's perspective (a charity the OP did not grant the oncologist) is at the very least a reasonable argument. It's one that the OP made no effort to suss out, either in the conversation with the doctor or in reflection afterward.
This brings us to the social issue. My interpretation of the doctor is that he's dealing with a relative who is:
Now, I think that you and I agree that it is good and necessary for patients, or their advocates, to assess the competence of experts generally, including doctors. This is a tricky problem, and it's been written about at length in the rationalsphere.
My perspective is that when you do this, there's a tradeoff involved. On the one hand, if you do it well, you increase your chance of working with a competent expert. On the other hand, if you do it poorly, you may disrupt the formation of appropriate trust, and complicate the work of establishing relationships and sharing information.
In this case, I am arguing that the OP, by his own admission, did not do some of the common-sense things you'd do if you were trying to play the role of expert-vetter well. In particular, especially if you see yourself as a very rational, smart person, you'd make an effort to do some research in advance, and to understand the doctor's point of view. I agree that doctors, as you say, are not always up to date on the literature, and I've been on the receiving end of some bad care myself. However, a simple action you can take to safeguard against such cases, when you already know the diagnosis, is to find that literature yourself.
The OP seems not to have done that. He also wasn't asking about the doctor's up-to-dateness, but rather about the doctor's willingness to address his point about absence of evidence. So I assert that the OP seems to have been playing the role of expert-vetter poorly. Furthermore, he perceived and portrayed himself as doing it well, a sort of Dunning-Kruger effect.
He then uses this as a dig against the doctor, and his post is now being used as a lesson in rationality. This seems concerning to me. Not having been present, I don't want to assume I really know what was going on. But it rubs me the wrong way, and doesn't seem like a central case of either instrumental or epistemological rationality in action.
I don't want to get into the philosophical level, because this isn't a main source of my objection to the Overcoming Bias post. So I will leave that to the side.
You don't have to persuade me that doctors are not always as well-informed as we'd like them to be. And certainly I already know that our medical system is deeply flawed, even broken. As I say, I'm on board with the idea that patients or their relatives should inform themselves, and have a collaborative role with their doctors.
I'm saying that I think the poster of the Overcoming Bias post comes across to me as having done a below-average job of occupying that role. That's just the perception I get from reading the post. Maybe I'd feel differently if I'd been there in person to observe, but I can only go off the information given.
Before responding, I think this is an opportunity for a productive and charitable back and forth, which I'd like to have with you! This also might be challenging, because there are already a bunch of threads to this argument. So I'll respond to a couple pieces of what you've said, and feel free to only respond to part of what I've said.
Likewise! And sounds good! :)
The original article seemed to have three points, or question-clusters.
I have a feeling that we agree here but am not sure, so I will say it. My read is that there was one, singular, focal point of the article: that you should update incrementally instead of having some (arbitrary) threshold before you update at all. That's the central point and it feels to me like the threads you are opening are tangential.
I'm open to discussing these (IMO) tangential points, but I think it's important to note that they are DH4 (counterargument), not DH6 (refuting the central point), or even DH5 (refutation).
The author seems to be implying that it's common sense to apply at least N + 1 treatments in this case, to kill any remaining proliferating cells.
I think you are mistaken. The author said that he notices a tradeoff at play and wanted to get the doctor's opinion on that tradeoff. E.g., the tradeoff might come out in favor of not applying N+1 treatments. From the article:
"Going into the appointment, I had the idea (based on nothing but what seemed to me like common sense) that there was a tradeoff: more chemo means a higher chance that the cancer won’t reappear, but also means a higher chance of serious side effects, and that we were going there to get his opinion on whether in this case the pros outweighed the cons or vice-versa."
I would also (charitably) assume that the author feels uncertain about whether there are other tradeoffs/considerations at play, and wanted to hear from the doctor about that as well. I.e., first figure out all of the tradeoffs, and then make a decision based on the weights.
As for my take on cancer treatment, I'm at the same point as the author: I notice some tradeoffs but a) don't know how strong they are and b) probably don't have a complete picture.
The doctor has one, and it also makes "common sense." If you can't see it, and you've been fighting it past the point of not being able to see it for a while, it's probably not there. We know that chemo is harming the body and quality of life of the patient, and will continue to do so until the treatment is stopped. We can also resume the chemo if the cancer re-emerges.
Here is my model of how the author would reply to this: "You say it's probably not there. That might be true. I don't know how likely that is and wanted to get the doctor's opinion on it. I agree that chemo is harming the body. I see that as a con. But there is also a 'pro' of 'we might prevent a relapse'. I don't know how to weigh the pros and cons and want to get the doctor's opinion on how much weight should be assigned to each. The problem is that the doctor expressed a belief that 'we might prevent a relapse' doesn't even belong on the 'pros' list to begin with, and this belief stems from the incorrect notion that evidence needs to meet some threshold before we update at all."
My perspective is that when you do this, there's a tradeoff involved.
Agreed that there are tradeoffs and that they roughly take the shape you describe.
So I assert that the OP seems to have been playing the role of expert-vetter poorly.
Hm. I agree that it would have been good for the author to have done the research. It strikes me as either a) laziness or b) a lack of altruism (i.e., if he himself had the cancer, or a closer relative did, he would have been motivated enough to do the research). Both of which are things we all struggle with. Still, we should strive to do better. But on the other hand, I think getting into all of that would have distracted from the main point of the blog post, and so it feels to me like a good decision to leave it out.
I like the framework you've offered of counterargument, refutation, and refutation of the central point. I think it might be productive to identify, via a quote, our perception of the central point of the linked article.
What he said instead was that there was "no evidence" that additional chemo, after there are no signs of disease, did *any* additional good at all, and that the treatments therefore should have been stopped a long time ago and should certainly stop now.
So then I asked him whether by "no evidence" he meant that there have been lots of studies directly on this point which came back with the result that more chemo doesn’t help, or whether he meant that there was no evidence because there were few or no relevant studies. If the former was true, then it’d be pretty much game over: the case for discontinuing the chemo would be overwhelming.
But if the latter was true, then things would be much hazier: in the absence of conclusive evidence one way or the other, one would have to operate in the realm of interpreting imperfect evidence; one would have to make judgments based on anecdotal evidence, by theoretical knowledge of how the body works and how cancer works, or whatever.
I think that there are three ways of interpreting the central point of these sentences.
It seems to me unlikely that even the blog's author thought that this doctor did not understand point (1). I don't think this was the central point. If it was, publication bias means that there isn’t as much of a distinction between “evidence” and “no evidence“ as we might wish. Absence of evidence is even more evidence of absence if publication bias prevents publication of data against the efficacy of additional chemo treatment.
(2) might have been the central point. If so, then here is how I would attempt to refute it:
"Deciding how to decide" should be more heavily reliant on likely treatment options should the cancer reoccur, and the visible impacts of continued chemo on the patient. The OP's framing of the existence of conclusive studies as making a sharp difference in what ought to be done is just false. The risk of cancer reoccurring given N chemo treatments isn't the only factor at play informing the patient's risk of dying from that cancer, and the patient's goals exist on the other side of the is/ought gap.
If (3) is the central point, then I agree that in the absence of high-quality, "conclusive" studies, we have to find some other basis on which to assess a base rate. The question is, how will we do that? Or more practically and relevantly, whose judgment will we privilege in this way? The author frames the doctor as having not understood this distinction. Building a causal model is a rather subjective process, and employing it instrumentally involves coordinating a group of people around a common model of reality in order to attain an objective. We cannot ignore the way these coordination and power dynamics impact our "hazy" group rationality processes. They are inseparable from it.
I just came across something that seems similar: how they say "past performance is not an indicator of future results" in finance.
Uh... yes it is! It's not a perfect indicator. It might not even be a good indicator. But it is an indicator. In other words, it is not zero Bayesian evidence.
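To put numbers on "not zero Bayesian evidence" (a toy sketch; the prior and likelihoods are made up):

```python
# Toy Bayesian update: how much should five winning years shift our
# belief that a fund manager is genuinely skilled?
prior_odds = 0.1 / 0.9        # assumed prior: 10% chance of genuine skill
# Assumed likelihoods: a skilled manager beats the market in a given
# year 70% of the time; an unskilled one, 50% of the time.
likelihood_ratio = 0.7 / 0.5
posterior_odds = prior_odds * likelihood_ratio ** 5   # five winning years
posterior = posterior_odds / (1 + posterior_odds)
print(f"posterior P(skilled) = {posterior:.2f}")      # ~0.37
```

Under these assumptions, past performance moves the probability from 10% to roughly 37%: a real update, just a much smaller one than a naive reading of the track record would suggest. That's all the disclaimer should mean.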
Shouldn't the follow-up to "no evidence showing that it does any good" be "Is there any evidence showing it does harm?"
Have you seen this before? Any thoughts on how it might inform on your examples?
I am not defending the arrogance of some doctors, but I do wonder whether you truly gave the doctor in question a full opportunity, or whether you biased the discussion by stating things in a way that did not allow a good discussion to ensue and instead set up a more adversarial framework.
I wonder how much a belief in the Hippocratic Oath might be at play here.
Quick summary of Doctor, There are Two Kinds of “No Evidence”:
Let me be clear about the mistake the doctor is making: he's focused on conclusive evidence. To him, if the evidence isn't conclusive, it doesn't count.
I think this doctor is stuck in a Valley of Bad Rationality. Here's what I mean:
I think that a lot of people are stuck in this same valley.