The cost of a mammogram is about $100 and the cost of a breast biopsy is about $1000. Thus 2000 women X 10 years X $100/mammogram + 8% X 2000 women X $1000/biopsy = $2,160,000 per life saved.
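A quick sanity check of that arithmetic (a Python sketch; the ~2000-women figure, the 8% biopsy rate, and the prices are the ones quoted in this thread, not numbers from the report itself):

```python
# Rough cost per life saved, using the round numbers quoted in this thread:
# ~2,000 women screened annually for 10 years to save one life,
# $100 per mammogram, $1,000 per biopsy, ~8% of women biopsied over the decade.
women = 2000
years = 10
mammogram_cost = 100      # dollars, uninsured price quoted in this thread
biopsy_cost = 1000        # dollars
biopsy_rate = 0.08        # fraction of women biopsied at some point in the decade

screening_cost = women * years * mammogram_cost    # $2,000,000
biopsy_total = biopsy_rate * women * biopsy_cost   # $160,000
print(screening_cost + biopsy_total)               # 2,160,000 dollars per life saved
```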
This might be the calculation they actually looked at.
Good point - but they didn't give that as their justification. Also, you can get a better cost (in dollars and other measures) per life saved by giving women mammograms once every 2 years; and probably better still by giving them every 3 years.
Of course they wouldn't give that as a justification. Look at the reaction of the BC community over the change in recommendation with the justification of unnecessary anxiety/morbidity-- do you imagine there'd be less outrage if the reported reason for changing the guideline was money? They were retarded enough to bring this up during the health-care debate as it is...
To make the cost argument, you'd need to also present the cost differences caused by earlier detection of a small number of cancers. The cost of treating a single case might be greater than the cost of testing a thousand cases.
I suspect that the only way skipping early detection can be a win, cost-wise, is if it enables more people to die before they receive costly treatment.
Early detection can also lead to overdiagnosis. The report discusses that as a factor in their decision.
No, you wouldn't; the most dangerous cancers grow so fast that they often appear and grow to a dangerous size between annual mammograms. Stretching it out to every 2 or 3 years might actually reduce the number of lives saved even more than it reduces costs.
Stretching it out to every 2 or 3 years might actually reduce the number of lives saved even more than it reduces costs.
Think about that, and you'll realize that's impossible.
And if a million dollars is the bright line, this explains why the 3x-better age range of 50-60 got a pass.
Really? The cost you are quoting for the procedures sounds low for the U.S., but I'm no expert. (comment reworded for clarity)
Note: the following is a response to a misunderstanding of MichaelBishop's comment in its original form, and refers to the price US society is willing to pay to save a human life.
Not really - I've heard US$1e6 cited before as a cutoff.
As someone who hasn't paid much attention to this debate and doesn't have a lot of previous information about it, I don't feel you make a convincing case that the conclusion reached is wrong. There appear to be a lot of unstated assumptions and missing information in both your conclusion and that of the panel (based on skimming the report you linked).
The report lacks detailed information on a number of relevant factors, such as the risk of complications from biopsies, the expected costs in time and resources of following up on a false positive, the expected number of years of additional lifespan from early treatment and the size of the overdiagnosis effect. It's not clear whether these issues were quantified and included in the calculation underlying the recommendation or not but you don't seem to have taken them into account either.
Your analysis also seems to neglect the costs of undergoing screening for those who receive a negative result. Simply comparing the costs of a false positive to the cost of a missed diagnosis is not sufficient to draw a conclusion. You seem very confident that this report radically understates the benefits of screening, but it doesn't seem to me that you make a more convincing case than the original report.
I think it's fair to judge their bias by assuming that they made their decision based on the factors they said they made their decision on.
I don't see any particular evidence of bias in the factors they say they made their decision on in the linked report. The shortcoming I see in the report is that many of the factors mentioned are not quantified and it is not clear to me whether the recommendation given is based on a calculation that is not laid out in the report or whether the calculation was never done. There is simply not enough information in the report to determine whether the recommendation is justified.
Your criticism of the recommendation suffers from exactly the same shortcoming. You state your conclusion without quantifying the factors involved or showing the calculation you used to reach the conclusion. It also seems to me that you neglect to mention some relevant factors which the original report at least suggests were considered. Your implication is that the original report places a much lower value on human life than is used when making other decisions, or that it places an unduly high weight on the negative utility of 'anxiety', but neither the report nor your criticism provides any concrete quantification of those factors.
You state your conclusion without quantifying the factors involved or showing the calculation you used to reach the conclusion. ... neither the report nor your criticism provides any concrete quantification of those factors.
I wrote:
So, if we assume biannual mammograms, the conclusion is that the worry and inconvenience to 286 women who have false positives, and 71 women who receive biopsies, is worth more than one woman's life. If we suppose that a false positive causes one week of anxiety, that's a little over 5 years of anxiety, plus less than one year of soreness.
You pluck some numbers out of the air without any justification ('suppose that a false positive causes one week of anxiety'); you don't attempt to quantify the possible complications from surgical biopsy; you compare against 'one woman's life' when you should compare the expected number of extra years of life from early treatment; you don't account for the overdiagnosis problem mentioned in the report; and, as mentioned elsewhere, you ignore the costs of screening women who get a negative result.
You didn't introduce any new factors not mentioned in the original report and you don't give a neutral reader any reason to suppose that your hidden assumptions and calculations are any more valid than those underlying the report's recommendation. Indeed, an unbiased observer has every reason to place a great deal more faith in the report than in your estimates.
You may have a worthwhile point about a bias for action/inaction under different circumstances. You may be right that routine mammograms for 40-49 year olds are worthwhile. I don't think you make a strong case for your position vs. that of the report, though, and your apparent high level of confidence in your position suggests that you may be labouring under some biases of your own.
Sure, I could have been a lot more thorough, if I had several hours to devote to this post. I didn't need to. I pointed out the gap between the values the USPSTF used in their recommendation, and the values the FDA uses when regulating drugs. I think the gap is large enough that none of the factors you mention will come close to closing it. I suggest you take it on yourself to supply the figures if you think otherwise.
But you didn't accuse me of just being sloppy, or failing to account for some factors. You wrote (my emphasis):
You state your conclusion without quantifying the factors involved or showing the calculation you used to reach the conclusion. ... neither the report nor your criticism provides any concrete quantification of those factors.
And that is a blatantly false accusation.
You further wrote:
you don't give a neutral reader any reason to suppose that your hidden assumptions and calculations
As I already showed you my calculations twice, I can't imagine what you are referring to. There are no hidden calculations. Missing calculations, maybe. Hidden, no.
You didn't introduce any new factors not mentioned in the original report
Why would I do that? I think you're missing the point. This is not a post arguing in favor of mammograms.
I think you're missing the point. I'm not arguing against mammograms. From what I've seen here I'm still agnostic. The point of my original post was primarily to note the discrepancy between the apparent confidence you have in the wrongness of the report's conclusion and my impression that you failed to make your case at all convincingly.
I'm not aware of having any particular prior opinion on this issue. I was aware that it had come up in the back-and-forth debate over health care, but I had not consciously formed a strong opinion on it. My (probably biased) belief was that I was relatively impartial on this issue. To me there are numerous obvious logical flaws in your argument that rather undermine its use as an example on which to build a general theory of an action/inaction bias. The argument for such a bias appears to be premised on there being a watertight and unarguable case for mammograms, a case which it seems to me you failed to make. I don't appear to be the only one who wasn't convinced, based on the other comments.
It says:
By the USPSTF's estimates, it takes 1,904 women in their 40s being screened for a decade to save the life of one woman whose cancer would have gone undetected. As science journalist Merrill Goozner observes, that means that it costs as much as $20 million – not counting the interventions for false-positive results – for every life saved by regular mammograms for women in their 40s.
That's 19,040 mammograms for $20m, or about $1000/mammogram. Which is wrong. Mammograms cost about $100 for the uninsured.
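Redoing Goozner's arithmetic with both prices makes the discrepancy explicit (a sketch; the $100 figure is the uninsured price claimed above):

```python
# Goozner's implied price per mammogram vs. the ~$100 uninsured price quoted above.
mammograms = 1904 * 10              # 1,904 women screened annually for a decade
print(20_000_000 / mammograms)      # ~$1,050 per mammogram implied by the $20m figure
print(mammograms * 100)             # ~$1.9 million per life saved at $100/mammogram instead
```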
(For instance, the F-18 Raptor, a plane that costs about $350,000,000, has an ejector seat for the pilot.)
Maybe pilots who feel more confident in survivability are more effective (not to mention more willing to fly the plane). I've seen "leave no man behind" justified in this way.
Even if there is no such effect, the relevant comparison is not between the cost of the plane and the value of the pilot, but between the cost of adding the ejector seat (including, of course, the effective cost of having the plane be heavier and have more moving parts) and the value of the pilot.
If you have a $100bn device that occasionally kills its users and can halve the risk by spending $1, then the fact that the device costs $100bn has nothing whatever to do with whether you should spend that $1.
Having an ejector seat lets the pilot give up saving the plane and eject.
However, it isn't a very good example, because you have to factor in the pilot's estimate for the probability that the plane could be saved; and that probably takes off an order of magnitude.
I would also point out that pilots themselves are insanely expensive. Quite aside from basics like salary and subsidization, and an education that often runs up to a master's degree in physics or a related field, a top-of-the-line fighter pilot has had years, if not decades, of flying experience, most of which existed only to train him, and every year of that costs millions in support services. You've heard that a regular grunt costs hundreds of thousands or millions for every year overseas? Imagine how much it costs when that grunt is a B-2 or F-18 pilot!
That pilots cost so much is one of the major factors behind the recent success of drones.
(Just the initial training runs into the millions; http://www.airforcetimes.com/community/opinion/airforce_editorial_pilotcuts_071217/ says >$1m; even India can't train pilots for less than 400,000-500,000 USD, depending on how you interpret http://indiatoday.intoday.in/site/Story/71281/LATEST%20NEWS/IAF+not+keen+on+women+fighter+pilots.html )
Just want to point out that this should be referring to the F-22 Raptor. The F-18 (callsign: Hornet) is significantly older and less advanced, and only costs around $30 million.
Empirically, we have a much higher success rate at intervening in health than in economics.
It would be easier to establish this is true if the relevant class of interventions were better defined.
One issue is that individuals make better decisions for themselves than they do for government, i.e., the myth of the rational voter.
Yet in health, we see action as inherently dangerous; while in economics, we see inaction as inherently dangerous.
Again, there does appear to be lots of medical over-treatment so I'm not so sure your claim is true.
Again, there does appear to be lots of medical over-treatment so I'm not so sure your claim is true.
How about this: In health, the government sees action as inherently dangerous; while in economics, it sees inaction as inherently dangerous.
Yet in health, we see action as inherently dangerous; while in economics, we see inaction as inherently dangerous. Why?
That is a very good point; I appreciate that you noticed it. I would say that one reason this happens is that people resist change. In health, any action means something could go wrong. Thus, it becomes a mandate that every possible harm be checked for before such an action takes place. Hence, the inherent danger in action.
Whereas in economics, actions are usually taken to stop a change from happening (stimulus packages, bailing out car companies, the president going shopping, etc.). Thus, inaction would mean accepting change, which people always oppose. Hence, the inherent danger in inaction.
I suspect that a lot of it has to do with how much control people imagine that they have in economics vs. health.
Economics is just a lot of people making choices. This leads many to imagine that, to fix any economic problem, we just need to get everyone to make the right choices. Indeed, the "get everyone to" part is often elided, and so we imagine that "we just need to make the right choices". Thus, doing the right thing is naturally imagined as something within our power. Any economic problem can be solved; it's just a matter of will. There is therefore a bias towards action.
On the other hand, most people accept that much of their health is beyond anyone's control. They accept, for example, that no one can keep them from dying. Since they acknowledge that some bad health states cannot be solved, they fear putting themselves into such a state. At the same time, the body usually appears to work just fine without any intervention (e.g., your heart beats without anyone consciously making it do so). There is therefore a bias against action.
I think the false negative rate is wrong in that post. The original source says
The BCSC data indicate that false-positive mammography results are common in all age groups. The rate is highest among women age 40-49 years (97.8 per 1,000 women per screening round) and declines with each subsequent age decade (Table 7). The rate of false-negative mammography results is lowest among women age 40-49 years (1.0 per 1,000 women per screening round) and increases slightly with subsequent age decades.
Which suggests to me that P(negative|cancer) is not 1/1000 but 1/(actual cancer rate per thousand), which appears to be around 1/4 from the numbers in the paper. The false negative rate given here of 'up to 20%' seems much more in line with that interpretation than does the 1/1000 false negative rate.
The wording of the original report is quite misleading as it suggests the false negative rate increases with age but I think they actually mean that the number of false negatives per 1000 increases (because the cancer rate is increasing). The other link suggests that P(negative|cancer) is higher with younger women due to firmer breast tissue making it harder to distinguish a tumor from healthy tissue. Other pages I found through Google suggested the same.
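A rough reconstruction of that reading of the BCSC numbers (a sketch; the ~4-per-1,000 cancer rate is the estimate inferred from the paper above, not a figure taken directly from it):

```python
# The BCSC false-negative figure is per 1,000 women screened, not per cancer.
false_negatives_per_1000 = 1.0    # women 40-49, per screening round (BCSC figure quoted above)
cancers_per_1000 = 4.0            # rough cancer rate per round inferred from the paper (assumption)

p_negative_given_cancer = false_negatives_per_1000 / cancers_per_1000
print(p_negative_given_cancer)    # 0.25, i.e. in the ballpark of the 'up to 20%' miss rate
```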
It's hard to know whether we can call this level of cognitive dysfunction "bias". Bias usually means a failure to weigh or interpret the evidence correctly. In this case, they added up the numbers, looked at them, and said that 2 < 1.
They made the right decision and made the right announcement. Call me names if you will.
Why do you think they made the right decision?
I deleted that paragraph, since it distracts more than it helps.
The cost in anxiety, pain, and other health impacts of surgery, combined with the financial impact Laura mentioned, seems worse than the alternative. "Nothing is more important than human lives" is an effective premise to advocate so long as you don't actually make your decisions based on it.
You're not addressing the content of the post at all. It doesn't say that nothing is worth more than a life. It says, very specifically, that 5 years of anxiety, plus less than one year of soreness, is not worth more than a life. And I agree.
It also says that 500 years of anxiety and 100 years of soreness might well be worth more than a life, or at least that it wouldn't be crazy to defend that.
You're not addressing the content of the post at all.
I disagree and assert that you are neglecting the context.
It doesn't say that nothing is worth more than a life.
No, it doesn't. That is the plausible implicit assumption that could make the claim in question logically follow, and also the kind of thing that is often the right thing to say for social reasons even if your actual decision-making is not determined by it. A politically significant organisation should include money in its decision-making but probably not mention this in a press release.
It says, very specifically, that 5 years of anxiety, plus less than one year of soreness, is not worth more than a life. And I agree.
It very specifically leaves off financial cost. I very clearly included this in my answer above, and with good reason. Choosing to leave out the financial element is significant when doing so leads to absurd accusations like "they say 2>1". If you take a look at the claim I reject, it is that not agreeing with a preferred decision is a 'cognitive dysfunction', not a mere disagreement of preferences.
I don't address the post in general and now that Phil has removed the distracting paragraph I probably agree with most of the points he raises.
When Congress was debating the bank bailouts and the stimulus package, a lot could have been said in favor of doing nothing; but no one even suggested it.
Krugman is saying that Congress did essentially that - once the risk of total collapse of the world economy passed, they switched to doing nothing, and actions to alleviate unemployment are not even discussed.
2 weeks ago, the U.S. Preventive Services Task Force came out with new recommendations on breast cancer screening, including, "The USPSTF recommends against routine screening mammography in women aged 40 to 49 years."
The report says that you need to screen 1904 women for breast cancer to save one woman's life. (It doesn't say whether this means to screen 1904 women once, or once per year.) They decided that saving that one woman's life was outweighed by the "anxiety and breast cancer worry, as well as repeated visits and unwarranted imaging and biopsies" to the other 1903. The report strangely does not state a false positive rate for the test, but this page says that "It is estimated that a woman who has yearly mammograms between ages 40 and 49 has about a 30 percent chance of having a false-positive mammogram at some point in that decade and about a 7 percent to 8 percent chance of having a breast biopsy within the 10-year period." The report also does not describe the pain from a biopsy. This page on breast biopsies says, "Except for a minor sting from the injected anesthesia, patients usually feel no pain before or during a procedure. After a procedure, some patients may experience some soreness and pain. Usually, an over-the-counter drug is sufficient to alleviate the discomfort."
So, if we assume biannual mammograms, the conclusion is that the worry and inconvenience to 286 women who have false positives, and 71 women who receive biopsies, is worth more than one woman's life. If we suppose that a false positive causes one week of anxiety, that's a little over 5 years of anxiety, plus less than one year of soreness.
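Spelling out that calculation (a sketch; the one-week-of-anxiety and few-days-of-soreness durations are the suppositions made above, and "biannual" is taken to mean every two years, halving the per-decade counts):

```python
# Worry and soreness totals implied by the figures in the post.
women = 1904
false_positive_rate = 0.30   # chance of >=1 false positive over a decade of annual screening
biopsy_rate = 0.075          # chance of a biopsy over that decade (7-8%)

false_positives = women * false_positive_rate / 2   # ~286 women under every-two-years screening
biopsies = women * biopsy_rate / 2                  # ~71 women

anxiety_years = false_positives / 52                # one week of anxiety each -> ~5.5 years
soreness_years = biopsies * 3 / 365                 # ~3 days of soreness each -> well under a year
print(round(false_positives), round(biopsies), round(anxiety_years, 1), round(soreness_years, 2))
# 286 71 5.5 0.59
```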
(I heard on NPR that the USPSTF that made this recommendation included representatives from insurance companies, but no experts on breast cancer. So perhaps I'm barking up the wrong tree by looking for a cognitive bias more subtle than financial reward.)
I'm not shocked at the wrongness of the conclusion; just at its direction. The trade-off the USPSTF made between anxiety and death is only 2 orders of magnitude away from something that could be defended as reasonable. Usually, government agencies making this tradeoff are off by at least that many orders of magnitude, but in the opposite direction. (F-18 example deleted.)
So, what cognitive bias let this government agency move the decimal point in their head at least 4 points over from where they would normally put it?
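One way to unpack those orders of magnitude, using the figures above (my reconstruction; treating the "500 years of anxiety" figure from the comments as the defensible threshold is an assumption):

```python
actual_tradeoff_years = 286 / 52      # ~5.5 years of anxiety weighed against one life here
defensible_tradeoff_years = 500       # the 'might well be worth a life' figure from the comments

print(defensible_tradeoff_years / actual_tradeoff_years)   # ~91x: roughly 2 orders of magnitude
# If agencies normally overvalue a life by another ~2 orders of magnitude in the opposite
# direction, the swing is roughly 4 decimal places, as the question above suggests.
```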
I think the key is that this report recommended inaction rather than action. In certain contexts, inaction seems safer than action.
Imagine what would happen if the FDA were faced with an identical choice, but with action/inaction flipped: Say you have an anti-anxiety drug, which will eliminate anxiety of the same level caused by a false-positive on a mammogram, in 15% of the patients who take it - and it will kill only 1 out of every 2000 patients who take it. Per week.
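Those drug numbers appear to map directly onto the screening figures above (my reconstruction; the post does not spell this out):

```python
# The hypothetical drug's numbers look like the mammogram figures, flipped.
women = 1904
false_positives = 286                 # women who would be spared that week of anxiety
print(false_positives / women)        # ~0.15 -> the 15% who benefit
print(1 / women)                      # ~1/1904, i.e. roughly 1 in 2000 -> the death rate
```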
Would the FDA approve this drug? Approval, after all, does not mean recommending it; it means that the decision to use it can be left to the doctor and patient. The USPSTF report stressed that such decisions must always be left up to the doctor and patient; by the same standards, the FDA should certainly approve the drug. Yet I think it would not.
A puzzle is why we have the opposite bias in other contexts. When Congress was debating the bank bailouts and the stimulus package, a lot could have been said in favor of doing nothing; but no one even suggested it. Empirically, we have a much higher success rate at intervening in health than in economics. Yet in health, we regulate actions as if they were inherently dangerous; while in economics, we see inaction as inherently dangerous. Why?
ADDED: Perhaps we see regulation as inherently safer than a lack of regulation. "Regulating" (banning) drugs is seen as "safe". "Regulating" the economy, by bailing out banks, passing large stimulus bills, and passing new laws regulating banks, is seen as "safe". Recommending or not recommending mammograms isn't regulation either way; therefore, we perceive it neutrally.