Personal Psychiatric Analysis
Imagine reading about the following result buried in a prestigious journal:
We administered [Drug X] to 10,000 patients aged 80+, selected to be statistically representative of the population. None had a prior medical history suggesting unusual conditions, beyond the normal range of issues accumulated over a lifetime. One third of the patients were selected as a control group, and the rest were entered into a longitudinal study of [Drug X] in which they were given varying doses over a 30-year timespan. [Please read charitably and flesh this out to be a good, well-run longitudinal study by your personal standards. The important thing is the number of patients involved.]
Of the patients administered [Drug X] 1x/month for 10 years, we found an increase in average lifespan of 1 year compared to standard actuarial tables. We are unsure of the cause. We also had one patient who has yet to die after 30 years and shows no signs of aging. Our drug has effectively demonstrated its properties as a medication designed to reduce cholesterol and will proceed to approval for normal prescription.
Now, personally, reading this I would be completely uninterested in the normal result and fascinated by the one crazy outlier. Living to the age of 110 is abnormal enough that among 6,666 people selected as a statistical representation of the population, it is extremely unlikely that anyone would live that long, much less continue performing at the apparent health of an 80-year-old.
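The rarity argument can be made concrete with a binomial sketch. The per-person survival probability used here is purely illustrative, not an actuarial figure:

```python
def prob_at_least_one(p, n):
    """Probability that at least one of n independent people
    hits a rare outcome with per-person probability p."""
    return 1 - (1 - p) ** n

# Hypothetical: suppose an 80-year-old has a 1-in-100,000 chance
# of reaching 110 (an illustrative number, not from actuarial tables).
p = 1e-5
n = 6666  # treated patients in the example
print(prob_at_least_one(p, n))  # roughly 0.064
```

Even granting a fairly generous p, chance alone makes a lone 110-year-old merely unlikely; what has essentially zero base rate is the "no signs of aging" part, which is why the outlier, and not the one-year average gain, is the interesting datum.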
How small would the sample size have to be before you would consider trying the drug yourself, just to see if you, too, lived forever as long as you took it? What adverse effects and hassles would you put up with to try it? Would these factors interact to influence your decision? (Mild headaches and a pill 4x/day in exchange for possibly apparent eternal life? Sign me up!)
This example is an oversimplification to make a point: clinical trials often contain odd outliers, such as patients who went into full remission, recovered completely, or were cured of schizophrenia outright.
In the example above, if the sample size had been 10 people, 9 of whom had no adverse effects and one of whom lived forever, I would take it. I have been known to try nootropics with little or no proven effect, because there are outliers in their samples who have claimed tremendously helpful effects and few people with adverse effects, and I want to see if I get lucky. Even if it turned out that only the right placebo caused changes improving my effectiveness, it would be worth a shot.
As far as I know, psychiatrists cannot reliably predict that a given drug will improve a patient's long-term diagnosis, and psychiatrists/psychologists cannot even reliably agree on what condition a patient is manifesting. Mental disorders appear to resist reliable diagnosis and treatment, unlike, say, a broken leg or a sucking chest wound. I have learned that Cognitive Behavioral Therapy (CBT) shows consistent results against a number of disorders, so I have endeavored to learn and apply CBT to my own life without a psychologist or psychiatrist. It has proven extremely effective and worthwhile.
Here is the topic for discussion: should we trust psychiatric analysis using frequentist statistics and ignore the outliers, or should we individually analyze psychiatric studies to see if they contain outliers who show symptoms which we personally desire? Should we act differently when seeking nootropics to improve performance than we do when seeking medication for crippling OCD? Should we trust our psychiatrists, who are probably not very statistically savvy and probably don't read the cases of the outliers?
Where are the holes in my logic, which suggests that psychiatrists who think like medical doctors/general practitioners have a completely incorrect perspective (the law of averages) for finding and testing potential solutions in the extremely personalized field of psychotherapy/psychiatry (in which everyone is, actually, a unique snowflake)?
This is more of a thought-provoking prompt than a well-researched post, so please excuse any apparent assertions in the above, all of which is provided for the sake of argument and arises from anecdata.
MetaMed: Evidence-Based Healthcare
In a world where 85% of doctors can't solve simple Bayesian word problems...
In a world where only 20.9% of the published results that a pharmaceutical company tries to investigate for development purposes fully replicate...
In a world where "p-values" are anything the author wants them to be...
...and where there are all sorts of amazing technologies and techniques which nobody at your hospital has ever heard of...
...there's also MetaMed. Instead of just having “evidence-based medicine” in journals that doctors don't actually read, MetaMed will provide you with actual evidence-based healthcare. Their Chairman and CTO is Jaan Tallinn (cofounder of Skype, major funder of xrisk-related endeavors), one of their major VCs is Peter Thiel (major funder of MIRI), their management includes some names LWers will find familiar, and their researchers know math and stats and in many cases have also read LessWrong. If you have a sufficiently serious problem and can afford their service, MetaMed will (a) put someone on reading the relevant research literature who understands real statistics and can tell whether the paper is trustworthy; and (b) refer you to a cooperative doctor in their network who can carry out the therapies they find.
MetaMed was partially inspired by the case of a woman who had her fingertip chopped off, was told by the hospital that she was screwed, and then read through an awful lot of literature on her own until she found someone working on an advanced regenerative therapy that let her actually grow the fingertip back. The idea behind MetaMed isn't just that they will scour the literature to find how the best experimentally supported treatment differs from the average wisdom - people who regularly read LW will be aware that this is often a pretty large divergence - but that they will also look for this sort of very recent technology that most hospitals won't have heard about.
This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.) If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately. We can’t all have nice things like this someday unless somebody pays for it while it’s still new and expensive. And the regular healthcare system really is bad enough at science (especially in the US, but science is difficult everywhere) that there's no point in condemning anyone to it when they can afford better.
I also got my hands on a copy of MetaMed's standard list of citations that they use to support points to reporters. What follows isn't nearly everything on MetaMed's list, just the items I found most interesting.
Dealing with the high quantity of scientific error in medicine
In a recent article, John Ioannidis describes a very high proportion of medical research as wrong.
Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what's the big deal? The other paper headed off that claim.

He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community's two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid.

Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.
Part of the problem is that surprising results get more interest, and surprising results are more likely to be wrong. (I'm not dead certain of this-- if the baseline beliefs are highly likely to be wrong, surprising beliefs become somewhat less likely to be wrong.) Replication is boring. Failure to replicate a bright shiny surprising belief is boring. A tremendous amount isn't checked, and that's before you start considering that a lot of medical research is funded by companies that want to sell something.
Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
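Corollaries 1 and 2 fall out of the positive-predictive-value formula in Ioannidis's paper: if R is the pre-study odds that a tested relationship is true, alpha the significance threshold, and (1 - beta) the study's power, then PPV = (1 - beta)R / ((1 - beta)R + alpha). A minimal sketch (the R and power values are illustrative, not from the paper):

```python
def ppv(power, prior_odds, alpha=0.05):
    """Positive predictive value: the probability that a statistically
    'significant' finding is actually true (Ioannidis 2005)."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# Hypothetical field where 1 in 10 tested relationships is true (R = 1/9).
R = 1 / 9
print(ppv(0.80, R))  # large, well-powered study:  ~0.64
print(ppv(0.20, R))  # small, underpowered study:  ~0.31
```

Smaller studies and smaller effects mean lower power, and lower power drags PPV down toward the false-positive floor, which is exactly what Corollaries 1 and 2 assert.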
The culture at LW shows a lot of reliance on small inferential psychological studies -- for example, the finding that doing a good deed leads to worse behavior later. Please watch out for that.
Med Patient Social Networks Are Better Scientific Institutions
When you're suffering from a life-changing illness, where do you find information about its likely progression? How do you decide among treatment options?
You don't want to rely on studies in medical journals because their conclusion-drawing methodologies are haphazard. You'll be better off getting your prognosis and treatment decisions from a social networking site: PatientsLikeMe.com.
PatientsLikeMe.com lets patients with similar illnesses compare symptoms, treatments and outcomes. As Jamie Heywood at TEDMED 2009 explains, this represents an enormous leap forward in the scope and methodology of clinical trials. I highly recommend his excellent talk, and I will paraphrase part of it below.
Action vs. inaction
Two weeks ago, the U.S. Preventive Services Task Force came out with new recommendations on breast cancer screening, including, "The USPSTF recommends against routine screening mammography in women aged 40 to 49 years."
The report says that you need to screen 1904 women for breast cancer to save one woman's life. (It doesn't say whether this means to screen 1904 women once, or once per year.) They decided that saving that one woman's life was outweighed by the "anxiety and breast cancer worry, as well as repeated visits and unwarranted imaging and biopsies" to the other 1903. The report strangely does not state a false positive rate for the test, but this page says that "It is estimated that a woman who has yearly mammograms between ages 40 and 49 has about a 30 percent chance of having a false-positive mammogram at some point in that decade and about a 7 percent to 8 percent chance of having a breast biopsy within the 10-year period." The report also does not describe the pain from a biopsy. This page on breast biopsies says, "Except for a minor sting from the injected anesthesia, patients usually feel no pain before or during a procedure. After a procedure, some patients may experience some soreness and pain. Usually, an over-the-counter drug is sufficient to alleviate the discomfort."
So, if we assume mammograms every other year (halving the decade-long rates quoted above), the conclusion is that the worry and inconvenience to roughly 286 women who have false positives, and 71 women who receive biopsies, is worth more than one woman's life. If we suppose that a false positive causes one week of anxiety, that's a little over 5 years of anxiety in total, plus less than a year of soreness.
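The arithmetic behind those figures can be reproduced in a few lines (the one-week-of-anxiety figure is the post's own assumption; the halving reflects screening every other year rather than yearly):

```python
n_screened = 1904
# Decade-long rates for *yearly* mammograms, halved for every-other-year screening:
false_positive_rate = 0.30 / 2
biopsy_rate = 0.075 / 2   # midpoint of the quoted 7-8 percent range

false_positives = round(n_screened * false_positive_rate)   # 286 women
biopsies = round(n_screened * biopsy_rate)                  # 71 women
years_of_anxiety = false_positives * 1 / 52                 # one week each
print(false_positives, biopsies, round(years_of_anxiety, 1))  # 286 71 5.5
```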
"Can't Say No" Spending
The remarkable observation that medical spending has zero net marginal effect is shocking, but not completely unprecedented.
According to Spiegel in "Too Much of a Good Thing: Choking on Aid Money in Africa", the Center for Global Development in Washington calculated that $3,521 of marginal development aid must be invested per person in order to increase per capita yearly income by $3.65 (one penny per day).
The Kenyan economist James Shikwati is even more pessimistic in "For God's Sake, Please Stop the Aid!": The net effect of Western aid to Africa is actively destructive (even when it isn't stolen to prop up corrupt regimes), a chaotic flux of money and goods that destroys local industry.
What does aid to Africa have in common with healthcare spending? Besides, of course, that it's heartbreaking to just say no...
Useless Medical Disclaimers
I recently underwent a minor bit of toe surgery and had to sign a scary-looking disclaimer form in which I acknowledged that there was a risk of infection, repeat surgery, chronic pain, amputation, spontaneous combustion, meteor strikes, and a plague of locusts o'er the land.
It was the most pointless damned form I've ever seen in a doctor's office. What are the statistical incidences of any of these risks? Should I be more or less worried about dying in a car crash on the way home? Taken literally, that kind of "information" is absolutely useless for making decisions. You can't translate something into an expected utility, even a qualitative and approximate one, if it doesn't come with a probability attached.
Taken literally, saying that there is a "possibility" of infection tells me nothing. The probability could be 1/1,000,000,000,000 and it would still be technically correct to describe the outcome as "possible". I'm not the litigious type, but I seriously wonder if it would be possible to sue based on the theory that "possibilities" with no probabilities attached to them are not useful information and therefore should not constitute a "disclaimer" under the law.
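The complaint can be made precise: an expected-utility calculation is literally impossible without probabilities attached to the outcomes. A sketch with entirely made-up risks and utilities:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs summing to probability 1."""
    return sum(p * u for p, u in outcomes)

# With probabilities attached, the disclaimer becomes decision-relevant
# (all numbers hypothetical): success, infection, amputation.
surgery = [(0.97, 10), (0.02, -5), (0.01, -100)]
print(expected_utility(surgery))  # 8.6

# A bare "infection is possible" gives no probability to plug in,
# so no such number can be computed -- which is the post's point.
```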