When my son was three, we enrolled him in a study of a vision condition that runs in my family.  They wanted us to put an eyepatch on him for part of each day, with a little sensor object that went under the patch and detected body heat to record when we were doing it.  They paid for his first pair of glasses and all the eye doctor visits to check up on how he was coming along, plus every time we brought him in we got fifty bucks in Amazon gift credit.


I reiterate, he was three.  (To begin with.  His fourth birthday occurred while the study was still ongoing.)


So he managed to lose or destroy more than half a dozen pairs of glasses and we had to start buying them in batches to minimize glasses-less time while waiting for each new Zenni delivery.  (The patching protocol was useless without the glasses to go with it.)  He would sneak his eyepatch off when we weren't looking and rely on having more parents than the average bear, most of whom have ADHD diagnoses, to confuse us about whether he was supposed to be patched at any given time.  The sensor doodad and its backup were lost or "lost" in the couch cushions over and over until we couldn't find either one again.  He was at first extremely prone to falling asleep once the patch was on, which meant he wasn't looking at things with his uncovered eye during the intended patch time.  We bribed him with a dollar per day he was cooperative with the patch (an amendment to our original plan of just giving him the value of the Amazon credit).  Between the replacement glasses and the bribery we were down on net; the bribery alone would have ensured that if he'd been more cooperative, but he wasn't.


They didn't drop us out of the study.  In fact, they eventually concluded that their protocol had had the effect they wanted - he's apparently trained away the asymmetry that made it impossible for my father to use a binocular microscope in med school.  They didn't make a fuss about the missing sensors, the forgotten days, the napped-through patching hours, the lacunae in glasses compliance.


He was three.  They knew he was three.  No human three-year-old is perfectly compliant with a patch protocol, and the process of trying to force perfect compliance would be nightmarish for the families within the study, let alone if they popularized this approach as a clinical recommendation for everybody whose kid presents with the same problem.  They were running the study on intention to treat.


Maybe real communism has never been tried, but from an intention-to-treat perspective it performs dreadfully.  Condoms work ninety-eight percent of the time - wait, no, scratch that, in intention-to-treat terms it's more like 87%.  Is Duolingo better than a textbook?  Turns out the biggest term in this estimate is how likely I am to actually pick each one up after having decided I should, and it's not close.
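
To make the arithmetic concrete - a sketch with made-up numbers, not real data from any of those examples - the intention-to-treat figure is roughly a compliance-weighted mixture of the result when people follow the plan and the result when they don't:

```python
# Made-up numbers, just to show the shape of the arithmetic.
per_protocol_success = 0.98   # success rate when the method is used exactly as directed
baseline_success = 0.15       # assumed success rate when it isn't used at all
compliance = 0.85             # assumed fraction of the time people actually follow through

# Intention-to-treat style figure: average over everyone who intended to use it,
# weighted by how often they actually did.
itt_success = compliance * per_protocol_success + (1 - compliance) * baseline_success
print(f"per-protocol: {per_protocol_success:.0%}, intention-to-treat: {itt_success:.0%}")
# prints: per-protocol: 98%, intention-to-treat: 86% (with these invented inputs)
```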


You know how "people will not just"?  This is the science term for "people will not just".  If you have a plan, your plan needs to be efficacious enough to work after attrition in the population of those who make vague noises about trying it; robust to the chance that it actively harms some subset of those people and they drop out of your project; and capable of surviving normal long-run levels of human error, either by way of juke tolerance or by systems built in to steer everybody toward checklists and no-fault postmortems.


I think this is useful to have in mind when discussing any protocol which might eventually come into contact with humans or other chaotic systems.

Comments:

The way I used to explain "as treated" vs "intent to treat":

Under AT, you're asking a question about biology: does this treatment make a difference when it's applied rigorously to the test subjects and absolutely never to the controls?

Under ITT, you're asking a question about the practice of medicine, under actual combat conditions: if we tell clinicians and patients to do something, and they follow instructions as imperfectly as ever, is this treatment still a good idea?

That is: ITT bakes in mistakes, noncompliance, weird patients, etc. on top of the basic scientific effect.
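
To make that concrete, here's a minimal simulation sketch - every number in it is invented for illustration. The same synthetic trial yields a large as-treated effect and a smaller intention-to-treat effect, just because noncompliers get counted against the arm they were randomized to:

```python
# Minimal simulation sketch -- all numbers invented for illustration.
import random

random.seed(0)
N = 100_000
base_rate = 0.30          # assumed chance of improving with no treatment
treatment_effect = 0.25   # assumed extra chance of improving if the treatment is actually taken
compliance = 0.60         # assumed fraction of the treatment arm that actually complies

def improved(took_treatment):
    p = base_rate + (treatment_effect if took_treatment else 0.0)
    return random.random() < p

# Randomized assignment: treatment-arm members comply only some of the time.
took = [random.random() < compliance for _ in range(N)]   # treatment arm: did they actually do it?
treat_outcomes = [improved(t) for t in took]
control_outcomes = [improved(False) for _ in range(N)]    # control arm

# Intention to treat: compare by assigned arm, noncompliers and all.
itt_effect = sum(treat_outcomes) / N - sum(control_outcomes) / N

# As treated: compare by what people actually did, regardless of assignment.
actually_treated = [o for o, t in zip(treat_outcomes, took) if t]
actually_untreated = [o for o, t in zip(treat_outcomes, took) if not t] + control_outcomes
at_effect = sum(actually_treated) / len(actually_treated) \
          - sum(actually_untreated) / len(actually_untreated)

print(f"as-treated effect:         {at_effect:.3f}")   # ~0.25 -- the "biology" answer
print(f"intention-to-treat effect: {itt_effect:.3f}")  # ~0.15 -- the "combat conditions" answer
```

Neither number is wrong; they're answers to the two different questions above.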

Also, when doing a study, please write down afterwards whether you used intention to treat or not. 

Example: I encountered a study that says post-meal glucose levels depend on the order in which different parts of the meal were consumed. But the study doesn't say whether every participant consumed the entire meal, and if not, how that was handled when processing the data. Without knowing whether everyone consumed everything, I don't know if the differences in blood glucose were caused by the change in order, or by some participants not consuming some of the more glucose-spiking meal components.

In that case, intention to treat (if used) makes the result of the study less interesting since it provides another effect that might "explain away" the headline effect.

This can be quite frustrating if you want to figure out "what happens if I do X", and all the answers provided by science turn out to be about "what happens if people kinda want to do X, but then most of them don't".

I mean, it is good and potentially important to know that most people who kinda want to do X will fail to actually do it... but it doesn't answer the original question.

When I read studies, the intention-to-treat aspect is usually mentioned, and compliance statistics are usually given, but it's often communicated in a way that lays traps for people who aren't reading carefully. I.e., if someone is trying to predict whether the treatment will work for their own three-year-old, and accurately predicts similar compliance issues, they're likely to arrive at an efficacy estimate which double-discounts due to noncompliance. And similarly, when studies have surprisingly low compliance, people who expect themselves to comply fully will tend to get an unduly pessimistic estimate of what will happen.
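
A toy calculation of the first trap, with invented numbers rather than anything from a real study: the published ITT figure already has the study's compliance baked in, so discounting it again for your own expected noncompliance shrinks it twice.

```python
# Invented numbers only -- illustrating the double-discount, not any real study.
per_complier_effect = 0.25   # effect for someone who actually follows the protocol
study_compliance = 0.60      # compliance observed in the published study
my_compliance = 0.60         # reader expects to comply about as well as the study did

# Roughly what the paper reports as its ITT estimate (ignoring effects on noncompliers):
itt_estimate = per_complier_effect * study_compliance            # 0.15

# Trap: treating the ITT number as if it were a per-complier effect and discounting again.
double_discounted = itt_estimate * my_compliance                 # 0.09 -- too pessimistic

# Closer: back out the per-complier effect first, then apply your own expected compliance.
my_expected_effect = (itt_estimate / study_compliance) * my_compliance   # back to 0.15

print(double_discounted, my_expected_effect)
```

The same correction handles the opposite trap: a reader who expects full compliance should compare against the backed-out per-complier number (0.25 here), not the ITT figure.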
