A similar problem is discussed on this (generally excellent) site:
http://www.straightstatistics.org/article/stumped-claims-accurate-diagnosis
Ben Goldacre also has some good material on this.
But in fact, what the study found was a referral rate for blacks and women of 84.7%
I might just be being stupid, but how was this figure arrived at in the first place? I understand that the point of the article is that statistics can be presented in non-intuitive and confusing ways, but in the examples I've seen in the past there has always been some justification, however shaky.
Did the media just outright lie this time, or am I missing something?
Nope: the odds ratio was (.847/(1-.847))/(.906/(1-.906)), which is indeed about 0.575, and which could be rounded to 60%. If the starting probability were, say, 1% rather than 90.6%, then translating the odds-ratio statement into "60% as likely" would be legitimate and approximately correct; probably the journalist learned to interpret odds ratios from examples like that. But when the probabilities are close to 1, it's more accurate to say that the women/blacks were about 60% more likely *not* to be referred.
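To make the arithmetic concrete, here is a minimal Python sketch of the odds-ratio computation above, alongside the two relative risks one might (mis)read it as. The two referral rates are the ones quoted in the thread (84.7% and 90.6%); everything else is just arithmetic:

```python
# Referral rates quoted above: 84.7% for the blacks/women group,
# 90.6% for the comparison group.
p_group = 0.847
p_baseline = 0.906

# Odds ratio: odds of referral in one group divided by the other.
odds_ratio = (p_group / (1 - p_group)) / (p_baseline / (1 - p_baseline))
print(round(odds_ratio, 3))  # 0.574 -- the "57.5%" rounded to "60%" in the press

# Relative risk of referral, which is what "60% as likely" would actually
# describe -- clearly not 0.6 here:
relative_risk = p_group / p_baseline
print(round(relative_risk, 3))  # 0.935

# Relative risk of NOT being referred, the interpretation the answer
# above recommends when probabilities are close to 1:
rr_not_referred = (1 - p_group) / (1 - p_baseline)
print(round(rr_not_referred, 3))  # 1.628, i.e. ~60% more likely to go unreferred
```

Note that when the baseline probability is small (say 1%), odds and probabilities nearly coincide, so the odds ratio and the relative risk are close; near 1 they diverge badly, which is exactly the trap the journalist fell into.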
Science journalists are expected to read papers, pick out the important parts, and rewrite them in their own words. Unfortunately, it's impossible to reliably rewrite a mathematical statement in different words without understanding what it means, so they sometimes fail and misrepresent the research they report on. But this is a problem with the journalism and its editing, not with the original research. While it's good to write in a way that's hard to misconstrue, papers should be written for experts first and journalists second or lower.
I just stumbled across Language Log: Thou shalt not report odds ratios (2007-07-30), HT reddit/statistics:
This is a failure mode of pop-sci journalism I was not aware of. (If I knew enough to understand the real papers, I'd definitely value pop-sci at minus-whatever; in the meantime…)
On a related note, this article reminded me of Understanding Uncertainty: 2845 ways to spin the Risk, which argues that certain presentations bias the understanding of probabilities.
I'd be quite interested if anybody could point me to further resources on good presentation of statistical facts (besides standardizing on one type of presentation), or on further failure modes of pop-sci journalism.