
Lumifer comments on The Thyroid Madness: Two Apparently Contradictory Studies. Proof? - Less Wrong Discussion

Post author: johnlawrenceaspden 10 April 2016 08:21PM · 7 points




Comment author: AstraSequi 19 April 2016 07:45:34PM · 1 point

> If none of the patients had had any sort of thyroid problem, I'd have expected it to be equally bad for everyone.

I’m talking about conservation of expected evidence. If X is positive evidence, then ~X is negative evidence. An experiment only supports a hypothesis if it could have come out in a way that refuted it. And if an experiment that could have supported the hypothesis actually didn’t, that outcome is evidence against it.
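The principle can be checked numerically. Here is a minimal sketch with arbitrary, illustrative probabilities (none of these numbers come from the thyroid studies under discussion): the prior must equal the probability-weighted average of the two possible posteriors, so if observing E would raise your credence, failing to observe E must lower it.

```python
# Conservation of expected evidence, with illustrative numbers.
prior_h = 0.3            # P(H): prior credence in the hypothesis (arbitrary)
p_e_given_h = 0.8        # P(E | H): chance of the supporting result if H is true
p_e_given_not_h = 0.4    # P(E | ~H): chance of the same result if H is false

# Total probability of observing E.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Posterior in each case, by Bayes' theorem.
post_if_e = p_e_given_h * prior_h / p_e                   # P(H | E)
post_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)   # P(H | ~E)

# The expected posterior equals the prior exactly.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e

print(round(post_if_e, 4))          # above the prior: E is evidence for H
print(round(post_if_not_e, 4))      # below the prior: ~E is evidence against H
print(round(expected_posterior, 4)) # equals prior_h
```

Since `post_if_e` is above the prior, `post_if_not_e` is necessarily below it; you cannot design an experiment whose every outcome confirms the hypothesis.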

> What makes me think that they felt bad on thyroxine is table 2, where all the 'self-reported' psychological scores have got worse from thyroxine. In particular p=0.007 for the decline in Vitality. Since, as you point out, they really didn't know which was which, it's hard to see how they could have faked that.

Terminology, then. When you said “Thyroxine is very strongly disliked by the healthy controls (they could tell it from placebo and hated it),” that suggested they could identify the active treatment.

> Absolutely this treatment is harmful to healthy people.

The people in the study had symptoms. Even if you think their symptoms were mild or unrepresentative, you shouldn’t call them healthy. It’s fair to extend the conclusion to cover people without those symptoms, but I think that’s an important difference.

> Yes, but that does mean that anything that needs careful dose control will get rejected.

It’s more that you need an easily followed protocol. Anything else, especially anything subjective, is unlikely to be practically feasible, and will probably not be reproducible.

> The TSH test replaced that around 1970. But they never seem to have checked that clinical and biochemical diagnoses detected the same things, and after that there was the slow emergence of all sorts of nasty diseases that look very like hypothyroidism in the clinical sense but have normal TSH.

This is normal. Clinical presentations often have many causes, which makes it almost impossible to make progress. Eventually we break them down based on their causal mechanisms so we can treat them individually. Each time we find a new cause, some of the cases will be left unexplained.

> These are the only ones I can find through google scholar / pubmed. That in itself is really surprising and one of the things I can't explain! Why has such an obvious thing not been ruled out?

There are a lot of interesting hypotheses competing for resources, and we have to decide which ones are worth considering. I can’t say what the reason might be here, but there are a lot of possibilities. For example, it might not be possible to design a study like the one you want that could effectively answer the question.

> Really? Forty years of experience in treating patients is less valuable than a single anecdote published in a journal? Really?

Yes. Expert opinion (i.e., the opinion of individual experts, not expert consensus) is the lowest level because you can find an expert to support pretty much any proposition that isn’t obviously ridiculous, and sometimes even if it is. In fact, this is true higher in the hierarchy as well, which is why we use syntheses of evidence so much. I can’t stress this enough: in biology, you can use peer-reviewed evidence to make plausible arguments for arbitrary hypotheses.

> All the rest of it is anecdotal, from alternative sources, but there's a mountain of it.

The point of evidence-based medicine is that perceptions are unreliable. That includes the perceptions we call clinical experience (which once said that bloodletting was an important medical treatment). Keep in mind that doctors aren’t scientists and usually don’t even qualify as experts. EBM is unreliable too, but less so, just like science is unreliable but is still better than ancestral wisdom.

> The TSH test ruling out hypothyroidism is expert opinion. Its reliability is unfounded dogma.

This sounds like you’re saying the TSH test doesn’t actually measure TSH, but I think you mean to say you disagree with the conclusions that it’s used for. But since hypothyroidism is defined as low thyroid hormone levels, some of this will be a dispute over definitions.

> I can't find any evidence for it as the sole measure of thyroid system function at all.

I don’t think anyone who understands it would say it is. It measures TSH levels, and the question is what we do with that measurement. But we’re often limited by what we’re able to (easily) measure, and it might even be the only objective measurement we have.

Comment author: Lumifer 19 April 2016 08:49:15PM · 1 point

> in biology, you can use peer-reviewed evidence to make plausible arguments for arbitrary hypotheses.

Et tu, Brute? That is obviously true for the humanities and for things like observational studies of nutrition, but do you think it extends to most / all of biology? "For any hypothesis there is a mouse strain which proves it true"? :-/

Comment author: Lumifer 21 April 2016 05:09:53PM · 3 points

Hmmm

This piece claims that

> ...we face a replication crisis in the field of biomedicine, not unlike the one we’ve seen in psychology but with far more dire implications. Sloppy data analysis, contaminated lab materials, and poor experimental design all contribute to the problem.
>
> ...Freedman and his co-authors guessed that fully half of all results rest on shaky ground, and might not be replicable in other labs. These cancer studies don’t merely fail to find a cure; they might not offer any useful data whatsoever.

Comment author: johnlawrenceaspden 22 April 2016 03:27:12PM · 2 points

Oh God, where will this end? Is it really only physics and chemistry that aren't sloppy cargo-cults, or are they broken too?

A lot of this, I think, is to do with taking tenure away from young academics. Once upon a time, once you'd proved basic competence and cleverness, you could spend your whole career being careful about stuff. These days you've just got to turn out crap as fast as possible, and you spend most of your time applying for grants.

Comment author: AstraSequi 23 April 2016 02:01:35PM · 1 point

This open-access article discusses some of the issues in cancer research.

In most ways biology is intermediate between the hard and soft sciences, with all that implies. It’s usually impossible to identify all the confounders; most biologists are not trained in statistics; experiments are complex, and you can get different results from slight variations in protocol; we're trying to generalize from imperfect models; many high-profile results don’t get tested by other labs... all these factors come together, and we get something that people call a “replication crisis.”

Comment author: Lumifer 23 April 2016 06:15:57PM · 1 point

> tl;dr It's complicated.

Yes, I know. But it would be nice if people recognized that it is complicated, instead of pretending that we know more than we actually do.