A study found that requiring trial outcomes to be registered in advance coincided with a sharp drop in positive results. The researchers looked at 30 large National Heart, Lung, and Blood Institute (NHLBI) funded trials published between 1970 and 2000. Of those studies, 17, or 57%, showed a significant positive result. They then compared that to 25 similar studies published between 2000 and 2012. Of those, only 2, or 8%, were positive. That is a striking drop, from 57% to 8% positive studies.
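As a quick sanity check on those numbers (my own calculation, not from the study), here is a minimal sketch using scipy's Fisher exact test on the two proportions; the choice of test is mine, not the paper's:

    import scipy.stats as stats

    # Counts from the NHLBI comparison above:
    # 17 of 30 trials positive before outcome registration,
    # 2 of 25 trials positive after.
    table = [[17, 30 - 17],   # positive, negative (1970-2000)
             [2, 25 - 2]]     # positive, negative (2000-2012)

    odds_ratio, p_value = stats.fisher_exact(table)
    print(f"57% vs 8% positive; Fisher exact p = {p_value:.4f}")

The drop is large enough that it is very unlikely to be chance alone, whatever else one makes of the comparison.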

I also ran across a study with some intriguing, plausible, nuanced results about the effects of timeouts, reasoning, and compromising on improving children's behavior. How much should I trust it?

I think the lumping of various disciplines into "science" is unhelpful in this context. It is reasonable to trust the results of the last round of experiments at the LHC far more than the occasional psychology paper that makes the news.

I've not seen this distinction made as starkly as I think it really needs to be made. There is a wide range, from physics and chemistry, where one can usually design experiments to test hypotheses; to geology and atmospheric science, where one mostly fits models to whatever data happens to be available; to psychology, where experimental results seem to be very inconsistent and publication bias is a major source of false research results.

...and then on to any specific field which has political uses, where "publication bias" can reach Lysenko levels ;)

So just never study psychology and you won't go crazy. It all works out...

Did you accidentally edit out the beginning of your post?

Thanks. You're right. I'll correct it.

Are you familiar with Ioannidis? Things are noticeably worse in psychology :-/
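For anyone who hasn't read it, the core of Ioannidis's "Why Most Published Research Findings Are False" is a short positive-predictive-value calculation. A minimal sketch of it (the example numbers below are illustrative assumptions of mine, not from the paper):

    def ppv(prior_odds, power=0.8, alpha=0.05):
        """Positive predictive value of a 'significant' finding.

        prior_odds: pre-study odds R that the tested relationship is true
        power:      1 - beta, the chance of detecting a true effect
        alpha:      Type I error rate
        """
        return (power * prior_odds) / (power * prior_odds + alpha)

    # Well-powered test of a plausible hypothesis: most positives are real.
    print(ppv(prior_odds=1.0))              # ~0.94
    # Underpowered test of a long shot: most positives are false.
    print(ppv(prior_odds=0.1, power=0.2))   # ~0.29

A field that tests many unlikely hypotheses with underpowered studies ends up with mostly false positive findings, even before publication bias is added.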

My prior for a psych study which makes it into the mainstream press is that it is wrong.

[anonymous]

I've been thinking about this a lot lately. I've concluded that when I do give extra scrutiny to fields, it's because I'm privileging a hypothesis and/or engaging in wishful thinking. For instance, I would like a vaccine for the common cold. It may not kill lots of people, but the sheer incidence and malaise reduce quality of life by a significant amount. The prevalence in the Western world, and the exposure of very powerful people to it, would, I thought, mean significant interest in developing something. But people seem to take it for granted. Nobody seems to give a shit, in short.

Contrary to what the Wikipedia article might suggest, developing a cross-serotype vaccine for the common cold is intractable without extensive computational advances, based on my reading of this article. It does suggest that one branch of approaches was abandoned without rigorous statistical analyses, where those might be indicated, in this paper. Though I don't know how to interpret the table in question, possibly because its design is idiosyncratic to molecular biologists and I have a different education. I would look at it further, if I let my mistrust run wild, or ask y'all to look at it. But really, the more parsimonious question is: why am I privileging that hypothesis?

For this specific question, it might be worth studying people (especially young people) who get fewer or no colds. Young people in particular, because I have heard that as you get older, your immune system develops a better library of cold viruses to fight off. This matches my experience, though I'll also note that I get the same cold (or at least colds with the same symptoms) a number of times.

In general, I don't think healthy people get studied enough.

You should probably ignore these studies as a matter of course. While the study itself is not obviously bad, it does not appear to be very useful, and the interpretations of the popular press are sinfully bad.

While popular articles on discipline never bother to mention it, there is no such thing as a standard time out (just to pick an example). There is a standard form of a time out, where you use some formula (one minute per year of age is common) and have basic rules (do not talk to the child while in time out, etc.), but these are all very general and ignore the context of the event. A large part of what matters in discipline is everything that surrounds a discipline event: is the child angry? Hyper? Hungry? Are you angry? Do you state what was wrong? State expectations? Use a calm voice? A scary parent voice? Yelling? And what happens afterwards: do you give support in returning to appropriate behavior?

This is a short list of some of the variables that experiments don't usually control for. In the case of the linked study, we are looking at a small sample (102 parents, 5 reports each) of parents who are self-reporting, so we are getting very little usable information, even if we didn't care about... well, the interactions between the parent and the child. Which, really, are almost all we should care about.
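As a rough sense of scale (my own back-of-the-envelope, not a figure from the study): even ignoring self-report bias and treating each parent as one independent data point, 102 parents can only pin down a rate to within about ten percentage points:

    import math

    n = 102   # parents in the linked study
    p = 0.5   # the proportion at which the standard error is largest
    se = math.sqrt(p * (1 - p) / n)
    print(f"95% CI half-width: +/- {1.96 * se:.1%}")  # ~ +/- 9.7%

And the 5 reports per parent help less than it looks, since reports from the same parent are correlated.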

I cannot back this statement with studies, but from personal experience working with children, living with children, and watching others do the same: you will get the best results when you engage with the child and treat interactions as meaningful (that is, whatever you are doing, keep it rational, and try hard to translate it to the child's level). When you find something that works, even if it is just a small improvement, make it routine and keep to it. If a researcher tells you it doesn't work, they had better have something better than "it only worked 16% of the time in our sample". Something that works in one out of six families is certainly not something you should work to avoid.

[anonymous]

Larzelere's study I wouldn't trust at all without further investigation. There are a lot of ways for interview studies to go wrong, and that he is claiming multiple positive results from the same experiment strongly suggests poor study design. The other work looks to be better done, and is in line with other things I've read.