James_Miller comments on This is why we can't have social science - Less Wrong
When natural scientists attempt to replicate famous experiments where the original result was clearly correct, with what probability do they tend to succeed? Is it closer to 1 than, say, .7?
I'd think that "famous experiments where the original result was clearly correct" are exactly those whose results have already been replicated repeatedly. If they haven't been replicated they may well be famous -- Stanford prison experiment, I'm looking at you -- but they aren't clearly correct.
I was thinking more "What is the error rate in replication experiments when we know the results from the original experiment were correct?" So if mixing X and Y under certain conditions must yield Z, how often do scientists actually get Z when they try?
The error rate in replication experiments in the natural sciences is expected to be much, much lower than in the social sciences. Humans and human environments are noisy and complicated. Look at nutrition/medicine - it's taking us decades to figure out whether some substance/food is good or bad for you and under what circumstances. Why would you expect it to be easier to analyze human psychology and behavior?
If you want to know whether a food is good or bad you have to look at mortality, which means you might have to wait a decade.
A lot of psychology experiments claim effects over much shorter timeframes.
I think he is more suggesting that the number of confounding factors in psychology experiments is generally far higher than in the natural sciences. The addition of such uncontrollable factors leads to a generally higher error rate in human sciences.
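To make that point concrete, here is a toy simulation (all numbers invented, not drawn from any real study) of how an unmeasured confounder that shifts each replication's baseline can lower the fraction of replications that detect a genuinely real effect:

```python
import random
import statistics

def replication_rate(true_effect, confounder_sd, n=30, trials=2000, seed=0):
    """Fraction of replications that detect a real effect, using a crude
    criterion: the group means differ by more than two standard errors."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        # Unmeasured confounder: shifts the treated group in this replication.
        shift = rng.gauss(0, confounder_sd)
        control = [rng.gauss(0, 1) for _ in range(n)]
        treated = [rng.gauss(true_effect + shift, 1) for _ in range(n)]
        se = ((statistics.variance(control) + statistics.variance(treated)) / n) ** 0.5
        if statistics.mean(treated) - statistics.mean(control) > 2 * se:
            successes += 1
    return successes / trials

# Loosely, a well-controlled lab measurement vs. a human-subjects study:
low_noise = replication_rate(true_effect=0.8, confounder_sd=0.1)
high_noise = replication_rate(true_effect=0.8, confounder_sd=0.8)
```

With the same real effect in both cases, the high-confounder setting replicates less often, even though nothing about the underlying phenomenon changed.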
The number of confounding factors isn't that important if it's possible to do controlled experiments that control for them. Nutrition science has the problem that you usually can't do good controlled experiments or those are very expensive.
Obviously, if you can control for a confounding factor then it's not an issue; I was simply stressing that the nature of the human sciences makes it effectively impossible to control for all confounding factors, or even to be aware of many of them.
To the extent that's true, careful replication of studies to identify such factors is important if you don't want to practice what Feynman described as Cargo Cult Science. If you follow Feynman's argument, physicists would also get a bunch of bad results if they worked with the scientific standards used in psychology.
Feynman on rat psychology:
Nutrition is really a different case from a lot of psychology. There are questions in psychology such as whether doing certain things to a child in its childhood affects whether that child is a healthy adult. Those questions are hard to investigate scientifically because of the time lag. The same isn't true for many psychology experiments.
I don't think we actually disagree on anything; the only point I was making was that your reply to Lightwave, while accurate, wasn't actually replying to the point he made.
I've suggested on LW before that most attempts at physics experiments are wrong, if one counts physics students' attempts. The standard reaction to a student getting a counterintuitive result is, "well, obviously they messed up the experiment". I notice I feel OK with that response in the case of physics but don't like Mitchell trying it for psychology.
(I wonder whether biology students have to count chromosomes.)
Students are particularly bad at experimentation (which is why they have to take those labs in the first place), and the experiments they do are selected for being particularly fundamental and well-understood (in particular, they have already been replicated lots of times). I think this is a more important difference than physics versus psychology.