Lumifer comments on Open thread, Dec. 21 - Dec. 27, 2015 - Less Wrong Discussion
(1) Observational studies are almost always attempts to determine causation. Sometimes the investigators try to pretend that they aren't, but they aren't fooling anyone, least of all the general public. I know they are attempting to determine causation because nobody would be interested in the results of the study unless they were interested in causation. Moreover, I know they are attempting to determine causation because they do things like "control for confounding" — a procedure that is undefined unless the goal is to estimate a causal effect.
(2) What do you mean by the sentence "the study was causative"? Surely nobody is suggesting that the study itself had an effect on the dependent variable.
(3) Assuming that the statistics were done correctly and that the investigators have accounted for sampling variability, the relationship between the independent and dependent variables definitely exists. The correlation is real, even if it is due to confounding. It just doesn't represent a causal effect.
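The point in (3) is easy to demonstrate with a toy simulation (a minimal sketch; the variable names and effect sizes are made up for illustration): if an unobserved confounder drives both variables, neither causes the other, yet the measured correlation is perfectly real.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)       # unobserved confounder
x = z + rng.normal(size=n)   # x is caused by z, not by y
y = z + rng.normal(size=n)   # y is caused by z, not by x

# x has no causal effect on y, but the correlation is genuine
# (theoretical value here is 0.5) and would replicate in any
# new sample drawn from the same process.
r = np.corrcoef(x, y)[0, 1]
print(round(r, 2))
```

Intervening on x (setting it by fiat, independently of z) would destroy the correlation — which is exactly the sense in which it "doesn't represent a causal effect."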
You are assuming a couple of things which are almost always true in your (medical) field, but are not necessarily true in general. For example,
Nope. Another very common reason is to create a predictive model without caring about actual causation. If you can't do interventions but would like to forecast the future, that's all you need.
That further assumes the underlying process is stable and not subject to drift, regime changes, etc. Sometimes you can make that assumption, sometimes you cannot.
You'd also like a guarantee that others can't do interventions, or else your measure could be gamed. (But if there's an actual causal relationship, then 'gaming' isn't really possible.)