It is widely understood that statistical correlation between two variables ≠ causation. But despite this admonition, people are routinely overconfident in claiming correlations to support particular causal interpretations, and are surprised by the results of randomized experiments, suggesting that they are biased and systematically underestimate the prevalence of confounding/common causation. I speculate that in realistic causal networks or DAGs, the number of possible correlations grows faster than the number of possible causal relationships. So confounds really are that common, and since people do not think in DAGs, the imbalance also explains their overconfidence.
Full article: http://www.gwern.net/Causality
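The growth claim can be checked with a small simulation (my own sketch, not gwern's original code): generate random DAGs, then for each pair of variables count whether one causes the other (one is an ancestor of the other) versus whether they are marginally dependent (d-connected with no conditioning, which in a DAG means they share a common ancestor). The edge probability and graph size below are arbitrary choices for illustration.

```python
import itertools
import random

def random_dag(n, p, rng):
    """Random DAG on n nodes: edge i->j only for i < j, so it is acyclic."""
    return {(i, j) for i, j in itertools.combinations(range(n), 2)
            if rng.random() < p}

def ancestors(dag, n):
    """anc[v] = set of ancestors of v (including v itself)."""
    anc = {v: {v} for v in range(n)}
    # Process edges in order of target node: edges only go low -> high index,
    # so anc[i] is complete before it is merged into anc[j].
    for i, j in sorted(dag, key=lambda e: e[1]):
        anc[j] |= anc[i]
    return anc

def count_pairs(dag, n):
    """Count (causal, marginally-dependent) unordered pairs in the DAG."""
    anc = ancestors(dag, n)
    causal = dependent = 0
    for x, y in itertools.combinations(range(n), 2):
        if x in anc[y] or y in anc[x]:   # one causes the other
            causal += 1
        if anc[x] & anc[y]:              # common ancestor => d-connected
            dependent += 1
    return causal, dependent

rng = random.Random(0)
n, trials, p = 10, 500, 0.3
c_tot = d_tot = 0
for _ in range(trials):
    dag = random_dag(n, p, rng)
    c, d = count_pairs(dag, n)
    c_tot += c
    d_tot += d
print(f"causal pairs: {c_tot}, dependent pairs: {d_tot}")
```

Every causal pair is also a dependent pair, so the dependent count can never be smaller; the gap between the two is the confounded-but-not-causal pairs that the abstract argues dominate.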
Causal effects do cancel out in some structures, and Nature does not sample structures at random: evolution, for example, might select for cancellation to maintain homeostasis. So the argument that most models are faithful is not always convincing.
This is a real issue: the causal analogue of a known problem in statistics, where two sources of statistical dependence cancel so that the data exhibit a conditional independence even though the underlying phenomena are related.
I don't think this issue means gwern's epistemology is mistaken, however; it just makes causal (and statistical) inference harder.
I agree completely.