It is widely understood that statistical correlation between two variables does not imply causation. But despite this admonition, people are routinely overconfident in citing correlations to support particular causal interpretations and are surprised by the results of randomized experiments, suggesting that they are biased and systematically underestimate the prevalence of confounding/common causation. I speculate that in realistic causal networks or DAGs, the number of possible correlations grows faster than the number of possible causal relationships. So confounds really are that common, and since people do not think in DAGs, this imbalance also explains the overconfidence.
Full article: http://www.gwern.net/Causality
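The counting claim can be checked with a small simulation. The sketch below (my own illustration, not from the article) generates random DAGs and compares three counts per graph: pairs joined by a direct causal arrow, pairs where one node causes the other at all, and pairs that would be marginally dependent, i.e. d-connected given the empty set, which for a DAG means sharing a common ancestor (counting a node as its own ancestor). The correlated pairs should dominate:

```python
import itertools
import random

def random_dag(n, p, rng):
    # Upper-triangular adjacency (edge i->j only for i < j), so the graph is acyclic.
    return {(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p}

def ancestors(n, edges):
    # anc[j] = nodes with a directed path to j, including j itself.
    # Nodes 0..n-1 are already in topological order by construction.
    anc = [{j} for j in range(n)]
    for j in range(n):
        for i in range(j):
            if (i, j) in edges:
                anc[j] |= anc[i]
    return anc

def count_relations(n, edges):
    anc = ancestors(n, edges)
    direct = len(edges)  # pairs with a direct causal arrow
    # One node causes the other, possibly through intermediates:
    causal = sum(1 for i, j in itertools.combinations(range(n), 2)
                 if i in anc[j] or j in anc[i])
    # Marginally dependent pairs: any shared ancestor (this includes the
    # case where one node is itself an ancestor of the other).
    correlated = sum(1 for i, j in itertools.combinations(range(n), 2)
                     if anc[i] & anc[j])
    return direct, causal, correlated

if __name__ == "__main__":
    rng = random.Random(0)
    n, p, trials = 10, 0.3, 500
    totals = [0, 0, 0]
    for _ in range(trials):
        counts = count_relations(n, random_dag(n, p, rng))
        totals = [t + c for t, c in zip(totals, counts)]
    print("avg direct causal pairs:   %.1f" % (totals[0] / trials))
    print("avg causal pairs (any):    %.1f" % (totals[1] / trials))
    print("avg correlated pairs:      %.1f" % (totals[2] / trials))
```

For any DAG, direct ≤ causal ≤ correlated, and as the graphs get denser or larger the gap widens, which is the intuition behind "most correlations are not causal."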
The main way to correct for this bias toward seeing causation where there is only correlation follows from introspection: be more imaginative about how a correlation could arise other than by direct causation.
[The causation bias (does it have a name?) seems to be an expression of the availability bias. So the corrective is to increase the availability of the other possibilities.]
Maybe. I tend to doubt that eliciting a lot of alternate scenarios would eliminate the bias.
We might call it 'hyperactive agent detection', borrowing a page from the etiology of religious belief: https://en.wikipedia.org/wiki/Agent_detection. Now that I think about it, agent detection might stem from the same underlying belief: that things must have clear underlying causes. In one context, it gives rise to belief in gods; in another, to interpreting statistical findings like correlation as causation.