It is widely understood that statistical correlation between two variables ≠ causation. But despite this admonition, people are routinely overconfident in taking correlations to support particular causal interpretations, and are surprised by the results of randomized experiments, suggesting that they are biased & systematically underestimate the prevalence of confounding/common causation. I speculate that in realistic causal networks or DAGs, the number of possible correlations grows faster than the number of possible causal relationships. So confounds really are that common, and since people do not think in DAGs, the imbalance also explains the overconfidence.
Full article: http://www.gwern.net/Causality
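To see roughly why the imbalance falls out of graph structure alone, here is a toy Python sketch (my own, not the article's R code) that generates sparse random DAGs and, for every pair of variables, checks whether one is an ancestor of the other (a causal relationship) versus whether they merely share an ancestor (which, assuming faithfulness, is enough to produce a marginal correlation). The helper names, the edge probability, and the graph sizes are arbitrary illustration choices, not anything from the article.

```python
import random
from itertools import combinations

def random_dag(n, p):
    """Random DAG on nodes 0..n-1: edge i->j included with probability p for i<j."""
    return {i: {j for j in range(i + 1, n) if random.random() < p}
            for i in range(n)}

def ancestors(dag, node):
    """All ancestors of `node`, including the node itself."""
    anc = {node}
    changed = True
    while changed:
        changed = False
        for i in dag:
            if i not in anc and dag[i] & anc:
                anc.add(i)
                changed = True
    return anc

def pair_counts(dag):
    """Count causally related pairs vs. (generically) correlated pairs."""
    anc = {v: ancestors(dag, v) for v in dag}
    causal = correlated = 0
    for x, y in combinations(dag, 2):
        if x in anc[y] or y in anc[x]:   # directed path one way or the other
            causal += 1
        if anc[x] & anc[y]:              # shared ancestor => marginal dependence
            correlated += 1
    return causal, correlated

if __name__ == "__main__":
    random.seed(0)
    for n in (3, 5, 10, 20):
        p = min(1.0, 2.0 / n)            # keep the DAGs sparse (~2 edges per node)
        total_causal = total_corr = 0
        for _ in range(2000):
            c, r = pair_counts(random_dag(n, p))
            total_causal += c
            total_corr += r
        frac = total_causal / total_corr if total_corr else float("nan")
        print(f"n={n:2d}: fraction of correlated pairs that are causal ~ {frac:.2f}")
```

On this counting, every causally related pair is also a correlated pair, but not vice versa (any pair with only a common ancestor is correlated without causation), so the printed fraction is how often "A correlates with B" actually reflects "A causes B or B causes A" in these toy graphs; the essay's claim is that this fraction shrinks as the networks get larger and more realistic.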
That sounds more like a poor understanding of Occam's razor. Complex ontologically basic processes are not simpler than a handful of strict mathematical rules.
Of course it's (normatively) wrong. But if that particular error is what's going on in people's heads, it will manifest as a different pattern of errors (and hence call for different interventions) than an availability bias would: availability bias can be cured by forcing the generation of scenarios, but a preference for oversimplification will produce the error even if you lay the various scenarios out on a silver platter, because the subject will still prefer the maximally simple version where A->B rather than A<-C->B.
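For concreteness, here is a quick numeric toy (my own example, not from the article or the comment above; requires Python 3.10+ for statistics.correlation) showing why A<-C->B is a live alternative: A and B are each driven only by a common cause C, yet they come out strongly correlated, so the raw correlation fits the "maximally simple" A->B story just as well as the true confounded one.

```python
import random
import statistics

random.seed(0)
C = [random.gauss(0, 1) for _ in range(10_000)]
A = [c + random.gauss(0, 0.5) for c in C]   # A caused by C plus noise
B = [c + random.gauss(0, 0.5) for c in C]   # B caused by C plus noise; A never touches B

# Expected correlation is Var(C)/ (Var(C)+0.25) = 0.8, despite no A->B or B->A link.
print(round(statistics.correlation(A, B), 2))
```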