Morendil comments on Taking "correlation does not imply causation" back from the internet - Less Wrong Discussion

41 points · Post author: sixes_and_sevens 03 October 2012 12:18PM


Comment author: Morendil 03 October 2012 07:47:03PM *  5 points

I don't get it. How does "journalist reports causation where there is only association" constitute an example of the journalist misapplying "correlation does not imply causation", as opposed to failing to apply it in the first place?

(In the latter case, it's good that the phrase "correlation does not imply causation" is floating around them Interwebs. The OP suggests that it's not an unalloyed good because people "get it wrong".)

Comment author: IlyaShpitser 03 October 2012 08:33:09PM *  2 points

Sorry, I didn't catch the distinction between misapplying and failing to apply from your original phrasing ("people get this wrong.")

To add to sixes_and_sevens's list, I have one below, where people think lack of association implies lack of causation, or weak association implies weak causality (it does not in either case, due to effect cancellation, which happens reasonably frequently).
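A minimal simulation of this kind of cancellation (an illustrative model of my own, not one from the thread): X affects Y both directly and through a mediator M, with path effects chosen to cancel, so the observed association is near zero even though X is a strong cause of Y along each path.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linear model: X -> Y directly (+2) and X -> M -> Y (2 * -1 = -2).
# The two path effects cancel, so X and Y are nearly uncorrelated even though
# X strongly influences Y along both paths.
x = rng.normal(size=n)
m = 2 * x + rng.normal(size=n)
y = 2 * x - m + rng.normal(size=n)

print(abs(np.corrcoef(x, y)[0, 1]) < 0.05)  # True: the association vanishes
```

Here "lack of association implies lack of causation" fails outright: every structural coefficient is large, yet the correlation is statistically indistinguishable from zero.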

Comment author: RichardKennaway 03 October 2012 09:19:56PM *  3 points

people think lack of association implies lack of causation, or weak association implies weak causality (it does not in either case, due to effect cancellation, which happens reasonably frequently).

Isn't the Faithfulness assumption the assumption that effect cancellation is rare enough to be ignored? If it happens frequently, that looks like a rather large problem for Pearl's methods.

I currently have a paper in the submission process about systems which actively perform effect cancellation, and the problems that causes, but I assume that isn't what you have in mind.

Comment author: IlyaShpitser 03 October 2012 10:01:16PM *  6 points

If you pick parameters of your causal model randomly, then almost surely the model will be faithful (formally, in Robins' phrasing: "in finite dimensional parametric families, the subset of unfaithful distributions typically has Lebesgue measure zero on the parameter space"). People interpret this to mean that faithfulness violations are rare enough to be ignored. It is not so, sadly.
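A quick numeric gloss on the measure-zero point (my own sketch, not from the comment): in a toy parameterization where the marginal X–Y association is a*b + c (one path through a mediator plus a direct path), randomly drawn coefficients essentially never cancel exactly, but near-cancellation is not rare.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy parameterization: the marginal X-Y association is a*b + c.
# The unfaithful set {a*b + c = 0} has Lebesgue measure zero, so
# random draws (almost surely) avoid it exactly...
a, b, c = rng.normal(size=(3, 100_000))
assoc = a * b + c

print(np.sum(assoc == 0))                 # 0: exact cancellation never occurs
print(np.sum(np.abs(assoc) < 0.01) > 0)   # True: near-unfaithful draws do occur
```

The second print is the catch: "measure zero" rules out exact unfaithfulness under random parameters, but says nothing about how many models sit arbitrarily close to the unfaithful surface.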

First, Nature doesn't pick causal models randomly. In fact, cancellations are quite useful (homeostasis and gene regulation are often "implemented" by faithfulness violations).

Second, we may have a model that is only weakly faithful, that is, one that is hard to distinguish from an unfaithful model with few samples. Worse, it is difficult to say in advance how many samples one would need to tell a faithful model apart from an unfaithful one. In statistical terms, this is sometimes phrased as the existence of "pointwise consistent" tests but the non-existence of "uniformly consistent" tests.
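A back-of-the-envelope illustration of the gap (my own sketch, using the rough approximation that the z-statistic for a sample correlation r scales like r * sqrt(n)): for any fixed nonzero r, enough data detects it, but for every fixed n there are nonzero correlations too small to detect, so no single sample size works uniformly over all models.

```python
from math import sqrt

# Rough approximation: the z-statistic for testing a sample correlation r
# against zero scales like r * sqrt(n). For any fixed r != 0, some n makes
# the test powerful (pointwise consistency); but for any fixed n, a model
# with |r| below the threshold is invisible (no uniform consistency).
for n in (100, 10_000, 1_000_000):
    threshold = 1.96 / sqrt(n)  # roughly the smallest |r| a level-0.05 test can flag
    print(n, round(threshold, 5))
```

The threshold shrinks with n but never reaches zero, so a nearly-unfaithful model can always hide below it at any finite sample size.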

I suggest the following paper for more on this:

http://www.hss.cmu.edu/philosophy/scheines/uniform-consistency.pdf

See also this (the distinction comes from analysis): http://en.wikipedia.org/wiki/Uniform_convergence

Kevin Kelly at CMU thinks a lot about "having to change your mind" due to lack of uniform consistency.

Much of what Pearl et al. do (identification of causal effects, counterfactual reasoning, actual cause, etc.) does not rely on faithfulness. Faithfulness typically comes up when one wishes to learn causal structure from data. Even in this setting there exist methods which do not require faithfulness (I think the LiNGAM algorithm does not).

Comment author: RichardKennaway 04 October 2012 11:11:45AM 1 point

First, Nature doesn't pick causal models randomly. In fact, cancellations are quite useful (homeostasis and gene regulation are often "implemented" by faithfulness violations).

The very subject of my paper. I don't think the magnitude of the obstacle has yet been fully appreciated by people who are trying to extend methods of causal discovery in that direction. And in the folklore, there are frequent statements like this one, which is simply false:

"Empirically observed covariation is a necessary but not sufficient condition for causality."

(Edward Tufte, quoted here.)

Comment author: IlyaShpitser 09 October 2012 10:45:38PM *  1 point

I think the way causal discovery is sometimes sold is not as a way of establishing causal structure from data, but as a way of narrowing down the set of experiments one would have to run to establish causal structure definitively, in domains which are poorly understood but in which we can experiment (comp. bio., etc.).

If phrased in this way, assuming faithfulness is not "so bad." It is true that many folks in causal inference and related areas are quite skeptical of faithfulness-type assumptions, and rightly so. To me, it's the lack of uniform consistency that's the real killer.

Part II of a talk I gave (http://videolectures.net/uai2011_shpitser_causal/) has an example of how you can do completely ridiculous, magical things if you assume a type of faithfulness. See 31:07.