A post about how, for some causal models, causal relationships can be inferred without doing experiments that control one of the random variables.

If correlation doesn’t imply causation, then what does?

To help address problems like the two example problems just discussed, Pearl introduced a causal calculus. In the remainder of this post, I will explain the rules of the causal calculus, and use them to analyse the smoking-cancer connection. We'll see that even without doing a randomized controlled experiment it's possible (with the aid of some reasonable assumptions) to infer what the outcome of a randomized controlled experiment would have been, using only data from purely observational studies; no experimental intervention forcing people to smoke or not smoke is required.
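For concreteness: the smoking-cancer analysis in the post turns on what Pearl calls the front-door adjustment, in a model where smoking S affects cancer C only through an observable intermediary T (tar deposits), with a hidden confounder between S and C. A minimal sketch in Python, with numbers invented purely for illustration:

```python
# Hedged sketch of Pearl's front-door adjustment. Model: S (smoking) ->
# T (tar) -> C (cancer), plus a hidden confounder between S and C. Then
#   P(C | do(S=s)) = sum_t P(t | s) * sum_s' P(C | t, s') * P(s')
# and every factor on the right is observable. All numbers are made up.
p_s = {0: 0.5, 1: 0.5}                        # P(S = s)
p_t_given_s = {0: {0: 0.95, 1: 0.05},         # P(T = t | S = s)
               1: {0: 0.05, 1: 0.95}}
p_c_given_ts = {(0, 0): 0.05, (0, 1): 0.10,   # P(C = 1 | T = t, S = s)
                (1, 0): 0.30, (1, 1): 0.40}

def p_cancer_do_smoking(s):
    """P(C = 1 | do(S = s)) via the front-door formula."""
    return sum(
        p_t_given_s[s][t]
        * sum(p_c_given_ts[(t, s2)] * p_s[s2] for s2 in (0, 1))
        for t in (0, 1)
    )

# The causal effect of smoking on cancer, from observational data only:
print(p_cancer_do_smoking(1) - p_cancer_do_smoking(0))  # ~0.25 here
```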


Correlation may not imply causation - but it is highly correlated with causation! In fact, the most likely theory is that correlation causes causation... ;-)

That last sentence? Ow.

If A causes B, then artificially inducing A results in (or increases the frequency of, etc.) B. I am not really sure what you are trying to say, nor what "most likely theory" here means (under what setup?)

I realize you are being glib, but this is important!

[anonymous]

nor what "most likely theory" here means (under what setup?)

Rupert Sheldrake has a theory of how correlation causes causation.

[This comment is no longer endorsed by its author]

If A causes B, then artificially inducing A results in (or increases the frequency of, etc.) B. I am not really sure what you are trying to say, nor what "most likely theory" here means (under what setup?)

My impression was that it was an absurdity for the purpose of satire.

It is what's known as a joke - it has no hidden wisdom or meaning.

I think part of what is amusing here is that this joke is a serious theory for some folks.

I think part of what is amusing here is that this joke is a serious theory for some folks.

Like Rupert Sheldrake?

Should I delete? I have no qualms about deleting.

(It's unfortunate that submitting doesn't include the link in a structured way, which would allow duplicate detection.)

tim

Given that the original submission is a year and a half old, it's likely that enough people are unfamiliar with it that it's worth keeping up. (Afaik, resubmission isn't a big enough problem in discussion to enact a delete-all-duplicates policy.)

Looks promising, but requiring the graph to be acyclic makes it difficult to model processes where feedback is involved. A workaround would be to treat each time step of a process as a different event: have A(0)->B(1), where event A at time 0 affects event B at time 1, along with B(0)->A(1), A(0)->A(1), B(0)->B(1), A(t)->B(t+1), etc. But this gets unwieldy very quickly.
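A minimal sketch of that unrolling, assuming a two-variable loop and an arbitrary horizon T (both illustrative):

```python
# Unroll the feedback loop A <-> B into a DAG over time steps: each
# variable becomes a family of time-indexed nodes A(0), A(1), ..., and
# every edge points strictly forward in time, so no cycle can form.
T = 3  # horizon; purely illustrative

edges = []
for t in range(T - 1):
    edges += [
        (f"A({t})", f"A({t+1})"),  # A's own dynamics
        (f"B({t})", f"B({t+1})"),  # B's own dynamics
        (f"A({t})", f"B({t+1})"),  # A feeds into B one step later
        (f"B({t})", f"A({t+1})"),  # B feeds into A one step later
    ]

# Sanity check: a graph is a DAG iff it admits a topological order.
# Here time itself is one, since every edge goes from step t to t+1.
order = {f"{v}({t})": t for t in range(T) for v in "AB"}
assert all(order[u] < order[v] for u, v in edges)
print(f"{len(edges)} edges, acyclic by construction")
```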

Your workaround is correct, and not as unwieldy as it may appear at first glance. A lot of people have been using causal diagrams with this structure very successfully in situations where the data generating mechanism has loops. As a starting point, see the literature on inverse probability weighting and marginal structural models.

Processes with feedback loops are, in fact, a primary motivation for using causal directed acyclic graphs. If there are no feedback loops, reasoning about causality is relatively simple even without graphs; whereas if there are loops, even very smart people will get it wrong unless they are able to analyze the situation in terms of the graphical concept of 'collider stratification bias'.
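A quick simulation of collider stratification bias, with invented variable names and probabilities: two independent causes of hospitalization become negatively correlated once we stratify on the hospitalized.

```python
import random

random.seed(0)
n = 100_000
# Two independent causes of a common effect (a "collider"):
smoking = [random.random() < 0.3 for _ in range(n)]
gene    = [random.random() < 0.3 for _ in range(n)]
# Hospitalization is likely given either cause, rare otherwise.
hosp = [s or g or random.random() < 0.05 for s, g in zip(smoking, gene)]

def corr(xs, ys):
    """Pearson correlation of two boolean lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    vx = sum((x - mx) ** 2 for x in xs) / len(xs)
    vy = sum((y - my) ** 2 for y in ys) / len(ys)
    return cov / (vx * vy) ** 0.5

print(corr(smoking, gene))  # ~0: the causes really are independent
sub = [(s, g) for s, g, h in zip(smoking, gene, hosp) if h]
print(corr([s for s, _ in sub],
           [g for _, g in sub]))  # negative: "explaining away" bias
```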

The correlation/causation conundrum is a particularly frustrating one in the social sciences due to the complex interaction of variables related to human experience.

I've found that looking at time order and thinking of variables as events is a helpful way to simplify experimental designs that seek to get at causal mechanisms in my behavioral research.

Take the smoking example:

I would consider measuring changes in strength of correlation at various points in an ongoing experiment.

Once a baseline measurement is obtained from participants who already smoke, we measure the correlation between the average number of cigarettes smoked per week and lung capacity. This way one doesn't have to randomize or control, which would unethically ask people to smoke if they don't already. We already have a hypothesis, based on the prior that the volume of cigarettes smoked correlates strongly and positively with lung damage, that reducing the number of cigarettes smoked would improve lung functioning in smokers.

But here we assume that the lifestyles of the smokers studied are relatively stable across the span of the experiment.

The researcher must take into account factors outside of smoking that could impact lung functioning, e.g., intermittent exercise and lifestyle improvements.

In any case, following the same group of people over time is a lot easier than matching comparison groups by race/age/gender/education, or any of the other million human variables.

Once a baseline measurement is obtained from participants who already smoke, we measure the correlation between the average number of cigarettes smoked per week and lung capacity. This way one doesn't have to randomize or control, which would unethically ask people to smoke if they don't already. We already have a hypothesis, based on the prior that the volume of cigarettes smoked correlates strongly and positively with lung damage, that reducing the number of cigarettes smoked would improve lung functioning in smokers.

It was not clear from this description what exactly your design was. Is it the case that you find some smokers, and then track the relationship between lung capacity and how much they smoke per week (which varies due to [reasons])? Or do you artificially reduce the nicotine intake in smokers (which is an ethical intervention)? Or what?

Seems like a much longer (and harder to read) version of Eliezer's Causal Model post. What can I expect to get out of this one that I wouldn't find in Eliezer's version?

Correlation doesn't imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing 'look over there'.

-XKCD

Details? Content? Eliezer doesn't even define d-separation, for starters.

[anonymous]

Do you know if there's an efficient algorithm for determining when two subsets of a DAG are d-separated given another? The naive algorithm seems to be a bit slow.

http://www.gatsby.ucl.ac.uk/~zoubin/course05/BayesBall.pdf

Amusing name, linear-time algorithm. Also, amusingly, I happen to have a direct line of sight to the author while writing this post :).

In some sense, we know a priori that d-separation has to be linear time because it is a slightly fancy graph traversal. If you don't like Bayes Ball, you can use the moralization algorithm due to Lauritzen (described here:

http://www.stats.ox.ac.uk/~steffen/teaching/grad/graphicalmodels.pdf

see slide titled "alternative equivalent separation"), which is slightly harder to follow for an unaided human, but which has a very simple implementation (which reduces to a simple DFS traversal of an undirected graph you construct).

edit: fixed links, hopefully.
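For the curious, the moralization criterion really does fit in a short sketch. Conventions here are mine: the DAG is a dict mapping each node to the set of its parents, and X is d-separated from Y given Z iff Z separates X from Y in the moral graph of the subgraph induced on the ancestors of X ∪ Y ∪ Z.

```python
from collections import deque

def ancestors(dag, nodes):
    """Return the given nodes plus all their ancestors.
    `dag` maps each node to the set of its parents."""
    seen, stack = set(), list(nodes)
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(dag.get(v, ()))
    return seen

def d_separated(dag, xs, ys, zs):
    """Lauritzen's moralization criterion for d-separation."""
    # 1. Restrict to the ancestral set of xs | ys | zs.
    keep = ancestors(dag, set(xs) | set(ys) | set(zs))
    # 2. Moralize: "marry" co-parents, then drop edge directions.
    adj = {v: set() for v in keep}
    for v in keep:
        parents = [p for p in dag.get(v, ()) if p in keep]
        for p in parents:
            adj[v].add(p); adj[p].add(v)
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                adj[p].add(q); adj[q].add(p)
    # 3. Delete zs and search for any xs-ys path (plain BFS).
    blocked = set(zs)
    frontier = deque(set(xs) - blocked)
    seen = set(frontier)
    while frontier:
        v = frontier.popleft()
        if v in ys:
            return False                 # found an active path
        for w in adj[v] - blocked - seen:
            seen.add(w)
            frontier.append(w)
    return True

# Classic collider example: smoking -> cancer <- gene.
dag = {"cancer": {"smoking", "gene"}}
print(d_separated(dag, {"smoking"}, {"gene"}, set()))       # True
print(d_separated(dag, {"smoking"}, {"gene"}, {"cancer"}))  # False
```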

[anonymous]

Yeah, sadly both links are broken for me.

Link is broken for me.

What can I expect to get out of this one that I wouldn't find in Eliezer's version?

Some of the useful (if you're going to use it or enjoy it, that is) math from chapters 1-3 of Pearl's book.

More detail, more mathematics, more exercises, more references. More, that's what you get. Eliezer's post is only an appetiser, and the XKCD a mere amuse-bouche.

Correlation doesn't necessitate causation, but it is certainly (weak?) Bayesian evidence.
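Spelling that out with Bayes' theorem, assuming only a nondegenerate prior and that a correlation is likelier to be observed when a causal link exists:

```latex
P(\text{causal}\mid\text{corr})
  = \frac{P(\text{corr}\mid\text{causal})\,P(\text{causal})}{P(\text{corr})}
  > P(\text{causal})
\quad\Longleftrightarrow\quad
P(\text{corr}\mid\text{causal}) > P(\text{corr}\mid\neg\,\text{causal}).
```

The strength of the evidence is just this likelihood ratio, which is presumably what the parenthetical "(weak?)" is gesturing at.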