(An idea I had while responding to this quotes thread)
"Correlation does not imply causation" is bandied around inexpertly and inappropriately all over the internet. Lots of us hate this.
But get this: the phrase, and the most obvious follow-up phrases like "what does imply causation?" are not high-competition search terms. Up until about an hour ago, the domain name correlationdoesnotimplycausation.com was not taken. I have just bought it.
There is a correlation-does-not-imply-causation shaped space on the internet, and it's ours for the taking. I would like to fill this space with a small collection of relevant educational resources explaining what is meant by the term, why it's important, why it's often used inappropriately, and the circumstances under which one may legitimately infer causation.
At the moment the Wikipedia page is trying to do this, but it's not really optimised for the task. It also doesn't carry the undercurrent of "no, seriously, lots of smart people get this wrong; let's make sure you're not one of them", and I think it should.
The purpose of this post is two-fold:
Firstly, it lets me say "hey dudes, I've just had this idea. Does anyone have any suggestions (pragmatic/technical, content-related, pointing out why it's a terrible idea, etc.), or alternatively, would anyone like to help?"
Secondly, it raises the question of what other corners of the internet are ripe for the planting of sanity waterline-raising resources. Are there any other similar concepts that people commonly get wrong, but don't have much of a guiding explanatory web presence to them? Could we put together a simple web platform for carrying out this task in lots of different places? The LW readership seems ideally placed to collectively do this sort of work.
If you are familiar with d-separation (http://en.wikipedia.org/wiki/D-separation#d-separation), we have:
if A and B are dependent, and there's some unobserved C involved, then one of the following holds:
(1) A <- C -> B, or
(2) A -> C -> B, or
(3) A <- C <- B
(this is Reichenbach's common cause principle: http://plato.stanford.edu/entries/physics-Rpcc/)
or
(4) A -> C <- B
if C or one of its effects attains a particular (not necessarily recorded) value, i.e. has been conditioned on. Statisticians know this as Berkson's bias, which is a form of selection bias. In AI, it is known as "explaining away." Manfred's excellent example falls into category (4), with C observed to equal "hired as actor."
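For concreteness, here is a minimal numerical sketch (my own illustration, with made-up Gaussian variables, not anything from the original comment) of structures (1) and (4): a common cause makes A and B correlated even though neither causes the other, and a collider makes independent A and B look correlated once you condition on C, as in Berkson's bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# (1) Common cause A <- C -> B: C drives both, so A and B are correlated
# even though neither causes the other.
C = rng.normal(size=n)
A = C + rng.normal(size=n)
B = C + rng.normal(size=n)
print("common cause, corr(A, B):", round(np.corrcoef(A, B)[0, 1], 3))  # ~0.5

# (4) Collider A -> C <- B: A and B are independent, and C depends on both.
A = rng.normal(size=n)
B = rng.normal(size=n)
C = A + B
print("collider, corr(A, B) overall:", round(np.corrcoef(A, B)[0, 1], 3))  # ~0

# Condition on C attaining a particular range of values -- keeping only the
# "selected" cases, as in Berkson's bias -- and a spurious negative
# correlation between A and B appears.
selected = C > 1.0
print("collider, corr(A, B) given C > 1:",
      round(np.corrcoef(A[selected], B[selected])[0, 1], 3))  # clearly negative
```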
Beware: d-separation applies both to causal graphical models and to Bayesian networks (which are statistical, not causal, models). The arrows mean different things in these two kinds of model, and this is actually a fairly subtle issue.
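To make the subtlety concrete, here is a small sketch (again my own, using a made-up joint distribution over two binary variables): as Bayesian networks, A -> B and A <- B can encode exactly the same joint distribution, but as causal models they make different predictions about an intervention do(A = 1).

```python
import numpy as np

# Made-up joint distribution p(A, B): rows index A in {0, 1}, columns index B in {0, 1}.
joint = np.array([[0.30, 0.10],
                  [0.20, 0.40]])

pA = joint.sum(axis=1)            # p(A)
pB = joint.sum(axis=0)            # p(B)
pB_given_A = joint / pA[:, None]  # p(B | A)
pA_given_B = joint / pB[None, :]  # p(A | B)

# As Bayesian networks, A -> B and A <- B encode exactly the same joint:
joint_A_to_B = pA[:, None] * pB_given_A   # p(A) p(B|A)
joint_B_to_A = pA_given_B * pB[None, :]   # p(A|B) p(B)
print(np.allclose(joint_A_to_B, joint), np.allclose(joint_B_to_A, joint))  # True True

# As causal models, they predict different things about the intervention do(A=1):
# under A -> B, forcing A changes B's distribution; under A <- B it does not.
print("A -> B, p(B=1 | do(A=1)):", round(pB_given_A[1, 1], 3))  # = p(B=1 | A=1)
print("A <- B, p(B=1 | do(A=1)):", round(pB[1], 3))             # = p(B=1), unchanged
```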
Odd - I always felt like d-separation was the same thing on causal diagrams and on Bayes networks. Although I also understood a Bayes network as being a model of the causal directions in a situation, so perhaps that's why.
Manfred's excellent example needs equally excellent counterparts for other possibilities.