It is a commonplace that correlation does not imply causality, however eyebrow-wagglingly suggestive it may be of causal hypotheses. It is less commonly noted that causality does not imply correlation either. It is quite possible for two variables to have zero correlation, and yet for one of them to be completely determined by the other.
Very true. Once again, in the context of a Richard Kennaway post, I'm going to have to recommend the use of more precise concepts. Instead of "correlation", we should be talking about "mutual information", and it would be helpful if we used Judea Pearl's definition of causality.
Mutual information between two variables is (among many equivalent definitions) how much you learn about one variable by observing the other. Statistical correlation is one way for two variables to carry mutual information about each other, but not the only way.
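To make that concrete, here's a quick sketch (my own toy example, not anything from Kennaway's post): take Y = X² with X symmetric about zero. The Pearson correlation comes out near zero, yet a crude binned estimate of the mutual information is clearly positive, since Y is fully determined by X.

```python
# Hedged sketch: zero correlation with nonzero mutual information.
# With X symmetric about zero and Y = X**2, corr(X, Y) ~ 0, but Y is
# completely determined by X, so they share plenty of mutual information.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = x ** 2

print("corr(X, Y) ~", np.corrcoef(x, y)[0, 1])  # near 0

def mutual_information(a, b, bins=30):
    """Crude plug-in MI estimate (in nats) from a 2-D histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()                     # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)          # marginal of A
    p_b = p_ab.sum(axis=0, keepdims=True)          # marginal of B
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

print("MI(X; Y) ~", mutual_information(x, y), "nats")  # clearly > 0
```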
So, as JGWeissman said, there can be mutual information between the two series even in the absence of a statistical correlation that directly compares time t in one to time t in the other. For example, there is mutual information between sin(t) and cos(t): one determines the other completely (indeed d(sin(t))/dt = cos(t)), and yet they are simultaneously uncorrelated (i.e. uncorrelated when comparing time t in one to time t in the other). The reason there is mutual information is that if you know sin(t), a simple quarter-period time-shift gives you cos(t).
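A quick numerical check of that claim (my own sketch; the sampling grid and single-period window are assumptions, not anything from the original comment): over a whole period the equal-time correlation vanishes, while a quarter-period shift recovers cos(t) from sin(t) exactly.

```python
# Hedged sketch of the sin/cos point above: sampled over a whole period,
# sin(t) and cos(t) are uncorrelated at equal times, yet a quarter-period
# shift maps one onto the other exactly.
import numpy as np

t = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
s, c = np.sin(t), np.cos(t)

print("corr(sin t, cos t) ~", np.corrcoef(s, c)[0, 1])  # ~0
print("max |cos(t) - sin(t + pi/2)| =",
      np.max(np.abs(c - np.sin(t + np.pi / 2))))        # ~0 (exact relation)
```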
As for causation, the Pearl definition is (and my apologies, I may not get this exactly right) that:
"A causes B iff, after learning A, nothing else at the time of A or B gives you further information about B (and A is the minimal set for which this is true)."
In other words, A causes B iff A is the minimal set of variables such that, conditional on A, B is independent of everything else; A "screens off" B from the rest.
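Here's one way to see the screening-off idea numerically (my own construction; a linear-Gaussian chain C → A → B and a partial-correlation check stand in for Pearl's general formalism): C and B are unconditionally correlated, but once you condition on A, C tells you nothing more about B.

```python
# Hedged illustration of screening off in the chain C -> A -> B:
# C and B are correlated, but given A, C carries no further information
# about B (partial correlation ~ 0 in this linear-Gaussian toy model).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
c = rng.normal(size=n)
a = 0.8 * c + rng.normal(size=n)   # C influences A
b = 0.5 * a + rng.normal(size=n)   # only A influences B

def residual(y, x):
    """Residual of y after linear regression on x (both ~zero-mean)."""
    beta = np.dot(x, y) / np.dot(x, x)
    return y - beta * x

print("corr(C, B)            ~", np.corrcoef(c, b)[0, 1])  # noticeably != 0
print("partial corr(C, B | A) ~",
      np.corrcoef(residual(c, a), residual(b, a))[0, 1])   # ~0: A screens off
```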
So, anyone want to rephrase Kennaway's post with those definitions?