dxu comments on Rationality Quotes December 2014 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Not necessarily. Causation might not be present, true, but causation is not necessary for correlation, and statistical correlation is what Bayes is all about. Correlation often implies causation, and even when it doesn't, it should still be respected as a real statistical phenomenon. All Jiro's update would require is that P(success|genius) > P(success|~genius), which I don't think is too hard to grant. It might not update enough to make the hypothesis the dominant hypothesis, true, but the update definitely occurs.
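As a toy illustration of that update (all the numbers below are invented for illustration, not taken from anyone's comment): with any prior on "genius" and any likelihoods satisfying P(success|genius) > P(success|~genius), observing success raises the posterior on genius, even if it remains small.

```python
# Toy Bayes update: observing success raises P(genius) whenever
# P(success|genius) > P(success|~genius). Numbers are made up.

p_genius = 0.01                 # assumed prior
p_success_given_genius = 0.5    # assumed likelihoods
p_success_given_not = 0.1

p_success = (p_success_given_genius * p_genius
             + p_success_given_not * (1 - p_genius))
posterior = p_success_given_genius * p_genius / p_success

print(round(posterior, 4))  # larger than the prior, but still small
```

The update occurs, but with these numbers the "genius" hypothesis is still far from dominant, which is exactly the point being made.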
"Because" (in the original quote) is about causality. Your inequality implies nothing causal without a lot of assumptions. I don't understand what your setup is for increasing belief about a causal link based on an observed correlation (not saying it is impossible, but I think it would be helpful to be precise here).
Jiro's comment is correct but a non-sequitur, because he was (correctly) pointing out that there is a dependence between success and genius that you can exploit to update. But that is not what the original quote was talking about at all; it was talking about an incorrect, self-serving assignment of a causal link in a complicated situation.
Yes, naturally. I suppose I should have made myself a little clearer there; I was not making any reference to the original quote, but rather to Jiro's comment, which makes no mention of causation, only Bayesian updates.
Because P(causation|correlation) > P(causation|~correlation). That is, it's more likely that a causal link exists if you see a correlation than if you don't see a correlation.
As for your second paragraph, Jiro himself/herself has come to clarify, so I don't think it's necessary (for me) to continue that particular discussion.
Where are you getting this? What are the numerical values of those probabilities?
You can have presence or absence of a correlation between A and B, coexisting with presence or absence of a causal arrow between A and B. All four combinations occur in ordinary, everyday phenomena.
I cannot see how to define, let alone measure, probabilities P(causation|correlation) and P(causation|~correlation) over all possible phenomena.
I also don't know what distinction you intend in other comments in this thread between "correlation" and "real correlation". This is what I understand by "correlation", and there is nothing I would contrast with this and call "real correlation".
Do you think it is literally equally likely that causation exists if you observe a correlation, and if you don't? That observing the presence or absence of a correlation should not change your probability estimate of a causal link at all? If not, then you acknowledge that P(causation|correlation) != P(causation|~correlation). Then it's just a question of which probability is greater. I assert that, intuitively, the former seems likely to be greater.
By "real correlation" I mean a correlation that is not simply an artifact of your statistical analysis, but is actually "present in the data", so to speak. Let me know if you still find this unclear. (For some examples of "unreal" correlations, take a look here.)
I think I have no way of assigning numbers to the quantities P(causation|correlation) and P(causation|~correlation) assessed over all examples of pairs of variables. If you do, tell me what numbers you get.
I asked why and you have said "intuition", which means that you don't know why.
My belief is different, but I also know why I hold it. Leaping from correlation to causation is never justified without reasons other than the correlation itself, reasons specific to the particular quantities being studied. Examples such as the one you just linked to illustrate why. There is no end of correlations that exist without a causal arrow between the two quantities. Merely observing a correlation tells you nothing about whether such an arrow exists. For what it's worth, I believe that is in accordance with the views of statisticians generally. If you want to overturn basic knowledge in statistics, you will need a lot more than a pronouncement of your intuition.
A correlation (or any other measure of statistical dependence) is something computed from the data. There is no such thing as a correlation not "present in the data".
What I think you mean by a "real correlation" seems to be an actual causal link, but that reduces your claim that "real correlation" implies causation to a tautology. What observations would you undertake to determine whether a correlation is, in your terms, a "real" correlation?
My original question was whether you think the probabilities are equal. This reply does not appear to address that question. Even if you have no way of assigning numbers, that does not imply that the three possibilities (>, =, <) are equally likely. Let's say we somehow did find those probabilities. Would you be willing to say, right now, that they would turn out to be equal (with high probability)?
Okay, here's my reasoning (which I thought was intuitively obvious, hence the talk of "intuition", but illusion of transparency, I guess):
The presence of a correlation between two variables means (among other things) that those two variables are statistically dependent. There are many ways for variables to be dependent, one of which is causation. When you observe that a correlation is present, you are effectively eliminating the possibility that the variables are independent. With this possibility gone, the remaining possibilities must increase in probability mass, i.e. become more likely, if we still want the total to sum to 1. This includes the possibility of causation. Thus, the probability of some causal link existing is higher after we observe a correlation than before: P(causation|correlation) > P(causation|~correlation).
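A toy numerical version of this renormalization argument (the hypothesis list and priors are invented, and treating an observed correlation as eliminating independence outright is an idealization; really it is only made less likely):

```python
# Seeing a correlation removes probability mass from "X and Y are
# independent"; renormalizing spreads that mass over the remaining
# hypotheses, including "causal". Priors are made up.

priors = {"independent": 0.5, "causal": 0.2, "confounded": 0.3}

# Idealization: the observed correlation rules out independence.
posterior = {h: p for h, p in priors.items() if h != "independent"}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

print(posterior["causal"])  # 0.4, up from the prior of 0.2
```

Note that "confounded" rises by exactly the same factor, which is why the update alone cannot tell you which dependence-producing mechanism you are looking at.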
If you are using a flawed or unsuitable analysis method, it is very possible to (seemingly) get a correlation when in fact no such correlation exists. An example of such a flawed method may be found here, where a correlation is found between ratios of quantities despite those quantities being statistically independent.
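The ratio artifact can be reproduced in a few lines (an assumed setup in the spirit of the linked example, not taken from it): three independent variables show a substantial Pearson correlation between the ratios x/z and y/z purely because of the shared divisor.

```python
# Spurious correlation between ratios: x, y, z are independent,
# yet x/z and y/z are strongly correlated via the common divisor z.
import random

random.seed(0)
n = 10_000
x = [random.uniform(1, 2) for _ in range(n)]
y = [random.uniform(1, 2) for _ in range(n)]
z = [random.uniform(1, 2) for _ in range(n)]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(pearson(x, y))  # near zero, as expected for independent data
print(pearson([xi / zi for xi, zi in zip(x, z)],
              [yi / zi for yi, zi in zip(y, z)]))  # clearly positive
```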
As I suggested in my reply to Lumifer, redundancy helps.
Sorry it's taken me so long to get back to this.
The illusion of transparency applies not only to explaining things to other people, but to explaining things to oneself.
The argument still does not work. Statistical independence does not imply causal independence. In causal reasoning the idea that it does is called the assumption or axiom of faithfulness, and there are at least two reasons why it may fail. Firstly, the finiteness of sample sizes means that observations can never prove statistical independence, only put likely upper bounds on the magnitude of any dependence. As Andrew Gelman has put it, with enough data, nothing is independent. Secondly, dynamical systems and systems of cyclic causation are capable of producing robust statistical independence between variables that are directly causally related. There may be reasons for expecting faithfulness to hold in a specific situation, but it cannot be regarded as a physical law true always and everywhere.
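Gelman's point can be put in rough numbers (a back-of-the-envelope sketch, using the standard approximation that under the null of zero correlation the standard error of a sample correlation is about 1/sqrt(n)):

```python
# A "negligible" correlation of 0.01 is invisible in a small sample
# but overwhelmingly significant in a huge one, since the approximate
# z-score against the null r = 0 is r * sqrt(n).

r = 0.01
for n in (100, 10_000, 1_000_000):
    z = r * n ** 0.5
    print(n, round(z, 1))  # 0.1, then 1.0, then 10.0
```

So at a million observations even trivial dependencies pass any conventional significance test, which is why observed dependence at large n tells you almost nothing by itself.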
Even when faithfulness does hold, statistical dependence tells you only that either causation or selection is happening somewhere. If your observations are selected on a common effect of the two variables, you may observe correlation when the variables are causally independent. If you have reason to think that selection is absent, you still have to decide whether you are looking at one variable causing the other, both being effects of common causes, or a combination.
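Selection on a common effect is easy to simulate (an invented example of collider bias, not from the comment): X and Y are independent, but restricting attention to cases where their sum is large induces a clear negative correlation in the selected subsample.

```python
# Collider bias: X and Y are independent standard normals, but
# conditioning on X + Y > 1 (selection on a common effect) makes
# them negatively correlated among the selected cases.
import random

random.seed(1)
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50_000)]
selected = [(x, y) for x, y in pairs if x + y > 1]  # the selection step

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov / (sum((ai - ma) ** 2 for ai in a)
                  * sum((bi - mb) ** 2 for bi in b)) ** 0.5

xs = [x for x, _ in selected]
ys = [y for _, y in selected]
print(pearson([x for x, _ in pairs], [y for _, y in pairs]))  # near zero
print(pearson(xs, ys))  # clearly negative in the selected subsample
```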
Given all of these complications, which in a real application of statistics you would have to have thought about before even collecting any data, the argument that correlation is evidence for causation, in the absence of any other information about the variables, has no role to play. The supposed conclusion that P(causation|correlation) > P(causation|~correlation) is useless unless there is reason to think that the difference in probabilities is substantial, which is something you have not addressed, and which would require coming up with something like actual values for the probabilities.
This is too vague to be helpful. What multiple analysis methods? The correlation coefficient simply is what it is. There are other statistics you can calculate for statistical dependency in general, but they are subject to the same problem as correlation: none of them imply causation. What does showing someone else your results accomplish? What are you expecting them to do that you did not? What is "the way everything is supposed to turn out"?
What, in concrete terms, would you do to determine the causal efficacy of a medication? You won't get anywhere trying to publish results with no better argument than "correlation raises the probability of causation".
How will you be able to distinguish between the two?
You also seem to be using the word "correlation" to mean "any kind of relationship or dependency" which is not what it normally means.
Redundancy helps. Use multiple analysis methods, show someone else your results, etc. If everything turns out the way it's supposed to, then that's strong evidence that the correlation is "real".
EDIT: It appears I've been ninja'd. Yes, I am not using the term "correlation" in the technical sense, but in the colloquial sense of "any dependency". Sorry if that's been making things unclear.
I still don't understand in which sense do you use the word "real" in 'correlation is "real"'.
Let's say you have two time series 100 data points in length each. You calculate their correlation, say, Pearson's correlation. It's a number. In which sense can that number be "real" or "not real"?
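A quick simulation of exactly this situation (an illustrative sketch): for independent series of 100 points the "true" correlation is 0, but individual sample estimates scatter around it with a standard deviation of roughly 1/sqrt(100) = 0.1, so some runs look noticeably "correlated".

```python
# Sampling variability of Pearson's r: 1000 pairs of independent
# length-100 series, each yielding one sample correlation.
import random

random.seed(2)

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (sum((x - ma) ** 2 for x in a)
                  * sum((y - mb) ** 2 for y in b)) ** 0.5

rs = []
for _ in range(1000):
    a = [random.gauss(0, 1) for _ in range(100)]
    b = [random.gauss(0, 1) for _ in range(100)]
    rs.append(pearson(a, b))

print(max(abs(r) for r in rs))  # a few runs look "correlated"
print(sum(rs) / len(rs))        # the average is near the true value 0
```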
Do you implicitly have in mind the sampling theory where what you observe is a sample estimate and what's "real" is the true parameter of the unobserved underlying process? In that case there is a very large body of research, mostly going by the name of "frequentist statistics", about figuring out what your sample estimate tells you about the unobserved true value (calling which "real" is a bit of a stretch, since normally it is not real).
It seems as though my attempts to define my term intensionally aren't working, so I'll try and give an extensional definition instead:
An example would be that site you linked earlier. Those quantities appear to be correlated, but the correlations are not "real".
So you are using "real" in the sense of "matching my current ideas of what's likely". I think this approach is likely to... lead to problems.
The quote about causality is a characterization of an opponent's view. I was suggesting that the quote's author may have mischaracterized his opponent's view by interpreting a Bayesian update as an assertion of causality.