I interpreted this paragraph as suggesting that causality reduces to similarity, but given your latest clarifications, I guess what you actually had in mind was that causality tends to produce similarity and so we can infer causality from similarity.
When two regions of spacetime are timelike separated, we cannot deduce any direction of causality from similarities between them; they could be similar because one is cause and one is effect, or vice versa. But when two regions of spacetime are spacelike separated, and far enough apart that they have no common causal ancestry assuming one direction of physical causality, but would have common causal ancestry assuming a different direction of physical causality, then similarity between them... is at least highly suggestive.
Previously, I thought you considered causality to be a higher level concept rather than a primitive one, similar to "sound waves" or "speech" as opposed to say "particle movements". That sort of made sense except that I didn't know why you wanted to make causality an integral part of decision theory. Now you're saying that you consider causality to be primitive and a special kind of logical relation, which actually makes less sense to me, and still doesn't explain why you want to make causality an integral part of decision theory. It makes less sense because if we consider the laws of physics as logical relations, they don't have a direction. As you said, "Time-symmetrical laws of physics didn't seem to leave room for asymmetrical causality." I don't see how you get around this problem if you take causality to be primitive. But the bigger problem is that (at the risk of repeating myself too many times) I don't understand your motivation for studying causality, because if I did I'd probably spend more time thinking about it myself and understand your ideas about it better.
I'm trying to think like reality. If causality isn't a special kind of logic, why is everything in the known universe made out of (a continuous analogue of) causality instead of logic in general? Why not Time-Turners or a zillion other possibilities?
Part of the sequence: Rationality and Philosophy
After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics, 35-27 on empiricism vs. rationalism, and 57-27 on physicalism vs. non-physicalism.
Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1
Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make a moral judgment they aren't themselves motivated by, but scientists have reached consensus about such things.
In its dependence on masses of evidence and definitive experiments, science doesn't trust your rationality.
Sometimes, you can answer philosophical questions with mountains of evidence, as with the example of moral motivation given above. But for many philosophical problems, overwhelming evidence simply isn't available. Or maybe you can't afford to wait a decade for definitive experiments to be done. Thus, "if you would rather not waste ten years trying to prove the wrong theory," or if you'd like to get the right answer without overwhelming evidence, "you'll need to [tackle] the vastly more difficult problem: listening to evidence that doesn't shout in your ear."
This is why philosophers need rationality training even more desperately than scientists do. Philosophy asks you to get the right answer without evidence that shouts in your ear. The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)
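The parenthetical point about likelihood ratios can be made concrete with Bayes' rule in odds form. A minimal sketch, with illustrative numbers of my own (not from the post): when the evidence is strong, almost any prior lands near the right answer, but when the likelihood ratio is small, the posterior stays close to the prior, so being right depends on having had a good prior.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability after one Bayesian update.

    Converts the prior to odds, multiplies by the likelihood ratio
    P(E|H) / P(E|~H), and converts back to a probability.
    """
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Shouting evidence (likelihood ratio 100): even an uninformative
# prior of 0.5 yields a confident posterior of ~0.99.
strong = posterior(0.5, 100)

# Whispering evidence (likelihood ratio 2): the same prior only
# reaches ~0.67, so the quality of the prior dominates the result.
weak = posterior(0.5, 2)
```

This is the sense in which weaker evidence demands better priors and more accurate updates: with a likelihood ratio of 2, a prior of 0.1 versus 0.5 is the difference between a posterior of ~0.18 and ~0.67.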
Because it tackles so many questions that can't be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn't: we generally are as "stupid and self-deceiving" as science assumes we are. We're "predictably irrational" and all that.
But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one's rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?
Alas, it doesn't seem so. I don't see much Kahneman & Tversky in philosophy syllabi — just lightweight "critical thinking" classes and lists of informal fallacies. But even classes in human bias might not improve things much, due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don't like. So what's really needed is regular habits training for genuine curiosity, motivated cognition mitigation, and so on.
(Imagine a world in which Frank Jackson's famous reversal on the knowledge argument wasn't news — because established philosophers changed their minds all the time. Imagine a world in which philosophers were fine-tuned enough to reach consensus on 10 bits of evidence rather than 1,000.)
We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick (2005)'s CRT? Livengood et al. (2010) found, via an internet survey, that subjects with graduate-level philosophy training had a mean CRT score of 1.32. (The best possible score is 3.)
A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is also noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees of a LessWrong.com meetup group (2.69).
Moreover, several studies show that philosophers are just as prone to particular biases as laypeople (Schulz et al. 2011; Tobia et al. 2012), for example order effects in moral judgment (Schwitzgebel & Cushman 2012).
People are typically excited about the Center for Applied Rationality because it teaches thinking skills that can improve one's happiness and effectiveness. That excites me, too. But I hope that in the long run CFAR will also help produce better philosophers, because it looks to me like we need top-notch philosophical work to secure a desirable future for humanity.3
Next post: Train Philosophers with Pearl and Kahneman, not Plato and Kant
Previous post: Intuitions Aren't Shared That Way
Notes
1 Clearly, many philosophers have advanced versions of motivational internalism that are directly contradicted by these results from psychology. However, we don't know exactly which version of motivational internalism is defended by each survey participant who said they "accept" or "lean toward" motivational internalism. Perhaps many of them defend weakened versions of motivational internalism, such as those discussed in section 3.1 of May (forthcoming).
2 Mathematicians reach even stronger consensus than physicists, but they don't appeal to what is usually thought of as "mountains of evidence." What's going on, there? Mathematicians and philosophers almost always agree about whether a proof or an argument is valid, given a particular formal system. The difference is that a mathematician's premises consist in axioms and in theorems already strongly proven, whereas a philosopher's premises consist in substantive claims about the world for which the evidence given is often very weak (e.g. that philosopher's intuitions).
3 Bostrom (2000); Yudkowsky (2008); Muehlhauser (2011).