"why is a causal connection privileged?" I agree with everything here. What follows is merely history.
Historically, I think that CDT was meant to address the obvious shortcomings of choosing to bring about states that were merely correlated with good outcomes (as in the case of whitening one's teeth to reduce lung cancer risk). When Pearl advocates CDT, he is mainly advocating acting on robust connections that will survive the perturbation of the system caused by the action itself. (E.g. don't think you'll cure lung cancer by making your population brush their teeth, because that is a non-robust correlation that will be eliminated once you change the system.) The centrality of causality in decision making was intuitively obvious but wasn't reflected in formal Bayesian decision theory. This was because of the lack of a good formalism linking probability and causality (and some erroneous positivistic scruples against the very idea of causality). Pearl and SGS's work on causality has done much to address this, but I think much remains to be done.
There is a very annoying historical accident whereby EDT came to be taken as the 'one-boxing' decision theory. First, any use of probability theory in the NP with an infallible predictor is suspicious, because the problem can be specified in a logically complete way with no room for empirical uncertainty. (This is why dominance reasoning is brought in for CDT. What should the probabilities be?) Second, EDT is not easy to make coherent given an agent who knows they follow EDT. (The action that EDT disfavors will have probability zero, and so the agent cannot condition on it in traditional probability theory.) Third, EDT just barely one-boxes. It doesn't one-box on Double Transparent Newcomb, nor on Counterfactual Mugging. It's also obscure what it does on the PD. (Again, I can play the PD against a selfish clone of myself, with both agents having each other's source code. There is no empirical uncertainty here, and so applying probability theory immediately raises deep foundational problems.)
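To make these worries concrete, here is a minimal sketch in Python (the payoffs are the standard Newcomb numbers; the function name and `accuracy` parameter are my own illustrative choices) of EDT's vanilla conditional expectation on the NP with a *fallible* predictor, and of what breaks when the predictor is infallible:

```python
# Minimal sketch: EDT on Newcomb's Problem with a fallible predictor.
# Payoffs are the standard hypothetical ones; `accuracy` is the
# probability that the prediction matches the agent's actual choice.

def edt_expected_utility(action, accuracy):
    """Vanilla conditional expectation E[U | action]."""
    if action == "one-box":
        # With probability `accuracy` the predictor foresaw one-boxing
        # and filled the opaque box.
        return accuracy * 1_000_000
    else:  # "two-box"
        # With probability `accuracy` the predictor foresaw two-boxing
        # and left the opaque box empty.
        return accuracy * 1_000 + (1 - accuracy) * 1_001_000

for accuracy in (0.5, 0.9, 0.99):
    print(accuracy,
          edt_expected_utility("one-box", accuracy),
          edt_expected_utility("two-box", accuracy))

# With an infallible predictor (accuracy = 1) there is no empirical
# uncertainty left to model. And for an agent who knows it follows EDT,
# the disfavored action has probability zero, so
# P(outcome | action) = P(outcome, action) / P(action) divides by zero.
```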
If TDT/UDT had come first (including the logical models and deep connections to Gödel's theorem), the philosophy discussion of the NP would have been very different. EDT (which brings into the NP very dubious empirical probability distributions) would not have been considered at all for the NP. And I don't see that CDT would have held much interest had its alternative not been as feeble as EDT.
It is important to understand why economists have done so much work with Nash equilibria (e.g. on the PD) rather than inventing UDT. The explanation is that the assumptions of logical correlation and perfect empirical knowledge between agents in the PD are not the practical reality. This doesn't mean that UDT is irrelevant to practical situations, only that these situations involve many additional elements that may be complex to deal with in UDT. Causality-based theories would have been interesting independently, for the reasons noted above concerning robust correlations.
EDIT: I realize the comment by Paul Christiano sometimes describes UDT as a variant of EDT. When I use the term "EDT" I mean the theory discussed in the philosophy literature, which involves choosing the action that maximizes P(outcome | action). This is a theory that essentially makes use of vanilla conditional probability. In what I say, I assume that UDT/TDT, despite some similarity to EDT in spirit, are not limited to regular conditioning and do not fail on the Smoking Lesion.
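For definiteness, the contrast can be written out (this is the standard textbook formulation, using Pearl's do-operator for the causal version; nothing here is specific to UDT/TDT):

```latex
\text{EDT:}\quad a^* = \arg\max_a \sum_o U(o)\, P(o \mid a)
\qquad
\text{CDT:}\quad a^* = \arg\max_a \sum_o U(o)\, P(o \mid \mathrm{do}(a))
```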
The LW approach has focused on finding agent types that win on decision problems. Much of the work has gone into trying to formalize TDT/UDT, providing sketches of computer programs that implement these informal ideas. Having read a fair amount of the philosophy literature (including some of the recent work by Egan, Hare/Hedden and others), I think that this agent/program approach has been extremely fruitful. It has not only given compelling solutions to a large number of problems in the literature (Newcomb's Problem, trivial coordination problems like Stag Hunt that CDT fails on, the PD played against a selfish copy of yourself), but it has also elucidated the deep philosophical issues that the Newcomb Problem dramatizes (concerning pre-commitment, free will / determinism, and uncertainty about purely a priori/logical questions). The focus on agents as programs has brought to light the intricate connection between decision making, computability and logic (esp. Gödelian issues), something merely touched on in the philosophy literature.
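As a toy illustration of the agent-as-program approach (a crude 'clique' strategy of my own, far simpler than the Löbian cooperation in the MIRI paper linked below): an agent that reads its opponent's source code can cooperate exactly when that source matches its own, so two selfish copies cooperate in the one-shot PD while defecting against everyone else.

```python
import inspect

def clique_bot(my_source: str, opponent_source: str) -> str:
    """Cooperate iff the opponent is an exact copy of me."""
    return "C" if opponent_source == my_source else "D"

# Two selfish copies playing the one-shot PD, each holding the
# other's source code:
src = inspect.getsource(clique_bot)
print(clique_bot(src, src))        # 'C': mutual cooperation
print(clique_bot(src, "defect"))   # 'D': defects against anyone else
```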
These successes provide sufficient reason to push the agent-centered approach (even if there were no compelling foundational argument that the 'decision-centered' approach was incoherent). Similarly, I think there is no overwhelming foundational argument for Bayesian probability theory, but philosophers should study it because of its fruitfulness in illuminating many particular issues in the philosophy of science and the foundations of statistics (not to mention its success in practical machine learning and statistics).
This response may not be very satisfying, but I can only recommend the UDT posts (http://wiki.lesswrong.com/wiki/Updateless_decision_theory) and the recent MIRI paper http://intelligence.org/files/RobustCooperation.pdf.
Rough arguments against the decision-centered approach:
Point 1
Suppose I win the lottery after playing 10 times. My decision about which numbers to pick in the last lottery was the cause of winning money. (Whereas my previous choices of numbers produced only disutility.) But it's not clear there's anything interesting about this distinction. If I lost money on average, the important lesson is the failure of my agent-type (i.e. the way my decision algorithm makes decisions on lottery problems).
And yet in many practical cases that humans face, it is very useful to look back at which decisions led to high utility. If we compare different algorithms playing casino games, or compare following the advice of a poker expert vs. a newbie, we'll get useful information by looking at the utility caused by each decision. But this investigation of decisions that cause high utility is completely explainable from the agent-centered approach. When simulation and logical correlations between agents are not part of the problem, the optimal agent will make decisions that cause the most utility. UDT/TDT and variants all (afaik) act like CDT in these simple decision problems. If we came upon a Newcomb problem without being told the setup (and without any familiarity with these decision theory puzzles), we would see that the CDTer's decisions were causing utility and the EDTer's decisions were not causing any utility. The EDTer would look like a lunatic with bizarrely good luck. Here we are following a local causal criterion in comparing actions. While usually fine, this would clearly miss an important part of the story in the Newcomb problem.
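To put numbers on the lottery example above (all figures hypothetical):

```python
# Hypothetical lottery: a $1 ticket pays $500,000 with probability 1e-6.
ticket_cost = 1.0
jackpot = 500_000.0
p_win = 1e-6

ev_per_play = p_win * jackpot - ticket_cost
print(ev_per_play)  # -0.5: the agent type loses on average, even though
                    # one particular decision caused a large win.
```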
Point 2
In AI, we want to build decision-making agents that win. In life, we want to improve our decision making so that we win. Thinking about the utility caused by individual decisions may be a useful subgoal in coming up with winning agents, but it is hard to see it as the central issue. The Newcomb problem (along with the Counterfactual Mugging and Parfit's Hitchhiker) makes clear that a local Markovian criterion (e.g. choose the action that will cause the highest utility, ignoring all previous actions/commitments) is inadequate for winning.
Point 3
The UDT one-boxer's agent type does not cause utility in the NP. However, it does logically determine the utility. (More specifically, we could examine the one-boxing program as a formal system and try to isolate which rules/axioms lead to its one-boxing in this type of problem.) Similarly, if two people were using different sets of axioms (where one set is inconsistent), we might point to one of the axioms and say that its inclusion is what determines the inconsistency of the system. This is a mere sketch, but it might be possible to develop a local criterion by which "responsibility" for utility gains can be assigned to particular aspects of an agent.
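One crude way to cash out this notion of "responsibility" (a hypothetical sketch, with a toy payoff function standing in for a real decision problem and all rule names invented) is ablation: remove each rule of the agent in turn and see which removals change the outcome.

```python
# Sketch: assign "responsibility" for utility to an agent's rules by
# ablating each rule and re-running the (toy) problem.

def run_agent(rules: set) -> int:
    """Toy Newcomb evaluation: the agent one-boxes (and gets $1,000,000
    rather than $1,000) only if it has the logical-correlation rule."""
    return 1_000_000 if "trust-logical-correlation" in rules else 1_000

rules = {"maximize-utility", "trust-logical-correlation", "tiebreak-lexically"}
baseline = run_agent(rules)
for rule in sorted(rules):
    ablated = run_agent(rules - {rule})
    if ablated != baseline:
        print(f"{rule!r} is responsible for {baseline - ablated} of utility")
```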
It's clear that we can learn about good agent types by examining particular decisions. We don't always have to work with a fully specified program. (And we don't have the code of any AI that can solve decision problems the way humans can.) So the more local approach may have some value.
If from the US, you'll need a visa to visit Moscow, and I don't think you can obtain one on arrival in Russia.
Whether you change beliefs in response to a new case will depend on the nature of the selection or sampling process. If you go through a history of quack medicine, you'd get lots of new case studies, but you might not change your beliefs about typical human epistemic performance at all.
Even if new cases are selected to be examples of human stupidity, they might still be roughly random within that class. So cases that are more extreme than your expectation will shift your beliefs about the class, but this might leave your beliefs about the frequency of human gullibility unchanged. (Maybe I come to think that believers in quack medicine are even more stupid than I previously thought, but not that such believers are any more common.)
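Schematically (a toy model of my own, with p the base rate of gullibility and theta governing how extreme cases are within the class): if the selection process guarantees that a case x comes from the class, the likelihood carries no information about p,

```latex
P(p, \theta \mid x, \mathrm{selected}) \;\propto\; f_\theta(x)\, P(p, \theta),
```

so the posterior over theta (how stupid the believers are) shifts, while the marginal posterior over p (how common they are) stays at the prior.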
It's very hard to judge whether one's new information is selection-biased in some way. In areas like psychology and political science, it's not so hard to find academic papers that support either side of a debate. And even if you can't find any, that could be because of file-drawer effects or because the topic has not been investigated much by academics.
I think dreeves' background at Yahoo and success in founding Beeminder make him well-placed to talk about getting things done.
You make claims that your movement is growing fast and that many people are already involved. These claims would be more credible if you presented more evidence of how committed these people are. Joining a Facebook group requires minimal commitment. It's even less impressive if THINK was free-riding on existing rational altruism groups.
When I look at the website, I don't see much evidence of 20 serious, well-organized groups being ready to roll out three weeks from now.
Unrelated point: colleges have complicated restrictions on use of their logo. I'm not sure if your use is a problem, but you might want to check. See, e.g. http://www.clubsandsigs.harvard.edu/article.html?aid=106.
Great post. Arnold Kling has a good discussion of Kurzweil's predictions somewhere, but I haven't been able to find it by Googling.
I agree that Kurzweil did well, making a significant number of specific, non-obvious correct predictions. But how well does Kurzweil's ability here generalize to other predictions? Kurzweil was predicting developments in his own field 10 years into the future. He has the advantage that products often take >4 years to develop, and he has insider knowledge of what kinds of products the big tech companies are talking about in-house. (So we could compare him to internal discussions of possible products at Microsoft or Apple, etc.)
I am interested to hear how this is turning out, so further updates would be welcome. It seems you might also get some support from LW people if things aren't going well.
In some sports, applied science seems important to improving expert performance. The PhD's knowledge is used to guide the sportsperson (who has exceptional physical abilities). Likewise, our skill at making reliably sturdy buildings has dramatically improved due to knowledge of physics and materials science. But the PhDs don't actually put the buildings up; they just tell the builders what to do.
An additional point (discussed in intelligence.org/files/TDT.pdf) is that CDT seems to recommend modifying oneself into a non-CDT decision theory. (For instance, imagine that the CDTer contemplates for a moment the mere possibility of encountering NPs and can cheaply self-modify.) After modification, the interest in whether decisions are causally responsible for utility will have been eliminated. So this interest seems extremely brittle. Agents able to modify themselves and informed of the NP scenario will immediately lose the interest. (If the NP seems implausible, consider the ubiquity of some kind of logical correlation between agents in almost any multi-agent decision problem like the PD or Stag Hunt.)
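In numbers (standard Newcomb payoffs; the sketch assumes the self-modification happens before the predictor forms its prediction, so the modification is itself a cause of the prediction):

```python
# Even a pure CDTer, choosing *before* the prediction is made, computes
# that installing a one-boxing disposition causally yields more money.
utility_if_modified_to_one_boxer = 1_000_000  # predictor fills the box
utility_if_left_as_two_boxer = 1_000          # predictor leaves it empty
print(utility_if_modified_to_one_boxer > utility_if_left_as_two_boxer)  # True
```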
Now, you may have in mind a two-boxer notion distinct from that of a CDTer. It might be fundamental to this agent not to forgo local causal gains. Thus a proposed self-modification that would preclude acting for local causal gains would always be rejected. This seems like a shift out of decision theory into value theory. (I think it's very plausible that, absent the typical mechanisms for maintaining commitments, many humans would find it extremely hard to resist taking a large 'free' cash prize from the transparent box. Even prior schooling in one-boxing philosophy might be hard to stick to when face to face with the prize. Another factor that clashes with human intuitions is the predictor's infallibility. Generally, I think grasping verbal arguments doesn't "modify" humans in the relevant sense, and that we have strong intuitions that may (at least in the right presentation of the NP) push us in the direction of local causal efficacy.)
EDIT: fixed some typos.