20 November 2007 07:58AM


Comment author: 21 November 2007 08:26:35PM 1 point

With the graphical-network insight in hand, you can give a mathematical explanation of exactly why first-order logic has the wrong properties for the job, and express the correct solution in a compact way that captures all the common-sense details in one elegant swoop.

Consider the following example, from Menzies's "Causal Models, Token Causation, and Processes"[*]:

An assassin puts poison in the king's coffee. The bodyguard responds by pouring an antidote in the king's coffee. If the bodyguard had not put the antidote in the coffee, the king would have died. On the other hand, the antidote is fatal when taken by itself and if the poison had not been poured in first, it would have killed the king. The poison and the antidote are both lethal when taken singly but neutralize each other when taken together. In fact, the king drinks the coffee and survives.

We can model this situation with the following structural equation system:

A = true
G = A
S = (A and G) or (not-A and not-G)

where A is a boolean variable denoting whether the Assassin put poison in the coffee or not, G is a boolean variable denoting whether the Guard put the antidote in the coffee or not, and S is a boolean variable denoting whether the king Survives or not.

According to Pearl and Halpern's definition of actual causation, the assassin putting poison in the coffee causes the king to survive, since changing the assassin's action changes the king's survival when we hold the guard's action fixed. This is clearly an incorrect account of causation.
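The counterfactual test being described can be sketched in a few lines of Python (the function and variable names here are ad hoc, not from Pearl or Halpern). Holding G fixed at its actual value while flipping A flips S, which is why the definition counts the assassin as an actual cause of survival:

```python
def survives(a, g):
    # S = (A and G) or (not-A and not-G): the poison and the antidote
    # neutralize each other; either one taken alone is lethal.
    return (a and g) or (not a and not g)

# Actual world: the assassin poisons the coffee (A = true),
# and the guard responds with the antidote (G = A = true).
a_actual = True
g_actual = a_actual
print(survives(a_actual, g_actual))  # True: the king survives

# Counterfactual with G *held fixed* at its actual value:
# flipping A alone flips S, so by this test A is an actual
# cause of the king's survival.
print(survives(False, g_actual))     # False: the king dies
```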

IMO, graphical models and related techniques represent the biggest advance in thinking about causality since Lewis's work on counterfactuals (though James Heckman disagrees, which should make us a bit more circumspect). But they aren't the end of the line, even if we restrict our attention to manipulationist accounts of causality.

[*] The paper is found here. As an aside, I do not agree with Menzies's proposed resolution.

Comment author: 29 October 2009 03:20:32AM 6 points

Um, this doesn't sound correct. The assassin causes the bodyguard to add the antidote; if the bodyguard hadn't seen the assassin do it, he wouldn't have added it. So if you compute the counterfactual the Pearlian way, manipulating the assassin changes the bodyguard's action as well, since the bodyguard causally descends from the assassin.
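A minimal sketch of this point, using the same structural equations as above (with G = A, since the guard responds to the assassin): under a Pearlian intervention on A, the change propagates through G, and the king survives either way, so flipping A alone never changes S.

```python
def model(a):
    # G structurally depends on A: the guard adds the antidote
    # only if he sees the assassin add the poison.
    g = a
    # Poison and antidote neutralize each other; either alone kills.
    s = (a and g) or (not a and not g)
    return s

# do(A = true): poison in, antidote in, the king survives.
print(model(True))   # True
# do(A = false): no poison, no antidote, the king survives.
print(model(False))  # True
```

Since S comes out true under both interventions, the simple interventionist counterfactual does not count the assassin as a cause of the king's survival; the problematic verdict only arises when G is held fixed off its structural equation.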

Comment author: 29 October 2009 03:46:31AM 3 points

Right -- and according to Pearl's causal beam method, you would first note that the guard sustains the coffee's (non)deadliness-state against the assassin's action, which ultimately makes you deem the guard the cause of the king's survival.

Comment author: 10 June 2013 08:09:38PM 2 points

Furthermore, if you draw the graph the way Neel seems to suggest, then the bodyguard is adding the antidote without dependence on the actions of the assassin, and so there is no longer any reason to call one "assassin" and the other "bodyguard", or one "poison" and the other "antidote". The bodyguard in that model is trying to kill the king as much as the assassin is, and the assassin's timely intervention saved the king as much as the bodyguard's.