SilasBarta comments on Causal Diagrams and Causal Models - Less Wrong

Post author: Eliezer_Yudkowsky 12 October 2012 09:49PM

Comment author: SilasBarta 05 November 2012 03:51:31AM 0 points

I was saying that not every independence property is representable as a Bayesian network.

You mean when all variables are independent, or some other class of cases?

No! Once you have learned a distribution using Bayesian network-based methods, the minimal representation of it is the set of factors. You don't need the direction of the arrows any more.

Read the rest: you need the arrows if you want to efficiently look up the conditional (in)dependencies.

Comment author: Pfft 05 November 2012 04:20:56AM 0 points

You mean when all variables are independent, or some other class of cases?

Well, there are doubly-exponentially many possibilities...

The usual example for Markov networks is four variables connected in a square. The corresponding independence assumption is that any two opposite corners are independent given the other two corners. There is no Bayesian network encoding exactly that.
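To make this concrete, here is a minimal numerical sketch (Python/numpy, with made-up potentials and variable names of my own choosing, nothing from the post) of the square network and a brute-force check of its independencies:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Pairwise potentials on the edges of the square A - B - C - D - A.
phi_AB, phi_BC, phi_CD, phi_DA = (rng.random((2, 2)) + 0.1 for _ in range(4))

# Joint distribution of the Markov network, indexed (a, b, c, d).
joint = np.zeros((2, 2, 2, 2))
for a, b, c, d in itertools.product(range(2), repeat=4):
    joint[a, b, c, d] = phi_AB[a, b] * phi_BC[b, c] * phi_CD[c, d] * phi_DA[d, a]
joint /= joint.sum()

def cond_indep(joint, i, j, cond):
    """Numerically check X_i independent of X_j given the variables in cond."""
    keep = [i, j] + list(cond)
    drop = tuple(k for k in range(joint.ndim) if k not in keep)
    marg = joint.sum(axis=drop) if drop else joint
    # Remaining axes come out in sorted order; reorder them to (i, j, *cond).
    remaining = sorted(keep)
    marg = np.transpose(marg, [remaining.index(k) for k in keep])
    for vals in itertools.product(range(2), repeat=len(cond)):
        sub = marg[(slice(None), slice(None)) + vals]
        sub = sub / sub.sum()                     # P(X_i, X_j | cond = vals)
        p_i = sub.sum(axis=1, keepdims=True)
        p_j = sub.sum(axis=0, keepdims=True)
        if not np.allclose(sub, p_i * p_j):
            return False
    return True

# Variables indexed A=0, B=1, C=2, D=3.
print(cond_indep(joint, 0, 2, (1, 3)))   # A indep C | B,D -> True
print(cond_indep(joint, 1, 3, (0, 2)))   # B indep D | A,C -> True
print(cond_indep(joint, 0, 2, (1,)))     # A indep C | B   -> False in general
```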

you need the arrows if you want to efficiently look up the conditional (in)dependencies.

But again, why would you want that? As I said in the grand^(n)parent, you don't need to do that when doing inference.

Comment author: SilasBarta 05 November 2012 07:35:49PM 0 points

The usual example for Markov networks is four variables connected in a square. The corresponding independence assumption is that any two opposite corners are independent given the other two corners. There is no Bayesian network encoding exactly that.

Okay, I'm recalling the "troublesome" cases that Pearl brings up, which gives me a better idea of what you mean. But this is not a counterexample; it just means that you can't do it on a Bayes net with binary nodes. You can still represent that situation by merging (either pair of) the screening nodes into one node whose values cover all combinations of values of the original two.
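Roughly what I mean, as a sketch (again with made-up potentials, nothing from Pearl): collapse B and D into one four-valued node M = (B, D); the Bayes net M -> A, M -> C then represents the same distribution exactly.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
phi_AB, phi_BC, phi_CD, phi_DA = (rng.random((2, 2)) + 0.1 for _ in range(4))

# The square Markov network's joint, indexed (a, b, c, d).
joint = np.zeros((2, 2, 2, 2))
for a, b, c, d in itertools.product(range(2), repeat=4):
    joint[a, b, c, d] = phi_AB[a, b] * phi_BC[b, c] * phi_CD[c, d] * phi_DA[d, a]
joint /= joint.sum()

# CPDs of the merged Bayes net M -> A, M -> C, where M = (B, D) is one 4-state node.
p_M = joint.sum(axis=(0, 2))                       # P(B, D), shape (b, d)
p_A_given_M = joint.sum(axis=2) / p_M              # P(A | B, D), shape (a, b, d)
p_C_given_M = joint.sum(axis=0) / p_M[:, None, :]  # P(C | B, D), shape (b, c, d)

# The product of the three CPDs reconstructs the joint exactly,
# because A and C really are independent given (B, D).
recon = np.zeros_like(joint)
for a, b, c, d in itertools.product(range(2), repeat=4):
    recon[a, b, c, d] = p_M[b, d] * p_A_given_M[a, b, d] * p_C_given_M[b, c, d]
print(np.allclose(recon, joint))   # True
```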

Do you have another example?

But again, why would you want that? As I said in the grand^(n)parent, you don't need to when doing inference.

Sure you do: you want to know which and how many variables you have to look up to make your prediction.
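For example (a toy helper of my own, not from anything above): in a Bayes net, the variables you have to look at to predict a node are its Markov blanket -- its parents, its children, and its children's other parents -- and you read that straight off the arrows.

```python
def markov_blanket(node, parents):
    """parents: dict mapping each node to the list of its parents in the DAG."""
    children = [v for v, ps in parents.items() if node in ps]
    blanket = set(parents[node]) | set(children)
    for child in children:
        blanket |= set(parents[child])   # the children's other parents
    blanket.discard(node)
    return blanket

# Illustrative DAG: the classic sprinkler network.
dag = {"Cloudy": [], "Sprinkler": ["Cloudy"], "Rain": ["Cloudy"],
       "WetGrass": ["Sprinkler", "Rain"]}
print(markov_blanket("Rain", dag))   # {'Cloudy', 'Sprinkler', 'WetGrass'} (in some order)
```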

Comment author: Pfft 05 November 2012 08:53:49PM *  0 points

merging (either pair of) the screening nodes into one node

Then the network does not encode the conditional independence between the two variables that you merged.

The task you have to do when making predictions is marginalization: in order to compute P(Rain|WetGrass), you need to sum P(Rain, X, Y, Z | WetGrass) over all possible values of the variables X, Y, Z that you didn't observe. Here it is very helpful to have the distribution factored into a tree, since that can make it feasible to do variable elimination (or related algorithms like belief propagation). But the directions on the edges in the tree don't matter; you can start at any leaf node and work across.
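Concretely, a brute-force sketch of that marginalization (the network and CPT numbers here are just illustrative, not anything from the post):

```python
import itertools
import numpy as np

# Variables: Cloudy (C), Sprinkler (S), Rain (R), WetGrass (W), all binary.
p_C = np.array([0.5, 0.5])                          # P(C)
p_S_given_C = np.array([[0.5, 0.5], [0.9, 0.1]])    # P(S | C), rows indexed by c
p_R_given_C = np.array([[0.8, 0.2], [0.2, 0.8]])    # P(R | C), rows indexed by c
p_W_given_SR = np.zeros((2, 2, 2))                  # P(W | S, R), indexed [s, r, w]
p_W_given_SR[0, 0] = [1.0, 0.0]
p_W_given_SR[0, 1] = [0.1, 0.9]
p_W_given_SR[1, 0] = [0.1, 0.9]
p_W_given_SR[1, 1] = [0.01, 0.99]

# Joint P(C, S, R, W) from the network's factorization.
joint = np.zeros((2, 2, 2, 2))
for c, s, r, w in itertools.product(range(2), repeat=4):
    joint[c, s, r, w] = p_C[c] * p_S_given_C[c, s] * p_R_given_C[c, r] * p_W_given_SR[s, r, w]

# P(Rain | WetGrass=1): fix W=1, sum out the unobserved C and S, then normalize.
# (Variable elimination would push these sums inside the product instead of
# building the full joint; the edge directions never enter the elimination itself.)
unnormalized = joint[:, :, :, 1].sum(axis=(0, 1))   # shape (2,), indexed by r
print("P(Rain | WetGrass=1) =", unnormalized / unnormalized.sum())
```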