Seeing as there is little secondary literature for the Finite Factored Set formalism, I thought I’d write up my experience of exploring it through some toy examples that are classic examples in the Pearlian paradigm. My goal was to see how these models that I understood very well in the Pearlian paradigm would work in Finite Factored Sets.
As a warning, this doesn’t make any use of the more exciting properties of Finite Factored Sets. It’s just an exploration of how this formalism handles the mundane stuff. This also means that I’m using the factored set directly, without the abstractions of the orthogonality database. Which I think is fine here, because these are tiny toy examples whose structure is fully known. (However, it’s possible that I’ve missed the entire point of Finite Factored Sets.)
The first example is the 3-variable collider that is very central to Pearl’s formalism. It is given by the following Causal Diagram:
A, B, and C are all binary variables (0=false, 1=true).
The intended meaning of a Causal Diagram (or rather, functional causal model) is that the value of a node xi is given by a deterministic function that takes as input the node’s parents, pa(xi) (indicated by the arrows), and an “error”, or “noise”, variable ui that is governed by a probability distribution independent of the other error/noise variables: xi=fi(pa(xi),ui). Thus, the value of C is given by c=fC(a,b,uC), where uC is noise or uncertainty that is not explicitly modeled, which we can visualize like this:
We could also split up A and B into a deterministic and a random part, but as they are root nodes, there is little point. It would just be a=fa(uA).
The Pearlian formalism runs on graphs, but Finite Factored Sets run on the set S of all possible outcomes – the sample space. So, the goal is now to construct a sample space that is consistent with the above graph. After that, we’ll find a factorization of that sample space.
I think it should be clear that to cover the whole sample space S, it is sufficient to consider all possible combinations of the outcomes of A, B, and UC (but not C), because if we know the value of these three, then we also know the value of C, via fC.
So, we simply define S as the Cartesian product of the sets of possible values of A and B: A={0,1}, B={0,1}, and the possible values of UC: UC, which I’ll leave undefined for the moment (except to note that it must be a finite set):
S=A×B×UC,(a,b,uC)∈S
(We use the lowercase letters to represent elements of sets that make up the Cartesian product: a∈A, b∈B, and uC∈UC.)
Then – as is custom in the formalism of Finite Factored Sets – the variables A, B, UC are defined as partitions on S. For example, A is a partition consisting of two parts: 1) the set of elements of S where a=0 and 2) those where a=1:
A={{(a,b,uC)∈S|a=0},{(a,b,uC)∈S|a=1}}
This captures exactly what the variable A is supposed to represent: the question of whether the first element of the Cartesian product is 0 or 1. B is defined analogously:
B={{(a,b,uC)∈S|b=0},{(a,b,uC)∈S|b=1}}
And UC as well:
UC={{(a,b,uC)∈S|uC=0},…}
with an as of yet undefined number of parts in the partition.
Now, given the way we constructed S, the set of partitions G={A,B,UC} is a factorization of S, giving us the finite factored set F=(S,G): any element of S can be uniquely identified by knowing which part of A, B, and UC it lies in, since S is just the Cartesian product of A, B, and UC.
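To make this concrete, here is a small Python sketch (my own addition; the six-sided die is just a stand-in for UC, anticipating the example further down) that builds S and checks the defining property of a factorization: choosing one part from each factor pins down exactly one element of S.

```python
from itertools import product

U_C_VALUES = {1, 2, 3, 4, 5, 6}                 # placeholder: any finite set works here
S = set(product({0, 1}, {0, 1}, U_C_VALUES))    # outcomes (a, b, uC)

# Each variable is a partition of S: a dict mapping a value to its part (a set of outcomes).
A  = {v: frozenset(s for s in S if s[0] == v) for v in {0, 1}}
B  = {v: frozenset(s for s in S if s[1] == v) for v in {0, 1}}
UC = {v: frozenset(s for s in S if s[2] == v) for v in U_C_VALUES}

# Factorization check: choosing one part from each factor pins down exactly one outcome.
for pa, pb, pu in product(A.values(), B.values(), UC.values()):
    assert len(pa & pb & pu) == 1
```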
Now that we have the set, let’s look at the probability distribution over S. Let P(A=a′) be shorthand for P({(a,b,uC)∈S|a=a′}), i.e., the probability that the outcome lands in the subset {(a,b,uC)∈S|a=a′} of the sample space S, where the first element of the Cartesian product is equal to a′. We define P(B=b) and P(UC=uC) analogously. Finally, let P(a,b,uC) refer to the probability of the individual outcome (a,b,uC)∈S.
The fact that we want our finite factored set model to be consistent with the Causal Diagram above implies some things about P. In particular, the diagram implies that A, B, and UC should be independent, which means that the joint probability should factorize like this:
P(a,b,uC)=P(A=a)P(B=b)P(UC=uC)
But this is exactly how our (finite) set S factorizes! Thus, P factorizes the same way that S does.
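As a quick numeric sanity check (with made-up marginals, so purely illustrative): defining P on each outcome as the product of the marginals reproduces those marginals, exactly as independence of A, B, and UC requires.

```python
from itertools import product
from fractions import Fraction

p_A  = {0: Fraction(1, 2), 1: Fraction(1, 2)}      # made-up marginal for A
p_B  = {0: Fraction(1, 3), 1: Fraction(2, 3)}      # made-up marginal for B
p_UC = {u: Fraction(1, 6) for u in range(1, 7)}    # fair six-sided die for UC

# Joint probability of each outcome (a, b, uC) as the product of the marginals.
P = {(a, b, u): p_A[a] * p_B[b] * p_UC[u] for a, b, u in product(p_A, p_B, p_UC)}

assert sum(P.values()) == 1
# Marginalizing the joint recovers the marginal of A, as independence requires.
assert sum(p for (a, _, _), p in P.items() if a == 1) == p_A[1]
```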
This concludes the translation of the graphical model into a semantically equivalent finite factored set. To recap what we have accomplished: we turned the original model with the variables A, B, and C into one with three independent variables: A, B, and UC, and defined C as a deterministic function of these. We then constructed a finite factored set with the factors A, B, and UC.
Now, let’s define C on S as well. We again define it as a partition of S:
C={{(a,b,uC)∈S|fC(a,b,uC)=0},{(a,b,uC)∈S|fC(a,b,uC)=1}}
We simply partition S depending on the output of the function fC. This also allows us to define P(C=c) as P({(a,b,uC)∈S|fC(a,b,uC)=c}).
In the Pearlian formalism, we can read off the fact that A and B are independent from the graph structure. With Finite Factored Sets, we have to look at histories. The history of a partition (aka a variable) is, roughly speaking, the minimal set of factors in the factorization G that is sufficient to fully specify the partition (aka variable). A and B are factors in their own right, so their history consists just of themselves: h(A)={A} and h(B)={B}, because surely knowing A is enough to know A. As h(A)∩h(B)=∅, we can conclude that A and B are orthogonal, but this is just because we defined the factorization that way, so this is no new information. Still, it’s a good consistency check.
The history of C is more interesting, but still pretty trivial to determine. As long as fC makes use of all its arguments and is generally non-pathological, all the factors are needed to determine C. So: h(C)={A,B,UC}. This implies h(A)⊂h(C) and h(B)⊂h(C), which implies that A and B are both strictly before C in the time defined by our factorization. This is a non-trivial result which matches what we would have expected from the graphical model that I drew in the beginning. So, that’s good.
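Histories can also be found by brute force: search for the smallest set of factors that determines the partition in question. Here is a sketch of that (my own code; for concreteness it already uses the specific fC, XOR of A and B unless the die shows a 6, that I’ll introduce below).

```python
from itertools import product, combinations

U_C_VALUES = {1, 2, 3, 4, 5, 6}
S = set(product({0, 1}, {0, 1}, U_C_VALUES))    # outcomes (a, b, uC)

def f_C(a, b, u):                               # XOR of a and b, except a rolled 6 forces 0
    return (a ^ b) if u != 6 else 0

# Represent each partition by the function "which part does the outcome s fall into?".
factors = {"A": lambda s: s[0], "B": lambda s: s[1], "UC": lambda s: s[2]}
variables = {**factors, "C": lambda s: f_C(*s)}

def determines(H, X):
    """True iff agreeing on every factor in H forces agreement on X."""
    return all(variables[X](s) == variables[X](t)
               for s, t in product(S, S)
               if all(factors[g](s) == factors[g](t) for g in H))

def history(X):
    """Smallest set of factors whose parts pin down the part of X."""
    for k in range(len(factors) + 1):
        for H in combinations(sorted(factors), k):
            if determines(H, X):
                return set(H)

print(history("A"), history("B"), history("C"))   # {'A'} {'B'} {'A', 'B', 'UC'}
```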
But what if we condition on C? Colliders are special because they have A⊥B but A⊥/B|C. Can we recover these results from our finite factored set model?
To compute conditional orthogonality, we have to compute conditional histories. In order to condition on C, we need the histories conditioned on the subsets of S where C=0 and where C=1. We’ll call these subsets C0 and C1:
C0:={(a,b,uC)∈S|fC(a,b,uC)=0}
C1:={(a,b,uC)∈S|fC(a,b,uC)=1}
Let’s start with the latter: let’s determine h(A|C1) – the conditional history of A given C1. However, the definition of conditional history is not as straightforward as the definition of history. I’m reproducing it here:
Let F=(S,G) be a finite factored set, let X, Y, Z∈Part(S), and let E⊆S. We use s ∼_X t to say that two elements s and t are in the same part in X. The conditional history of X given E, written h_F(X|E), is the smallest set of factors H⊆G satisfying the following two conditions:
1. For all s,t∈E, if s ∼_b t for all b∈H, then s ∼_X t.
2. For all s,t∈E and r∈S, if r ∼_b s for all b∈H and r ∼_b t for all b∈G∖H, then r∈E.
The first condition is easy. It just says that the factors in h(A|C1) should be sufficient to pin down A in C1. We can safely assume that A itself will be enough. So, is H:=h(A|C1) just {A} again? Well, there is that second condition. To show its effect, let’s assume that indeed H={A}. The condition then specifies something that has to be true for all s,t∈C1 and r∈S. However, given the assumption we just made, I can construct a counterexample to the stated condition. (Thus showing by contradiction that h(A|C1) is not just {A}.)
To make things concrete, I’ll give an explicit definition of how C is computed. First, let’s say that UC is the result of a 6-sided die, so UC={1,2,3,4,5,6}. We’ll then say that C is the XOR of A and B, except when UC rolls a 6, in which case C is just 0:
fC(a,b,uC) = a⊕b if uC≠6, and 0 otherwise.
The counterexample is then:
s = (a_s, b_s, uC_s) = (1,0,1)
t = (a_t, b_t, uC_t) = (0,1,2)
r = (a_r, b_r, uC_r) = (1,1,2)
Let’s first confirm that s and t are indeed in C1. To be in C1, we need fC(a,b,uC)=1. This is true for both, because 1⊕0=1 and 0⊕1=1 and in neither case is uC=6.
Now, in the if-statement in the second condition, we first ask whether r and s can be distinguished by the factors in H. We’re assuming for now that H consists only of A, so the question becomes whether r and s can be distinguished by looking at A. The answer is no, because in both cases, the first entry is 1 (so, a=1). Thus, as far as the factors in H are concerned, r and s are the same.
Next we must investigate r and t in light of the factors that are not in H. In our case, that’s B and UC. As we can see, r and t are indeed indistinguishable under B and UC because r and t only differ in the first entry of the tuple (which corresponds to A). The if-statement is thus true, and so according to the condition, we should have r∈C1. However, this is not the case, because 1⊕1=0, and so r is in C0. In other words, we have a contradiction, and h(A|C1) can’t consist of only A.
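For anyone who wants to see condition 2 fail mechanically, here is the counterexample as a few lines of code (my own sketch, using the fC defined above).

```python
def f_C(a, b, u):                      # XOR of a and b, except a rolled 6 forces 0
    return (a ^ b) if u != 6 else 0

s, t, r = (1, 0, 1), (0, 1, 2), (1, 1, 2)
in_C1 = lambda x: f_C(*x) == 1

assert in_C1(s) and in_C1(t)           # s and t are indeed in C1
assert r[0] == s[0]                    # r agrees with s on the factor in H = {A}
assert r[1:] == t[1:]                  # r agrees with t on the remaining factors B and UC
assert not in_C1(r)                    # ...and yet r is not in C1, so condition 2 fails
```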
So, what is the second condition in the definition of conditional history looking out for? It prevents H and its complement (G∖H, where G is the set of all factors) from splitting the conditioning set awkwardly: if an element r looks like it belongs to C1 according to the factors in H (it matches some element of C1 on those factors) and also looks like it belongs to C1 according to the factors outside H (it matches some, possibly different, element of C1 on those), then it must actually be in C1. Equivalently, membership in C1 has to decompose into a condition checkable from H alone and a condition checkable from G∖H alone; otherwise, the border of the conditioning set itself leaks information between the two sides.
The problem can be fixed by enlarging H so that the border of C1 no longer straddles the split. For this particular fC, adding B already does the job: being in C1 means “a⊕b=1 and uC≠6”, and the first part is checkable from {A,B} while the second is checkable from the complement {UC}. So H={A,B} satisfies both conditions, and since no single factor works ({A} fails condition 2 as we just saw, {B} fails it for the mirror-image reason, and {UC} can’t even pin down A), we get h(A|C1)={A,B}. For C0, where the die roll can override the XOR, no such split exists, and h(A|C0)={A,B,UC}. The same reasoning gives h(B|C1)={A,B} and h(B|C0)={A,B,UC}. Either way, we definitely don’t have h(A|C=c)∩h(B|C=c)=∅ anymore. And so, A and B are not orthogonal given C.
This means we have recovered the two most important independence statements about the collider: A⊥B and A⊥/B|C, as well as C⊥/A and C⊥/B (which follow just from the fact that A and B are strictly before C in time). It remains to confirm C⊥/A|B and C⊥/B|A; I’ll leave that as an exercise for the reader. (The sketch below can be used to check it.)
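For readers who want to check the exercise (or the conditional histories claimed above) numerically, here is a brute-force sketch of conditional history and conditional orthogonality for this example. It is my own code, nothing official: it searches for the smallest H satisfying the two conditions of the definition and then tests whether the conditional histories are disjoint on every part of the conditioning variable.

```python
from itertools import product, combinations

U_C_VALUES = {1, 2, 3, 4, 5, 6}
S = set(product({0, 1}, {0, 1}, U_C_VALUES))    # outcomes (a, b, uC)

def f_C(a, b, u):                               # XOR of a and b, except a rolled 6 forces 0
    return (a ^ b) if u != 6 else 0

factors = {"A": lambda s: s[0], "B": lambda s: s[1], "UC": lambda s: s[2]}
variables = {**factors, "C": lambda s: f_C(*s)}

def cond1(H, X, E):
    # Within E, agreeing on every factor in H forces agreement on X.
    return all(variables[X](s) == variables[X](t)
               for s, t in product(E, E)
               if all(factors[g](s) == factors[g](t) for g in H))

def cond2(H, E):
    # If r matches some element of E on H and some element of E on the rest, r is in E.
    rest = [g for g in factors if g not in H]
    return all(r in E
               for s, t, r in product(E, E, S)
               if all(factors[g](r) == factors[g](s) for g in H)
               and all(factors[g](r) == factors[g](t) for g in rest))

def cond_history(X, E):
    """Smallest set of factors satisfying both conditions of the definition."""
    for k in range(len(factors) + 1):
        for H in combinations(sorted(factors), k):
            if cond1(H, X, E) and cond2(H, E):
                return set(H)

def parts_of(Z):
    values = sorted({variables[Z](s) for s in S})
    return [frozenset(s for s in S if variables[Z](s) == v) for v in values]

def orthogonal_given(X, Y, Z):
    # X and Y are orthogonal given Z iff their conditional histories are disjoint
    # on every part of Z.
    return all(not (cond_history(X, E) & cond_history(Y, E)) for E in parts_of(Z))

C0, C1 = parts_of("C")
print(cond_history("A", C1), cond_history("A", C0))  # {'A', 'B'} and {'A', 'B', 'UC'}
print(orthogonal_given("A", "B", "C"))               # False: not orthogonal given C
print(orthogonal_given("C", "A", "B"))               # False (first half of the exercise)
print(orthogonal_given("C", "B", "A"))               # False (second half of the exercise)
```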
As our next example, we’ll have a look at the 3-variable chain:
Separating out the noise/uncertainty:
I won’t repeat all the steps and will skip straight to constructing the sample space S as the Cartesian product of the possible values of A, UB, and UC. The elements of S are then tuples like this one:
(a,uB,uC)∈S
We define the partitions A, UB, and UC on this in the obvious way, and so they are our factorization of S. The variables B and C are defined as partitions of S according to some deterministic function of (a,uB) and (a,uB,uC) respectively. For the histories, it follows then that h(A)={A}, h(B)={A,UB}, and h(C)={A,UB,UC}, which implies A<B<C, as one would expect.
(It might seem remarkable here that we can reconstruct the exact order of A, B, and C, but that is only because we defined S that way. Nothing interesting has happened yet. This is just a self-consistency check.)
I tried visualizing these histories but had limited success:
The interesting independence statement in this model is A⊥C|B. So, to investigate this, let’s look at h(C|B1) – the conditional history of C on the subset of S where B=1. The first step is to clarify that C is computed via B:
fC(a,uB,uC)=gC(fB(a,uB),uC)
That is, a and uB are only used via fB. But if we already know the output of fB (because we’re in the subset B1⊂S where fB=1), then we only need uC to compute the value of C. Thus, it would be consistent with condition 1 of the conditional histories definition if we had h(C|B1)={UC}. But is it also consistent with condition 2?
In condition 2, we have first this part: “if r ∼_UC s ...”. However, UC has no bearing on whether r is in B1 or not, because fB doesn’t take UC as an input. So, we can safely drop that part: if the condition holds without the extra requirement relating r and s, it certainly holds with it. We are then left with: if r ∼_X t for all X∈{A,UB}, then r∈B1. Is this true for all t∈B1 and r∈S? Yes, because what it is saying is: if r seems to be in B1 according to A and UB, then it really is in B1. And that is true, because A and UB already fully determine B, so if they say you are in B1, then you are.
So, we avoided the trap we had above, where neither H nor its complement could reconstruct the boundary of the set we were conditioning on. Here, A and UB (which are both not in H) are able to reconstruct the boundary of B1 perfectly, so condition 2 is fulfilled. Thus, h(C|B1)={UC} (and analogously h(C|B0)={UC}), which implies that h(A|B0/1)∩h(C|B0/1)=∅. (I didn’t check that h(A|B0/1) doesn’t contain UC, but a quick argument shows that it won’t: UC is neither needed to pin down A nor to pin down the border of B0/1.) So, A⊥C|B as expected.
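To check this mechanically, here is a small sketch with an arbitrary concrete choice of the functions (UB and UC are fair coins, B=A⊕UB, and C=B⊕UC); it verifies that H={UC} satisfies both conditions of the conditional-history definition on B1.

```python
from itertools import product

S = set(product({0, 1}, {0, 1}, {0, 1}))          # outcomes (a, uB, uC)
f_B = lambda a, uB: a ^ uB                        # arbitrary concrete choice of f_B
f_C = lambda a, uB, uC: f_B(a, uB) ^ uC           # C is computed via B, as in the text

B1 = frozenset(s for s in S if f_B(s[0], s[1]) == 1)
idx = {"A": 0, "UB": 1, "UC": 2}
H = {"UC"}                                        # candidate conditional history of C given B1
rest = [g for g in idx if g not in H]             # the factors outside H: A and UB

# Condition 1: within B1, agreeing on the factors in H forces agreement on C.
assert all(f_C(*s) == f_C(*t)
           for s, t in product(B1, B1)
           if all(s[idx[g]] == t[idx[g]] for g in H))

# Condition 2: if r matches some element of B1 on H and some element of B1 on the rest,
# then r really is in B1.
assert all(r in B1
           for s, t, r in product(B1, B1, S)
           if all(r[idx[g]] == s[idx[g]] for g in H)
           and all(r[idx[g]] == t[idx[g]] for g in rest))

print("H = {UC} satisfies both conditions for h(C|B1)")
```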
There is one interesting 3-variable model left – the common cause:
The reader may want to try this one out for themselves before reading on.
Splitting off the randomness, we get:
As before, we can construct a sample space that is factorized by G={UB,A,UC}, giving us the finite factored set F=(S,G). The histories for A, B, and C are then again obvious: h(A)={A}, h(B)={UB,A}, and h(C)={A,UC}. We can see that B and C are not independent, because their histories both include A. But they also don’t have a definite temporal order; we can neither say B <_F C nor C <_F B.
From Pearl’s theory, we expect that B and C become independent when conditioned on A. So let’s look at those conditional histories. As we know all the tricks by now, it will be a very high-level analysis.
We recall the first rule of conditional histories: the history should contain those factors that are needed to pin down the variable, given that we already know the value of the conditioned variable. If we know the value of A, then it suffices to know UB in order to know B. So, the conditional history H of B, given that we know the value of A, contains at least UB.
The second rule of conditional histories is satisfied whenever H or its complement, G∖H (or both), can on its own determine the value of the conditioned variable (A in our case). Assuming H={UB}, the complement of H, {A,UC}, contains A itself, and so is certainly sufficient to determine A. Thus, when conditioning on values of A, H={UB} is permitted by the second rule.
By symmetry, we get {UC} for the conditional history for C. This all then implies B⊥C|A as expected.
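And a final brute-force check (again with arbitrary concrete choices, B=A⊕UB and C=A⊕UC) that the conditional histories of B and C given either value of A come out as {UB} and {UC}, which are disjoint.

```python
from itertools import product, combinations

S = set(product({0, 1}, {0, 1}, {0, 1}))          # outcomes (a, uB, uC)
f_B = lambda a, uB: a ^ uB                        # arbitrary concrete choices
f_C = lambda a, uC: a ^ uC

factors = {"A": lambda s: s[0], "UB": lambda s: s[1], "UC": lambda s: s[2]}
variables = {**factors,
             "B": lambda s: f_B(s[0], s[1]),
             "C": lambda s: f_C(s[0], s[2])}

def cond1(H, X, E):
    return all(variables[X](s) == variables[X](t)
               for s, t in product(E, E)
               if all(factors[g](s) == factors[g](t) for g in H))

def cond2(H, E):
    rest = [g for g in factors if g not in H]
    return all(r in E
               for s, t, r in product(E, E, S)
               if all(factors[g](r) == factors[g](s) for g in H)
               and all(factors[g](r) == factors[g](t) for g in rest))

def cond_history(X, E):
    for k in range(len(factors) + 1):
        for H in combinations(sorted(factors), k):
            if cond1(H, X, E) and cond2(H, E):
                return set(H)

for a in (0, 1):
    E = frozenset(s for s in S if s[0] == a)
    hB, hC = cond_history("B", E), cond_history("C", E)
    print(a, hB, hC, hB & hC)    # {'UB'} and {'UC'} with empty intersection: B ⊥ C given A
```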
Conclusions
I hope this was useful to someone. And I hope I didn’t completely mess up the intended use of this formalism.
One thing I appreciate about this formalism is that I find it easy to drop to the base level (the sample space with the factorization) to explicitly check my higher-level thoughts when I get confused. It’s nice to have that solid ground level available whenever I need it.
The rule about conditional histories is not exactly easy, but it feels closer to a fundamental law than the d-separation rules of the Pearlian paradigm, which always felt a bit arbitrary.
Finally, I still kind of think that a DAG is a really nice way of visualizing dependencies and independencies of random variables. I wonder if there is a visualization that feels more native to Finite Factored Sets while still looking nice.