When constructing a high-level abstract causal DAG from a low-level DAG, one operation which comes up quite often is throwing away information from a node. This post is about how to do that.

First, how do we throw away information from random variables in general? Sparknotes:

  • Given a random variable $X$, we can throw away information by replacing $X$ with $f(X)$ for some function $f$.
  • Given some other random variable $Y$, “$f(X)$ contains all information in $X$ relevant to $Y$” if and only if $P[Y \mid f(X)] = P[Y \mid X]$. In particular, the full distribution function ($x \mapsto P[Y \mid X{=}x]$) is a minimal representation of the information about $Y$ contained in $X$.

For more explanation of this, see Probability as Minimal Map.
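To make this concrete, here is a minimal sketch (mine, not from the post; the helper `conditional` and the toy distribution are invented for illustration) that checks the criterion $P[Y \mid f(X)] = P[Y \mid X]$ by brute-force enumeration, with $f = \mathrm{abs}$ throwing away the sign of $X$:

```python
# Check whether f(X) keeps all information in X relevant to Y, by comparing
# P[Y | X] with P[Y | f(X)] on an enumerated joint distribution.
from collections import defaultdict

def conditional(joint, cond, target):
    """P[target | cond] from {outcome: prob}, returned as {c: {t: prob}}."""
    mass = defaultdict(float)
    table = defaultdict(lambda: defaultdict(float))
    for outcome, p in joint.items():
        c, t = cond(outcome), target(outcome)
        mass[c] += p
        table[c][t] += p
    return {c: {t: q / mass[c] for t, q in row.items()} for c, row in table.items()}

# Toy joint over (x, y): y depends only on whether x is zero, not on its sign.
joint = {(x, y): (1 / 3) * (0.9 if y == (x != 0) else 0.1)
         for x in (-1, 0, 1) for y in (True, False)}

f = abs  # throw away the sign of x

P_y_given_x = conditional(joint, cond=lambda o: o[0], target=lambda o: o[1])
P_y_given_fx = conditional(joint, cond=lambda o: f(o[0]), target=lambda o: o[1])

# P[Y | X = x] = P[Y | f(X) = f(x)] for every x: f(X) throws away the sign
# but keeps everything in X relevant to Y.
for x in (-1, 0, 1):
    for y in (True, False):
        assert abs(P_y_given_x[x][y] - P_y_given_fx[f(x)][y]) < 1e-9
```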

For our purposes, starting from a low-level causal DAG, we want to:

  • Pick a node $X_i$
  • Pick a set of nodes $N$ (with $i \in N$)
  • Replace $X_i$ by $f(X_i)$ for some function $f$, such that $P[X_{\bar{N}} \mid f(X_i)] = P[X_{\bar{N}} \mid X_i]$

Here $\bar{N}$ denotes all the node indices outside $N$. (Specifying $N$ rather than $\bar{N}$ directly will usually be easier in practice, since $N$ is usually a small neighborhood of nodes around $i$.) In English: we want to throw away information from $X_i$, while retaining all information relevant to nodes outside the set $N$.
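In code, the minimal such $f$ is exactly the minimal-map construction from earlier: map each value of $X_i$ to the conditional distribution $P[X_{\bar{N}} \mid X_i]$, merging values of $X_i$ that look the same from outside $N$. A hypothetical sketch (the function name `minimal_f` and the representation of the model as an enumerated joint distribution are my own, not from the post):

```python
# Construct the minimal f for the criterion above: values of X_i that induce
# the same distribution over X_Nbar get merged into one abstract value.
from collections import defaultdict

def minimal_f(joint, i, N):
    """joint: {outcome_tuple: prob} over all nodes. Returns {x_i: abstract value}."""
    n_nodes = len(next(iter(joint)))
    nbar = [j for j in range(n_nodes) if j not in N]
    mass = defaultdict(float)
    dist = defaultdict(lambda: defaultdict(float))
    for outcome, p in joint.items():
        mass[outcome[i]] += p
        dist[outcome[i]][tuple(outcome[j] for j in nbar)] += p
    # Hashable fingerprint of P[X_Nbar | X_i = x], rounded so that float
    # noise doesn't split equivalence classes:
    def key(x):
        return tuple(sorted((v, round(q / mass[x], 9)) for v, q in dist[x].items()))
    classes = {}
    return {x: classes.setdefault(key(x), len(classes)) for x in mass}

# Toy chain X0 -> X1 -> X2, with X1 in {-1, 0, 1} and X2 depending only on
# whether X1 is zero. Take i = 1 and N = {0, 1}, so Nbar = {2}.
P_x0 = {0: 0.5, 1: 0.5}
P_x1 = {0: {-1: 0.25, 0: 0.50, 1: 0.25}, 1: {-1: 0.45, 0: 0.10, 1: 0.45}}
P_x2 = {True: {0: 0.1, 1: 0.9}, False: {0: 0.8, 1: 0.2}}  # keyed on x1 != 0

joint = {(x0, x1, x2): P_x0[x0] * P_x1[x0][x1] * P_x2[x1 != 0][x2]
         for x0 in (0, 1) for x1 in (-1, 0, 1) for x2 in (0, 1)}

print(minimal_f(joint, i=1, N={0, 1}))  # prints {-1: 0, 0: 1, 1: 0}: sign dropped
```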

Two prototypical examples:

  • In a digital circuit, we pick the voltage in a particular wire at a particular time as $X_i$. Assuming the circuit is well designed, we will find that only the binary value is relevant to voltages far away in the circuit or in time. So, with all “nearby” voltages as $X_N$, we can replace $X_i$ by its binary value $f(X_i)$.
  • In a fluid, we pick the (microscopic) positions and momenta of all the particles in a little cell of spacetime as $X_i$. Assuming uniform temperature, identical particles, and some source of external noise - even just a little bit - we expect that only the total number and momentum of particles in the cell will be relevant to the positions and momenta of particles far away in space and time. So, with all “nearby” cells/particles as $X_N$, we can replace the microscopic positions and momenta of all particles in the cell with the total number and momentum of particles in the cell.

In both examples, we’re throwing out “local” information, while maintaining any information which is relevant “globally”. This will mean that local queries - e.g. the voltage in one wire given the voltage in a neighboring wire at the same time - are not supported; short-range correlations violate the abstraction. However, large-scale queries - e.g. the voltage in a wire now given the voltage in a wire a few seconds ago - are supported.
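Here is a toy numerical version of the circuit story (all voltage levels and probabilities are invented for illustration): a wire’s analog voltage leaks into its immediate neighbor, but only its binary value propagates further, so the long-range conditional depends only on $f(X_i)$ while the short-range one does not.

```python
# Toy circuit: v0 -> v1 -> v2. The neighbor v1 copies v0's exact analog value
# w.p. 0.8 (else the other voltage at the same logic level), while the far
# wire v2 only sees the binary value of v1.
from collections import defaultdict

LOW, HIGH = (0.0, 0.4), (4.6, 5.0)

def logic(v):                      # f: the binary value of a voltage
    return v >= 2.5

def conditional(weighted_pairs):
    """P[target | cond] from (cond, target, prob) triples."""
    mass = defaultdict(float)
    table = defaultdict(lambda: defaultdict(float))
    for c, t, p in weighted_pairs:
        mass[c] += p
        table[c][t] += p
    return {c: {t: q / mass[c] for t, q in row.items()} for c, row in table.items()}

joint = []                         # (v0, v1, v2, prob), with v0 uniform
for v0 in LOW + HIGH:
    for v1 in (HIGH if logic(v0) else LOW):
        p01 = 0.25 * (0.8 if v1 == v0 else 0.2)
        for v2 in (HIGH if logic(v1) else LOW):
            joint.append((v0, v1, v2, p01 * 0.5))

# Long-range query P[v2 | v0]: depends on v0 only through its binary value...
P_far = conditional([(v0, v2, p) for v0, v1, v2, p in joint])
for a, b in [(0.0, 0.4), (4.6, 5.0)]:
    assert all(abs(P_far[a][v] - P_far[b][v]) < 1e-9 for v in P_far[a])

# ...but the short-range query P[v1 | v0] is NOT recoverable from the binary
# value alone: replacing v0 by logic(v0) breaks this local correlation.
P_near = conditional([(v0, v1, p) for v0, v1, v2, p in joint])
assert P_near[0.0] != P_near[0.4]
```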

Modifying Children

We still have one conceptual question to address: when we replace $X_i$ by $f(X_i)$, how do we modify the children of $X_i$ to use $f(X_i)$ instead?

The first and most important answer is: it doesn’t matter, so long as whatever they do is consistent with $f(X_i)$. For instance, suppose $X_i$ ranges over $\{-1, 0, 1\}$, and $f(X_i) = X_i^2$. When $f(X_i) = 1$, the children can act as though $X_i$ were -1 or 1 - it doesn’t matter which, so long as they don’t act like $X_i = 0$. As long as the children’s behavior is consistent with the information in $f(X_i)$, we will be able to support long-range queries.

There is one big catch, however: the children do all need to behave as if $X_i$ had the same value, whatever value they choose. The joint distribution $P[X_C \mid f(X_i), X_S]$ (where $C$ = children of $i$ and $S$ = spouses of $i$) must be equal to $P[X_C \mid X_i{=}x, X_S]$ for some value $x$ consistent with $f(X_i)$. The simplest way to achieve this is to pick a particular “representative” value $x^*(v)$ for each possible value $v$ of $f(X_i)$, so that $f(x^*(v)) = v$.

Example: in the digital circuit case, we would pick one representative “high” voltage (for instance the supply voltage $V_{dd}$) and one representative “low” voltage (for instance the ground voltage $V_{ss}$). $x^* \circ f$ would then map any high voltage to $V_{dd}$ and any low voltage to $V_{ss}$.

Once we have our representative-value function $x^*$, we just have the children use $x^*(f(X_i))$ in place of $X_i$.
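A small sketch of this mechanism, with invented voltage levels and a hypothetical child gate:

```python
# The representative-value trick: x_star picks one concrete voltage per
# abstract value, so all children see the same consistent stand-in for X_i.
V_DD, V_SS = 5.0, 0.0        # representative "high" and "low" voltages

def f(v):                    # abstract value: the binary reading of the wire
    return v >= 2.5

def x_star(bit):             # one concrete representative voltage per f-value
    return V_DD if bit else V_SS

# The requirement f(x_star(v)) = v holds for both abstract values:
assert f(x_star(True)) and not f(x_star(False))

def child(v_parent):         # a hypothetical downstream gate (an inverter)
    return not (v_parent >= 2.5)

# Children consume x_star(f(v)) in place of the raw voltage v, so any two
# voltages with the same binary value produce identical child behavior:
for v_a, v_b in [(4.6, 5.0), (0.0, 0.4)]:
    assert child(x_star(f(v_a))) == child(x_star(f(v_b)))
```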

If we want, we could even simplify one step further: we could just choose $f$ to spit out representative values directly (i.e. use $x^* \circ f$ as the new $f$). That convention is cleaner for proofs and algorithms, but a bit more confusing for human usage and examples.
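In code terms (reusing the invented voltage levels from the sketch above), this convention just folds $x^*$ into $f$:

```python
# Fold the representative-value map into f itself: the abstract node now
# stores a representative voltage directly (levels invented for illustration).
V_DD, V_SS = 5.0, 0.0

def f_representative(v):
    return V_DD if v >= 2.5 else V_SS   # any high voltage -> V_DD, any low -> V_SS
```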

Comments

Instead of saying “$f(X)$ contains all information in $X$ relevant to $Y$”, it would be better to say that $f(X)$ contains all information in $X$ that is relevant to $Y$ if you don’t condition on anything. Because it may be the case that if you condition on some additional random variable $Z$, $f(X)$ no longer contains all relevant information.

Example:

Let $X_1, X_2$ be i.i.d. binary uniform random variables, i.e. each of the variables takes the value 0 with probability 0.5 and the value 1 with probability 0.5. Let $X = X_1$ be a random variable. Let $Y = X_1 \oplus X_2$ be another random variable, where $\oplus$ is the xor operation. Let $f$ be the constant function $f(x) = 0$.

Then $f(X)$ contains all information in $X$ that is relevant to $Y$ (namely none, since $X$ and $Y$ are independent). But if we know the value of $X_2$, then $f(X)$ no longer contains all information in $X$ that is relevant to $Y$: given $X_2$, $X = X_1$ determines $Y = X_1 \oplus X_2$ exactly, while $f(X)$ tells us nothing.
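A quick brute-force check of this example (sketch code with the variable bindings above; the helper `cond` is invented):

```python
# Enumerate (x1, x2), each pair w.p. 1/4, with X = x1, Y = x1 XOR x2, f(X) = 0.
from itertools import product
from collections import defaultdict

def cond(pairs):
    """P[target | condition] from equally weighted (condition, target) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for c, t in pairs:
        counts[c][t] += 1
    return {c: {t: n / sum(row.values()) for t, n in row.items()}
            for c, row in counts.items()}

outcomes = list(product((0, 1), repeat=2))

def f(x):
    return 0  # throw away everything

# Unconditionally, X tells us nothing about Y, and neither does f(X):
assert cond([(x1, x1 ^ x2) for x1, x2 in outcomes]) == \
    {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}
assert cond([(f(x1), x1 ^ x2) for x1, x2 in outcomes]) == {0: {0: 0.5, 1: 0.5}}

# Conditioned on X2 = 0, X = X1 pins down Y exactly, but f(X) still tells
# us nothing: f(X) no longer keeps all the relevant information.
assert cond([(x1, x1 ^ x2) for x1, x2 in outcomes if x2 == 0]) == \
    {0: {0: 1.0}, 1: {1: 1.0}}
assert cond([(f(x1), x1 ^ x2) for x1, x2 in outcomes if x2 == 0]) == \
    {0: {0: 0.5, 1: 0.5}}
```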

Good point, thanks.