For definitions and notational conventions used here, see What is Abstraction?.

The world turns out to run on quantum mechanics, rather than classical. But classical mechanics still predicts the world exactly as well as it did all along. In particular, it still works in all the situations where we already knew it worked before figuring out quantum, i.e. high-level (“macroscopic”) objects. It all adds up to normality.

Formalizing this idea somewhat in the context of abstraction suggests an interesting class of questions.

We learn some high-level model by looking only at high-level data - e.g. we learned the laws of classical physics by looking at the behavior of macroscopic objects. Eventually, we learn a lower-level model of the same system. What properties of the high-level model must carry over to the low-level model? When we learn high-level properties of a system, what does that tell us about the low-level properties?

This post is about one particular class of properties: causal structure, i.e. arrows in graphical models. Basic results:

  • Conditional independence relations between far-apart high-level components must also hold between the corresponding low-level components
  • Interventional dependence relations between far-apart high-level components must also hold between the corresponding low-level components

Together, these imply that the causal structure of far-apart low-level components is compatible with the causal structure of the corresponding high-level components, assuming that both levels follow a causal DAG at all.

Conditional Independence

We’ll start with unconditional independence, as an easy warmup. We’ll assume that far-apart high-level variables $f_1(X_1)$ and $f_2(X_2)$ are independent:

$$P[f_1(X_1), f_2(X_2)] = P[f_1(X_1)] \, P[f_2(X_2)]$$

Working directly from the systems view of abstraction, we then have:

$$P[X_1, X_2] = P[X_1 \mid f_1(X_1)] \, P[X_2 \mid f_2(X_2)] \, P[f_1(X_1), f_2(X_2)]$$

$$= P[X_1 \mid f_1(X_1)] \, P[f_1(X_1)] \, P[X_2 \mid f_2(X_2)] \, P[f_2(X_2)]$$ (by independence assumption)

We can then integrate out the high-level variables to obtain:

$$P[X_1, X_2] = P[X_1] \, P[X_2]$$

This is fairly obvious from the abstraction condition itself:

Abstraction validity means that far-apart low-level $X_i$’s are independent given their corresponding high-level summaries $f_i(X_i)$. In other words, we can swap each low-level variable with its summary, and just keep track of dependence relations between the summaries rather than the low-level variables themselves, as long as all the variables are far apart.

Far-apart low-level variables are independent given their summaries, so if the summaries are themselves unconditionally independent, then the low-level variables must be unconditionally independent.
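
To make this concrete, here is a small numerical sketch of the warmup claim. The toy distributions, the six-element low-level spaces, and the summary function f(x) = x // 3 are all made up for illustration; the only structural assumptions are the two used above: the summaries are independent, and each far-apart low-level variable depends on everything else only through its own summary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spaces: 6 low-level values per variable, 2 high-level summary values,
# with summary function f(x) = x // 3.
n_low, n_high = 6, 2
f = np.array([0, 0, 0, 1, 1, 1])

# Independent high-level summaries (the assumption above):
P_f1 = np.array([0.3, 0.7])   # P[f_1(X_1)]
P_f2 = np.array([0.6, 0.4])   # P[f_2(X_2)]

def cond_given_summary():
    # A random P[X_i | f_i(X_i)], supported only on values consistent with the summary.
    P = rng.random((n_high, n_low)) * (f[None, :] == np.arange(n_high)[:, None])
    return P / P.sum(axis=1, keepdims=True)

P_X1_given_f1 = cond_given_summary()
P_X2_given_f2 = cond_given_summary()

# Systems view: P[X_1, X_2] = sum over summaries of P[X_1|f_1] P[X_2|f_2] P[f_1] P[f_2].
P_joint = np.einsum('ax,by,a,b->xy', P_X1_given_f1, P_X2_given_f2, P_f1, P_f2)

# Check: the low-level variables come out unconditionally independent.
P_X1, P_X2 = P_joint.sum(axis=1), P_joint.sum(axis=0)
assert np.allclose(P_joint, np.outer(P_X1, P_X2))
print("P[X_1, X_2] = P[X_1] P[X_2] holds for this toy model")
```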

The proof for conditional independence follows a similar pattern, but with a couple extra steps in the middle, and uses the lemma from the appendix here. We start with high-level conditional independence:

$$P[f_1(X_1) \mid f_2(X_2), f_3(X_3)] = P[f_1(X_1) \mid f_3(X_3)]$$

… where $X_1$, $X_2$, and $X_3$ are all far apart. Again, we factor via the systems view of abstraction:

$$P[X_1, X_2, X_3] = P[X_1 \mid f_1(X_1)] \, P[X_2 \mid f_2(X_2)] \, P[X_3 \mid f_3(X_3)] \, P[f_1(X_1), f_2(X_2), f_3(X_3)]$$

$$= P[X_1 \mid f_1(X_1)] \, P[X_2 \mid f_2(X_2)] \, P[X_3 \mid f_3(X_3)] \, P[f_1(X_1) \mid f_3(X_3)] \, P[f_2(X_2) \mid f_3(X_3)] \, P[f_3(X_3)]$$ (by high-level conditional independence)

$$= P[X_1 \mid f_3(X_3)] \, P[X_2 \mid f_3(X_3)] \, P[X_3 \mid f_3(X_3)] \, P[f_3(X_3)]$$ (by abstraction conditions)

$$= P[X_1 \mid X_3] \, P[X_2 \mid X_3] \, P[X_3 \mid f_3(X_3)] \, P[f_3(X_3)]$$ (by lemma)

Then, we just integrate out all the high-level variables, and find

$$P[X_1, X_2, X_3] = P[X_1 \mid X_3] \, P[X_2 \mid X_3] \, P[X_3]$$

… i.e. $X_1$ and $X_2$ are independent given $X_3$. Also note that this entire proof goes through exactly the same if we replace the individual variables $X_1$, $X_2$, and $X_3$ with sets of far-apart variables.

In summary: if high-level variables $f_1(X_1)$ and $f_2(X_2)$ are independent given $f_3(X_3)$ (all far apart), then the corresponding low-level variables $X_1$ and $X_2$ are independent given $X_3$.
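
The same kind of sanity check works for the conditional version. Again, the specific numbers and the summary function are made up for illustration; the structural assumptions are exactly the ones in the proof: the high-level summaries are independent given $f_3(X_3)$, and each far-apart low-level variable depends on everything else only through its own summary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spaces, as before: f(x) = x // 3 maps 6 low-level values to 2 summary values.
n_low, n_high = 6, 2
f = np.array([0, 0, 0, 1, 1, 1])

# High-level joint chosen so that f_1(X_1) and f_2(X_2) are independent given f_3(X_3):
P_f3 = np.array([0.5, 0.5])
P_f1_given_f3 = np.array([[0.2, 0.8], [0.9, 0.1]])   # rows indexed by f_3 value
P_f2_given_f3 = np.array([[0.7, 0.3], [0.4, 0.6]])
P_high = np.einsum('c,ca,cb->abc', P_f3, P_f1_given_f3, P_f2_given_f3)

def cond_given_summary():
    # A random P[X_i | f_i(X_i)], supported only on values consistent with the summary.
    P = rng.random((n_high, n_low)) * (f[None, :] == np.arange(n_high)[:, None])
    return P / P.sum(axis=1, keepdims=True)

P1, P2, P3 = cond_given_summary(), cond_given_summary(), cond_given_summary()

# Systems view: P[X_1, X_2, X_3] = P[X_1|f_1] P[X_2|f_2] P[X_3|f_3] P[f_1, f_2, f_3].
P_low = np.einsum('ax,by,cz,abc->xyz', P1, P2, P3, P_high)

# Check that X_1 and X_2 are independent given X_3 at the low level.
P_X3 = P_low.sum(axis=(0, 1))
P_X1X3 = P_low.sum(axis=1)   # indexed [x1, x3]
P_X2X3 = P_low.sum(axis=0)   # indexed [x2, x3]
for z in range(n_low):
    lhs = P_low[:, :, z] / P_X3[z]
    rhs = np.outer(P_X1X3[:, z], P_X2X3[:, z]) / P_X3[z] ** 2
    assert np.allclose(lhs, rhs)
print("X_1 and X_2 are independent given X_3 for this toy model")
```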

Interventions

What is Abstraction gave a few possible forms of the interventional query constraints for causal abstraction. These different possibilities correspond to stronger/weaker assumptions about the methods used to perform high-level interventions and/or the data from which the interventional behavior is learned. For purposes of this post, we won’t dissect the particulars of the different possibilities; we’ll just use a general constraint form which matches any of them:

$$P[X_2, f_2(X_2) \mid \mathrm{do}(\text{whatever})] = P[X_2 \mid f_2(X_2)] \, P[f_2(X_2) \mid \mathrm{do}(\text{whatever})]$$

… where the intervened-upon variable $X_1$ and the queried variable $X_2$ are far apart. Here “whatever” could range over all possible values of the intervened variable, or it could randomly sample from possible values given some parameter, or it could sample the possible values from some entirely different distribution, or …. 

What matters is that there are two ways to expand $P[X_2, f_2(X_2) \mid \mathrm{do}(\text{whatever})]$. One of them is the abstraction condition above, but the other is to just use the factorization associated with the original construction of $f_2(X_2)$ from $X_2$ (i.e. the “true” underlying causal structure):

$$P[X_2, f_2(X_2) \mid \mathrm{do}(\text{whatever})] = P[f_2(X_2) \mid X_2] \, P[X_2 \mid \mathrm{do}(\text{whatever})]$$

Equating these two, we find

$$P[X_2 \mid f_2(X_2)] \, P[f_2(X_2) \mid \mathrm{do}(\text{whatever})] = P[f_2(X_2) \mid X_2] \, P[X_2 \mid \mathrm{do}(\text{whatever})]$$

$$P[X_2 \mid \mathrm{do}(\text{whatever})] = \frac{P[X_2 \mid f_2(X_2)]}{P[f_2(X_2) \mid X_2]} \, P[f_2(X_2) \mid \mathrm{do}(\text{whatever})]$$ (by Bayes)

Our low-level interventional distribution is proportional to our high-level interventional distribution. Key thing to notice: the proportionality factor $\frac{P[X_2 \mid f_2(X_2)]}{P[f_2(X_2) \mid X_2]}$ does not depend on our interventional parameters (i.e. “whatever”)¹.

Why is that so important? Well, it means that varying the intervention value “whatever” changes the far-away low-level distribution $P[X_2 \mid \mathrm{do}(\text{whatever})]$ if-and-only-if it changes the far-away high-level distribution $P[f_2(X_2) \mid \mathrm{do}(\text{whatever})]$. In particular, if interventions on some subset of variables “screen off” interventions on others at the high level, then interventions on the corresponding low-level variables follow the same pattern.
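
Here is a sketch of that consequence in the same toy setting. The distributions and intervention settings below are made up for illustration; the only structural assumption used is the general constraint above, i.e. the far-away low-level variable depends on the intervention only through $P[f_2(X_2) \mid \mathrm{do}(\text{whatever})]$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Far-away low-level variable X_2 with summary f(X_2); f(x) = x // 3 as before.
n_low, n_high = 6, 2
f = np.array([0, 0, 0, 1, 1, 1])

P = rng.random((n_high, n_low)) * (f[None, :] == np.arange(n_high)[:, None])
P_X2_given_f = P / P.sum(axis=1, keepdims=True)   # P[X_2 | f_2(X_2)]

# A few intervention settings ("whatever"), each inducing some high-level
# interventional distribution P[f_2(X_2) | do(whatever)] over the 2 summary values:
interventions = {
    "do_A": np.array([0.9, 0.1]),
    "do_B": np.array([0.2, 0.8]),
    "do_C": np.array([0.9, 0.1]),   # same high-level effect as do_A
}

# Low-level interventional distributions via the constraint:
#   P[X_2 | do(.)] = sum_f P[f_2(X_2) | do(.)] P[X_2 | f_2(X_2)]
low_level = {name: P_f_do @ P_X2_given_f for name, P_f_do in interventions.items()}

# Same high-level effect  =>  same low-level effect:
assert np.allclose(low_level["do_A"], low_level["do_C"])
# Different high-level effects  =>  different low-level effects:
assert not np.allclose(low_level["do_A"], low_level["do_B"])
print("the low-level response to interventions tracks the high-level response")
```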

Also note that our general abstraction constraint implies that any variables not intervened upon still follow the usual abstraction constraints - i.e. far-apart low-level variables are still independent given their high-level summaries, even after intervening on other far-away variables.

In combination with our conditional independence results from the previous section, this trivially implies things like:

  • If high-level variables $f_1(X_1)$ and $f_2(X_2)$ are conditionally independent given $f_3(X_3)$ under an intervention on $f_4(X_4)$ (with everything far apart), then the low-level variables $X_1$ and $X_2$ are conditionally independent given $X_3$ under a corresponding intervention on $X_4$
  • If the distribution of $f_1(X_1)$ given $f_3(X_3)$ under an intervention on $f_4(X_4)$ does not depend on the intervention value, then the distribution of $X_1$ given $X_3$ under a corresponding intervention on $X_4$ also does not depend on the intervention value.

These match the conditions Pearl uses (on pg 24 of Causality, 2nd ed) to define what it means for a DAG $G$ to model a family of interventional distributions². In particular, this means that, if a DAG $G$ models the family of high-level interventional distributions on far-apart variables, then it also models the family of low-level interventional distributions on the corresponding variables.

Or, in English: the low-level causal structure is compatible with the high-level causal structure.
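
As a concrete illustration of that compatibility (toy numbers again, not from the post): take a two-summary high-level model whose causal DAG is $f_1(X_1) \rightarrow f_2(X_2)$, push its observational and interventional distributions down to the low level using the abstraction constraint, and check that the low-level variables show the same causal asymmetry.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy high-level causal model with DAG  f_1 -> f_2  (two binary summaries).
n_low, n_high = 6, 2
f = np.array([0, 0, 0, 1, 1, 1])
P_f1 = np.array([0.5, 0.5])
P_f2_given_f1 = np.array([[0.9, 0.1], [0.3, 0.7]])   # rows indexed by f_1 value

def cond_given_summary():
    # A random P[X_i | f_i(X_i)], supported only on values consistent with the summary.
    P = rng.random((n_high, n_low)) * (f[None, :] == np.arange(n_high)[:, None])
    return P / P.sum(axis=1, keepdims=True)

P_X1_given_f1 = cond_given_summary()
P_X2_given_f2 = cond_given_summary()

def low_level_marginals(do_f1=None, do_f2=None):
    # High-level interventional distributions from the DAG f_1 -> f_2 ...
    p1 = P_f1 if do_f1 is None else np.eye(n_high)[do_f1]
    p2 = p1 @ P_f2_given_f1 if do_f2 is None else np.eye(n_high)[do_f2]
    # ... pushed down to the far-apart low-level variables via the abstraction constraint.
    return p1 @ P_X1_given_f1, p2 @ P_X2_given_f2

base_X1, base_X2 = low_level_marginals()

# Intervening on the "cause" changes the downstream low-level variable...
_, X2_do0 = low_level_marginals(do_f1=0)
_, X2_do1 = low_level_marginals(do_f1=1)
assert not np.allclose(X2_do0, X2_do1)

# ...but intervening on the "effect" leaves the upstream low-level variable alone.
X1_do0, _ = low_level_marginals(do_f2=0)
X1_do1, _ = low_level_marginals(do_f2=1)
assert np.allclose(X1_do0, X1_do1) and np.allclose(X1_do0, base_X1)
print("the low-level variables show the same f_1 -> f_2 causal asymmetry")
```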

Fine Print

Lastly, I’ll highlight the major loopholes in the argument above.

First, this is saying that if the relationship between the high-level and low-level models matches the definitions we’ve been using, then causality adds up to normality. But we still haven’t talked about why this would actually happen in practice - i.e. why humans would learn high-level models which are related to the universe’s lower-level structure in this way. For now, my main argument is “after looking at lots of examples, that’s what seems to happen in practice”, but eventually I expect to prove that this is a general property of Bayesian learning in causal universes.

Second, we’ve only mentioned DAG-shaped causality, not causality with loops. If we want to deal with e.g. equilibrium processes, then we need to expand out the whole trajectory-over-time. I expect the arguments above do extend to loopy graphs, but it’s not something I’ve put any thought into; I usually just ignore non-DAG notions of causality.

Third, our main conclusion says “if a DAG $G$ models the family of high-level interventional distributions on far-apart variables, then it also models the family of low-level interventional distributions on the corresponding variables”. Notably, this does not say that $G$ is the only DAG which models the family of low-level interventional distributions. We don’t have a uniqueness result here.

Footnotes

¹ This assumes that $P[f_2(X_2) \mid X_2] \neq 0$, which is a big assumption; $P[f_2(X_2) \mid X_2]$ is zero for most $(X_2, f_2(X_2))$ value pairs. So this formula only applies for values of $X_2$ compatible with the $f_2(X_2)$ value. For purposes of causality, that’s not an issue; we’re mostly worried about varying the intervention parameters, not the query values.

² Along with one other condition, which is trivially satisfied as long as both the high- and low-level models are causal DAG models by themselves.

Comments

> this would actually happen in practice - i.e. why humans would learn high-level models which are related to the universe’s lower-level structure in this way.

My intuition is that this makes it possible to model long-range dependencies with minimum cognitive costs.

Also, do you think there's some utility in modelling abstracting the low-level variables into their high-level summaries as a forgetful functor?

Plausibly, it does have that flavor to it.