I was recently talking with a friend about the practical usefulness of modern academic epistemology. Intuitively, I would have said that Judea Pearl's work on causality has a lot of practical implications, but I couldn't think of any examples.

Do you have examples of conclusions you have drawn because you learned about Pearl's causality that you wouldn't have drawn otherwise, or can point to other people making practical use of the concepts?


3 Answers

Algon


I remember a medical researcher saying that causal inference has applications there. I can't remember the specifics, but building a causal model of a patient after observing diseases and various interventions seems obviously useful, if costly at the moment.

The corrigibility and reward-tampering literature uses causal models frequently.

And whenever you're investigating causal relationships, the do-calculus lets you perform crisp calculations, which seems clearly useful. Sure, you could find good approximations without it, or make decent guesses using intuition and some awkward statistics which are (probably) a reflection of the do-calculus. But why use a crummier tool when you don't have to?

"But why use a crummier tool when you don't have to?"

How often have you actually used the tool in your life? 
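To make the "crisp calculations" concrete, here's a minimal sketch of the backdoor adjustment from do-calculus on simulated data. The scenario, variable names, and numbers are all invented for illustration: a confounder Z drives both treatment X and outcome Y, so the naive observational comparison is biased, while adjusting for Z recovers the true effect.

```python
# A minimal sketch of the backdoor adjustment, on simulated data.
# All names and numbers here are illustrative, not from the thread.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Confounder Z influences both treatment X and outcome Y.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.2 + 0.6 * z)           # Z makes treatment more likely
y = 2.0 * x + 3.0 * z + rng.normal(0, 1, n)  # true causal effect of X is 2.0

# Naive comparison conditions on X and absorbs Z's effect too.
naive = y[x == 1].mean() - y[x == 0].mean()

# Backdoor adjustment: E[Y | do(X=x)] = sum_z P(z) * E[Y | X=x, Z=z]
adjusted = sum(
    (z == v).mean()
    * (y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean())
    for v in (0, 1)
)

print(f"naive estimate:    {naive:.2f}")     # biased, roughly 3.8
print(f"backdoor-adjusted: {adjusted:.2f}")  # roughly 2.0, the true effect
```

The "awkward statistics" version of this (stratify, then reweight) gets you to the same place; the do-calculus just tells you exactly which adjustment is licensed and why.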

CallumMcDougall


This may not exactly answer the question, but I'm in a research group studying selection for modularity, and yesterday we published our fourth post, which discusses the importance of causality in developing a modularity metric.

TL;DR - if you want to measure information exchanged in a network, you can't just observe activations: two completely separate tracks of the network computing the same thing will still have high mutual information even though they're not communicating with each other (the input is a confounder for both). Instead, it seems like you'll need to use do-calculus and counterfactuals.

We haven't actually started testing our measure yet, so this is currently only at the theorising stage; hence it may not be a very satisfying answer to the question.
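As a toy illustration of the point above (not the group's actual metric), here's a sketch in which two "tracks" each compute a function of the same input and never interact; their activations still correlate almost perfectly, and only a crude do()-style intervention that severs one track from the input reveals the missing causal link. Correlation stands in for mutual information here, and all names and numbers are invented.

```python
# Two independent "modules" that never communicate, fed the same input.
# Observationally they look tightly coupled; intervening on one shows
# the coupling comes entirely from the shared input (the confounder).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 100_000)        # shared input: the confounder

track_a = np.tanh(1.5 * x)           # track A's activation
track_b = np.tanh(1.5 * x + 0.01)    # track B computes almost the same thing

# Observationally, the tracks appear to exchange a lot of information.
print(f"observational corr: {np.corrcoef(track_a, track_b)[0, 1]:.3f}")

# Crude do()-style intervention: overwrite track A with draws from its
# own marginal distribution, independent of x, then re-measure.
track_a_do = rng.permutation(track_a)
print(f"interventional corr: {np.corrcoef(track_a_do, track_b)[0, 1]:.3f}")
```

The first correlation is near 1 and the second near 0, which is exactly the gap between "observed together" and "causally connected" that the post is pointing at.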

What is selection for modularity and why is it important?

CallumMcDougall
Probably the best explanation of this comes from John Wentworth's recent AXRP podcast and a few of his LW posts. To put it simply, modularity is important because modular systems are usually much more interpretable. Case in point: evolution has produced highly modular designs (e.g. organs and organ systems), whereas genetic algorithms for electronic circuit design frequently fail to find modular designs, which makes them really hard for humans to interpret and to verify that they'll work as expected. If we understood more about the factors that select for modularity under a wide range of conditions (e.g. evolutionary selection, or standard ML training), we might be able to use those factors to encourage more modular designs. On a more abstract level, it might help us break down fuzzy statements like "certain types of inner optimisers have separate world models and models of the objective", which are really statements about modules within a system. But in order to do any of this, we need a robust measure of modularity, and basically there isn't one at present.

Oliver Sourbut


In a former role working on software control systems for internet-scale bidding, we'd often talk in terms of confounders, upstream/downstream effects, and other causal concepts when developing and tuning system improvements. It was pretty rare to actually draw a causal diagram (a few times?) or break out the do-calculus (never?), and I don't know if everyone had read Pearl (probably not?), but at least passing fluency with the concepts was a big help.

I saw other teams (ours too) fail or waste effort, confused because they missed things they'd have spotted with a better appreciation for causal structure.

My guess is that it's a similar story for some technologists, and likely in medicine and other experimental fields, at least some of the time.
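For what it's worth, here's a hypothetical sketch of the kind of everyday check described above: write down the causal DAG you believe in for a bidding change, then look for open backdoor paths before trusting an observational before/after comparison. The system, node names, and edges are all invented, and the path search ignores collider blocking, so treat it as a conversation aid rather than a full backdoor-criterion checker.

```python
# Enumerate backdoor paths in a toy ad-bidding DAG. Simplified: we list
# undirected paths whose first edge points *into* the treatment, without
# checking collider blocking, so this can over-report open paths.
from collections import defaultdict

# Cause -> effect edges in an invented bidding system.
EDGES = [
    ("traffic_quality", "bid_price"),        # bids adapt to traffic quality
    ("traffic_quality", "conversion_rate"),
    ("bid_price", "win_rate"),
    ("win_rate", "revenue"),
    ("conversion_rate", "revenue"),
]

def backdoor_paths(edges, treatment, outcome):
    """Paths from treatment to outcome that start with an arrow into the
    treatment -- the routes a naive comparison leaves open."""
    neighbors = defaultdict(set)
    for cause, effect in edges:
        neighbors[cause].add(effect)
        neighbors[effect].add(cause)
    parents = {cause for cause, effect in edges if effect == treatment}

    found = []
    def walk(node, path):
        if node == outcome:
            found.append(path)
            return
        for nxt in neighbors[node]:
            if nxt not in path:
                walk(nxt, path + [nxt])

    for parent in parents:
        walk(parent, [treatment, parent])
    return found

for path in backdoor_paths(EDGES, "bid_price", "revenue"):
    print(" <- ".join(path[:2]) + " -> " + " -> ".join(path[2:]))
    # bid_price <- traffic_quality -> conversion_rate -> revenue
    # i.e. adjust for traffic_quality before crediting revenue to the bid change.
```

Even without running anything, sketching the graph like this is the "passing fluency" version of the habit: it surfaces the confounder you'd otherwise discover the expensive way.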