(this was written in April 2020 and I only just now realized I never posted it to LW)

This post is going to explore the consequences of different choices you can make when thinking about things causally. Shout out to johnswentworth for first seeding in my head this sort of investigation.

One mistake people are known to make is to vastly underestimate the causal factors behind a variable. Scott writes about this tendency in genetics:

What happens if your baby doesn’t have the gene for intelligence? Can they still succeed? [...] By the early 2000s, the American Psychological Association was a little more cautious, was saying intelligence might be linked to “dozens – if not hundreds” of genes. [...] The most recent estimate for how many genes are involved in complex traits like height or intelligence is approximately “all of them” – by the latest count, about twenty thousand.

Probably not too surprising. Everyone wants "The One Thing" that explains it all, but normally it's the case that "These 35,000 Things" explain it all. The Folk Theory of Essences might be the most egregious example of people inferring a mono-causal relationship when reality is vastly poly-causal. George Lakoff (the metaphors and embodied cognition guy) explains:

The Folk Theory of Essences is commonplace, in this culture and other cultures around the world. According to that folk theory, everything has an essence that makes it the kind of thing it is. An essence is a collection of natural properties that inheres in whatever it is the essence of. Since natural properties are natural phenomena, natural properties (essences) can be seen as causes of the natural behavior of things. For example, it is a natural property of trees that they are made of wood. Trees have natural behaviors: They bend in the wind and they can burn. That natural property of trees – being made of wood (which is part of a tree's "essence") – is therefore conceptualized metaphorically as a cause of the bending and burning behavior of trees. Aristotle called this the material cause.

As a result, the Folk Theory of Essences has a part that is causal. We will state it as follows: Every thing has an essence that inheres in it and that makes it the kind of thing it is. The essence of each thing is the cause of that thing's natural behavior.

Thinking in terms of essences is very common. It seems to be how a lot of people think about things like personality or disposition. "Of course he lied to you, he's a crook." "I know it was risky and spontaneous, but I'm an ENTJ, so yeah."

My first reflex is to point out that your behavior is caused by more than your personality. Environmental contexts have huge effects on the actions people take. Old news. I want to look at the problems that pop up when you even consider personality as a causal variable in the first place.

Implicit/Emergent Variables

Let's think about modeling the weather in a given region, and how the idea of climate factors into it. A simple way to model this might be with the graph below:

Certain geographic factors determine the climate, and the climate determines the weather. Boom, done. A high-level abstraction that lets us model stuff.

Let's see what happens when we switch perspectives. If we zoom in to a more concrete, less abstract model, where the weather is a result of things like air pressure, temperature, and air density, all affecting each other in complex ways, there is no "climate variable" present. A given region exhibits regularities in its weather over time. We see similarities between the regularities in different regions. We develop labels for different clusters of regularities. We still have a sense of what geographic features lead to what sorts of regularities in weather, but in our best concrete models of weather there is no explicit climate variable.

What are the repercussions of using one model vs the other? It seems like they could both be used to make fine predictions. The weirdness happens when we remember we're thinking causally. Remember, the whole point of causal reasoning is to know what will happen if you intervene. You imagine "manually setting" causal variables to different values and see what happens. But what does this "manual setting" of variables look like?
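
To make "manually setting" concrete, here's a minimal sketch of a toy structural model in Python, where an intervention just overrides a variable's normal mechanism with a fixed value. The variable names echo the examples below, but the structure and numbers are invented for illustration.

```python
import random

# A toy structural causal model: each variable is computed from its parents,
# and do() overrides a variable's normal mechanism with a fixed value.
# All names and numbers here are invented for illustration.
def sample(do=None):
    do = do or {}
    family_income = do.get("family_income", random.gauss(50, 15))
    hours_of_mozart = do.get("hours_of_mozart", random.gauss(2, 1))
    sat_score = do.get(
        "sat_score",
        random.gauss(800 + 5 * family_income + 10 * hours_of_mozart, 50),
    )
    return {"family_income": family_income,
            "hours_of_mozart": hours_of_mozart,
            "sat_score": sat_score}

observed = sample()                             # let the system run on its own
intervened = sample(do={"family_income": 100})  # "manually set" one node
```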

In our graph from last post:

all the variables are ones that I have some idea of how to manually set. I can play Mozart for a kid. I can give someone's family more money. I can get College Board to give you fake SAT scores. But what would it mean to intervene on the climate node?

We know that no single factor controls the climate. "Desert" and "rain-forest" are just labels for types of regularities in a weather system. Since climate is an emergent feature, "intervening on climate" means intervening on a bunch of geographic variables. The previous graph leads me to erroneously conclude that I could somehow tweak the climate without having to change the underlying geography, and that's not possible. The only way to salvage this graph is to put a bunch of additional arrows in, representing how "changing climate" necessitates change in geography.

Contrast this with another example. We're looking at the software of a rocket, and for some reason the developer chose to hardcode the value 9.8 into every location where they needed the value of the gravitational constant. What happens if we model the software as having a causal variable for g? Like climate, this g is not explicit; it's implicit. There's no global variable that can be toggled to control g. But unlike climate, this g isn't really an emergent feature. The fact that the software acts as if the gravitational constant is 9.8 is not a complex emergent property of various systems interacting. It's because you hardcoded 9.8 everywhere.

If we wanted to model this software, we could include a causal variable for every instance of 9.8, but we could just as easily lump that all into one variable. Our model would basically give the same answer to any intervention question. Yeah, it's more of a pain to find and replace every hardcoded value, but it's still the same sort of causal intervention that leaves the rest of the system intact. Even though g is an implicit variable, it's much more amenable to being modeled as an explicit variable at a higher level of abstraction.
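
Here's a minimal sketch of the two ways of carving this up (the function names and numbers are invented for illustration): version A hides g implicitly in every use site, version B pulls it out as one explicit variable, and either way an intervention on g leaves the rest of the program intact.

```python
# Version A: the value 9.8 is hardcoded at every use site (implicit g).
def thrust_needed_a(mass_kg):
    return mass_kg * 9.8 * 1.2          # thrust with a 20% margin over weight

def fall_time_a(height_m):
    return (2 * height_m / 9.8) ** 0.5  # time to fall from rest

# Version B: the same quantity pulled out as one explicit variable.
G = 9.8  # m/s^2

def thrust_needed_b(mass_kg):
    return mass_kg * G * 1.2

def fall_time_b(height_m):
    return (2 * height_m / G) ** 0.5

# "Intervening on g" in version A means editing every 9.8 by hand; in version B
# it's a single assignment. Either way the rest of the program is left intact.
```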

Causal Variables and Predictive Categories

A few times I've told a story that goes like this: observe that a system has regularities in its behavior, see other systems with similar clusters of regularity, develop a label to signify "System that has been seen to exhibit Type X regularities."

Previously I was calling these "emergent features", but now I want to frame them as predictive categories, mostly to emphasize the pitfalls of thinking of them as causal variables. For ease, I'll be talking about it as a dichotomy, but you can really think of it as a spectrum, where a property slides from being relatively easy to isolate and intervene on while leaving the rest of the system intact (g in the code), all the way up to complete interdependent chaos (more like climate).

A problem we already spotted: thinking of a predictive category (like climate) as a causal variable can lead you to think that you can intervene on climate in isolation from the rest of the system.

But there's an even deeper problem. Think back to personality types. It's probably not the case that there's an easily isolated "personality" variable in humans. But it is possible for behavior to have regularities that fall into similar clusters, allowing for "personality types" to have predictive power. Focus on what's happening here. When you judge a person's personality, you observe their behavior and make predictions of future behavior. When you take a personality quiz, you tell the quiz how you behave and it tells you how you will continue to behave. The decision flow in your head looks something like this (but with more behavior variables):

All that's happening is you predict behavior you've already seen, and other behavior that has been known to be in the same "cluster" as the behavior you've already seen. This model is a valid predictive model (results will vary based on how good your pattern recognition is) but gives weird causal answers. What causes your behavior? Your personality. What causes your personality? Your behavior.
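
As a toy sketch of that flow (the behavior clusters here are made up for illustration): the "type" is nothing but a label for a cluster of co-occurring behaviors, and prediction is just reading off the rest of the cluster.

```python
# Each "type" is just a label attached to a cluster of co-occurring behaviors.
# Cluster contents are invented for illustration.
clusters = {
    "type_A": {"interrupts_meetings", "makes_quick_decisions", "takes_risks"},
    "type_B": {"avoids_conflict", "plans_ahead", "double_checks_work"},
}

def predict(observed_behaviors):
    # Pick the label whose cluster overlaps most with what we've already seen...
    label = max(clusters, key=lambda k: len(clusters[k] & observed_behaviors))
    # ...and "predict" the rest of that cluster. Nothing causal has been said.
    return label, clusters[label] - observed_behaviors

print(predict({"takes_risks", "makes_quick_decisions"}))
# ('type_A', {'interrupts_meetings'})
```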

Now, it's not against the rules of causality for things to cause each other; that's what control theory is all about (play with negative feedback loops here!). But it doesn't work with predictive categories[^1]. Knowing what personality is, we can expand "Your personality causes the regularities in your behavior" to "The regularities in your behavior cause the regularities in your behavior." There is no causal/explanatory content. At best, it's a tautology that doesn't tell you anything.

This is the difference between personality and climate. Both are predictive categories, but with climate we had a decent understanding of what variables we might need to alter to produce a "desert" pattern or a "rain-forest" pattern. How the hell would you change someone from an ENTP pattern to an ISFJ pattern? Even ignoring the difficulties of invasive brain surgery, I don't think anyone has any idea on how you would reshape the guts of a human mind to change it to another personality cluster.

Thinking of personality as a causal node will lead you to believe you have an understanding that you don't have. Since you're already mistaking a predictive model for a causal one, you might even build a theory of intervention where you can fiddle with downstream behavior to change the predictive category (we'll explore this sort of thinking more in later posts).

To recap: if you treat a predictive category as a causal variable, you have a good chance of misleading yourself about your ability to:

  • Intervene on said variable in isolation from the rest of the system.
  • Perform an intervention that shifts you to another predictive category cluster.

Humans and Essences

Finally we circle back to essences. You can probably already put together the pieces. Thinking with essences is basically trying to use predictive categories as causal nodes which are the source of all of an entity's behavior. This can work fine for predictive purposes, but leads to mishaps when thinking causally.

Why can it be so easy to think in terms of essences? Here's my theory. As already noted, our brains are doing causal learning all the time. The more "guts" of a system you are exposed to, the easier it is to learn true causal relationships. In cases where the guts are hidden and you only interact with a system as a black-box (can't peer into people's minds), you have to rely on other faculties. Your mind is still great at pattern recognition, and predictive categories get used a lot more.

Now all that needs to happen is for you to conflate this cognition you use to predict with cognition that represents a causal model. Eliezer describes it in "Say Not 'Complexity'":

In an eyeblink it happens: putting a non-controlling causal node behind something mysterious, a causal node that feels like an explanation but isn’t. The mistake takes place below the level of words. It requires no special character flaw; it is how human beings think by default, how they have thought since the ancient times.

An important additional point is to address why this easy-to-make mistake doesn't get corrected (I make mistakes in arithmetic all the time, but I fix them). The key piece of this not getting corrected is the inaccessibility of the guts of the system. When you think of the essences of people's personalities, you don't get to see inside their heads. When Aristotle posited the "essence of trees" he didn't have the tools to look into the tree's cells. People can do good causal reasoning, but when the guts are hidden and you've got no way to intervene on them, you can posit crazy incorrect causal relationships all day and never get corrected by your experience.

Quick Summary

  • Properties of a system can be encoded implicitly instead of explicitly.
  • The more a property is the result of complex interactions within a system, the more likely it is to be a predictive category instead of a useful causal variable.
  • When you treat a PC as a CV, you invite yourself to overestimate the ease of intervening on the variable in isolation from the rest of the system, and to feel like you know how to coherently alter the value of the variable even when you don't.
  • The less exposed you are to the guts of a system, the easier it is to treat a predictive model as a causal one and never get corrected.

[^1] The category is a feature of your mind. For it to exert a causal influence on the original system, it would have to be through the fact that your use of this category caused you to act on the system in a certain way. When might you see that happen?

Comments (6)

I don't think anyone has any idea on how you would reshape the guts of a human mind to change it to another personality cluster.

Actors do. I have an actor friend who thinks the socionics people were onto something. Their work is based on the more complex model of personality that Jung abandoned because people kept doing dumb things with it due to language confusions, culminating in the famous Myers-Briggs.

Nice. There's something about essence thinking that, in my experience, is quite sticky. There are many layers to it, and it's a life's work to keep pulling back the layers to look at the guts underneath. Often the surest sign there's more essence thinking lurking is when one is certain one's ripped out all the essences and blown them apart.

Thanks for this useful reminder to always keep digging!

Mir:

This is really good. Summary of what I learned:

  • Assume the ravens call a particular pattern iff it rains the next day.
    • In other words, P(rain tomorrow | raven's call) ≈ 1, so observing the raven's call is strong evidence you ought to have an umbrella ready for tomorrow.
    • "Raven's call" is therefore a very good predictive variable.
  • But bribing the ravens to hush still might not have any effect on whether it actually rains tomorrow.
    • It's therefore a very bad causal variable.
  • It could even be the case that, up until now, it has never rained without the ravens calling, and intervening on the variable could still be fruitless if nobody has ever done that before.
  • For systems you have 100% accurate and 100% complete predictive maps of, you may still have a terrible causal map with respect to what happens if you try to intervene in ways that take the state of the system out of the distribution you've been mapping it in (a simulation of this is sketched below).
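
A minimal simulation of the raven setup (all numbers invented) to make the observation/intervention gap concrete:

```python
import random

def day(silence_ravens=False):
    rain_tomorrow = random.random() < 0.3                # rain happens on its own
    ravens_call = rain_tomorrow and not silence_ravens   # call iff rain, unless bribed
    return ravens_call, rain_tomorrow

# Observation: the call is a (near-)perfect predictor of rain.
obs = [day() for _ in range(10_000)]
p_rain_given_call = sum(r for c, r in obs if c) / max(1, sum(c for c, _ in obs))

# Intervention: bribing the ravens to hush changes nothing about the rain.
do = [day(silence_ravens=True) for _ in range(10_000)]
p_rain_under_do = sum(r for _, r in do) / len(do)

print(p_rain_given_call)  # ~1.0
print(p_rain_under_do)    # ~0.3
```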

How then do you build good causal maps?

  • I guess you can still improve your causal maps without trial-and-error (empirically testing interventions) if you just do predictive mapping of the system, focus in on the predictive power of the simplest variables, and do trial-and-error in your simulations. Or something.

I'm not sure how much I agree/disagree with this post. Potentially relevant: my defense of g here, not sure what you'd make of this.

But what would it mean to intervene on the climate node?

We know that no single factor controls the climate. "Desert" and "rain-forest" are just labels for types of regularities in a weather system. Since climate is an emergent feature, "intervening on climate" means intervening on a bunch of geographic variables.

I don't really agree that this is a problem for regarding the climate as a causal variable. Counterfactuals don't have to correspond to interventions you can physically do. Rather, they are thought experiments, "what would it be like if this was different?", which is something you can do regardless of whether you can intervene in an isolated sense or not.

Insofar as differences in long-term weather trends between locations all share common causes (which get lumped together under 'climate'), it seems to me that understanding the consequences of these weather trends could often benefit from abstracting them into an overall "climate" variable. Of course sometimes, such as when you are trying to understand how exactly these trends appear, it may be useful to disaggregate them (as long as you can do so accurately! an overly aggregated true model is better than a highly disaggregated but highly misleading model). That said, I'm not familiar enough with climate to say much about when it is or is not useful to lump it like this.

A problem we already spotted: thinking of a predictive category (like climate) as a causal variable can lead you to think that you can intervene on climate in isolation from the rest of the system.

I think this is a problem that happens with causality more generally. For instance, consider the appraisal theory of emotion, that how you feel about things is a result of how you cognitively think about them. So for instance, if you feel fear, that is a result of appraising some situation as probably dangerous.

This theory seems true to me. It makes sense a priori. And to give an example, I was once flying on a plane with someone who was afraid of flying. During the flight, the wings of the plane wobbled a bit, and when he saw that he got very afraid that they were going to break and we were going to crash. (Before that, he hadn't been super afraid; he was relatively calm.) This seemed to be a case of appraisal: looking at the wobbly wings, feeling like they weren't strong enough and therefore might break, and that this might lead to the plane crashing.

So suppose we buy the appraisal theory of emotion. Apparently I've heard that this has led to therapies where after disasters have struck and e.g. killed someone's family, the person's therapists have suggested that the person should try to reframe their family's death as something positive in order to improve their mood, which is obviously going to lead to that person feeling that the therapist is crazy/condescending/evil/delusion-encouraging/???. This doesn't mean that the original causal theory of emotions is wrong, it just means that sometimes you cannot or should not directly intervene on certain variables.

But there's an even deeper problem. Think back to personality types. It's probably not the case that there's an easily isolated "personality" variable in humans. But it is possible for behavior to have regularities that fall into similar clusters, allowing for "personality types" to have predictive power. Focus on what's happening here. When you judge a person's personality, you observe their behavior and make predictions of future behavior. When you take a personality quiz, you tell the quiz how you behave and it tells you how you will continue to behave. The decision flow in your head looks something like this (but with more behavior variables):

[image]

All that's happening is you predict behavior you've already seen, and other behavior that has been known to be in the same "cluster" as the behavior you've already seen. This model is a valid predictive model (results will vary based on how good your pattern recognition is) but gives weird causal answers. What causes your behavior? Your personality. What causes your personality? Your behavior.

I have a lot of issues with personality tests (e.g. they have high degrees of measurement error, the items are overly abstracted, nobody has managed to create an objective behavioral personality test despite many people trying, the heritability results from molecular genetic studies are in strong contradiction to the results from twin studies, etc.), but I think this is the wrong way to see it.

There's a distinction between the decision flow in your head and the causal model proposed to underlie it, because while causal influence only travels down arrows, correlations travel up and then down arrows. That is, if you have some sort of situation like:

behavior at time 1 <- personality -> behavior at time 2

Then this is going to imply a correlation between behavior at time 1 and behavior at time 2, and you can use this correlation to predict behavior at time 2 from behavior at time 1. Thus for the personality model, you don't need the arrows going into personality, only the arrows going out of personality.
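
As a minimal sketch of that fork (effect sizes invented): the shared latent factor induces a correlation between the two behaviors, which is all the prediction step needs.

```python
import random, statistics  # statistics.correlation needs Python 3.10+

# Fork: personality -> behavior_t1, personality -> behavior_t2.
# Effect sizes are invented; the point is only the induced correlation.
n = 10_000
personality = [random.gauss(0, 1) for _ in range(n)]
behavior_t1 = [p + random.gauss(0, 1) for p in personality]
behavior_t2 = [p + random.gauss(0, 1) for p in personality]

print(statistics.correlation(behavior_t1, behavior_t2))  # ~0.5, with no arrows going into personality
```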

Finally we circle back to essences. You can probably already put together the pieces. Thinking with essences is basically trying to use predictive categories as causal nodes which are the source of all of an entity's behavior. This can work fine for predictive purposes, but leads to mishaps when thinking causally.

I don't think this is a wrong thing to do in general.

Consider for instance the species of an organism; in ancient times, this would be considered an unobservable essence with observable effects, such as the physical form and dynamics of the organism. Today, we know the essence exists – namely DNA. DNA is a common underlying cause which determines the innate characteristics that are present in an organism.

In fact, I would argue that causal inference must start with postulating an essentialism, where some hidden unobservable variable causes our observations. After all, ultimately we only observe basic sense-data such as light hitting our eyes; the postulation that this sense-data is due to a physical world that generates it is completely isomorphic to other forms of essentialism that propose that correlations in variables are due to some underlying hidden essence. Without this essentialism, we would have to propose that correlations in sense-data over time are due to the earlier sense-data directly influencing later sense-data, which seems inaccurate. So I'd say that in a way, essentialism is the opposite of solipsism.

More generally, I think essentialism is a useful belief whenever one doesn't think that one is observing all the relevant factors.

It's not clear to me what if anything we disagree on.

I agree that personality categories are useful for predicting someone's behavior across time.

I don't think using essences to make predictions is the "wrong thing to do in general" either.

I agree climate can be a useful predictive category for thinking about a region. 

My point about taking the wrong thing as a causal variable "leading you to overestimate your ability to make precise causal interventions" is actually very relevant to Duncan's recent post. Many thought experiments are misleading/bogus/don't-do-what-they-say-on-label exactly because they posit impossible interventions.

If I had to pick a core point of disagreement, it would be something like:

I believe that if you have a bunch of different variables that are correlated with each other, then those correlations are probably because they share causes. And it is coherent to form a new variable by adding together these shared causes, and to claim that this new variable is an underlying factor which influences the bunch of different variables, especially when the shared causes influence the variables in a sufficiently uniform way. Further, to a good approximation, this synthetic aggregate variable can be measured simply by taking an average of the original bunch of correlated variables, because that makes their shared variance add up and their unique variances cancel out. This holds even if one cannot meaningfully intervene on any of this.

I have varying levels of confidence in the above, depending on the exact context, set of variables, deductions one wants to make on the basis of common cause, etc., but it seems to me like your post is overall arguing against this sentiment while I would tend to argue in favor of this sentiment.
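
For what it's worth, here is a minimal one-factor sketch of the averaging claim in the quoted belief (all numbers invented): several noisy indicators share one latent cause, and their mean tracks that cause much better than any single indicator, because the unique noise averages out.

```python
import random, statistics  # statistics.correlation needs Python 3.10+

# One latent factor, several noisy indicators of it (numbers invented).
n, k = 10_000, 10
latent = [random.gauss(0, 1) for _ in range(n)]
indicators = [[x + random.gauss(0, 1) for x in latent] for _ in range(k)]
composite = [sum(vals) / k for vals in zip(*indicators)]

print(statistics.correlation(latent, indicators[0]))  # ~0.71: one indicator alone
print(statistics.correlation(latent, composite))      # ~0.95: shared variance adds up, unique noise cancels
```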