As a general rule, street maps of New York City do not form spontaneously - they involve some cause-and-effect process which takes in data from the territory (NYC’s streets) and produces the map from that data. Let’s call these “cartographic processes”: causal processes which produce a map from some territory.

Formalizing a bit:

  • We have a territory and a map. I’m mostly interested in the case where both of these are causal models (possibly with symmetry), but other models are certainly possible.
  • Both the territory and the map are embedded in a larger causal model, the cartographic process, in which the map is generated from the territory. Note that the “territory” may be the entire cartographic process, including the map.
  • There is some class of queries on the territory which can be “translated” into queries on the map, yielding answers which reliably predict the answers to the corresponding territory-queries - this is what it means for the map to “match” the territory. I’m mostly interested in counterfactual queries on the map, and some preimage of those queries in the territory.

In our NYC streetmap example, the physical streets are the territory, the paper with lines on it is the map, the cartographic process encompasses the map and territory and all the people and equipment and computations which produced the map from the territory, and the class of queries includes things like distance and street connectivity. Note that, in this example, neither the territory nor the map is a causal model, although the cartographic process is a causal model. In general, the cartographic process itself will always be a causal model - accurate maps do not form spontaneously, there is always a cause-and-effect process which creates the map. Part of the reason I’m specifically interested in causal models for the map and territory is because I ultimately want maps of cartographic processes themselves.
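
To pin down what "translated" and "reliably predict" mean here, a minimal sketch may help. It assumes we can model both the map and the territory as plain query-answering functions; every name and number below is hypothetical, chosen only to make the structure concrete:

    # A minimal sketch of the map/territory/query-translation setup, with all
    # names and numbers assumed purely for illustration.

    def map_matches_territory(map_answer, territory_answer, translate,
                              map_queries, tolerance=0.0):
        """Check that, for every query in the class, answering it on the map
        reliably predicts the answer to the corresponding territory-query."""
        return all(
            abs(map_answer(q) - territory_answer(translate(q))) <= tolerance
            for q in map_queries
        )

    # Toy example: distances on a paper street map vs. distances on the ground.
    scale = 1000.0                                               # assumed map scale
    paper_distances = {("A", "B"): 2.0, ("B", "C"): 3.1}         # cm on the map
    ground_distances = {("A", "B"): 2050.0, ("B", "C"): 3080.0}  # cm on the ground

    map_answer = lambda q: paper_distances[q] * scale
    territory_answer = lambda q: ground_distances[q]
    translate = lambda q: q   # here map-queries and territory-queries share names

    print(map_matches_territory(map_answer, territory_answer, translate,
                                paper_distances.keys(), tolerance=100.0))  # True

In these terms, the questions below ask which query classes, translations, and tolerances a given cartographic process can support.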

For purposes of embedded agency via abstraction, we want to answer questions like:

  • Given a cartographic process, characterize the queries which the map can reliably answer
  • Given a cartographic process, translate queries on the map into queries on the territory, and vice-versa
  • Given a class of queries and territories, construct a cartographic process whose output map reliably answers the queries on the territories
  • Given multiple cartographic processes on the same territory, integrate their output maps, i.e. translate queries between the maps

More generally, since we’re talking about processes which make maps “match” their territories, we’d like to know what role probabilistic models play. It seems like it should be possible to talk about that role in map-territory correspondence without introducing limiting behavior (i.e. frequentism) or Cartesian agents (i.e. Bayesianism). I suspect that there is some inherently embedded interpretation of probability which would make answers to many of our questions obvious. But that’s still speculation; I do not yet know what such an interpretation might be.

The rest of this post will build up to an informal conjecture on the correspondence between counterfactual queries on causal maps and causal territories. First, though, we’ll take a brief detour to talk about controllers.

Controller <-> Cartographer Duality

A simple example of a cartographic process is a digital thermometer in a room. The temperature readout is the map, the average kinetic energy of the air molecules is the territory, and the interactions between air molecules, electrons, various circuit components, and ultimately the LCD display together comprise the cartographic process.

More surprisingly, we can also model a thermostat as a cartographic process: it’s like the thermometer’s cartographic process, but with map and territory switched. For a thermostat, the digital readout is the “territory”, and the average kinetic energy of the air molecules in the room is the “map”. The cartographic process consists of the temperature sensor, feedback control circuitry, heater/air conditioner, and vents - all of which act together to make the “map” match the “territory”, i.e. to make the room temperature match the temperature setting on the thermostat’s interface.

This suggests a general principle: take a control process, call the controller’s target point a “territory” and the environment a “map”, and the control process looks like a cartographic process.
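
As a toy sketch of this duality (all dynamics and numbers below are assumptions for illustration, not claims about real thermometers or thermostats): one update drives a thermometer readout toward the room temperature, i.e. map toward territory, while the other drives the room temperature toward the thermostat's setpoint, i.e. "map" toward "territory".

    # Toy simulation: the same shape of process running in both directions.
    def run(steps=500):
        room_temp, readout = 10.0, 0.0    # room state and thermometer display
        setpoint, outside = 20.0, 5.0     # thermostat setting, outdoor temperature
        for _ in range(steps):
            # Thermometer: drive the map (readout) toward the territory (room temp).
            readout += 0.5 * (room_temp - readout)
            # Thermostat: drive the "map" (room temp) toward the "territory" (setpoint).
            heater = 0.3 if room_temp < setpoint else 0.0
            room_temp += heater - 0.01 * (room_temp - outside)
        return setpoint, room_temp, readout

    setpoint, room_temp, readout = run()
    print(setpoint, round(room_temp, 1), round(readout, 1))
    # room_temp ends up near the setpoint, and readout ends up near room_temp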

(Thank you to romeostevensit for hashing out the controller-cartographer duality idea with me at MSFP 2019.)

Abstract Causality

The first challenge of abstract causality - i.e. causal maps of causal territories - is to translate counterfactuals on the map into counterfactuals on the territory, in such a way that the causal structure “works” - i.e. the causal structure of the map is implied by the causal structure of the territory. This should be straightforward to formalize, but for now we’re just going to run with the idea intuitively. (As Wheeler put it: “never make a calculation until you know the answer.”)

The obvious way for causal structure on the map to correspond to causal structure on the territory is coarse-graining: take a set of nodes whose combined parents/ancestors (mostly) don’t overlap with their combined children/descendants, and combine them into a single node. The thermometer in a room is an example: the kinetic energy of all the different air molecules mostly comes from the same places (walls, other air molecules) over time, so we can glom all those air molecules together into a single abstract “gas” with a “temperature”, then talk about how things affect the aggregate temperature rather than how things affect the individual molecules.
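
Here is a rough sketch of that coarse-graining operation on a causal DAG, using the strict version of the "(mostly) don't overlap" condition. The graph representation (a dict from each node to its set of parents) and all node names are just illustrative choices:

    def ancestors(graph, node, acc=None):
        """All ancestors of `node` in a DAG given as {node: set_of_parents}."""
        acc = set() if acc is None else acc
        for parent in graph.get(node, set()):
            if parent not in acc:
                acc.add(parent)
                ancestors(graph, parent, acc)
        return acc

    def coarse_grain(graph, group, name):
        """Merge the nodes in `group` into a single node `name`, provided the
        group's outside ancestors and outside descendants don't overlap."""
        group = set(group)
        anc = set().union(*(ancestors(graph, n) for n in group)) - group
        desc = {n for n in graph if n not in group and group & ancestors(graph, n)}
        assert not (anc & desc), "combined ancestors and descendants overlap"
        new = {}
        for node, parents in graph.items():
            if node not in group:
                new[node] = {name if p in group else p for p in parents}
        # The merged node inherits the group's parents from outside the group.
        new[name] = set().union(*(graph[n] for n in group)) - group
        return new

    # Toy example in the spirit of the thermometer: glom two "molecule" nodes
    # together into one aggregate "gas" node.
    graph = {
        "wall": set(),
        "molecule_1": {"wall"},
        "molecule_2": {"wall"},
        "thermometer": {"molecule_1", "molecule_2"},
    }
    print(coarse_grain(graph, {"molecule_1", "molecule_2"}, "gas"))
    # {'wall': set(), 'thermometer': {'gas'}, 'gas': {'wall'}}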

But coarse-graining is not the only way for causal structure to line up. As a counterexample, consider a thermostat. At the abstract level, a great causal model for a thermostat is (temperature setting on the interface) -> (kinetic energy of air molecules). But in the physical world, there’s a complicated feedback controller in the middle, with causal arrows going back-and-forth (though always forward in time). The net effect of that complicated controller is that we can replace it with a simple causal arrow in an abstract map.

More generally, cartographic processes make the world behave-as-if there were a simple causal arrow (territory) -> (map), even when the underlying causality is more complicated.
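
A toy way to see the behave-as-if claim, again with all dynamics assumed for illustration: intervening on the setpoint in a detailed feedback model yields roughly the same counterfactual answers as a one-arrow abstract model (setpoint) -> (room temperature).

    def detailed_model(setpoint, steps=500):
        """Detailed territory: sensor -> controller -> heater -> room -> sensor ..."""
        room_temp, outside = 10.0, 5.0
        for _ in range(steps):
            heater = 0.3 if room_temp < setpoint else 0.0   # feedback controller
            room_temp += heater - 0.01 * (room_temp - outside)
        return room_temp

    def abstract_model(setpoint):
        """Abstract map: a single causal arrow (setpoint) -> (room temperature)."""
        return setpoint

    for counterfactual_setpoint in (18.0, 20.0, 24.0):
        print(round(detailed_model(counterfactual_setpoint), 1),
              abstract_model(counterfactual_setpoint))
    # Each line prints two nearly-equal numbers: the feedback loop answers the
    # counterfactual query about the same way the one-arrow model does.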

This suggests an informal conjecture: every correspondence between counterfactual structures on a map and a territory, for which the causal structure of the map is implied by the causal structure of the territory, can be described by some combination of coarse-graining and embedded cartographic processes. Intuitively, either the causal arrows line up already, or we need some kind of controller to make the system behave-as-if the causal arrows line up.

Comments

> or we need some kind of controller to make the system behave-as-if the causal arrows line up.

This seems like a toe-hold for thinking about counterfactuals, i.e. counterfactuals as recomputing over a causal graph with an arrow flipped or the coarse-graining bucketed differently.

Hmm. I might have a sense of where you're going, but the terminology is confusing to me. Nothing happens spontaneously; every future state happens because of the past state of the universe, so your intro makes very little sense to me. I think the distinction you're pointing to isn't spontaneous/caused, I think it's natural/artificial, or maybe automatic/planned, or maybe inevitable/intentional. In any case, it seems to be about human conscious decisions to create the map. I'm not sure why this doesn't apply to the human conscious decision to create the roads being mapped, but I suspect there's an element of objective/subjective in there or full-fidelity/simplified-model.

I'm also unsure if the "cartographic process" is the human intent to make a map/model, or the physical steps (measurements, update of display, etc.) that generate the map.



... I think I may have underestimated an inferential gap here.

I'm pointing to the same thing as Yudkowsky's Engines of Cognition essay: roughly speaking, the only way two things in the physical world have mutual information is if there's some kind of causal connection between them. In that essay, Yudkowsky is talking about this in the context of forming accurate beliefs. The main takeaway is that, in order for my beliefs to accurately reflect the territory, there has to be some sort of causal connection between the territory and my beliefs.

One particularly good example from Yudkowsky:

> It happens, in miniature, every time you look down at your shoes to see if your shoelaces are untied. Photons arrive from the Sun, bounce off your shoelaces, strike your retina, are transduced into neural firing frequencies, and are reconstructed by your visual cortex into an activation pattern that is strongly correlated with the current shape of your shoelaces. To gain new information about the territory, you have to interact with the territory. There has to be some real, physical process whereby your brain state ends up correlated to the state of the environment.

That's a cartographic process in action. It makes the map (belief regarding whether my shoe is tied) correlate with the territory (physical shoelace).

I'm trying to take that same idea, and formalize it in a way that makes sense without any human and without any "beliefs" - just one physical system which models another physical system. The idea is that, in order for one physical system to model another (i.e. in order for the results of queries on one system to predict the results of queries on another) there has to be some kind of causal connection between the two systems. That connection is what I'm calling a cartographic process - regardless of whether there's any human involved, or any intention.
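
As a toy illustration of that claim (every detail below is assumed): a "map" bit generated causally downstream of a "territory" bit, like the shoelace example, predicts the territory, while an independently generated bit carries no information about it.

    import random
    random.seed(0)

    territory = [random.randint(0, 1) for _ in range(10000)]       # shoelace tied?
    connected_map = [t if random.random() < 0.95 else 1 - t        # noisy causal channel
                     for t in territory]
    disconnected_map = [random.randint(0, 1) for _ in territory]   # no causal connection

    def accuracy(map_bits):
        return sum(m == t for m, t in zip(map_bits, territory)) / len(territory)

    print(accuracy(connected_map))     # ~0.95: map-queries predict territory-queries
    print(accuracy(disconnected_map))  # ~0.5: no better than chance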