Many people who find value in the Sequences do something which looks to me like adopting a virtue called "align your map to the territory." I was recently thinking about experimental results, and it got me thinking about how we don't really know what the territory is, other than the thing we look at to see if our maps are right. Everything we know is map. What we know consists of a variety of models that describe aspects of reality, and we have to treat them like reality to get anything done. It wasn't relevant to my post at the time, but it occurred to me that it doesn't really matter what reality is, because my values live at a higher level of abstraction, alongside my sense of self. Don't get me wrong, it matters that I know that reality exists. If reality says something different than my models do, then I need to change my models. However, I'm beginning to believe that reality has no importance or value beyond that. The stuff I care about happens to run on some hardware which I need to have decent models of if I want to take actions to protect that stuff, but I'm done once I understand the hardware well enough to protect the stuff. I wanted to write about that a bit, because I had internalized a way of thinking which says that everything is a model without explicitly thinking through the consequences.

That was an awkward set of statements to write because I am a particle physicist. I take joy in figuring out how matter works. I have developed a picture of reality which I very much did not evolve to intuit. This is a roundabout way to convey the sentiment behind the statement "quantum mechanics is weird" while also holding the idea in mind that it very much seems to be the way that the universe works on small scales, and so we don't want to think of it as a strange, alien thing to run away from as soon as possible.

Anyway, I believe that the electromagnetic interaction gives us the physical framework for everything we directly interact with in reality, and the field theory which best explains the electromagnetic interaction paints a picture of seas of virtual particles and other such things that do not directly affect the things I can do in the world with my own hands. There is nothing my brain can cause my fingers to do which cannot be explained by classical physics. I will not have a perfect model of my fingers if I ignore quantum mechanics. It is possible that a charged particle from a cosmic ray makes my finger spasm one day, but the normal working order of my finger doesn't need to be explained at a level that close to reality. Even saying "that close to reality" may be implying that quantum mechanics is more territory than it is. Quantum mechanics is map. It is very good map, and it even includes directions for deriving the classical limit map that predates it and that still works very well for anything I can do with my fingers, but quantum mechanics is map. There is still confusion in quantum mechanics, and the territory is not confused.
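One standard signpost for those "directions for deriving the classical limit" (a textbook result, nothing specific to this post) is Ehrenfest's theorem, which says that expectation values obey the classical equations of motion:

$$\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m}, \qquad \frac{d}{dt}\langle \hat{p} \rangle = -\left\langle \frac{\partial V}{\partial x} \right\rangle$$

For wavepackets that stay narrow compared to the scale on which the potential varies, the right-hand side is approximately the classical force at $\langle \hat{x} \rangle$, and Newton's second law falls out.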

All of this is to say that I am aware of maps which sure seem more aligned with the territory than the models I make of my surrounding furniture and walls in order to navigate my house, but nothing I care about lives in those models. Sometimes I have to pay attention to those "lower-level" models because they contain threats to things which I care about, but the things I care about are things like the people in my life and my record collection and my understanding of quantum mechanics. Yes, you need to have atoms before you can have vinyl. Yes, you need to develop humans by purely mechanical means before you can have human values. However, if it were possible to take all of the things I care about and upload them into a lower-resolution simulation of the environment that they depend on (the Planck constant is very small relative to the scales I directly care about), then I would be indifferent to the change as long as my "higher-level" abstractions still worked the same way. I don't need the specific underlying reality we have to be happy; I need the abstractions that I evolved to see myself in. Aligning my maps to the territory is a means to the end of tweaking the territory so that my maps include me standing on a pile of utility. I admit that I derive joy from the alignment itself, but I would derive that joy no matter what the territory looked like, as long as my neural structure were preserved at some level of abstraction.

27 comments

Agreed - we (and more generally, embedded agents) have no access to territory.  It's all maps; even our experiences are filtered through interpretation.  Territory is inferred as the thing that makes different maps (at different levels of abstraction or different parts of the universe) consistent with each other and across time. 

That said, some maps are very detailed, repeatable, and can support a lot of other maps.  I tend to think of those as "closer to the territory".  In colloquial discussion and informal thinking, I don't think there's much harm in pretending that the actual territory is the same as the fine-grained maps.  Not technically true - there are more levels of maps, and they asymptotically approach the territory without reaching it.  But close enough for a lot of things.

If I were going to go further with this idea, I'd even queer the map-territory dichotomy and recognize that the map-territory distinction can be illusory sometimes.

In what way? I find myself disagreeing vehemently, so I would appreciate an example.

Maps are territory in the sense that the territory is the substrate on which minds with maps run, but one of my main points here is that our experience is all map, and I don't think any human has ever had a map which remotely resembles the substrate on which we all run.

I'm making a general comment, but yes what I mean is that in some idealized cases, you can model the territory under consideration well enough to make the map-territory distinction illusory.

Of course, this requires a lot, lot more compute than we usually have.

I think I see what you're saying, let me try to restate it:

If the result you are predicting is coarse-grained enough, then there exist models which give a single prediction with probability so close to one that you might as well just take the model as truth.

Yes, and going in the other direction, if you had enough computing power, you could narrow the set of models down to one even for arbitrarily fine-grained predictions.

Couldn't one say that a model is not truly a model unless it's instantiated in some cognitive/computational representation, and therefore since quantum mechanics is computationally intractable, it is actually quite far from being a complete model of the world? This would change it from being a map vs. territory thing to being more of a big vs. precise Pareto frontier.

(Not sure if this is too tangential to what you're saying.)

This is tangential to what I'm saying, but it points at something that inspired me to write this post. Eliezer Yudkowsky says things like the universe is just quarks, and people say "ah, but this one detail of the quark model is wrong/incomplete" as if it changes his argument when it doesn't. His point, so far as I understand it, is that the universe runs on a single layer somewhere, and higher-level abstractions are useful to the extent that they reflect reality. Maybe you change your theories later so that you need to replace all of his "quark" and "quantum mechanics" words with something else, but the point still stands about the relationship between higher-level abstractions and reality.

I'm not sure I understand your objection, but I will write a response that addresses it. I suspect we are in agreement about many things. The point of my quantum mechanics model is not to model the world, it is to model the rules of reality which the world runs on. Quantum mechanics isn't computationally intractable, but modeling quantum mechanical systems at large scales is. That is a statement about the amount of compute we have, not about quantum mechanics. We have every reason to believe that if we simulated a spacetime background which ran on general relativity and threw a bunch of quarks and electrons into it which run on the standard model and start in a (somehow) known state of the Earth, Moon, and Sun, then we would end up with a simulation which gives a plausible world-line for Earth. The history would diverge from reality due to things we left out (some things rely on navigation by starlight, cosmic rays from beyond the solar system cause bit flips which affect history, asteroid collisions have notable effects on Earth, gravitational effects from other planets probably have some effect on the ocean, etc.) and we would have to either run every Everett branch or constantly keep only one of them at random and accept slight divergences due to that. In spite of that, the simulation should produce a totally plausible Earth, although people would wonder where all the stars went. There do not exist enough atoms on Earth to build a computer which could actually simulate that, but that isn't a weakness in the ability of the model to explain the base-level of reality.

This is tangential to what I'm saying, but it points at something that inspired me to write this post. Eliezer Yudkowsky says things like the universe is just quarks, and people say "ah, but this one detail of the quark model is wrong/incomplete" as if it changes his argument when it doesn't. His point, so far as I understand it, is that the universe runs on a single layer somewhere, and higher-level abstractions are useful to the extent that they reflect reality. Maybe you change your theories later so that you need to replace all of his "quark" and "quantum mechanics" words with something else, but the point still stands about the relationship between higher-level abstractions and reality.

My in-depth response to the rationalist-reductionist-empiricist worldview is Linear Diffusion of Sparse Lognormals. Though there are still some parts of it I need to write. The main objection I have here is that "single layer" is not so much the true rules of reality as the subset of rules that are unobjectionable because they apply everywhere and every time. It's like the minimal conceivable set of rules.

The point of my quantum mechanics model is not to model the world, it is to model the rules of reality which the world runs on.

I'd argue the practical rules of the world are determined not just by the idealized rules, but also by the big entities within the world. The simplest example is outer space; it acts as a negentropy source and is the reason we can assume that e.g. electrons go into the lowest orbitals (whereas if e.g. outer space were full of hydrogen, it would undergo fusion, bombard us with light, and turn the Earth into a plasma instead). More elaborate examples would be e.g. atmospheric oxygen, whose strong reactivity leads to a lot of chemical reactions, or even, e.g., how thinking of people as economic agents means that economic trade opportunities get exploited.

It's sort of conceivable that quantum mechanics describes the dynamics as a function of the big entities, but we only really have strong reasons to believe so with respect to the big entities we know about, rather than all big entities in general. (Maybe there are some entities that are sufficiently constant that they are ~impossible to observe.)

Quantum mechanics isn't computationally intractable, but modeling quantum mechanical systems at large scales is.

But in the context of your original post, everything you care about is large scale, and in particular the territory itself is large scale.

That is a statement about the amount of compute we have, not about quantum mechanics.

It's not a statement about quantum mechanics if you view quantum mechanics as a Platonic mathematical ideal, or if you use "quantum mechanics" to refer to the universe as it really is, but it is a statement about quantum mechanics if you view it as a collection of models that are actually used. Maybe we should have three different terms to distinguish the three?

I appreciate your link to your posts on Linear Diffusion of Sparse Lognormals. I'll take a look later. My responses to your other points are essentially reductionist arguments, so I suspect that's a crux.

That said, I'm using "quantum mechanics" to mean "some generalization of the standard model" in many places. In practice, the actual experimental predictions of the standard model are something like probability distributions over the starting and ending momentum states of particles before and after they interact at the same place at the same time, so I don't think you can actually run a raw standard model simulation of the solar system which makes sense at all. To make my argument more explicit, I think you could run a lattice simulation of the solar system far above the Planck scale and full of classical particles (with proper masses and proper charges under the standard model) which all interact via general relativity, so at each time slice you move each particle to a new lattice site based on its classical momentum and the gravitational field in the previous time slice. Then you run the standard model at each lattice site which has more than one particle on it: destroy all of the input particles, generate a new set of particles by sampling from the standard model's probabilistic predictions, and carry the sampled identities and momenta of the output particles into the next time slice. I might be making an obvious particle physics mistake, but modulo my own carelessness, almost all lattice sites would have nothing on them, many would have photons, some would have three quarks, fewer would have an electron on them, and some tiny, tiny fraction would have anything else. If you interpreted sets of sites containing the right number of up and down quarks as nucleons, interpreted those nucleons as atoms, used nearby electrons to recognize molecules, interpreted those molecules as objects or substances doing whatever they do at higher levels of abstraction, and sort of ignored anything else until it reached a stable state, then I think you would get a familiar world out of it if you had the utterly unobtainable computing power to do so.
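A minimal structural sketch of that time-step loop, just to make the bookkeeping concrete (a toy, not a physically faithful scheme; `lattice.advance`, `gravity_field`, and `sample_interaction` are invented placeholders standing in for the classical transport rule and the standard-model interaction sampling):

```python
from collections import defaultdict

def step(particles, dt, lattice, gravity_field, sample_interaction):
    """One time slice of the toy lattice scheme sketched above.

    particles: list of dicts with 'site', 'momentum', and 'species' keys.
    lattice.advance: placeholder mapping (site, momentum, dt) to a new site.
    gravity_field: placeholder returning the classical gravitational force
        on a particle at a given site, from the previous time slice.
    sample_interaction: placeholder that consumes all particles on a site
        and returns an outgoing set sampled from interaction probabilities
        (standing in for the standard model's predictions).
    """
    # 1. Classical transport: update each particle's momentum from gravity
    #    and move it to a new lattice site based on that momentum.
    occupancy = defaultdict(list)
    for p in particles:
        p["momentum"] = p["momentum"] + gravity_field(p["site"]) * dt
        p["site"] = lattice.advance(p["site"], p["momentum"], dt)
        occupancy[p["site"]].append(p)

    # 2. Local interactions: wherever more than one particle shares a site,
    #    replace the incoming set with a sampled outgoing set; otherwise the
    #    particle simply carries over to the next slice.
    next_particles = []
    for site, locals_ in occupancy.items():
        if len(locals_) > 1:
            next_particles.extend(sample_interaction(site, locals_))
        else:
            next_particles.extend(locals_)
    return next_particles
```

Interpreting clusters of the resulting particles as nucleons, atoms, and molecules would then be a separate, higher-level pass over the simulation state.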

That said, I'm using "quantum mechanics" to mean "some generalization of the standard model" in many places.

I think this still has the ambiguity that I am complaining about.

As an analogy, consider the distinction between:

  • Some population of rabbits that is growing over time due to reproduction
  • The Fibonacci sequence as a model of the growth dynamics of this population
  • A computer program computing or mathematician deriving the numbers in or properties of this sequence

The first item in this list is meant to be analogous to quantum mechanics qua the universe, as in it is some real-world entity that one might hypothesize acts according to certain rules, but exists regardless. The second is a Platonic mathematical object that one might hypothesize matches the rules of the real-world entity. And the third covers actual instantiations of this Platonic mathematical object in reality. I would maybe call these "the territory", "the hypothetical map" and "the actual map", respectively.
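To make the third item concrete (a minimal sketch; the function and its docstring are mine, purely illustrative), one "actual map" for the Fibonacci example might look like:

```python
def fibonacci(n: int) -> int:
    """An 'actual map': one concrete instantiation of the recurrence
    F(n) = F(n-1) + F(n-2), which is itself only a hypothetical map of
    whatever the real rabbit population is doing."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```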

In practice, the actual experimental predictions of the standard model are something like probability distributions over the starting and ending momentum states of particles before and after they interact at the same place at the same time, so I don't think you can actually run a raw standard model simulation of the solar system which makes sense at all. To make my argument more explicit, I think you could run a lattice simulation of the solar system far above the Planck scale and full of classical particles (with proper masses and proper charges under the standard model) which all interact via general relativity, so at each time slice you move each particle to a new lattice site based on its classical momentum and the gravitational field in the previous time slice. Then you run the standard model at each lattice site which has more than one particle on it: destroy all of the input particles, generate a new set of particles by sampling from the standard model's probabilistic predictions, and carry the sampled identities and momenta of the output particles into the next time slice. I might be making an obvious particle physics mistake, but modulo my own carelessness, almost all lattice sites would have nothing on them, many would have photons, some would have three quarks, fewer would have an electron on them, and some tiny, tiny fraction would have anything else. If you interpreted sets of sites containing the right number of up and down quarks as nucleons, interpreted those nucleons as atoms, used nearby electrons to recognize molecules, interpreted those molecules as objects or substances doing whatever they do at higher levels of abstraction, and sort of ignored anything else until it reached a stable state, then I think you would get a familiar world out of it if you had the utterly unobtainable computing power to do so.

Wouldn't this fail for metals, quantum computing, the double slit experiment, etc.? By switching back and forth between quantum and classical, it seems like you forbid any superpositions/entanglement/etc. on a scale larger than your classical lattice size. The standard LessWrongian approach is to just bite the bullet on the many worlds interpretation (which I have some philosophical quibbles with, but those quibbles aren't so relevant to this discussion, I think, so I'm willing to grant the many worlds interpretation if you want).

Anyway, more to the point, this clearly cannot be done with the actual map, and the hypothetical map does not actually exist, so my position is that while this may help one understand the notion that there is a rule that perfectly constrains the world, the thought experiment does not actually work out.

Somewhat adjacently, your approach to this is reductionistic, viewing large entities as being composed of unfathomably many small entities. As part of LDSL I'm trying to wean myself off of reductionism, and instead take large entities to be more fundamental, and treat small entities as something that the large entities can be broken up into.

The simulation is not reality, so it can have hidden variables; it just can't simulate in-system observers knowing about the hidden variables. I think quantum mechanics experiments should still have the same observed results within the system as long as you use the right probability distributions over on-site interactions. You could track Everett branches if you want to have many possible worlds, but the idea is just to get one plausible world, so it's not relevant to the thought experiment.

The point is that I have every reason to believe that a single-level ruleset could produce a map which all of our other maps could align with to the same degree as the actual territory. I agree that my approach is reductionist. I'm not ready to comment on LDSL.

Are you alluding to the final passage of Wittgenstein's Tractatus: "Whereof one cannot speak, thereof one must remain silent"?

You can only get from the premise "we can only know our own maps" to the conclusion "we can only care about our own maps" via the minor premise "you can only care about what you fully understand". That premise is clearly wrong: one can care about unknown reality, just as one can care about the result of a football match that hasn't happened yet. A lot of people do care about reality directionally.

@Dagon

Embedded agents are in the territory. How helpful that is depends on the territory.

@Noosphere89

you can model the territory under consideration well enough to make the map-territory distinction illusory.

Well, no. A perfect map is still a map. The map-territory distinction does not lie in imperfect representation alone.

you can only care about what you fully understand

I think I need an operational definition of "care about" to process this.  Presumably, you can care about anything you can imagine, whether you perceive it or not, whether it exists or not, whether it corresponds to other maps or not.  Caring about something does not make it territory.  It's just another map.

Embedded agents are in the territory.

Kind of.  Identification of agency is map, not territory.  Processing within an agent happens (presumably) in a territory, but the higher-level modeling and output of that processing is purely about maps.  The agent is a subset of the territory, but doesn't have access at the agent level to the territory.

you can only care about what you fully understand

I think I need an operational definition of “care about” to process this

If you define "care about" as "put resources into trying to achieve" , there's plenty of evidence that people care about things that can't fully define, and don't fully understand, not least the truth-seeking that happens here.

Do you want joy, or to know what things are out there? It's a fundamental question about justifications: do you use joy to keep yourself going while you gain understanding, or do you gain understanding to get some high-quality joy?

That sounds like two different kinds of creatures in the transhumanist limit: some trade knowledge for joy, others trade joy for knowledge.

Or whatever, not necessarily "understanding"; you can bind yourself to other properties of your territory. In terms of maps, it's a preference for good correspondence, and a preference for not spoofing that preference.

From the inside, it feels like I want to know what's going on as a terminal value. I have often compared my desire to study physics to my desire to understand how computers work. I was never satisfied by the "it's just ones and zeros" explanation, which is not incorrect, but also doesn't help me understand why this object is able to turn code into programs. I needed examples of how you can build logic gates into adders, and how the tiers of abstraction go from adders to CPU instructions to compilers to applications, and I had a nagging confusion about using computers for years until I understood that chain at least a little bit. There is a satisfaction which comes with the dissolution of that nagging confusion which I refer to as joy.
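As a toy illustration of that "logic gates into adders" rung (my own minimal example, not taken from anywhere in this thread):

```python
def AND(a, b): return a & b
def OR(a, b): return a | b
def NOT(a): return 1 - a

def XOR(a, b):
    # XOR composed entirely from the primitive gates above.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # One-bit half adder: returns (sum bit, carry bit).
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    # Chain two half adders; ripple these across bit positions and you have
    # the multi-bit adders that CPU arithmetic instructions are built on.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)
```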

There's a lot to complain about when it comes to public education in the United States, but I at least felt like I got a good set of abstractions with which to explain my existence, which was a chain that went roughly Newtonian mechanics on top of organs on top of cells on top of proteins on top of DNA on top of chemistry on top of electromagnetism and quantum mechanics, the latter of which wasn't explained at all. I studied physics in college, and the only things I got out of it were a new toolset and an intuitive understanding for how magnets work. In graduate school, I actually completed the chain of atoms on top of standard model on top of field theory on top of quantum mechanics in a way that felt satisfying. Now I have a few hanging threads, which include that I understand how matter is built out of fields on top of spacetime, but I don't understand what spacetime actually is, and also the universe is full of dark matter which I don't have an explanation for.

How about geology, ecology and history? It seems like you are focused on mechanisms rather than contents.

Or in the words of Sean Carroll's Poetic Naturalism:

  1. There are many ways of talking about the world.
  2. All good ways of talking must be consistent with one another and with the world.
  3. Our purposes in the moment determine the best way of talking.

A "way of talking" is a map, and "the world" is the territory.