Today's post, Reductionism, was originally published on 16 March 2008. A summary (taken from the LW wiki):

 

We build models of the universe that have many different levels of description. But so far as anyone has been able to determine, the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Qualitatively Confused, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

It's not that the laws of physics themselves use different descriptions at different levels

I think cases could be constructed in which they do. Suppose we live in a world whose basic physics is a cellular automaton, Conway's Game of Life, for example. Then there would be the standard laws, specified at the level of individual cells. At the same time, there would be laws defined at the level of higher-level structures - gliders, etc. If the initial configuration were chosen carefully, these laws could be completely sufficient to fully describe the evolution of the world. Then there would be no reason to suppose that the "basic" (low-level) description is any more fundamental than the "emergent" (high-level) one. In fact, it can be argued that the reverse is true, if the low-level configuration was specifically chosen to make the high-level one possible.
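
To make the scenario concrete, here is a minimal sketch (my own Python illustration, not part of the original comment) of the two levels of description: the low-level cell-update rule of Conway's Game of Life, and the high-level law "this glider moves one cell diagonally every four generations." The check at the end confirms that the high-level law agrees with the low-level evolution, at least for this configuration.

```python
from collections import Counter

def step(live):
    """Low-level law: one generation of Life on an unbounded grid.
    `live` is a set of (row, col) pairs for the live cells."""
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider, one of the "higher-level structures" mentioned above.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

state = glider
for _ in range(4):                      # four low-level updates
    state = step(state)

# High-level law: the glider reappears shifted one cell down and right.
assert state == {(r + 1, c + 1) for (r, c) in glider}
print("high-level glider law matches the low-level evolution")
```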

I see an argument, probably best expressed here, that there could be more than one territory.

The map is not the terrain, but maybe the map for level 1 is the terrain for level 2.

What exactly does this mean? In some sense it is true: "(approximation of approximation) of reality" is "approximation of (approximation of reality)". You can build a high-level map based on a low-level map; it will not be perfect, but since no map is perfect, this should not be a problem.

But the difference between "map" and "territory" is more than just "less precise" vs "more precise". The territory is, by definition, always correct. Maps are imprecise, but they can be improved (usually at the cost of greater complexity). By making better maps we can make our predictions of experiments more precise. And the experiments happen on the reality level, not on the lower-map level.

So if we have a territory and a map, we can improve our predictions by improving the map. But if we have a territory and a map1, and a map2 which is based on map1, there is some difference between the territory and map1... and no matter how much we improve map2 (if we only improve map2 with regard to map1, not the territory), we cannot fix this difference. Even if map2 modeled map1 perfectly, our predictions would still sometimes fail experimentally if map1 is imperfect.
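
As a toy numerical illustration of that last point (my own sketch, not part of the comment; the functions are arbitrary stand-ins): the "territory" is sin(x), map1 is a fixed imperfect approximation of it, and map2 is fit only to map1. Making map2 agree better and better with map1 drives the map2-vs-map1 error to zero, but the map2-vs-territory error stalls at the floor set by map1.

```python
import numpy as np

xs = np.linspace(0.0, 2.0, 201)          # probe points for the "experiments"

territory = np.sin(xs)                   # stand-in for reality
map1 = xs - xs**3 / 6                    # imperfect map: sin(x) ~ x - x^3/6

for degree in (1, 2, 3, 5):
    # map2 is a polynomial fit to map1 only; the territory is never consulted
    coeffs = np.polyfit(xs, map1, degree)
    map2 = np.polyval(coeffs, xs)
    print(f"degree {degree}: "
          f"|map2 - map1| = {np.max(np.abs(map2 - map1)):.4f}, "
          f"|map2 - territory| = {np.max(np.abs(map2 - territory)):.4f}")

print(f"floor set by map1: |map1 - territory| = "
      f"{np.max(np.abs(map1 - territory)):.4f}")
```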

If we taboo the words, then "map" is a mental model, and "territory" is a hypothetical cause of all experimental results. (The only way a map could become a territory is if we do thought experiments; but if these contradict the real experiments, then they are wrong.)

For example, we could have a molecule-level model of the world, and an organism-level model of the world. We can explain a lot of how organisms function by how their cells chemically interact, etc. But there still remains the experimental fact that radioactivity can kill living organisms. Radioactivity is a part of the territory that is not included in the molecular level, but can leak to a higher level. This is why it would be wrong to call the molecule-level map a territory.

[META] Is there some way we could automatically put links to these threads on the sequence posts? It would be helpful for people going through them in the future as most of the discussion is out of date.

Only if there is actual substantive discussion in the rerun post, which is false for most posts.

So is the 747 made of something other than quarks? No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747. The map is not the territory.

It seems to me that reductionism as advocated by Eliezer is a poor-quality model. It only works one way: you can potentially trace the 747 all the way down to its subatomic particles, but you cannot construct a particular configuration of subatomic particles that will fly, rather than bark or crawl, using just the laws of QFT. Thus it has no testable predictions (that everything we can see or touch consists of quarks, leptons and gauge bosons is not a prediction, but an observation), just like his other favorite myth, the MWI. He dislikes the term "emergence", and he is entitled to his emotions, but sad experience shows that nuclear physics is no help in psychology, not even after you say "reductionism" three times.

He dislikes the term "emergence", and he is entitled to his emotions, but sad experience shows that nuclear physics is no help in psychology, not even after you say "reductionism" three times.

/looks up from reading a study using positron-emission-from-nuclear-isotopes tracing in the brain

I'm sorry, were you saying something?

I think the point was that modeling the brain with enough precision that nuclear physics would have to be taken into account would be intractable, and likely not very useful anyway.

But you got the snark thing down, so upvotes for everyone.

you cannot construct a particular configuration of subatomic particles that will fly, rather than bark or crawl, using just the laws of QFT.

Saying "you cannot" is awfully gutsy. Are you familiar with renormalization groups? They're a mathematical tool for getting high-level laws out of low-level laws. The only theoretically unsolved problem I know of between QFT and predicting planes is the prediction of a periodic solid as the ground state of your structural metal, though there are probably a few more.

Saying "you cannot" is awfully gutsy.

True, let's say that I'd bet my house on it not being done in my lifetime.

Are you familiar with renormalization groups?

Yes, in the HEP context.

They're a mathematical tool for getting high-level laws out of low-level laws.

Not quite. You can use renormalization to help explain some of what you observe at lower energies from an HE model point of view. I have yet to see an RG prediction of a new low-energy effect, though I suppose it might happen for one-level-up problems, but not for any kind of multi-level jumps (do you seriously think that one can potentially renormalize quarks to cognitive biases?).

The only theoretically unsolved problem I know of between QFT and predicting planes is the prediction of a periodic solid as the ground state of your structural metal, though there are probably a few more.

You are confusing predicting with explaining.

I can't figure out what you're trying to say. Are you saying:

(1) QFT is inherently incapable of explaining the aerodynamics of rigid macroscopic bodies, ever

(2) QFT can do that in principle, but in practice we can't yet justify some of the intermediate steps

(3) QFT can do that in principle, but in practice it's pointless because the higher-level theories already tell you everything about the higher levels

(4) something else?

As I said, explaining != predicting.

Is that my option 3? ETA: I still don't know what your point is. Try s/explaining/predicting in my previous comment.

ETA 2: The meaning of your original comment is what I really don't get. Eliezer is saying that reality consists of elementary particles, not elementary particles and other things. You don't seem to be disagreeing with this, but you're depreciating the proposition somehow. You say it's not predictive, but what is the significance of that? The fact that every natural number has a successor won't help you do arithmetic, but it's still true; and you really can do arithmetic with the larger axiom set of which it is a part. Analogously, the proposition that everything is made of elementary particles is not in itself very predictive, but it is a property of fundamental theories which we use and which are predictive.
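
To spell out the arithmetic analogy (a toy sketch of my own, not part of the comment): the successor rule by itself computes nothing, but addition defined on top of zero and successor really does do arithmetic.

```python
ZERO = ()                         # a toy Peano-style encoding of the naturals

def succ(n):
    return (n,)                   # "every number has a successor", as code

def add(m, n):
    """Addition defined using nothing but ZERO and succ."""
    return m if n == ZERO else succ(add(m, n[0]))

def to_int(n):                    # readable output only
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
print(to_int(add(two, two)))      # 4: arithmetic from the larger axiom set
```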

Eliezer is saying that reality consists of elementary particles, not elementary particles and other things. You don't seem to be disagreeing with this, but you're depreciating the proposition somehow.

What I am saying is that this is irrelevant for higher-level concepts. You can make the same brain out of neurons or, if you believe in uploading, out of bits. It will have all the same cognitive processes, the same biases, etc. Knowing that the former can be eventually decomposed into subatomic particles adds nothing to our understanding of psychology.

Knowing that the former can be eventually decomposed into subatomic particles adds nothing to our understanding of psychology.

So? What query are you trying to answer?

Are you asking whether we ought to study and understand reductionism? Answer: yes, if we don't get reductionism, we might miss that uploads, etc. are possible.

Are you saying it may not be worth it to learn all the low-level detail, because our higher abstractions aren't all that leaky? Answer: agree for most things, but some require the lower stuff.

Why are you bringing this up?

I thought I had clearly explained it in my original top-level comment: the underlying structure is irrelevant for the entities a few levels removed.

the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.

And if it did, it wouldn't matter for atomic physics and up.

I understand what you are saying. Why are you saying it? What is interesting about the idea that higher levels of your map are agnostic to lower level details? What is the query?

Saying "you cannot" is awfully gutsy.

True, let's say that I'd bet my house on it not being done in my lifetime.

To quote the article, "The map is not the territory, but you can't fold up the territory and put it in your glove compartment." Eliezer's point is not that we should discard all our higher level models. Of course you can't feasibly build a working 747 using a subatomic particle model. Of course you need to use higher level models if you want to get any useful work done. His point is that we need to recognise that they are models; that the universe does not really work with different rule sets for different levels.

A street directory of your town isn't actually your town. It is important to understand this, so you don't end up trying to go to your friend's house by jumping up and down on the appropriate page. But that doesn't mean you should throw the street directory away.

do you seriously think that one can potentially renormalize quarks to cognitive biases?

Are you saying that such a task would be prohibitively difficult, and presumably not something that is worth the effort, or are you saying that doing this is impossible in principle?

I don't think the renormalization group flow has anything to do with the topic, honestly.

It looked to me like the entire original post was about how you couldn't use lower level laws to extrapolate higher level laws. Renormalization is the only way I know of to do this. Additionally, every comment in this direct thread is about renormalization. I think renormalization is the topic.

Renormalization is a very specific technique, mostly used in HEP, to work around the infinities cropping up in the calculations. It does not let you predict anything about a lower-energy model from the higher-energy one; it only lets you replace "bare" quantities with renormalized ones.