Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:
"How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [...], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—"
I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn't do that topic until I'd started on the Mind Projection Fallacy sequence, which wouldn't be for a while...
But now it's time to begin addressing this question. And while I haven't yet come to the "materialism" issue, we can now start on "reductionism".
First, let it be said that I do indeed hold that "reductionism", according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.
This seems like a strong statement, at least the first part of it. General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?
On the other hand, we are never going back to Newtonian mechanics. The ratchet of science turns, but it does not turn in reverse. There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.
"To hell with what past civilizations thought" seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.
And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.
I once met a fellow who claimed that he had experience as a Navy gunner, and he said, "When you fire artillery shells, you've got to compute the trajectories using Newtonian mechanics. If you compute the trajectories using relativity, you'll get the wrong answer."
And I, and another person who was present, said flatly, "No." I added, "You might not be able to compute the trajectories fast enough to get the answers in time—maybe that's what you mean? But the relativistic answer will always be more accurate than the Newtonian one."
"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."
"If that were really true," I replied, "you could publish it in a physics journal and collect your Nobel Prize."
Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider. Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.
But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC. A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.
So is the 747 made of something other than quarks? No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747. The map is not the territory.
Why not model the 747 with a chromodynamic representation? Because then it would take a gazillion years to get any answers out of the model. Also, we could not store the model in all the memory on all the computers in the world, as of 2008.
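(To put a number on "gazillion": here is a deliberately crude order-of-magnitude sketch. The 747 mass, the 2008 world-storage figure, and the one-byte-per-quark assumption are all my own rough inputs, nothing precise.)

```python
# Crude order-of-magnitude estimate -- every input below is a rough assumption.
mass_747_kg    = 2e5        # a 747 is on the order of 10^5 kg
nucleon_mass   = 1.67e-27   # kg per proton or neutron
nucleons       = mass_747_kg / nucleon_mass   # ~1e32
valence_quarks = 3 * nucleons                 # ~4e32, ignoring gluons and sea quarks

world_storage_2008 = 3e20   # bytes; a few hundred exabytes, a commonly cited rough figure

print(f"valence quarks in a 747: {valence_quarks:.0e}")
print(f"world storage, ~2008:    {world_storage_2008:.0e} bytes")
print(f"shortfall:               {valence_quarks / world_storage_2008:.0e}x")

# Even granting one byte per quark -- absurdly generous for a quantum field state --
# we come up short by roughly twelve orders of magnitude, before we even start
# computing any dynamics.
```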
As the saying goes, "The map is not the territory, but you can't fold up the territory and put it in your glove compartment." Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory. The scale of a map is not a fact about the territory, it's a fact about the map.
If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions. Better predictions than the aerodynamic model, in fact.
To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift. There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings. It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.
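(A toy analogy of "implicit versus explicit", with made-up numbers: the model below stores nothing but particle positions, yet an observer can still read a wingspan off it.)

```python
# Toy analogy only: a "low-level" state containing nothing labeled "wing".
particles = [
    (12.0, -32.4, 4.8),   # (x, y, z) positions in meters -- illustrative numbers
    (12.3,  32.1, 4.9),
    (35.0,   0.2, 3.1),
    (60.8,   0.0, 9.4),
]

# No token above corresponds to "wingspan". An observer studying the state can
# still derive one -- the fact is implicit in the model, and becomes explicit only
# in the observer's own representation (this variable, or a pattern of neurons):
wingspan = max(y for _, y, _ in particles) - min(y for _, y, _ in particles)
print(f"derived wingspan: {wingspan:.1f} m")   # 64.5 m
```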
"What?" cries the antireductionist. "Are you telling me the 747 doesn't really have wings? I can see the wings right there!"
The notion here is a subtle one. It's not just the notion that an object can have different descriptions at different levels.
It's the notion that "having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.
It's not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought. Rather we, for our convenience, use different simplified models at different levels.
If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.
You, looking at the model, and thinking about the model, would be able to figure out where the wings were. Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM. In your mind.
You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—
And the way that algorithm feels from inside, is that the airplane would seem to be made up of many levels at once, interacting with each other.
The way a belief feels from inside, is that you seem to be looking straight at reality. When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about belief.
So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.
But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.
But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces. You can't handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)
The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.
This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory. Understanding this on a gut level dissolves the question of "How can you say the airplane doesn't really have wings, when I can see the wings right there?" The critical words are really and see.