Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:
"How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [...], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—"
I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn't do that topic until I'd started on the Mind Projection Fallacy sequence, which wouldn't be for a while...
But now it's time to begin addressing this question. And while I haven't yet come to the "materialism" issue, we can now start on "reductionism".
First, let it be said that I do indeed hold that "reductionism", according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.
This seems like a strong statement, at least the first part of it. General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?
On the other hand, we are never going back to Newtonian mechanics. The ratchet of science turns, but it does not turn in reverse. There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.
"To hell with what past civilizations thought" seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.
And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.
I once met a fellow who claimed that he had experience as a Navy gunner, and he said, "When you fire artillery shells, you've got to compute the trajectories using Newtonian mechanics. If you compute the trajectories using relativity, you'll get the wrong answer."
And I, and another person who was present, said flatly, "No." I added, "You might not be able to compute the trajectories fast enough to get the answers in time—maybe that's what you mean? But the relativistic answer will always be more accurate than the Newtonian one."
"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."
"If that were really true," I replied, "you could publish it in a physics journal and collect your Nobel Prize."
Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider. Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.
But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC. A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.
So is the 747 made of something other than quarks? No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747. The map is not the territory.
Why not model the 747 with a chromodynamic representation? Because then it would take a gazillion years to get any answers out of the model. Also we could not store the model on all the memory on all the computers in the world, as of 2008.
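To put a rough number on "all the memory in the world", here is a back-of-envelope estimate in Python. Every constant is an order-of-magnitude assumption (airframe mass, valence quarks only, one byte per quark, total storage on Earth in the late 2000s), so treat the output as a lower bound on the absurdity.

```python
AIRPLANE_MASS_KG = 1.8e5     # empty weight of a 747, roughly 180 tonnes (assumed)
NUCLEON_MASS_KG = 1.67e-27   # mass of a proton or neutron
QUARKS_PER_NUCLEON = 3       # valence quarks only; sea quarks ignored
WORLD_STORAGE_BYTES = 1e21   # ~1 zettabyte, a generous late-2000s estimate

nucleons = AIRPLANE_MASS_KG / NUCLEON_MASS_KG
quarks = nucleons * QUARKS_PER_NUCLEON
bytes_needed = quarks        # one byte per quark -- absurdly optimistic

print(f"quarks in the airframe: ~{quarks:.1e}")   # ~3e32
print(f"shortfall vs. all storage on Earth: ~{bytes_needed / WORLD_STORAGE_BYTES:.0e}x")
```

And that is one byte of pure storage per quark, before computing a single interaction.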
As the saying goes, "The map is not the territory, but you can't fold up the territory and put it in your glove compartment." Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory. The scale of a map is not a fact about the territory, it's a fact about the map.
If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions. Better predictions than the aerodynamic model, in fact.
To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift. There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings. It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.
"What?" cries the antireductionist. "Are you telling me the 747 doesn't really have wings? I can see the wings right there!"
The notion here is a subtle one. It's not just the notion that an object can have different descriptions at different levels.
It's the notion that "having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.
It's not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought. Rather we, for our convenience, use different simplified models at different levels.
If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.
You, looking at the model, and thinking about the model, would be able to figure out where the wings were. Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM. In your mind.
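A toy sketch may help pin down "implicit in the model, explicit in the observer". In the Python below (all names and numbers are invented for illustration), the low-level state is nothing but an array of particle coordinates; the wing position becomes an explicit object only when an observer's computation produces it.

```python
import numpy as np

# The low-level "model": nothing but particle coordinates.
# No element of this array is labeled "wing"; it is just an (N, 2) array.
rng = np.random.default_rng(42)
body = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(5000, 2))
wing = rng.normal(loc=[8.0, 0.0], scale=0.5, size=(1000, 2))
particles = np.vstack([body, wing])
rng.shuffle(particles)  # destroy any ordering that might secretly encode "wing-ness"

def deduce_wing_position(particles, boundary_x=4.0):
    """An observer's computation. The category "wing", and the boundary
    that carves it out, live here in the map, not in the particle data."""
    wing_mask = particles[:, 0] > boundary_x
    return particles[wing_mask].mean(axis=0)

# Only now does an explicit "wing position" exist -- in the observer's RAM.
print(deduce_wing_position(particles))  # roughly [8.0, 0.0]
```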
You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—
And the way that algorithm feels from inside, is that the airplane would seem to be made up of many levels at once, interacting with each other.
The way a belief feels from inside, is that you seem to be looking straight at reality. When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about belief.
So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.
But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.
But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces. You can't handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)
The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.
This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory. Understanding this on a gut level dissolves the question of "How can you say the airplane doesn't really have wings, when I can see the wings right there?" The critical words are really and see.
Can you handle the truth, then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded in logic in order to work for a practical purpose. But you are making extremely abstract statements here. They just don't mean anything unless you define truth and solve the symbol grounding problem. You have criticized philosophy in other threads, yet here you are making dubious arguments. The arguments are dubious because they are not clearly mere rhetoric, and not clearly philosophy either. If someone pressed you to explain their meaning, you could say you're not interested in philosophy, so philosophical counterarguments are irrelevant to you. But you can't be uninterested in philosophy if you make philosophical claims like these and actually consider them important.
I don't like contemporary philosophy either, but I suspect you will run into trouble with these things, and I wonder if you are open to a solution. If not, fine.
But you haven't defined reality. As long as you haven't done so, "reality" will be a metaphorical, vague concept which frequently changes its meaning in use. This means that if you state something to be "reality" in one discussion, logical analysis would probably reveal that you used the word with a different meaning in another discussion.
You can have a deterministic definition of reality, but it will be arbitrary. Then people will start having completely pointless debates with you, and to make matters worse, you will perceive these debates as people trying to undermine what you are doing. That's a problem caused by not realizing you didn't have to justify your activities or approach in the first place. You didn't need to make these philosophical claims, and I don't suppose you would have done so had you not felt threatened by something, such as religion or mysticism or people imposing their views on you.
If you categorize yourself as a reductionist, why don't you go all the way? You can't be both a reductionist and a realist. That is, you can't believe in reductionism and in the existence of a territory at the same time. You have to drop one of them. But which one?
Drop the one you need to drop. I'm serious. You don't need this metaphysical nonsense to justify something you are doing. Neither reductionism nor realism is "true" in any meaningful way. You are not doing anything wrong if you are a reductionist for 15 minutes, then switch to realism (i.e., the belief in a "territory") for ten seconds, then switch back to reductionism, and then maybe to something else. And that is also the way you really live your life. I mean, think about your mind. I suppose it's somewhat similar to mine. You don't think about this metaphysical nonsense when you're actually doing something practical. So you are not a metaphysician when you're riding a bike and enjoying the wind.
It's just a conception of yourself that you have, a self-definition as "an advocate of reductionism and realism". This conception is true only when you are indeed one of those, and not true when you're neither. But the conception operates only in your mind. Suppose someone tells you that you're not a "reductionist and a realist" while you are, for example, in intense pain and very unlikely to be thinking about philosophy. Even then, you could remind yourself of your own self-conception as a "reductionist and a realist" and argue that the person was wrong. But why would you want to? The only reasons I can see are naive, egoistic, or defensive ones.
If you declare one state of mind or one theory, such as realism or reductionism, to be some sort of ultimate truth, you are simply putting yourself into a prison of words for no reason, except that you apparently perceive some sort of safety in that prison. But it's not safe. It exposes you to philosophical criticism you were previously invulnerable to, because before you entered that prison, you didn't even participate in that game.
If you actually care about philosophy, great. But I haven't yet gotten that impression. It seems like philosophy is an unpleasant chore to you. You want to use philosophy to obtain justification, a sense of entitlement, or something like that, and then throw it away because you think you're finished with it, having obtained a framework theory that suits your needs, so that you can now focus on the needs themselves. But you're not a true reductionist, in the sense you defined reductionism, unless you also scrap the belief in the territory. I don't care what you choose as long as you're fine with it, but I don't want you to contradict yourself.
There is no way to express the existence of the "territory" as a meaningfully true statement. Or if there is, I haven't heard of it. It is a completely arbitrary declaration you use to create a framework for the rest of the things you do. You can't construct a "metatheory of reality" which is about the territory, which you suppose to exist, and then have that same territory prove the metatheory right. The territory may contain empirical evidence that the metatheory is okay, but no algorithm can use that evidence to produce a proof of the metatheory.
Therefore, you have to define the territory if you want to use it for something, and just accept that you can't prove it true, much less use its alleged truth to prove something else false. You can believe what you want, but you can't make an AI that would use the "territory" to construct a metatheory of the territory, if it's somehow true for the AI that the territory is all there is. The AI can't even construct a metatheory of "map and territory" if it's programmed to hold as true that map and territory are the only things that exist. This entails that the AI cannot conceptualize its own metaphysical beliefs even as well as you can; it could not talk about them at all. To do so, it would have to be able to construct arbitrary metatheories on its own, and that can only be done if the AI holds no metaphysical belief as infallible, that is, if the AI is a reductionist in your meaning of the word.
I've seen some interest in AI on LW. If you would really like to construct a very human-like AI one day, you will have problems if you cannot program an AI that can conceptualize the structure of its own cognitive processes in terms that do not include realism. Humans are not realists all the time. Their minds have many features, and the metaphysical assumption of realism is usually constructed only when it is needed to perform some task. So if you insist on keeping that assumption around all the time, you'll just end up adding unnecessary extra baggage to the AI, which will probably also make the code very difficult to comprehend. You don't want to lug the assumption around everywhere just because it's supposed to be true in some way nobody can define.
You could just as well have a reductionist theory which only constructs realism (i.e., the declaration that an external world exists) under certain conditions. Philosophy doesn't usually include such theories, because the discipline is rather outdated, but there's no inherent reason why it can't be done. Realism is neither true nor false in any meaningful and universal way. You are free to assert it if you are going to use that assertion for something. But if you just say it, as if it meant something in and of itself, you are not saying anything meaningful.
I hope you were interested in my rant.
Actually, this may be a good point for me to try to figure out what you mean by "realism", because here you seem to have connected that word to some, but not all, strategies of problem-solving. Can you give me some specific examples of problems the mind tends to use realism in solving, and problems where it doesn't?