This denial that "higher level" entities actually exist causes a problem when we are supposed to identify ourselves with such an entity. Does the mind of a cognitive scientist only exist in the mind of a cognitive scientist?
The belief that there is a cognitive mind calling itself a scientist only exists in that scientist's mind. The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist.
Levels are an attribute of the map. The territory only has one level. Its only level is the most basic one.
Let's consider a fractal. The Mandelbrot set only emerges in the limit of infinitely many iterations; you could think of each additional iteration as a better map. That being said, either a point is in the Mandelbrot set or it is not. The set itself only has one level.
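A minimal sketch of that "better map per iteration" idea (the escape-time test is the standard way to approximate membership; the iteration caps and the sample point are arbitrary choices made here purely for illustration):

```python
# Sketch: membership in the Mandelbrot set is a binary fact about each point,
# but any finite-iteration test only gives an approximate "map" of that fact.
def in_mandelbrot(c: complex, max_iter: int) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # once |z| exceeds 2 the orbit provably escapes
            return False    # definitely outside the set
    return True             # not falsified yet: this map says "inside"

# A coarse map and a finer map can disagree near the boundary
# (this particular point escapes only after roughly 300 iterations),
# but the territory-level fact about the point never changes.
print(in_mandelbrot(-0.75 + 0.01j, max_iter=50))    # True  (coarse map)
print(in_mandelbrot(-0.75 + 0.01j, max_iter=5000))  # False (finer map)
```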
Yet something in the real world makes it tractable to create the "map" -- to find those hidden class variables which enable Naive Bayes.
Our brain and senses are made out of fundamental particles too, and the image of a plane with wings is the result of the interaction between the fundamental particles out there and the fundamental particles in us.
So I would say the plane image is an effect, not a primary, but that does not make it any less real than the primary. It is a real thing, just as real, that just happens to be further down the chain of cause and effect.
Reductionism does have a caveat, and this is "a fact about maps" and not "a fact about the territory": the real-world level can sit far below the level of description you actually care about. Example: a CD. A chromodynamic model would spend immense computing resources simulating the heat and location and momentum and bonds of a slew of atoms (including those in the surrounding atmosphere, or the plasticizer would boil off). In reality there are about four things that matter in a CD: you can pick it up, it fits into a standard box, it fits into a standard reader tray, and when...
This is a situation where a lot of confidence seems appropriate, though of course not infinite confidence. I'd put the chance that Eliezer is wrong here at below one percent.
I really have no idea what Eliezer being wrong on this would mean. Is the subject matter of this posting the nature of the territory or is it advice on the best way to construct maps?
What conceivable observations might cause you to revise that 1% probability estimate up to, say, 80%?
As I see it, reductionism is not a hypothesis about the world; it is a good heuristic to direct research.
AFAICS, he is not "forbidding" a plane's wing from existing at the level of quarks. He's just saying that "plane's wing" is a label that we are giving to "that bunch of quarks arranged just so over there". This is as opposed to "that other bunch of quarks arranged just so over there" that we call "a human".
That the arrangement of a set of quarks does not have a fundamental "label" at the most basic level. The classification of the first bunch o' quarks (as separate from the second) is something that we do on a "higher level" than the quarks themselves.
When an image you are looking at is altered due to viewing it through a pane of coloured glass, you don't suddenly start calling it "the map" instead of "the territory."
So why is it, when it passes through our eyes and brain it suddenly becomes "the map," when the brain is made of the same fundamental stuff (quarks etc.) as the glass?
Our brain and senses are made out of fundamental particles too, and the image of a plane with wings is the result of the interaction between the fundamental particles out there and the fundamental particles in us.
Ian C - are you claiming that there are no maps, just lots of territory, some of which refers to other bits of territory? While probably accurate, this doesn't seem very useful if we're trying to understand minds. I don't think Eliezer ever claims that maps are stored in the glove compartments of cars in the car park, just outside The Territory. ...
Ben Jones - yes, I'm saying there's just lots of territory. I think it's useful to understanding minds, because (if correct) it means they don't work by making an internal mirror of reality to study, but rather they just "latch on" to actual reality at a certain point. The role of the brain in that case would not be to "hold" the internal mirror copy, but to manipulate reality to make it amenable to latching.
I always found Hofstadter's take on the issue illuminating.
Disappointingly, dictionaries and encyclopaedias today seem to have defined reductionism and holism away from Hofstadter's usage - to the detriment of both of the terms involved.
Ian - if minds don't create their own distinct internal maps, but simply 'latch on' to what's actually there, then how do you explain the fact that maps can be wrong? In fact, how do you explain any two people holding two opposed beliefs?
Sensory perception isn't like a photograph - low-resolution but essentially representative. It's like an idiot describing a photograph to someone who's been blind all their life. This is why we get our maps wrong, and that is why it's useful to think in terms of map and territory - so that we can try and draw better ones.
Ben Jones: "if minds don't create their own distinct internal maps, but simply 'latch on' to what's actually there, then how do you explain the fact that maps can be wrong? In fact, how do you explain any two people holding two opposed beliefs?"
Different people have different eyes, nervous systems and brains, so the causal path from the primary object to the part of reality in their brain to which they are latching on can be different.
I agree sensory perception is not like a photograph, but I don't think it's like an idiot trying to explain to us. I...
At present, we cannot generate accurate quantum mechanical descriptions of atoms more complex than hydrogen (and, if we fudge a bit, helium). Any attempt to do so, because of the complexity and intractability of the equations involved, produces results that are less accurate than our empirically-derived understanding.
Even if we ignore the massive computational problems with trying to create a QM model of an airplane, such a model is guaranteed to be less accurate than the existing higher-order models of aerodynamics and material science.
We presume that our...
I'm surprised that this point is controversial enough that Eliezer felt the need to make a post about it, and even more surprised that he's catching heat in the comments for it. This "reductionism" is something I believe down to the bone, to the extent that I have trouble conceptualizing the world where it is false.
After talking to some non-reductionists, I've come to this idea about what it would mean for reductionism to be false:
I'm sure you're familiar with Conway's Game of Life? If not, go check it out for a bit. All the rules for the system are on the pixel level -- this is the lowest, fundamental level. Everything that happens in Conway's Game of Life is reducible to the rules regarding individual pixels and their color (white or black), and we know this because we have access to the source code of Conway's Game, and it is in fact true that those are the only rules.
For Conway's Game to be non-reductionistic, what you'd have to find in the source code is a set of rules that override the pixel-level rules in the case of high-level objects in the game. E.g. "When you see this sort of pixel configuration, override the normal rules and instead make the relevant pixels follow this high-level law where necessary."
Something like that.
It's an overriding of low-level laws when they would otherwise have contradicted high-level laws.
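For concreteness, here is a toy sketch in code of the two cases (the function names and the "high-level law" hook are purely hypothetical illustrations, not anything from an actual Life implementation):

```python
# Toy sketch of a (hypothetical) non-reductionist Game of Life.
# low_level_step is the ordinary Life rule; the override hook is the kind of
# extra clause the comment above imagines finding in the source code.

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def low_level_step(alive):
    """Standard Life rules: each cell's fate depends only on its eight neighbors."""
    counts = {}
    for cell in alive:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items() if k == 3 or (k == 2 and c in alive)}

def step(alive, high_level_laws=()):
    """A reductionist universe ships with an empty high_level_laws tuple:
    everything follows from low_level_step alone.  A non-reductionist universe
    would contain pattern-triggered overrides like the ones hooked in here."""
    new_state = low_level_step(alive)
    for matches_pattern, override in high_level_laws:
        if matches_pattern(alive):
            new_state = override(alive, new_state)
    return new_state
```

In our actual source code for Life the override list is, of course, empty; the question is whether the source code of physics contains anything analogous.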
The essential idea behind reductionism, that if you have reliable rules for how the pieces behave then in principle you can apply them to determine how the whole behaves, has to be true. To say otherwise is to argue that the airplane can be flying while all its constituent pieces are still on the ground.
But if you can't do a calculation in practice, does it matter whether or not it would give you the right answer if you could?
And there goes Caledonian again, completely misrepresenting Eliezer's claims.
His arguments are completely baseless. Of course it would be very, very, very hard to make a QM model of an airplane, and attempting it now would fail miserably - Eliezer wouldn't dispute that.
But to say that a full-fledged QM model would be guaranteed to be less accurate than current models is downright preposterous.
I'm surprised that this point is controversial enough that Eliezer felt the need to make a post about it, and even more surprised that he's catching heat in the comments for it. This "reductionism" is something I believe down to the bone, to the extent that I have trouble conceptualizing the world where it is false.
Seconded.
I suppose the next post is on how a non-reductionist universe would overwhelmingly violate Occam's Razor?
Caledonian's job is to contradict Eliezer.
Not even that -- it's as if he and other commenters (e.g. Unknown in this case) are simply demanding that Eliezer express his points with less conviction.
If you think Eliezer is wrong, say so and explain why. Merely protesting that he is "confident beyond what is justified", or whatever, amounts to pure noisemaking that is of no use to anyone.
Slightly off-topic. I am a bit new to all this. I am a bit thick too. So help me out here. Please.
Am I right in understanding that the map/territory analogy implies that the map is always evaluated outside the territory?
I guess, I'm asking the age old Star Trek transporter question. When I am beamed up, which part of which quark forms the boundary between me and Scotty.
I wish I knew where Reality got its computing power. Hehe, good question that one. Incidentally, I'd like to link this rather old thing just in case anyone cares to read more about reality-as-computation.
Ian C - well put. My point is that since there is, at least, some distortion between mind and world (hence this very blog), it's useful to think in terms of map and territory. At the simplest level, it stops us confusing the two. If you have a wrong belief, saying 'my mind is part of reality!' doesn't make it any less wrong. Agreed?
I don't believe there's the outside world, and then an idiot distortion layer, and then our unfortunate internal model.
That was exactly the situation I found myself in at about 3am on Sunday morning.
Ben Jones: "If you have a wrong belief, saying 'my mind is part of reality!' doesn't make it any less wrong. Agreed?"
I agree that there is a difference between the object in the mind and the object in the world, but I wouldn't call it distortion any more than a chair is a distortion of the table next to it. They are both just different parts of reality. But if your mind can only be aware of the chair then you must discover the table by deduction, which is what someone trying to "correct" the chair would do also. So yes, I guess it makes...
I agree that there is a difference between the object in the mind and the object in the world, but I wouldn't call it distortion any more than a chair is a distortion of the table next to it.
But the chair isn't seeking to imitate the table. That's one thing that minds do that nothing else does - form abstract representations. It's not magic, but it's a pretty impressive trick for a couple of pounds of quivering territory.
Besides, you've already acknowledged that the mental concept has a causal link with the object itself. Chairs aren't causally linked to t...
Ben Jones: "But the chair isn't seeking to imitate the table."
But the mind isn't seeking to imitate reality either. The mind seeks to provide awareness of reality, that is all. In taking the data of the senses and processing it purely according to the laws of cause and effect, it achieves this goal (because the output of the pipeline remains reality).
The idea that it is trying to imitate (and the associated criticisms like map, territory and distortion) come from looking at the evolved design after the fact and assuming how it is supposed to work without taking a wide enough view of all the ways awareness of reality could be implemented.
'I wish I knew where Reality got its computing power.'
Assume Reality has gotten computing power and that it makes computations. Computation requires time. An occurrence would then require the time of the occurrence itself plus the time necessary for Reality to compute that occurrence. The more complex the occurrence, the more computing power or computation time required, or both. Accounting for that seems a challenge that cannot be overcome.
Alternatively, let's assume Reality did not get computing power and that it does not make computations....
But to say that a full-fledged QM model would be guaranteed to be less accurate than current models is downright preposterous.
No, it follows directly from our inability to simulate 'complex' atoms. If we can't represent the basic building blocks of matter correctly, how are we supposed to represent the matter?
A correct model of physics would, given enough computational power, allow us to perfectly simulate everything in reality, on every level of reality. QM is known not to be correct; it is in fact known to be incorrect in the ultimate sense. It is merely the most correct model we possess.
"However, reductionism is incapable of explaining the real world."
Is that the argument against Reductionism? That there are things it can't, as yet, explain? That's the same position the Intelligent Design people put forward. Your post is a big fat Semantic Stop Sign.
No, we don't understand protein folding yet. Precedent suggests that one day, we probably will, and it probably won't be down to some mystical emergent phenomenon. It'll be complicated, subtle, amazing, and fully explicable within the realms of reductionist science.
A quick Google search turns up:
But the crystal growth depends strongly on temperature (as is seen in the morphology diagram). Thus the six arms of the snow crystal each change their growth with time. And because all six arms see the same conditions at the same times, they all grow about the same way.... If you think this is hard to swallow, let me assure you that the vast majority of snow crystals are not very symmetrical.
It's not that reductionism is wrong, but rather that it's only part of the story. Additional understanding can be gleaned through a bottom-up, emergent explanation which is orthogonal to the top-down reductionist explanation of the same system.
It is important to take seriously the reality of higher level models (maps). Or alternatively to admit that they are just as unreal, but also just as important to understanding, as the lower level models. As Aaron Boyden points out, it is not a foregone conclusion that there is a most basic level.
Reductionism IS the bottom-up, emergent explanation. It tries to reduce reality to basic elements that together produce the phenomena of interest - you can't get any more emergent than that.
From the Wikipedia definition for "reductionism":
"Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents."
and
"The limit of reductionism's usefulness stems from emergent properties of complex systems which are more common at certain levels of organization."
Rafe, do you mean that as a criticism? Because usefulness and reality are very different things. There are two things that can make a reductionist model less useful: we may not yet know how to carry out the reduction, and even when we do, the lower-level computation may be intractable in practice.
Both, you'll notice, are practical problems pertaining to the model, and don't invalidate the principle.
So human brains are themselves models of reality.
Do you have a deterministic view of the world, i.e. believe reality is there, independently of our existence or of our interactions with it?
Have you ever wondered what information is, at the physical level? What is it that our brains are actually modelling?
Simply because particles are the smallest things does not mean they are the only things. Particles are defined by how they act. How a particle will act can only be determined by taking into account the particles surrounding it. And to fully examine those particles, their surrounding particles must be examined. And so on and so forth...
As you move up in scale, new rules and attributes emerge that do not exist at the smaller scales. You can speculate about whether or not these new things might have been deduced as possibilities from quantum laws. But short o...
Wockyman: It's not that they're the smallest, as such.
Yes, how a particle acts is affected by those around it. But the idea is that if you know the basic rules, then knowing those rules, plus which particles are where around it lets you predict, in principle, given sufficient computational power, stuff about how it will act. In other words, the complicated stuff that emerges arises from the more basic stuff.
Think of it this way: You know cellular automata? Especially Conway's Game of Life? Really simple rules, just the grid, cells that can be on and off...
But the way physics really works, as far as we can tell, is that there is only the most basic level - the elementary particle fields and fundamental forces.
To clarify (actually, to push this further): there is only one thing (the universe) - because surely breaking the thing down into parts (such as objects) which in turn lets you notice relations between parts (which in turn lets you see time, for example) -- surely all that is stuff done by modelers of reality and not by reality itself? I'm trying to say that the universe isn't pre-parsed (if that makes any sense...)
Reductionism is great. The main problem is that by itself it tells us nothing new. Science depends on hypothesis generation, and reductionism says nothing about how to do that in a rational way, only how to test hypotheses rationally. For some reason the creative side of science -- and I use the word "creative" in the generative sense -- is never addressed by methodology in the same way falsifiability is:
http://emergentfool.com/2010/02/26/why-falsifiability-is-insufficient-for-scientific-reasoning/
We are at a stage of historical enlightenment ...
Really? I think of reductionism as maybe the greatest, most wildly successful abductive tool in all of history. If we can't explain some behavior or property of some object it tells us one good guess is to look to the composite parts of that thing for the answer. The only other strategy for hypothesis generation I can think of that has been comparably successful is skepticism (about evidence and testimony). "I was hallucinating." and "The guy is lying" have explained a lot of things over the years. Can anyone think of others?
Probably no one will ever see this comment, but.
"I wish I knew where reality got its computing power."
If reality had less computing power, what differences would you expect to see? You're part of the computation, after all; if everything stood still for a few million meta-years while reality laboriously computed the next step, there's no reason this should affect what you actually end up experiencing, any more than it should affect whether planets stay in their orbits or not. For all we know, our own computers are much faster (from our perspective) than the machines on which the Dark Lords of the Matrix are simulating us (from their perspective).
Sounds like one of the central tenets of Discordianism. There is no such thing as wings, identity, truth, the concept of equality. These are all abstract concepts that exist only in the mind. "Out there" in "True" reality, there is only chaos (not necessarily of the random kind, just of the meaningless/purposeless kind).
But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.
Can you handle the truth then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded to logic in order to work for a practical purpose. But you are making extr...
One way of tracing the uhm, data I guess might be to say, we see, naively, a chair. And know that underneath the chair out there is, at the bottom level we're aware of, energy fields and fundamental forces. And those concepts, like the chair, correspond to a physics model, which is in turn a simplification/distillation of vast reams of recorded experimental data into said rules/objects, which is in turn actual results of taking measurements during experiments, which in turn are the results of actual physical/historical events. So the reductionist model - fields and forces - I think is still a map of experimental results tagged with like, interpretations that tie them together, I guess.
Whatever the bottom level of our understanding of the map, even a one-level map is still above the territory, so there are still levels below that which carry back to, presumably, territory. We find some fields-and-forces model that accounts for all the data we're aware of. But it's always going to be possible - less likely the more data we get - that something flies along and causes us to modify it. So, if we wanted to continue the reductionistic approach about the model we're making about our world, stripping away higher level abstractions, we'd say that ...
This post represents, for me, the typical LW response to something like the Object Oriented Ontologies of Paul Levi Bryant and DeLanda. These Ontologies attempt to give things like numbers, computations, atoms, fundamental particles, galaxies, higher level laws, fundamental laws, concepts, referents of concepts, etc. equal ontological status. They are, hence, strictly against making a distinction between map and territory; there is only territory, and all things that are, are objects.
I'm a confident reductionist, model/reality (bayesian), type guy. I'm no...
Does the reductionist model give different predictions about the world than the non-reductionist model? If so, are any easily checked?
Solomonoff Induction, insofar as it is related to interpretations at all, rejects the 'many worlds interpretation', because valid (non-falsified) code strings are the ones whose output began with the actual experimental outcome rather than listing all possible outcomes, i.e. they are very much Copenhagen-like.
Has this point ever been answered? If we are content with the desired output appearing somewhere along the line - as opposed to the start - then the simplest theory of everything would be printing enough digits of pi, and our universe would be described somewhere down the line.
Single-world theories still have to compute the wavefunction, identify observers, and compute the integrated squared modulus. Then they have to pick out a single observer with probability proportional to the integral, peek ahead into the future to determine when a volume of probability amplitude will no longer strongly causally interact with that observer's local blob, and eliminate that blob from the wavefunction. Then translating the reductionist model into experiences requires the same complexity as before.
Basically, it's not simpler for the same reason that in a spatially big universe it wouldn't be 'simpler' to have a computer program that picked out one observer, calculated when any photon or bit of matter was moving away and wasn't going to hit anything that would reflect it back, and then eliminated that matter.
This website is doing amazing things to the way I think every day, as well as occasionally making me die of laughter.
Thank you, Eliezer!
"having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory
Why do we distinguish “map” and “territory”? Because they correspond to “beliefs” and “reality”, and we have learnt elsewhere in the Sequences that
my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.
Let’s apply that test. It isn’t only predictions that apply at different levels; so do the results. We can have right or...
"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."
[extreme steelman mode on]
By “relativity” he must have meant the ultrarelativistic approximation, of course.
[extreme steelman mode off]
:-)
Should one really be so certain about there being no higher-level entities? You said that simulating higher-level entities takes fewer computational resources, so perhaps our universe is a simulation and that the creators, in an effort to save computational resources, made the universe do computations on higher-level entities when no-one was looking at the "base" entities. Far-fetched, maybe, but not completely implausible.
Perhaps if we start observing too many lower-level entities, the world will run out of memory. What would that look like?
But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.
Less Wrong's "The Futility of Emergence" article argues against using the word "emergence", claiming that it provides no additional information. The argument went that literally everything is an emergent property, since everything can be boiled down to ...
Minsky writing in Society of Mind might bring some light here (paraphrasing):
How can a box made of six boards hold a mouse when a mouse could just walk away from any individual board? No individual board has any "containment" or "mouse-tightness" on its own. So is "containment" an emergent property?
Of course, it is the way a box prevents motion in all directions, because each board bars escape in a certain direction. The left side keeps the mouse from going left, the right from going right, the top keeps it from leaping ...
This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.
The higher levels could have been, though. The fact that we have high-level abstractions in our heads does not by itself mean that there is nothing corresponding to them in the territory. (To make that argument is a version of the fallacy that since there is a form of probability in the map, there can be none in the territory).
Tangential to the main point: one hypothesis for why the artillery gunner thought that "General relativity gives you the wrong answer" is that maybe he had experience with software that could run in either "Newtonian mode" or "GR mode", and the software had to make approximations for the relativistic calculation to be roughly tractable (approximations which might be useful for roughly solving problems where relativistic effects matter, but which would only reduce accuracy in non-relativistic situations).
Now, the "GR mode" (with approximations) would be a diffe...
If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions. Better predictions than the aerodynamic model, in fact.
This is not very important, but I think this is not quite right in general. Assuming that we're making some modelling assumptions about the plane and the air and so on (rather than, for example, running a whole Universe sim), I think it's possible for the errors of the non-QCD model to systematically cancel out the errors of the modelling assumptions and end up more accurate than the QCD model.
Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:
I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn't do that topic until I'd started on the Mind Projection Fallacy sequence, which wouldn't be for a while...
But now it's time to begin addressing this question. And while I haven't yet come to the "materialism" issue, we can now start on "reductionism".
First, let it be said that I do indeed hold that "reductionism", according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.
This seems like a strong statement, at least the first part of it. General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?
On the other hand, we are never going back to Newtonian mechanics. The ratchet of science turns, but it does not turn in reverse. There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.
"To hell with what past civilizations thought" seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.
And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.
I once met a fellow who claimed that he had experience as a Navy gunner, and he said, "When you fire artillery shells, you've got to compute the trajectories using Newtonian mechanics. If you compute the trajectories using relativity, you'll get the wrong answer."
And I, and another person who was present, said flatly, "No." I added, "You might not be able to compute the trajectories fast enough to get the answers in time—maybe that's what you mean? But the relativistic answer will always be more accurate than the Newtonian one."
"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."
"If that were really true," I replied, "you could publish it in a physics journal and collect your Nobel Prize."
Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider. Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.
But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC. A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.
So is the 747 made of something other than quarks? No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747. The map is not the territory.
Why not model the 747 with a chromodynamic representation? Because then it would take a gazillion years to get any answers out of the model. Also we could not store the model on all the memory on all the computers in the world, as of 2008.
As the saying goes, "The map is not the territory, but you can't fold up the territory and put it in your glove compartment." Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory. The scale of a map is not a fact about the territory, it's a fact about the map.
If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions. Better predictions than the aerodynamic model, in fact.
To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift. There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings. It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.
"What?" cries the antireductionist. "Are you telling me the 747 doesn't really have wings? I can see the wings right there!"
The notion here is a subtle one. It's not just the notion that an object can have different descriptions at different levels.
It's the notion that "having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.
It's not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought. Rather we, for our convenience, use different simplified models at different levels.
If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.
You, looking at the model, and thinking about the model, would be able to figure out where the wings were. Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM. In your mind.
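A toy sketch of that implicit/explicit distinction (everything here is an illustrative assumption, nothing like a real chromodynamic model): the simulated state stores only particle-level variables, and the "wing position" exists only as a number an observer computes from that state.

```python
# Toy illustration: the model's state contains only low-level variables.
# "Wing position" is never stored in the state; it is only implicit there,
# until an observer computes it and holds it as an explicit object.

particles = [{"pos": (i * 0.1, 0.0, 0.0), "momentum": (0.0, 0.0, 0.0)}
             for i in range(100)]          # the low-level model: just particles

def observers_wing_estimate(particles, in_wing_region):
    """The observer's map: an explicit high-level summary derived from low-level state."""
    xs = [p["pos"][0] for p in particles if in_wing_region(p["pos"])]
    return sum(xs) / len(xs) if xs else None

# This call is where the explicit representation comes into existence --
# in the observer's memory, not in the simulated state itself.
wing_x = observers_wing_estimate(particles,
                                 in_wing_region=lambda pos: 2.0 <= pos[0] <= 5.0)
```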
You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—
And the way that algorithm feels from inside, is that the airplane would seem to be made up of many levels at once, interacting with each other.
The way a belief feels from inside, is that you seem to be looking straight at reality. When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about belief.
So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.
But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.
But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces. You can't handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)
The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.
This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory. Understanding this on a gut level dissolves the question of "How can you say the airplane doesn't really have wings, when I can see the wings right there?" The critical words are really and see.