GEB has a section on this.
In order to not compartmentalize, you need to test whether your beliefs are all consistent with each other. If your beliefs are all statements in propositional logic, consistency checking becomes the Boolean Satisfiability Problem, which is NP-complete. If your beliefs are statements in first-order predicate logic, consistency checking becomes undecidable, which is even worse than NP-complete.
Not compartmentalizing isn't just difficult, it's basically impossible.
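To make the blow-up concrete, here is a minimal sketch of the propositional case (the variable names are made up for illustration; they stand in for the pen-on-the-moon beliefs discussed later in the thread). A brute-force satisfiability check has to walk through every truth assignment, so the work doubles with each new belief:

```python
from itertools import product

def consistent(beliefs, variables):
    """Brute-force SAT: return True if some truth assignment
    satisfies every belief at once."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(belief(assignment) for belief in beliefs):
            return True
    return False

# Toy belief set; the propositions are hypothetical stand-ins.
variables = [
    "astronauts_walked_on_moon",
    "moon_gravity_acts_on_people",
    "moon_gravity_acts_on_pens",
]
beliefs = [
    lambda a: a["astronauts_walked_on_moon"],
    # Walking on the moon implies its gravity acts on people.
    lambda a: (not a["astronauts_walked_on_moon"]) or a["moon_gravity_acts_on_people"],
    # If gravity acts on people, it acts on pens too.
    lambda a: (not a["moon_gravity_acts_on_people"]) or a["moon_gravity_acts_on_pens"],
    # The compartmentalized belief: the moon's gravity is too weak to move a pen.
    lambda a: not a["moon_gravity_acts_on_pens"],
]

print(consistent(beliefs, variables))  # False: the four beliefs cannot all hold
```

With n independent propositions there are 2^n assignments to check, which is why doing this exhaustively over everything you believe is hopeless.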
Reminds me of the opening paragraph of The Call of Cthulhu.
The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.
If I understand you correctly, you are saying that most people are not knowledgeable enough about the different domains in question to make any (or judge any) cross-domain connections. This seems plausible.
I can think, however, of another argument that confirms this but also clarifies why on Less Wrong we think that people actively compartmentalize instead of failing to make the connection: selection bias. Most people on this site are scientists, programmers or in other technical professions. It seems that most are also consequentialists. Not surprisingly, both of these facts point to people who enjoy following a chain of logic all the way to the end.
So, we tend to learn a field until we know its basic principles. For example, if you learn about gravity, you can learn just enough to calculate the falling speed of an object in a gravitational field, or you can learn about the bending of space-time by mass. It seems rather obvious to me that the second method encourages cross-domain connections. If you don't know the basic underlying principles of the domains, you can't make connections.
I also see this all the time when I teach someone how to use computers. Some people build an internal model of how a computer & programs conceptually work and are then able to use most basic programs. Others learn by memorizing each step and look at each program as a domain on its own instead of generalizing across all programs.
I wonder if there'd be a difference between the survey as written (asking what a pen would do on the moon, and then offering a chance to change the answer based on Apollo astronauts) vs. a survey in which someone asked "Given that the Apollo astronauts walked on the moon, what do you think would have happened if they'd dropped a pen?"
The first method makes someone commit to a false theory, and then gives them information that challenges the theory. People could passively try to fit the astronaut datum into their current working theory, or they co...
But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.
Look at it this way: what evolutionary pressure exists for NOT compartmentalizing?
From evolution's standpoint, if two of your beliefs really need to operate at the same time, then the stimuli will be present in your environment at close enough to the same time as to get them both activated, and that's good enough to work for passive consistency checking. For active consistency checking, we have simple input filters for rejecting stuff that c...
I think part of the problem with the moon question was that it suggested two wrong answers first. How would you have answered the question if it was just "If a pen is dropped on the moon, what will happen? Explain in one sentence."
I would have shrugged and said "It will fall down, slowly." But when I saw "float away" and "float where it is", those ideas wormed their way into my head for a few seconds before I could reject them. Just suggesting those ideas managed to mess me up, and I'm someone whose mental model of m...
Yes, building mental connections between domains requires well-populated maps for both of them, plus significant extra processing. It's more properly treated as a skill which needs development than a cognitive defect. In the pen-on-the-moon example, knowing that astronauts can walk around is not enough to infer that a pen will fall; you also have to know that gravity is multiplicative rather than a threshold effect. And it certainly doesn't help that most people's knowledge of non-Earth gravity comes entirely from television, where, since zero-gravity film...
I read the question as asking about THE Moon, not "a moon". The question as written has no certain answer. If a moon is rotating fast enough, a pen held a few feet above its surface will be at orbital velocity. Above this level it will float away. The astronaut might also float away, unless he were wearing heavy boots.
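(For what it's worth, the "rotating fast enough" condition can be made precise: a pen at the equator stays aloft once the surface's rotational speed reaches circular orbital speed, i.e. once ω²r ≥ g, which gives a critical rotation period T = 2π·sqrt(r/g). A rough sketch with assumed, roughly Deimos-sized numbers:

```python
import math

# Critical rotation period for a small moon: spin any faster and a pen
# released at the equator is already at orbital velocity.
# Values below are assumed, roughly Deimos-sized, for illustration only.
r = 6.2e3   # equatorial radius in metres (~6.2 km)
g = 0.003   # surface gravity in m/s^2

T_crit = 2 * math.pi * math.sqrt(r / g)
print(f"critical rotation period: {T_crit / 3600:.1f} hours")  # about 2.5 hours
```

Deimos itself rotates once in roughly 30 hours, nowhere near that fast, so a pen released there would still creep downward rather than float away.)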
Someone posted a while back that only a third of adults are capable of abstract reasoning. I've had some trouble figuring out exactly what it means to go through life without abstract reasoning. The "heavy boots" response is a good example.
Without abstract reasoning, it's not possible to form the kind of theories that would let you connect the behavior of a pen and an astronaut in a gravitational field. I agree that this is an example of lack of ability, not compartmentalization. Of course, scientists are capable of abstract reasoning, so it's still possible to accuse them of compartmentalizing even after considering the survey results.
...For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible for me, because I haven't had enough exposure to formal physics to hammer in th
Quite convincing, thanks. I'll want to think about it more, but perhaps it would be a good idea to toss the word out the window for its active connotations.
ISTM, though, that there is a knack for cross-domain generalization (and cross-domain mangling) of insights, that people have this knack in varying degrees, that this knack is an important component of what we call "intelligence", in the sense that if we could figure out what this knack consists of we'd have solved a good chunk of AI. Isn't this a major reason why Hofstadter, for instance,...
I didn't interpret your comment as expressing an expectation that there would be more discussion about analogical reasoning or creativity as a topic within AI; keep in mind, after all, that LW is not a blog about AI -- its topic is human rationality. (There is, naturally, a fair amount of incidental discussion of AI, because Eliezer happens to be an AI researcher and that's his "angle".) In this context, I therefore interpreted your remark as "given Eliezer's interest in AI, a subject which requires an understanding of the phenomena of analogies and creativity, I'm surprised there isn't more discussion of these phenomena."
I'll use this opportunity to state my feeling that, as interesting as AI is, human rationality is a distinct topic, and it's important to keep LW from becoming "about" AI (or any other particular interest that happens to be shared by a significant number of participants). Rationality is for everyone, whether you're part of the "AI crowd" or not.
(I realize that someone is probably going to post a reply to the effect that, given the stakes of the Singularity, rational thought clearly compels us to drop everything and basically think about nothing except AI. But...come on, folks -- not even Eliezer thinks about nothing else.)
Nitpick: "If a pen is dropped on A moon"
It doesn't specify Earth's moon. If a pen were dropped on say, Deimos, it might very well appear to do B) for a long moment ;)
(Deimos is Mars' outermost moon and too small to retain a round shape. Its gravity is only 0.00256 m/s^2 and escape velocity is only 5.6 m/s. That means you could run off it.)
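To put a number on that "long moment": ignoring rotation, a pen dropped from rest takes t = sqrt(2h/g) to fall a height h. A quick sketch (drop height of about chest height assumed):

```python
import math

# Time for a pen to fall ~1.5 m from rest, t = sqrt(2*h/g),
# using round surface-gravity figures and ignoring rotation.
h = 1.5  # drop height in metres (assumed)

for body, g in [("Earth", 9.81), ("the Moon", 1.62), ("Deimos", 0.00256)]:
    t = math.sqrt(2 * h / g)
    print(f"On {body}: {t:.1f} s")

# Earth: ~0.6 s, the Moon: ~1.4 s, Deimos: ~34 s.
# On Deimos the pen does fall, but slowly enough to look like it's floating.
```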
On the other hand, the word "dropped" effectively gives the game away. Things dropped go DOWN, not up, and they don't float in place. Would be better to say "released".
And now, back to our story...
I don't believe it was assumed that compartmentalization is something you actually "do" (make effort towards achieving). Making this explicit is welcome, but assuming the opposite to be the default view seems to be an error.
I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. I
Many people (probably more people) make the same mistake when asked 'which falls faster, the 1 kg weight or the 20 kg weight?'. I guess this illustrates why compartmentalization is useful. False beliefs that don't matter in one field can kill you in another.
I fail to understand how compartmentalization explains this. I got the answer right the first time. And I suspect most people who got it wrong did so because of the (unwarranted) assumptions they were making - meaning if they had just looked at the question and nothing else, and if they understood basic gravity, they would've got it right. But when you also try to imagine some hypothetical forces on the surface of the moon, or relate it to zero-gravity images seen on TV, etc., and if you visualize all these before you visualize the question, you'd probably get it wrong.
So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains.
I'm not sure about this. In your examples here, people are in fact lacking any full understanding of gravitation and/or (I suppose) any knowledge of the masses of notable celestial objects in our solar system.
Now, I have to admit that the correct answer wasn't obvious to me at first.
I up-voted just for you admitting this in your example, but let's talk about this. ...
The way I solved the pen on the moon question is that I remembered the famous demonstration one of the Apollo astronauts did with a feather and hammer on the moon, and didn't think there should be a meaningful difference between those objects and a pen. I could've worked out the physics, but pattern-recognition was faster and easier.
So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible.
If a person's knowledge is highly compartmentalized, and consists of these three facts:
A human being walked across the moon.
There are sm
Your "wrong but not obviously and completely wrong" line made me think that the "obviously and completely" part is what makes people who are well-versed in a subject demand that everyone should know [knowledge from subject] when they hear someone express obvious-and-complete ignorance or obvious-and-complete wrongness in/of said subject. I've witnessed this a few times, and usually the thought process is something like "wow, it's unfathomable that someone should express such ignorance of something that is so obvious to me. There sh...
We commonly discuss compartmentalization as if it were an active process, something you do. Eliezer suspected his altruism, as well as some people's "clicking", was due to a "failure to compartmentalize". Morendil discussed compartmentalization as something to avoid. But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.
I started thinking about this when I encountered an article claiming that the average American does not know the answer to the following question:

If a pen is dropped on a moon, will it:
A) Float away
B) Float where it is
C) Fall to the surface of the moon
Now, I have to admit that the correct answer wasn't obvious to me at first. I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. It was only then that I remembered that the astronauts had walked on the surface of the moon without trouble. Once I remembered that piece of knowledge, I was able to deduce that the pen quite probably would fall.
A link on that page brought me to another article. This one described two students randomly calling 30 people and asking them the question above. 47 percent of them got the question correct, but what was interesting was that those who got it wrong were asked a follow-up question: "You've seen films of the APOLLO astronauts walking around on the Moon, why didn't they fall off?" Of those who heard it, about 20 percent changed their answer, but about half confidently replied, "Because they were wearing heavy boots".
While these articles were totally unscientific surveys, it doesn't seem to me like this would be the result of an active process of compartmentalization. I don't think my mind first knew that pens would fall down because of gravity, but quickly hid that knowledge from my conscious awareness until I was able to overcome the block. What would be the point in that? Rather, it seems to indicate that my "compartmentalization" was simply a lack of a connection, and that such connections are much harder to draw than we might assume.
The world is a complicated place. One of the reasons we don't have AI yet is because we haven't found very many reliable cross-domain reasoning rules. Reasoning algorithms in general are quickly subject to a combinatorial explosion: the reasoning system might know which potential inferences are valid ones, but not which ones are meaningful in any useful sense. Most current-day AI systems need to be more or less fine-tuned or rebuilt entirely when they're made to reason in a domain they weren't originally built for.
For humans, it can be even worse than that. Many of the basic tenets in a variety of fields are counter-intuitive, or are intuitive but have counter-intuitive consequences. The universe isn't actually fully arbitrary, but for somebody who doesn't know how all the rules add up, it might as well be. Think of all the times when somebody has tried to reason using surface analogies, mistaking them for deep causes; or dismissed a deep cause, mistaking it for a surface analogy. Somebody might present us with a connection between two domains, but we have no sure way of testing the validity of that connection.
Much of our reasoning, I suspect, is actually pattern recognition. We initially have no idea of the connection between X and Y, but then we see X and Y occur frequently together, and we begin to think of the connection as an "obvious" one. For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible for me, because I haven't had enough exposure to formal physics to hammer in the right intuition.
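One line of algebra is exactly what that exposure would hammer in: the pull on the pen is indeed smaller, but so is the mass that has to be accelerated, and the two cancel. Sketching it in plain Newtonian terms,

$$ a = \frac{F}{m} = \frac{1}{m} \cdot \frac{G M m}{r^2} = \frac{G M}{r^2} $$

with no m left anywhere: pens, hammers, feathers and astronauts in heavy boots all accelerate at the same rate.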
I suspect that often when we say "(s)he's compartmentalizing!", we're operating in a domain that's more familiar to us, and thus it feels like an active attempt to keep things separate must be the cause. After all, how could they not see it, were they not actively keeping it compartmentalized?
So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible. If you don't know which cross-domain rules and reasoning patterns are valid, then building up a separate set of rules for each domain is the safe approach. Discarding as much of your previous knowledge as possible when learning about a new thing is slow, but it at least guarantees that you're not polluted by existing incorrect information. Build your theories primarily on evidence found from a single domain, and they will be true within that domain. While there can certainly also be situations calling for an active process of compartmentalization, that might only happen in a minority of the cases.