We commonly discuss compartmentalization as if it were an active process, something you do. Eliezer suspected his altruism, as well as some people's "clicking", was due to a "failure to compartmentalize". Morendil discussed compartmentalization as something to avoid. But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.

I started thinking about this when I encountered an article claiming that the average American does not know the answer to the following question:

If a pen is dropped on a moon, will it:
A) Float away
B) Float where it is
C) Fall to the surface of the moon

Now, I have to admit that the correct answer wasn't obvious to me at first. I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. It was only then that I remembered that the astronauts had walked on the surface of the moon without trouble. Once I remembered that piece of knowledge, I was able to deduce that the pen quite probably would fall.

A link on that page brought me to another article. This one described two students randomly calling 30 people and asking them the question above. 47 percent of the respondents got the question right, but what was interesting was that those who got it wrong were asked a follow-up question: "You've seen films of the Apollo astronauts walking around on the Moon, why didn't they fall off?" Of those who heard it, about 20 percent changed their answer, but about half confidently replied, "Because they were wearing heavy boots".

While these were totally unscientific surveys, it doesn't seem to me like this result could come from an active process of compartmentalization. I don't think my mind first knew that pens would fall down because of gravity, and then quickly hid that knowledge from my conscious awareness until I was able to overcome the block. What would be the point in that? Rather, it seems to indicate that my "compartmentalization" was simply a lack of a connection, and that such connections are much harder to draw than we might assume.

The world is a complicated place. One of the reasons we don't have AI yet is that we haven't found many reliable cross-domain reasoning rules. Reasoning algorithms in general are quickly subject to a combinatorial explosion: the reasoning system might know which potential inferences are valid, but not which ones are meaningful in any useful sense. Most current-day AI systems need to be more or less fine-tuned, or rebuilt entirely, when they're made to reason in a domain they weren't originally built for.
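To make the explosion concrete, here is a toy sketch (illustrative only; the pairing rule and the number of facts are arbitrary assumptions, not anything from an actual reasoning system). If every pair of known facts can be conjoined into a syntactically valid "candidate inference", the candidates multiply far faster than the facts, and nothing in the syntax says which of them matter:

```python
from itertools import combinations

# 20 atomic "facts" the system starts with (placeholder names).
facts = sorted(f"fact_{i:02d}" for i in range(20))

# Every pair yields a syntactically valid conjunction -- a "candidate inference".
candidates = [f"({a} AND {b})" for a, b in combinations(facts, 2)]
print(len(candidates))  # 190 candidates from just 20 facts

# One more round over facts + candidates gives C(210, 2) = 21945 candidates,
# and so on: validity is cheap, meaningfulness is the scarce resource.
```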

For humans, it can be even worse than that. Many of the basic tenets in a variety of fields are counter-intuitive, or are intuitive but have counter-intuitive consequences. The universe isn't actually fully arbitrary, but for somebody who doesn't know how all the rules add up, it might as well be. Think of all the times when somebody has tried to reason using surface analogies, mistaking them for deep causes; or dismissed a deep cause, mistaking it for a surface analogy. Somebody might present us with a connection between two domains, but we have no sure way of testing the validity of that connection.

Much of our reasoning, I suspect, is actually pattern recognition. We initially have no idea of the connection between X and Y, but then we see X and Y occur frequently together, and we begin to think of the connection as an "obvious" one. For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible to me, because I haven't had enough exposure to formal physics to hammer in the right intuition.

I suspect that often when we say "(s)he's compartmentalizing!", we're operating in a domain that's more familiar to us, and thus it feels like an active attempt to keep things separate must be the cause. After all, how could they not see it, were they not actively keeping it compartmentalized?

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible. If you don't know which cross-domain rules and reasoning patterns are valid, then building up a separate set of rules for each domain is the safe approach. Discarding as much of your previous knowledge as possible when learning about a new thing is slow, but it at least guarantees that you're not polluted by existing incorrect information. Build your theories primarily on evidence found from a single domain, and they will be true within that domain. While there can certainly also be situations calling for an active process of compartmentalization, that might only happen in a minority of the cases.

72 comments

GEB has a section on this.

In order to not compartmentalize, you need to test whether your beliefs are all consistent with each other. If your beliefs are all statements in propositional logic, consistency checking becomes the Boolean Satisfiability Problem, which is NP-complete. If your beliefs are statements in first-order predicate logic, then consistency checking becomes undecidable, which is even worse than NP-complete.

Not compartmentalizing isn't just difficult, it's basically impossible.
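A minimal sketch of why this is hard even in the easy (propositional) case: brute-force consistency checking over n propositional variables examines up to 2^n truth assignments. (The clause encoding below is a standard one; the example belief set is made up for illustration.)

```python
from itertools import product

def consistent(beliefs, n_vars):
    """beliefs: clauses as lists of signed ints (+k = var k true, -k = false)."""
    for assignment in product([False, True], repeat=n_vars):  # 2**n_vars cases
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in beliefs):
            return True  # found a world in which every belief holds
    return False

# (p or q) and (not p or q) and (not q) -- an inconsistent belief set:
print(consistent([[1, 2], [-1, 2], [-2]], 2))  # False
```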

Reminds me of the opening paragraph of The Call of Cthulhu.

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

4BenAlbahari
Glenn Beck: P.S. The trick is to use bubble sort.
3wedrifid
It took me several seconds to guess that GEB refers to Gödel, Escher, Bach.
0CronoDAS
Sorry about that!
3RobinZ
I agree, save that I think Academian's proposal should be applied and "compartmentalizing" replaced with "clustering". "Compartmentalization" is a more useful term when restricted to describing the failure mode.
1BenAlbahari
Could I express what you said as: a person is in the predicament of

1) having a large number of beliefs, and
2) the mathematically impossible challenge of validating those beliefs for consistency;

therefore,

3) it is impossible to not compartmentalize.

This leads to a few questions:

* Is it still valuable to reduce, albeit not eliminate, compartmentalization?
* Is there a fast method to rank how impactful a belief is to my belief system, in order to predict whether an expensive consistency check is worthwhile?
* Is it possible to arrive at a (mathematically tractable) small core set of maximum-impact beliefs that are consistent? (the goal of extreme rationality?)
* Does probabilistic reasoning change how we answer these questions?
4bogus
Edwin Jaynes discusses "lattice" theories of probability where propositions are not universally comparable in appendix A of Probability Theory: The Logic of Science. Following Jaynes's account, probability theory would correspond to a uniformly dense lattice, whereas a lattice with very sparse structure and a few dense regions would correspond to compartmentalized beliefs.
0CronoDAS
Yes, that's basically right. As for those questions, I don't know the answers either.
0BenAlbahari
Rationalism is faith to you then? [EDIT: An explanation is below that I should have provided in this comment; obviously when I made the comment I assumed people could read my mind; I apologize for my transparency bias]
0CronoDAS
I'm not sure what you mean...
1BenAlbahari
Compartmentalization is an enemy of rationalism. If we are going to say that rationalism is worthwhile, we must also say that reducing compartmentalization is worthwhile. But that argument only scratches the surface of the problem you eloquently pointed out.

Mathematically, we have a mountain of beliefs that need processing with something better than brute force. We have to be able to quickly identify how impactful beliefs are to our belief system, and focus our rational efforts on those beliefs. (Otherwise we're wasting our time processing only a tiny, randomly chosen part of the mountain.) Rationality, if it's actually useful, should provide us with at least a small set of consistent and maximally impactful beliefs. We have not escaped compartmentalization of all our beliefs, but at least we have chosen the most impactful compartment within which we have consistency. Finally, if we can't perfectly process our mountain of beliefs, then at least we can imperfectly process that mountain. Hence the need for probabilistic reasoning.

To summarize, I want to be able to answer "yes" to all of these questions, to justify the endeavor of rationalism. The problem is that, like you, my answer for each is "I don't know". For this reason, I accept that my rationalism is just faith, or perhaps less pejoratively, intuition (though we're talking rationality here, right?).

If I understand you correctly, you are saying that most people are not knowledgeable enough about the different domains in question to make (or judge) any cross-domain connections. This seems plausible.

However, I can think of another argument that supports this while also clarifying why on Less Wrong we think that people actively compartmentalize instead of failing to make the connection: selection bias. Most people on this site are scientists, programmers, or members of other technical professions. It seems that most are also consequentialists. Not surprisingly, both of these facts point to people who enjoy following a chain of logic all the way to the end.

So, it matters how deeply we learn a field - whether we get down to its basic principles. For example, if you learn about gravity, you can learn just enough to calculate the falling speed of an object in a gravitational field, or you can learn about the bending of space-time by mass. It seems rather obvious to me that the second method encourages cross-domain connections. If you don't know the basic underlying principles of the domains, you can't make connections.

I also see this all the time when I teach someone how to use computers. Some people build an internal model of how a computer & programs conceptually work and are then able to use most basic programs. Others learn by memorizing each step, looking at each program as a domain of its own instead of generalizing across all programs.

4Academian
One of the reasons I'm in favor of axiomatization in mathematics is that it prevents compartmentalization and maintains a language (set-theory) for cross-domain connections. It doesn't have to be about completeness. So yeah, thumbs up for foundations-encourage-connections... they are connections :)
4wnoise
I basically agree, but I'd advocate category theory as a much better base language than set theory.

I wonder if there'd be a difference between the survey as written (asking what a pen would do on the moon, and then offering a chance to change the answer based on Apollo astronauts) vs. a survey in which someone asked "Given that the Apollo astronauts walked on the moon, what do you think would have happened if they'd dropped a pen?"

The first method makes someone commit to a false theory, and then gives them information that challenges the theory. People could passively try to fit the astronaut datum into their current working theory, or they co... (read more)

But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.

Look at it this way: what evolutionary pressure exists for NOT compartmentalizing?

From evolution's standpoint, if two of your beliefs really need to operate at the same time, then the stimuli will be present in your environment at close enough to the same time as to get them both activated, and that's good enough to work for passive consistency checking. For active consistency checking, we have simple input filters for rejecting stuff that c... (read more)

0wedrifid
And, as the situation demands, not rejecting stuff even though it conflicts with important signalling beliefs.

I think part of the problem with the moon question was that it suggested two wrong answers first. How would you have answered the question if it was just "If a pen is dropped on the moon, what will happen? Explain in one sentence."

I would have shrugged and said "It will fall down, slowly." But when I saw "float away" and "float where it is", those ideas wormed their way into my head for a few seconds before I could reject them. Just suggesting those ideas managed to mess me up, and I'm someone whose mental model of m... (read more)
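For what it's worth, the "slowly" is easy to put a number on (a rough sketch; the 1.5 m drop height is an assumed figure, and air resistance is ignored, which on the Moon is fair):

```python
# Fall time from rest: t = sqrt(2h / g)
g_earth, g_moon, h = 9.81, 1.62, 1.5   # m/s^2, m/s^2, m

fall_time = lambda g: (2 * h / g) ** 0.5
print(f"Earth: {fall_time(g_earth):.2f} s, Moon: {fall_time(g_moon):.2f} s")
# Earth: 0.55 s, Moon: 1.36 s -- the pen falls, about 2.5x slower
```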

Yes, building mental connections between domains requires well-populated maps for both of them, plus significant extra processing. It's more properly treated as a skill which needs development than a cognitive defect. In the pen-on-the-moon example, knowing that astronauts can walk around is not enough to infer that a pen will fall; you also have to know that gravity is multiplicative rather than a threshold effect. And it certainly doesn't help that most people's knowledge of non-Earth gravity comes entirely from television, where, since zero-gravity film... (read more)

2[anonymous]
I think you're on to something. I was wondering why the "heavy boots" people singled out the boots. Why not say "heavy suits", or that the astronauts themselves were heavier than pens? Didn't 2001: A Space Odyssey start its first zero-gravity scene with a floating pen and a flight attendant walking up the wall?

I read the question as asking about THE Moon, not "a moon". The question as written has no certain answer. If a moon is rotating fast enough, a pen held a few feet above its surface will be at orbital velocity. Above this level it will float away. The astronaut might also float away, unless he were wearing heavy boots.

4[anonymous]
Pens and heavy boots always do the same thing in any gravitational field, unless they modify it somehow, like by moving the moon. Acceleration due to gravity does not depend on the mass of the accelerated object.
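The point in one line of standard Newtonian mechanics: the gravitational force on an object is proportional to its mass, so the mass cancels out of the acceleration.

$$F = \frac{GMm}{r^2}, \qquad a = \frac{F}{m} = \frac{GM}{r^2}$$

Pen and boots get the same acceleration; mass only matters once non-gravitational forces (drag, a tether, the ground) enter the picture.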
0JamesAndrix
If the moon is small and spinning quickly, a space elevator only needs to be a few feet tall. In this admittedly contrived scenario, the boots will anchor the astronaut because they are going around it more slowly. The pen will float because it is actually in orbit. To land on this moon you would achieve orbit, and then put your feet down.
2SoullessAutomaton
I don't think you'd be landing at all, in any meaningful sense. Any moon massive enough to make walking possible at all is going to be large enough that an extra meter or so above the surface will make a negligible difference in gravitational force, so we're talking about a body spinning so fast that its equatorial rotational velocity is approximately orbital velocity (which is about 70% of escape velocity). So for most practical purposes, the boots would be in orbit as well, along with most of the moon's surface. Of course, since the centrifugal force at the equator due to rotation would almost exactly counteract weight due to gravity, the only way the thing could hold itself together would be tensile strength; it wouldn't take much for it to slowly tear itself apart.
0JamesAndrix
Hmm, I suppose it's too much handwaving to say it's only a few meters wide and super dense.
1Jordan
My rough calculation says that the density would need to be about a million times greater than Earth's, around 10^10 kg/m^3. This is too low to be a neutron star, but too high to be anything else, I think. It may very well be impossible in this universe. That's assuming uniform density, though. Of course, you could just have a micro black hole with a hard 1-meter-diameter shell encasing it. How you keep the shell centered is... trickier.
2SoullessAutomaton
Similarly, my quick calculation, given an escape velocity high enough to walk and an object 10 meters in diameter, was about 7 * 10^9. That's roughly the density of electron-degenerate matter; I'm pretty sure nothing will hold together at that density without substantial outside pressure, and since we're excluding gravitational compression here I don't think that's likely. Keeping a shell positioned would be easy; just put an electric charge on both it and the black hole. Spinning the shell fast enough might be awkward from an engineering standpoint, though.
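The figure is straightforward to reproduce (a sketch, assuming "an escape velocity high enough to walk" means roughly 10 m/s, i.e. a jump or sprint won't launch you):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
v_esc = 10.0       # assumed escape velocity, m/s
r = 5.0            # radius of a 10 m diameter body, m

# v_esc = sqrt(2GM/r)  =>  M = v_esc^2 * r / (2G)
M = v_esc**2 * r / (2 * G)
density = M / ((4 / 3) * math.pi * r**3)
print(f"{density:.1e} kg/m^3")  # ~7.2e9 kg/m^3
```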
5wnoise
This won't work for spherical shells and uniformly distributed charge for the same reason that a spherical shell has no net gravitational force on anything inside it. You'll need active counterbalancing.
6SoullessAutomaton
Ah, true, I didn't think of that, or rather didn't think to generalize the gravitational case. Amusingly, that makes a nice demonstration of the topic of the post, thus bringing us full circle.
0Baughn
Would it be possible to keep the black hole charged (use an electron gun), then manipulate electric fields to keep it centered? I don't know enough physics to tell.
1wnoise
Yes, this could work.
1wedrifid
Well, even more technically, 'may be at orbital velocity, depending on where on the moon the astronaut is standing'.
0JamesAndrix
Pesky mountains.
2wnoise
That and varying latitude.
Hook

Someone posted a while back that only a third of adults are capable of abstract reasoning. I've had some trouble figuring out exactly what it means to go through life without abstract reasoning. The "heavy boots" response is a good example.

Without abstract reasoning, it's not possible to form the kind of theories that would let you connect the behavior of a pen and an astronaut in a gravitational field. I agree that this is an example of lack of ability, not compartmentalization. Of course, scientists are capable of abstract reasoning, so it's still possible to accuse them of compartmentalizing even after considering the survey results.

6RobinZ
I instantly distrusted the assertion (it falls into the general class of "other people are idiots" theories, which are always more popular among the Internet geek crowd than they should be), and went to the linked article. This already suggests that the data should be noisy. I can think of at least two problems:

1. The test only determines, at best, what methods the individual used to solve this particular problem - and, at worst, what methods the individual claims to have used to solve the problem.

2. The accuracy of the test may be greatly reduced by its paper-and-pencil administration. Any confusion on the part of either the evaluators or the takers will obscure the data.
0Hook
The 32% number does seem low to me. Even if the number is more like two thirds of adults are capable of abstract reasoning, that still leaves enough people to explain the pen on the moon result. Is compartmentalization applying concrete (and possibly incorrect?) reasoning to an area where the person making the accusation of compartmentalization thinks abstract reasoning should be used?

For those well-versed in physics, it seems mind-numbingly bizarre to hear someone claim that the Moon's gravity isn't enough to affect a pen, but is enough to affect people wearing heavy boots. But as for some hypothetical person who hasn't studied much physics... or screw the hypotheticals - for me, this sounds wrong but not obviously and completely wrong. I mean, "the pen has less mass, so there's less stuff for gravity to affect" sounds intuitively sorta-plausible for me, because I haven't had enough exposure to formal physics to hammer in th

... (read more)
7bentarm
This is surely also true on the moon? The relative densities of the pen and the fluid you put it in don't change depending on the gravitational field they're in.
2pengvado
Gravity affects pressure affects density. To a first approximation, gases have density directly proportional to their pressure, and liquids and solids don't compress very much. With air/water/pen the conclusion doesn't change. But an example where it does: A nitrogen atmosphere at STP has a density of 1251 g/m^3. A helium balloon at STP has a density of 179 g/m^3. The balloon floats. Then reduce Earth's gravity by a factor of 10, and hold temperature constant. The atmospheric pressure reduces by a factor of 10, so its density goes to 125 g/m^3. But the helium can't expand likewise (assume the balloon is perfectly inelastic), so it's still 179 g/m^3. The balloon sinks.
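A quick check of those densities via the ideal gas law, rho = PM/(RT), taking STP as 0 °C and 101325 Pa (standard molar masses; nothing here beyond textbook values):

```python
R, T, P = 8.314, 273.15, 101325.0   # J/(mol K), K, Pa
M_n2, M_he = 0.0280, 0.0040         # kg/mol

rho_n2 = P * M_n2 / (R * T) * 1000  # ~1250 g/m^3
rho_he = P * M_he / (R * T) * 1000  # ~179 g/m^3

# At a tenth of the gravity, the weight of the air column -- and hence the
# pressure and density of the atmosphere -- drops tenfold, to ~125 g/m^3.
# The rigid balloon stays at ~179 g/m^3, denser than the air, so it sinks.
print(rho_n2, rho_he, rho_n2 / 10)
```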
2byrnema
Hmm. I actually don't know the relationship between gravity and buoyancy -- a moment with Google and I'd know, but in the meantime I'm in the position of relating to all those people who answered incorrectly.
2RobinZ
Another unobvious fact is that the force that holds up a floating object is also tied to weight - specifically, the weight of the atmosphere or liquid. Even if the atmosphere on the Moon were precisely as dense as the Earth's (it is not), the pen and the air would be lighter in the same proportion, and the pen would still fall. Edit: i.e. what bentarm said.

Quite convincing, thanks. I'll want to think about it more, but perhaps it would be a good idea to toss the word out the window for its active connotations.

ISTM, though, that there is a knack for cross-domain generalization (and cross-domain mangling) of insights, that people have this knack in varying degrees, and that this knack is an important component of what we call "intelligence", in the sense that if we could figure out what this knack consists of, we'd have solved a good chunk of AI. Isn't this a major reason why Hofstadter, for instance,... (read more)

6Academian
I think what's sometimes called a "compartment" would be better called a "cluster". Learning consists of forming connections, which can naturally form distinct clusters without "barriers" causally separating them. The solution is then to simply connect the clusters (realize that the moon landing videos are relevant). But certainly at times people erect intentional barriers to prevent connections from forming (a lawyer effortfully trying not to connect his own morals to the case), and then I would use the term "compartment". Identifying the distinction between clusters and compartments could be a useful diagnostic goal.
3Paul Crowley
I'd assumed that was because the focus was not on how to build an AGI but on how you define its goals.
0komponisto
Why? It's still just as much of a flaw if it's a passive phenomenon. To make an analogy with some literal overlap, some people are creationists because they don't know any science, and others are creationists despite knowing science. Should we avoid using the term "creationist" for the first group? I think not.

Compartmentalization is still compartmentalization, whether it's the result of specifically motivated cognition, or just an intellectual deficiency such as a failure to abstract. (In fact, I'd venture that motivated thought sometimes keeps people from improving their intellectual skills, just as religiously-motivated creationists may deliberately avoid learning science.)

Honestly, I think this is mainly just a result of the personalities of the folks who happen to be posting. Creativity and analogy-making were often discussed in Eliezer's OB sequences; posts by Yvain and Alicorn also seem to have this flavor.
1Morendil
I would appreciate it if you'd point me to them, if you can think of any examples offhand. I'll have another look-see later to check on my (possibly mistaken) impression. Just not today - I'm ODing on LW as it is. Is it just me, or has the pace of top-level posting been particularly hectic lately?
6thomblake
It is not just you.
1Kaj_Sotala
I considered delaying this post for a few days until the general pace of posting had died down a bit, but then I'm bad at delaying the posting of anything I've written.
1komponisto
Creativity. Analogy-making.
1Morendil
The second link isn't really about analogy-making as a topic within AI; it's more about "analogy as flawed human thinking". (And Kaj's post reminds us precisely that, given the role played by analogy in cognition, it may not fully deserve the bad rap Eliezer has given it.) The first is partly about AI creativity (and also quite a bit about the flawed human thinking of AI researchers). It is the only one tagged "creativity", and my reading of the Sequences has left me with the impression that the promise in the final sentence was left unfulfilled when I came to the end. I could rattle off a list of things I've learned from the Sequences, at various levels of understanding; they'd cover a variety of topics, but creativity would rank quite low. I mean, CopyCat comes up once in search results. If the topic of analogy within AI were discussed much here, I'd expect it to be referenced more often.

I didn't interpret your comment as expressing an expectation that there would be more discussion about analogical reasoning or creativity as a topic within AI; keep in mind, after all, that LW is not a blog about AI -- its topic is human rationality. (There is, naturally, a fair amount of incidental discussion of AI, because Eliezer happens to be an AI researcher and that's his "angle".) In this context, I therefore interpreted your remark as "given Eliezer's interest in AI, a subject which requires an understanding of the phenomena of analogies and creativity, I'm surprised there isn't more discussion of these phenomena."

I'll use this opportunity to state my feeling that, as interesting as AI is, human rationality is a distinct topic, and it's important to keep LW from becoming "about" AI (or any other particular interest that happens to be shared by a significant number of participants). Rationality is for everyone, whether you're part of the "AI crowd" or not.

(I realize that someone is probably going to post a reply to the effect that, given the stakes of the Singularity, rational thought clearly compels us to drop everything and basically think about nothing except AI. But...come on, folks -- not even Eliezer thinks about nothing else.)

0Morendil
Sorry I wasn't clearer first time around. Yes, rationality is a distinct topic; but it has some overlap with AI, inasmuch as learning how to think better is served by understanding more of how we can think at all. The discussions around decision theory clearly belong to that overlapping area; Eliezer makes no bones about needing a decision theory for FAI research. Analogy in the Hofstadterian sense seems underappreciated here by comparison. To my way of thinking it belongs in the overlap too, as Kaj's post seems to me to hint strongly.
0wnoise
Eliezer doesn't want to publish any useful information on producing AI, because he knows that that will raise the probability (extremely marginally) of some jackass causing an unFriendly foom.
8Jack
It seems like it would also raise the probability (extremely marginally) of Eliezer missing something crucial causing an unFriendly foom.
0thomblake
Remember, there is a long tradition here, especially for EY, of usually not referring to any scholarly research.

Nitpick: "If a pen is dropped on A moon"
It doesn't specify Earth's moon. If a pen were dropped on, say, Deimos, it might very well appear to do B) for a long moment ;) (Deimos is Mars' outermost moon, too small to retain a round shape. Its surface gravity is only 0.00256 m/s^2 and its escape velocity is only 5.6 m/s. That means you could run off it.)
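Those Deimos figures check out against the standard formulas (a sketch; the mass and mean radius are published values, used here as inputs):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
M = 1.48e15     # mass of Deimos, kg
r = 6.2e3       # mean radius of Deimos, m

g = G * M / r**2                  # ~0.0026 m/s^2
v_esc = math.sqrt(2 * G * M / r)  # ~5.6 m/s
print(f"g = {g:.5f} m/s^2, v_esc = {v_esc:.1f} m/s")
```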

On the other hand, the word "dropped" effectively gives the game away. Things that are dropped go DOWN, not up, and they don't float in place. It would be better to say "released".

And now, back to our story...

I don't believe it was assumed that compartmentalization is something you actually "do" (make effort towards achieving). Making this explicit is welcome, but assuming the opposite to be the default view seems to be an error.

I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected.

Many people (probably more people) make the same mistake when asked 'which falls faster, the 1 kg weight or the 20 kg weight?'. I guess this illustrates why compartmentalization is useful. False beliefs that don't matter in one field can kill you in another.

1[anonymous]
The example you use is in my opinion not a failure of compartmentalization but of communication. Humans will, without fail, due to possessing sufficiently optimised time-saving heuristics, assume when talking to a nonthreatening, nondescript and polite stranger like yourself that you are a regular person (the kind they normally interact with) talking about a situation that fits their usual frame of reference (taking place on a planetary surface, reasonable temperature range, normal g, one atm of pressure, oxygen present enabling combustion etc.), except when you explicitly state otherwise.

Taking two weights of different mass (all else being equal) and dropping them will not result in "neither falling faster". To see why, consider the equation for terminal velocity (which is not considering buoyancy): v_t = sqrt(2mg / (rho * A * C_d)), where rho is the density of the fluid, A the projected area of the object, and C_d its drag coefficient. They of course won't think about it this way, and even if they did, they would note that over a "normal" distance before hitting the ground, t1 and t2 would be about the same, if not really equal.

The rather cringeworthy approximation comes when they unintentionally assume a sloppiness of communication on your part (we leave out all except the most important factors when asking short questions) and that you really meant that a few other things besides mass are not equal (since the everyday things they handle that have radically different masses from each other are rarely if ever identical in shape or volume). The reason it is cringeworthy is not that it's a bad assumption to make in their social circle, but that their social circle is such that they don't have enough interactions like this to categorize the question under "sciencey stuff" in their head!

PS: I just realized you may have mistyped and meant the old "What is heavier, 10 kg of straw or 10 kg of iron?", which illustrates the point you are trying to make a bit better (I actually got the wrong answer when saying my mind out right away at the te
7wedrifid
No. I meant what I wrote. The thing with the straw and/or feathers is just word play, a communication problem. I am talking about an actual misunderstanding of the nature of physics. I have seen people (science teacher types) ask the question by holding out a rock and a scrunched-up piece of paper and asking which will hit the ground first when dropped. There is no sophistry - the universe doesn't do 'trick questions'. Buoyancy, friction and drag are all obviously dwarfed here by experimental error. People get the answer wrong. They expect to see the rock hit the ground noticeably earlier. Even more significantly, they are surprised when both fall at about the same speed. In fact, sometimes they go as far as to accuse the demonstrator of playing some sort of trick and insist on performing the experiment themselves. The same kind of intuitive (mis)understanding of gravity would lead people to guess wrong about things like what would happen on the moon.
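Plugging illustrative numbers into the terminal-velocity formula from the parent comment makes the point quantitative (all object parameters below are assumptions for illustration, not measurements from any actual demonstration):

```python
import math

def terminal_velocity(m, A, Cd, g=9.81, rho=1.2):
    """v_t = sqrt(2mg / (rho * A * Cd)) -- rho is the air density, kg/m^3."""
    return math.sqrt(2 * m * g / (rho * A * Cd))

area = math.pi * 0.025**2  # ~5 cm diameter sphere, m^2
paper = terminal_velocity(m=0.005, A=area, Cd=0.5)  # scrunched paper, ~5 g
rock = terminal_velocity(m=0.200, A=area, Cd=0.5)   # rock, ~200 g
print(f"paper: {paper:.0f} m/s, rock: {rock:.0f} m/s")
# paper: ~9 m/s, rock: ~58 m/s. A drop from shoulder height only reaches
# ~6 m/s, well under both, so the two hit the ground at nearly the same time.
```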
3wnoise
Even better is the question "what weighs more, a pound of feathers, or a pound of gold?" Gur zrgny vf yvtugre -- vg'f zrnfherq va gebl cbhaqf, juvpu unir gjryir bhaprf gb gur cbhaq engure guna fvkgrra, naq n gebl bhapr vf nccebkvzngryl gur fnzr na nibveqhcbvf bhapr.
0Vladimir_Nesov
Feathers have lower density, so the same mass occupies greater volume, experiences greater buoyancy and weighs less.
0[anonymous]
Edit: I just realized a bit of bias on my part. I probably wouldn't have commented if you had used the SI unit for mass [kg], even though that is just as often used in non-scientific contexts to mean "what the scale shows" as pounds are. I completely misread what you actually wrote and just took the previous commenter's "what weighs more, a pound of feathers, or a pound of gold" into account. You explicitly refer to mass, so sorry if you read the unedited comment.
0Vladimir_Nesov
We have an ambiguity between whether the weight-measure refers to mass or to what the scales show. For two objects (gold and feathers) it is stated that one of these properties is the same, and the question is about the other property. From the context, we can't obviously disambiguate one way or the other. In such situations, assumptions are usually made to make the problem statement meaningful.
sk

I fail to understand how compartmentalization explains this. I got the answer right the first time. And I suspect most people who got it wrong did so because of the unwarranted assumptions they were making - meaning that if they had just looked at the question and nothing else, and if they understood basic gravity, they would've got it right. But if you also try to imagine some hypothetical forces on the surface of the moon, or relate it to zero-gravity images seen on TV etc., and you visualize all these before you visualize the question, you'd probably get it wrong.

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains.

I'm not sure about this. In your examples here, people are in fact completely lacking any full understanding of gravitation and/or (I suppose) knowledge of the masses of notable celestial objects in our solar system.

Now, I have to admit that the correct answer wasn't obvious to me at first.

I up-voted just for you admitting this in your example, but let's talk about this. ... (read more)

1wedrifid
And even without a full understanding of gravitation and the nitty gritty of what causes it, it would suffice to know 'gravity is basically acceleration'.

The way I solved the pen on the moon question is that I remembered the famous demonstration one of the Apollo astronauts did with a feather and hammer on the moon, and didn't think there should be a meaningful difference between those objects and a pen. I could've worked out the physics, but pattern-recognition was faster and easier. 

So my theory is that much of compartmentalization is simply because the search space is so large that people don't end up seeing that there might be a connection between two domains. Even if they do see the potential, or if it's explicitly pointed out to them, they might still not know enough about the domain in question (such as in the example of heavy boots), or they might find the proposed connection implausible.

If a person's knowledge is highly compartmentalized, and consists of these three facts:

  1. A human being walked across the moon.

  2. There are sm

... (read more)

Your "wrong but not obviously and completely wrong" line made me think that the "obviously and completely" part is what makes people who are well-versed in a subject demand that everyone should know [knowledge from subject] when they hear someone express obvious-and-complete ignorance or obvious-and-complete wrongness in/of said subject. I've witnessed this a few times, and usually the thought process is something like "wow, it's unfathomable that someone should express such ignorance of something that is so obvious to me. There sh... (read more)