by [anonymous]

Related: Fake explanation, Guessing the teacher's password, Understanding your understanding, many more

The mental model concept gets used so frequently and seems so intuitively obvious that I debated whether to bother writing this. But beyond the basic value that comes from unpacking our intuitions, it turns out that the concept allows a pretty impressive integration and streamlining of a wide range of mental phenomena.

The basics: a mental model falls under the heading of mental representations, ways that the brain stores information. It's a specific sort of mental representation - one whose conceptual structure matches some corresponding structure in reality. In short, mental models are how we think something works.
A mental model begins life as something like an explanatory black box - a mere correlation between items, without any understanding of the mechanism at work. "Flick switch -> lamp turns on", for example. But a mere correlation doesn't give you much of a clue as to what's actually happening. If something stops working - if you flick the switch and the light doesn't go on - you don't have many clues as to why. This pre-model stage lacks the most important and useful portion: moving parts.
The real power of mental models comes from putting something inside this black box - moving parts that you can fiddle with to give you an idea of how something actually works. My basic lamp model will be improved quite a bit if I add the concept of a circuit to it, for instance. Once I've done that, the model becomes "Flick switch -> switch completes circuit -> electricity flows through lightbulb -> lamp turns on". Now if the light doesn't go on, I can play with my model to see what might cause that, finding that either the circuit is broken or no electricity is being provided. We learn from models the same way we learn from reality, by moving the parts around and seeing the results.
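To make "fiddling with the moving parts" concrete, here is a minimal sketch (a toy illustration only; the part names are invented, and a real lamp has more parts than this) that treats each piece of the model as a toggle and enumerates which failures would leave the lamp dark:

```python
# A toy lamp model: each "moving part" is a boolean we can fiddle with.
def lamp_lights(switch_flicked, circuit_intact, power_on):
    # Flick switch -> switch completes circuit -> electricity flows -> lamp turns on
    return switch_flicked and circuit_intact and power_on

# The light is off even though the switch is flicked, so enumerate the
# remaining parts to see which failure states are consistent with that.
for circuit_intact in (True, False):
    for power_on in (True, False):
        if not lamp_lights(True, circuit_intact, power_on):
            print(f"possible cause: circuit_intact={circuit_intact}, power_on={power_on}")
```

Running it prints the two diagnoses from the text - broken circuit, no electricity - plus their combination, which is exactly the "play with the model and see what might cause that" step.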
It usually doesn't take much detail, or many moving parts, for something to "click" and make sense. For instance, I had a difficult time grasping the essence of imaginary numbers until I saw them modeled as a rotation, which instantly made all the bits and pieces I had gleaned about them fall into place. A great deal of understanding rests in getting a few small details right. And once the basics are right, additional knowledge often changes very little. After you have the basic understanding of a circuit, learning about resistance and capacitors and alternating vs. direct current won't change much about your lamp model. Because of this, the key to understanding something is often getting the basic model right - I suspect bursts of insight, a-ha moments, and magical "clicks" are often new mental models suddenly taking shape.
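To spell out the rotation model (standard complex-number arithmetic, not anything original to this post): multiplying by $i$ is exactly a quarter-turn counterclockwise rotation of the plane, since

$$i \cdot (a + bi) = -b + ai,$$

which sends the point $(a, b)$ to $(-b, a)$. More generally, multiplying by $\cos\theta + i\sin\theta$ rotates by the angle $\theta$ - one small detail that makes facts like $i^2 = -1$ (two quarter turns make a half turn) fall into place.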
Now let's really open this concept up and see what it can do. For starters, the reason analogies and metaphors are so damn useful (and can be so damn misleading) is that they're little more than pre-assembled mental models for something. Diagrams derive their explanatory power from essentially the same principle. Philip Johnson-Laird has formulated the processes of induction and deduction in terms of adjustments made to mental models. And building from the scenario concept used by Kahneman and Tversky, he's formulated a method of probabilistic thinking with them as well. Much of the work on heuristics and biases, in fact, either dovetails very nicely with mental models or can be explained by them directly.
For example, the brain seems to have a strong bias towards modifying an existing model rather than replacing it with a new one. In real life, "updating" often means "changing your underlying model", and the fact that we prefer not to leads us to make systematic errors. You see this writ large all the time with (among other things) people endlessly tweaking a theory that fails to explain the data rather than throwing it out - Ptolemy's epicycles would be the prototypical example. Confirmation bias, various attribution biases, and various data-neglect biases can all be interpreted as favoring the models we already have.
The brain's favorite method for building models is to borrow parts from something else it already understands. Our best and most extensive experience is with objects moving in the physical world, so our models are often expressed in terms of physical objects moving about. They are, essentially, acting as intuition pumps. Of course, all models are wrong, but some are useful - as Dennett points out, converting problems into instances of something more familiar often lets us solve them much more easily.
One of the major design flaws of mental models (aside from the biases they induce) is that they always tend to feel like understanding, regardless of how many moving parts they have. So, for example, if the teacher asks "why does fire burn" and I answer "because it's hot", it feels like a real explanation, even if there aren't any moving parts that might explain what 'burn' or 'hot' actually mean. I suspect a bias towards short causal chains may be involved here. Of course, if the model stops working, or you find yourself needing to explain yourself, it becomes quite obvious that you do not, in fact, have the understanding you thought you did. And unpacking what turns out to be an empty box is a fantastic way to trigger cognitive dissonance, which can have the nasty effect of entrenching your flawed model even deeper.
So how can we maximize our use of mental models? Johnson-Laird tells us that "any factor that makes it easier for individuals to flesh out explicit models of the premises should improve performance." Making clear what the moving parts are, and couching them in terms of something already understood, will help us build a better model - and a better model is equivalent to a better understanding. Again, this is not particularly groundbreaking - any problem-solving technique will likely embody the same insights.
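As a toy illustration of "fleshing out explicit models of the premises" (a sketch in the spirit of Johnson-Laird's model theory, not his actual procedure - the representation here is a deliberate simplification), we can enumerate every state of affairs consistent with the premises and check the conclusion against all of them:

```python
from itertools import product

# Sketch of model-based deduction: build every explicit "model"
# (assignment of truth values) consistent with the premises, then
# check whether the conclusion holds in all of them.
def follows(premises, conclusion, variables):
    models = []
    for values in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(premise(model) for premise in premises):
            models.append(model)
    return all(conclusion(model) for model in models)

# Premises: "if it rains, the grass is wet" and "it is raining".
premises = [lambda m: (not m["rain"]) or m["wet"],  # rain -> wet
            lambda m: m["rain"]]                    # it rains
print(follows(premises, lambda m: m["wet"], ["rain", "wet"]))  # True
```

On this picture, anything that shrinks or organizes the set of models to be checked - fewer variables, a diagram that rules out states in advance - is a factor that "makes it easier to flesh out explicit models", which is the quoted prediction.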
Ultimately, the mental model concept is itself just a model. I'm not familiar enough with the psychological literature to know if mental models are really the correct way to explain mental functions, or if it's merely another in a long list of similar concepts - belief, schema, framework, cognitive map, etc. But the fact that it's intuitively obvious and that it explains a large swath of brain function (without being too general) suggests that it's a useful concept to carry around, so I'll continue to do so until I have evidence that it's wrong.

23 comments

So, for example, if the teacher asks "why does fire burn" and I answer "because it's hot", it feels like a real explanation

Looks like a real explanation to me: you can use it to predict that non-hot things do not cause burns, and that non-fire hot things (for instance boiling water) do cause burns.

It's a sufficient theory to avoid burns by moving your hand in slowly until you feel some heat, rather than directly grasping things that might be hot.

If you observe that friction heats things up, and you are very persistent, you will be rewarded with a way of making fire. The model is getting more sophisticated ("fire is hot; hot things burn; rubbing things heats them; hot things catch on fire") and perhaps at this point starts deserving the "mental model" tag.

You'd start needing moving parts when asked to predict e.g. what boiling water will do to a piece of paper. "Burn" is then wrong, for non-trivial reasons.

I think the quoted sentence used intransitive "burn" ("the fire is burning", etc.), not transitive ("fire burns skin").

You should've waited for the article to get promoted, and it seems it's not going to be. Currently, the concept is not clearly explored or motivated on the blog.

I think mental models are interesting for being mechanisms, rather than merely classes and instances. I suspect we may refer to the concept again, in which case a brief definition on the Wiki linking a straightforward elaboration such as this is useful.

(Agree with what you are saying and add my take on it.)

I would say that the concept is explored, but this nomenclature isn't established as a dominant standard (nor expressed powerfully in this post). Part of the problem is that the post is written submissively, and by an author without established status. We don't feel obligated to engage with him inside his way of carving reality, even though that carving isn't particularly controversial as a description of how things work.

We already have the word 'map', of 'map is not the territory' fame. The way (specifically human) 'mental models' would differ from, and perhaps constitute, maps of the territory is something that would need to be explored. But as you say, we just don't seem to have the motivation to do so. The author acknowledges this in the first paragraph. In fact, that very paragraph more or less primes us to be unmotivated to explore, while the final paragraph unintentionally gives us an excuse not to do so!

[anonymous]

Thanks for the helpful comment.

As for the writing style, I was very deliberately trying to express my exact state of knowledge (since if I don't do this I tend to get myself into trouble), but it seems like I might have let this lead me to a somewhat uninteresting place.

my exact state of knowledge

You did that well. It is perhaps unfortunate that people respond more positively to confident assertions than to well-calibrated ones!

(since if I don't do this I tend to get myself into trouble)

Sometimes the trick is to do things that might get you into trouble. You may get into trouble because confident assertions from a (not yet) dominant individual feel like a status incursion that needs to be beaten down. In those cases, pushing the limits of the status people assign you, even though you may experience trouble, is a rather direct way of making people give up giving you trouble. You may also get into trouble because you make a mistake. In that case people will be eager to correct you, which leaves both you and the other readers better informed.

but it seems like I might have let this lead me to a somewhat uninteresting place.

I sometimes suspect that 'interesting' has a lot to do with the expected social reward or penalty for paying attention to or ignoring the speaker in the context. I have a hunch that if, while you were writing your post, you had imagined yourself speaking in a deep, slow and firm voice, that would have come across in your writing style and made a difference in the mind of the reader.

By the way, I am interested in how you think your 'mental models' fit in with 'maps of the territory'. I'd like to hear how it fits in with your mental model of mental models. I obviously have my own ideas, but of course I absorbed your 'mental model' description by hacking it onto the conception I already have. ;) Maybe you see it differently!

Is "mental model" not an ordinary/transparent term? Is "map" actually any different?

[anonymous]

A map is generally a specific sort of model, but not all models are maps. It's much harder to extend the map metaphor into something that actually resembles human thought processes.


I took it to be more specific: a reference to the way humans actually build and internally represent the abstractions that they use as a map.

I don't think I'm really clear on what sort of map would not constitute a mental model.

Say... one represented as a lookup table the size of Jupiter. We don't think like that.

Hm, at the risk of getting facial egg, how would you say it compares to my recent hierarchy of understanding, which got to +40 and gives a useful organization of epistemic states long discussed on this site?

Slightly different topic, but your hierarchy of understanding is clear, easy to read, and well integrated with cultural knowledge. (Also +41 now that I've read it.)

Thanks! But I meant, how does it compare in terms of worthiness to be included in the wiki in some capacity?

Well, I'd say clear, easy to read, and well integrated with cultural knowledge make good criteria for wiki inclusion. Do you think it is the kind of thing that would be useful to link to? That's more or less what I use the wiki for. And I can imagine myself linking to your hierarchy at times.

(It was already added to the page Understanding.)

[anonymous]

Large parts of it are isomorphic - at least three of the levels seem to correspond closely with chain-, spoke-, and network-type mental models, which I (perhaps regrettably) didn't go into here.


The brain's favorite method for building models is to take parts from something else it already understands.

(Non-rhetorically - I think you are right, but:) What predictions does this make that differ from what we would predict if the brain didn't particularly like recycling?

Kaj recently talked to us about compartmentalizing. At first glance, compartmentalization seems to be the opposite of what you are saying here. How do these two concepts play together, and when does each apply?

[anonymous]

As far as predictions go: reusing concepts from elsewhere might let you make predictions about something quickly but with limited accuracy, and then have a hard time improving on them. Without recycling, understanding might take longer to build but ultimately allow for more accurate predictions. Recycling seems to favor good-enough solutions.

The compartmentalization parallel is interesting - I'll have to think more about it.