To begin, here are some Fun Psychology Facts:  

People who were asked to describe a face after seeing it are worse at recognizing the same face later.

People who are asked to describe a wine after drinking it are worse at recognizing the same wine later.

People who are asked to give reasons for their preferences among a collection of jellies are worse at identifying their own preferences among those jellies.

 

This effect, known as Verbal Overshadowing, occurs primarily when a principally non-verbal process is disrupted by a task that involves verbalization.  The above generalizations (and Verbal Overshadowing effects more generally) do not occur among what we can term "Verbal Experts": individuals who are as good at verbalizing the relevant process as they are at doing it implicitly or automatically.  This seems like it will be very important to keep in mind when cultivating our own Rationality.

Here's an oversimplified picture of what this means:  We've got an implicit facial recognition process, IFRP, which is pretty good.  We've also got a generalized explicit verbal thinking process, GEVTP, which is good for lots of things but isn't especially good at recognizing faces.  Normally, IFRP is in charge of facial recognition, but there are some things we can do, like trying to put a face into words, that wake up GEVTP, which then muscles IFRP out of the way, and all of a sudden we are a lot worse at recognizing faces.

The good news is that GEVTP can be trained.  To take the wine case, people who put in the time and effort can become verbal experts about wine.  This isn't to say they automatically have better judgments about wine.  Rather, it means that their GEVTP is on par with their implicit wine recognition, because it has been trained to do the same quality job as the implicit process.

As a crude metaphor, imagine the difference between the natural process by which you go about walking, versus having to consciously issue each and every instruction that needs to be sent to your various joints and muscles.

Now, obviously the specific studies mentioned are important for wine tasting, eye-witness identification, or determining one's own jelly preferences, but the phenomenon of Verbal Overshadowing has a much larger, more systematic importance for the Art of Rationality.

Let's bridge to the broader point with a quote from David Hume, a man whose insights were often far ahead of their time: "I shall add [...] that, as this operation of the mind, by which we infer like effects from like causes, and vice versa, is so essential to the subsistence of all human creatures, it is not probable, that it could be trusted to the fallacious deductions of our reason, which is slow in its operations; appears not, in any degree, during the first years of infancy; and at best is, in every age and period of human life, extremely liable to error and mistake. It is more conformable to the ordinary wisdom of nature to secure so necessary an act of the mind, by some instinct or mechanical tendency, which may be infallible in its operations, may discover itself at the first appearance of life and thought, and may be independent of all the laboured deductions of the understanding. As nature has taught us the use of our limbs, without giving us the knowledge of the muscles and nerves, by which they are actuated; so has she implanted in us an instinct, which carries forward the thought in a correspondent course to that which she has established among external objects; though we are ignorant of those powers and forces, on which this regular course and succession of objects totally depends."

In short, Hume is saying that, in the field of inference and reasoning, our Implicit Reasoning Process often outpaces our GEVTP.  I'm not suggesting that our implicit reasoning is perfect (it is, after all, fraught with its own biases), but, supposing that Verbal Overshadowing is a general phenomenon, it would appear that, with respect to our reasoning and inferences more generally, our situation is one in which trying to talk about what we are doing is liable to mess us up.

The obvious suggestion, then, is that we become verbal experts on the subject, so that our thinking about rationality doesn't mess up our thinking rationally.

"Aha," I hear you all say, "then your advice is unnecessary, for what is it that we Rationalists are already doing, if not training ourselves to think explicitly about rationality?"  And that would be a good reply, but for one crucial fact: we are not training ourselves correctly to become verbal experts.

One does not become a verbal expert about wine by tasting only strange vintages or the wine of abnormal grapes.  One does not become a verbal expert about facial recognition by practicing only on the stunningly gorgeous or the hideously deformed.  And likewise, one does not become a verbal expert on Rational thinking by focusing on the edge cases (e.g. Epistemic Prisoner's Dilemmas, Gettier Cases, High Stakes scenarios, etc.).  Verbal Experts get trained, primarily, on the paradigms.

In fact, the studies on Insight Puzzles in particular (i.e. verbal overshadowing with respect to explaining the actual process by which one achieved the solution to a problem), suggest that those of us who engage in verbalization tasks relating to our reasoning and inferences (say, those of us dedicating a lot of time and energy to writing posts or comments about it), had better figure out how to train our Generalized Explicit Verbal Thinking Process not to drop the ball when it comes to thinking about reasoning.

I am not a psychologist, but I do know that our current plan (of, for example, thinking about the brainteaser cases), is definitely not the way to develop actual expertise.

24 comments

Both of the studies linked to at the top of this post, on which the entire post is based, have been discredited. Even if they were true, I think it was a stretch to go from those to postulating a generalized verbal overshadowing bias.

With the benefit of hindsight I can say that this post was probably a mistake, which leaves me a bit dumbfounded at its karma score of 61 and endorsement by Newsome. When I scrolled down to the bottom I saw that I had already downvoted it, which made me even more confused.

Both of the studies linked to at the top of this post, on which the entire post is based, have been discredited.

Where? Was this after the time that the post was written?

wnoise:

It's not clear that the effect is really there, and certainly isn't as strong as originally thought:

http://devoid.blogs.heraldtribune.com/11438/the-decline-effect-haunts-science/

U.Cal-Santa Barbara psychology professor Jonathan Schooler has a problem. The certitude of a phenomenon that made him a rock star in academic circles — he called it “verbal overshadowing,” and he published the results 20 years ago — is beginning to break down. And its fragility is calling the entire scientific method into question.

This is very interesting, as the concept of verbal overshadowing seems to me to make a lot of sense of a lot of things. Finding it's actually wrong would be useful too. Thank you.

It occurs to me that verbal overshadowing of feelings may be some of what people are complaining of when they consider explaining to constitute explaining away. It's a mechanism by which the explanation does, in fact, overshadow the feelings.

I am not a psychologist, but I do know that our current plan (of, for example, thinking about the brainteaser cases), is definitely not the way to develop actual expertise.

So what do the studies you read say about the right way to develop verbal expertise?

From what I've read, one needs to train oneself on paradigm cases. So, for example, with wine tasting, you develop your verbal acuity by learning how to describe fairly ordinary wines.

I don't know how to port this strategy over to verbal acuity for rationality.

I don't know how to port this strategy over to verbal acuity for rationality.

Perhaps by vocalising simple logic? When you make a simple decision, such as "I'm going to walk to work today instead of catching the bus", go over your logic for the decision, even after you've started walking, as if you're explaining your decision to someone else. I often do this (not out loud, but as a mental conversation), just for something to pass the time, and I find that it actually helps me organise my thoughts and explain my logic to other real people.

(Edit) You can pretty much ignore this comment, but you should read pangloss' responses below. I found them enlightening.

I am not a psychologist, but I do know that our current plan (of, for example, thinking about the brainteaser cases), is definitely not the way to develop actual expertise.

Personally, I think there are two extremes at studying something like this. One is to work from the edge cases toward the general/common cases and the other is to work from the general/common cases toward the edges. Some gradients in the middle certainly apply.

My hunch is that our perceptions of the common cases have been painted over by too many other ways of life/thinking. Rationality can have trouble getting its point through the layers. The edge cases make more sense to flesh out because the edge cases are explicitly designed to force specific ways of life/thinking into choosing between one or more well-defined options.

Edge cases also have the advantage of Jargon. We can make up our own words to mean something that we were not able to express previously and, by definition, be correct. While this is less amusing than describing it in layman's terms, it certainly gets the problem defined. Working from the edges with new descriptions is a long process and one easy enough to mess up. Pigeonholing is a great risk, and describing large swaths of general/common cases with a new word is dangerous.

General cases have the advantage that they can provide a foundation that will instantaneously apply to related cases. The network can build faster but the definitions may not be as clear-cut. A word used to describe wine can be used to describe faces (e.g. bitter) and people will be able to follow the conversation. The danger this time is generalizing terms and cases so that we are no longer able to distinguish between specific concepts.

All that being said, language is nifty. Learning how to work with it correctly would certainly be useful.

(Note) I am making a fine distinction between pigeonholing and generalizing that may exist only in my head. I see pigeonholing as taking large numbers of varied things and trying to separate them into ill-defined categories that force more variance than is really necessary. I see generalizing as taking large numbers of varied things and slopping them into categories that begin to erase the variances. If someone else knows better terms for this, please let me know.

I think the question about which cases to focus on when forming theories is different from the question of which cases to use to train oneself to verbalize one's thoughts without interfering with one's thinking. The latter requires us to train on paradigms, the former may be something we can pursue in either direction.

This is crucial: The thought isn't to presuppose which direction our theorizing should go, but rather to make sure that when we theorize, we aren't tripping ourselves up.

Mmm, very good point. Strangely, now that I think about it, this sounds very similar to the concept of the highest principle:

You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.

In the comparison between Rationality Recognition and Face Recognition, what is the Rationality Recognition equivalent of sight?

It depends. Sometimes it will be sight or our other senses, sometimes it will be memory, sometimes it will be testimony.

Think about it this way: we take in information all the time, and draw conclusions from it. "Sight" isn't playing a key role in face recognition except to provide the data; you have a mental program for matching visual face data to previous visual face data, and that program gets screwed up if you start thinking through a description of the face after you see it.

Similarly, you see a room full of objects and events. You've got one or more "draw conclusions" programs that run on the data you see, and those programs can get screwed up by putting into words things that you don't normally verbalize.

The data on insight puzzles shows that if you do manage to draw the right conclusions, and you try to put into words how you did it, you may get screwed up in the following way: you are confident in explanation A for how you drew the conclusion, when, in actuality, the truth is radically different explanation B.

My claim isn't about rationality recognition per se, it is simply this: psychology has shown that verbalizing can screw us up when dealing with a process that isn't normally done verbally. And a lot (if not most) of our inferential processes are not done in this explicitly verbalized manner (verbalized doesn't necessarily mean spoken aloud, but just 'thinking through in words').

My claim is that there are known ways to get good at verbalizing non-verbal processes, and they involve training on paradigmatic cases. It is only after such training that one can start thinking about edge cases and the borderlands without worrying that the process of discussing the cases is corrupting their thinking about the cases.

Before we can advance rationality by discussion, we must first learn to discuss rationality.


Understood. Thanks for the clarification. Going back and rereading the article after these comments made a few more lights click on in my head.

So, where do we start?

I guess we find out how to acquire verbal expertise in a given domain, and do so for rationality, reasoning, and inference.

... this seems to explain why, if you're a music critic too long, you start to forget why you liked music.

(I still can't go to a gig without having a notebook handy, and can hardly remember anything that I haven't actually taken notes on right there and then.)

This has answered a decades-old mystery for me. Thank you.

So that's why I write half a comment, realize that's not what I'm thinking, delete it, repeat so often!

Regarding the wine experts: if I understood correctly, their recognition of the same wine later is not impaired by their verbal description of its taste. But I wonder how accurate their description is; are there even the right words to describe the taste of a wine? I suspect that they have just built up some standard associations of which word to attribute to which taste, and then just regurgitate them. If you train this a lot, you can probably do it on autopilot and therefore don't have to really think, so your taste memory is not impaired. That would be my ad-hoc explanation. What do you think?

PS: the "doing without thinking" part would be in contrast to non-experts who would have to deliberately reason and look for the correct words to describe the taste.

I mean, I don't know if "woody" or "dry" are the right words, in terms of whether they invoke the "correct" metaphors. But, the point is that if you have vocabulary that works, it can allow you to verbalize without undermining your underlying ability to recognize the wine.

I think the training with the vocabulary actually augments verbally mediated recall, not that it turns off the verbal center, but I'm not sure of the vehicle by which it works.

I think most of the wine experts who work on their verbal ability to describe wines are wine reviewers who read other people's wine reviews. I would guess that that means they develop a common vocabulary. In that sense, I'd presume that they'd be almost as good at recognizing a wine from another expert's description as their own. Or at least that's what I'd want to verify in order to see if they were describing something that's in the wine rather than some idiosyncratic feature that's salient to one but unnoticed by others.

I think the implications for rationalists who want to train their verbal abilities are obvious, but I'll say it anyway. If you want to train your verbal abilities so what you say about rationality doesn't cloud your non-verbal understanding, you have to write about rationality and read what others write about it, and do your best to see that you're talking about the same thing.

In fact, the studies on Insight Puzzles in particular (i.e. verbal overshadowing with respect to explaining the actual process by which one achieved the solution to a problem), suggest that those of us who engage in verbalization tasks relating to our reasoning and inferences (say, those of us dedicating a lot of time and energy to writing posts or comments about it), had better figure out how to train our Generalized Explicit Verbal Thinking Process not to drop the ball when it comes to thinking about reasoning.

See also this comment, about verbal overshadowing of other types of thought processes, such as the ones involved in motivation and emotional belief.

[anonymous]:

Good post. Probably the most important part of rationality, as far as I can see, is knowing how your brain actually works. Knowing its biases is a big part of that, but it also includes having a rough idea of the different structures in your brain, what they do, and how they interact with each other.

Of course, we all have different top-level circuitry, which makes this rather difficult, and largely the product of rigorous self-examination. (As far as I can tell, our low-level circuitry is more or less universal).