What can existing research tell us about teaching and learning key rationalist skills? One central idea is the monitoring and adjustment of mental processes, which appears in various forms in the LW community, debiasing research, education research concerning metacognition, and cognitive-behavioral and related therapies. I expect that rationality training can benefit from fully realizing the connections between these fields, which some mainstream research is already beginning to do. In another post, I discuss a self-experiment in noticing confusion based on these ideas.

Debiasing

In Is Rationality Teachable?, Luke noted some promising successes in teaching statistics and logic, as well as in the growing body of research on debiasing. The main lines of research consider debiasing techniques that are either socially administered (incentives, accountability, group decision making) or imposed "top-down" as rules ("consider the opposite"). While individuals can internalize and use the latter techniques on their own, their top-down, domain-general nature inhibits transfer (their conversion from declarative to tacit knowledge, or migration from System 2 to System 1).1

A better debiasing technique from the individual perspective would teach someone to notice the feeling of distorted cognition and trigger corrective action: to notice mental flinching, rationalization, or confusion, and then to bring it to full conscious attention and act on it appropriately. I currently see that as the most important component of what Eliezer's getting at with The 5-Second Level.2 3

Larrick (2004) anticipated this idea in his speculation on future directions for debiasing research:

[One direction] comes from the growing focus on how affect, motivation, and self-esteem influence decision making (see Larrick, 1993; Payne & Bettman, Chapter 6, this volume; Rottenstreich & Shu, Chapter 22, this volume). Identifying debiasing techniques for affect-based biases is a promising new area – What interventions help people cope effectively with emotion endogenous to a decision, such as anticipated regret? Or help them recognize and discount emotion that is extraneous to a decision, such as anger from some unrelated experience? The answers may bring decision research surprisingly close to clinical psychology, such as techniques used in cognitive-behavioral therapy [emphasis mine].

Cognitive Behavioral Therapy

How has that speculation fared? It seems that others have made the same connection, but that progress has been limited. Leddy et al. (2013) provide an overview of decision science (DS) and cognitive behavioral therapy (CBT), and finally reflect on the similarities:

The cognitive patterns that DS and CBT have identified and labeled ["biases" and "cognitive distortions," usually] are their respective building blocks; however, there are numerous similarities between the patterns each field has identified, despite having arrived at different terminology...

A difference in the fields of CBT and decision making is in the context, the research and application. CBT is an applied field that incorporates the client's environment. CBT encourages the client to self-monitor, challenge, and modify their thoughts in their own environment, with the goal being for clients to generalize skills learned in the therapy room to real-life contexts. However, a majority of the work in decision science has been conducted in isolation from the context in which real-life decisions take place. While more work is being done in medical, legal, and other fields, there are still limitations to this research because numerous immeasurable variables are difficult to control for in applied research settings (e.g., previous experience, social influences). It is also unclear whether decisions made or skills used in the research setting would be utilized in the real-world.

Based on the parallels between biases and cognitive distortions, we suggest that CBT practitioners keep in mind various cognitive biases to supplement discussions of cognitive distortions... Conversely, utilizing the techniques of cognitive-behavioral therapy in decision science may be beneficial; the implementation of the cognitive-behavioral techniques of goal setting, cognitive-behavioral assessment, self-monitoring, cognitive restructuring, exposure, behavioral experiments and relapse prevention may improve decision outcome.

The authors then point out research on "cognitive bias modification,"4 which looks to be pointed in a good direction concerning the teaching and learning of mental habits to counter cognitive biases, as well as some other work in clinical psychology with a decision-science perspective. "Self-monitoring" does come up in the above connection between CBT and decision science, but the authors don't particularly seize on the metacognitive aspect that seems to be the key commonality to me.5

I should perhaps note that CBT, while vastly better than nothing at all, is not necessarily the uniquely evidence-based therapy that many of us came to know it as. I mention this because it is at least slight evidence against the mechanism of action lying in patients' successfully learning specific mental skills, and in favor of (among other hypotheses) the effectiveness of simply talking to someone charismatic and empathetic. It could also be that different things work for different people.

What should we conclude? Perhaps that rationality training could end up looking in some ways more like therapy than like martial arts. It may be worth adapting therapeutic models specifically to debiasing.

Metacognition

Finally, the education research community seems to have its own body of work on similar concepts under different names. Metacognition plays a key role in learning, and it can be effectively taught.6 The term covers a number of important skills and processes, most of which we aren't interested in right now: declarative knowledge about study tactics, "offline" planning, retrospective evaluation of performance. These are roughly analogous to "top-down" techniques for debiasing.

But some metacognition research is concerned with real-time or "online" self-monitoring—that is, the "metacognitive experiences" associated with ongoing cognitive processing—very nearly what we're looking for. In general it's difficult to address these processes directly, but they can still be indirectly scaffolded, engaged, and measured. For example, activities that require examining mental models (as opposed to tapping short-term memory alone) have been found effective in promoting calibration of confidence in understanding ("metacomprehension accuracy," which presumably relies on metacognitive experiences). One such activity is generative testing, which includes summarizing learned material after some time has passed. Another is self-explanation/self-questioning during reading, where you progressively construct and test your model by asking yourself how ideas fit with each other.7

Visible Learning: A synthesis of over 800 meta-analyses relating to achievement by John Hattie (2009) ranks teaching strategies that emphasize metacognition very highly. Teaching of self-verbalization/self-questioning and general metacognitive strategies respectively occupy ranks #18 and #13 out of 138 meta-analysis subjects in terms of effect size.8

One of the meta-analyses discussed finds teaching "awareness of textual inconsistency" together with "the use of self-questioning as both a monitoring and a regulating strategy" to be particularly effective. "Reinforcement" stood out as a particularly effective instructional approach, and combining many instructional strategies generally gave better results.9

So metacognitive self-monitoring can be usefully taught. In studies these skills are generally scaffolded with prompts and checklists that accompany unrelated learning activities. It's not clear to me how well the skills thus taught can be expected to transfer beyond the immediate learning environment. I've also found little effort to explicitly relate these results to either decision science or therapy. Still, it seems encouraging for someone trying to teach rationality-oriented metacognition.

My impression is that CFAR's approach is informed to some extent by all of the above research fields, although their current focus is (rightly, I suspect) on the mechanics of personal effectiveness/strategy/goal-directedness/agency.

 

The above considerations led to my trying a simple self-experiment, in which I counted instances of noticing confusion over the course of a month. I feel it's been successful, and report on it here.


Notes

1. Larrick (2004)

2. See also the CFAR "Skill of the Week" contests (check consequentialism, be specific, avoid motivated cognition), which search for exercises which give participants the opportunity to practice those skills, as well as more examples in From First Principles, Value of Information, and the comments to The 5-Second Level. 

3. Of course, there's also the question of what you actually do after the noticing; this is important if you want to have a concrete skill that you can practice and reinforce—the routine in cue-routine-reward. I almost want to say you can approximate any automatically-triggered routine to first order as interrupting your train of thought and beginning deliberate reflection on what you noticed. On the other hand, maybe it's better to have the corrective action itself be automatic. A related question comes up in Hertel and Mathews (2011) as a potential research direction for cognitive bias modification: "In short, an important direction for bias-modification research is the exploration of the extent to which training experiences establish new habits or provide the basis for controlled recollection, as well as the contexts under which automatic or controlled processes dominate in producing good transfer." 

4. reviewed in Hertel and Mathews (2011) 

5. As it turns out, there's also a thing literally called metacognitive therapy (MCT), which emphasizes awareness of and response to the nature of thoughts and beliefs rather than the content of those thoughts and beliefs. It has its own body of theory and practice, which I haven't particularly looked into. It's nearly all led by the same person, which worries me a little. Elsewhere, some people claim to get introspective and metacognitive boosts from practicing mindfulness meditation. That sounds plausible to me, but I haven't done my homework on the subject, so it gets only a shared footnote. While I'm at it, there's also Eugene Gendlin's Focusing, which I vaguely recall seeing mentioned in a CFAR context before—maybe in a rationality diary? 

6. See e.g. Schraw (1998)

7. Griffin et al. (2013). 

8. The effect sizes are d = 0.64 and d = 0.69, synthesized from 3 and 2 meta-analyses, respectively. For comparison, the #1 "effect size" is students' self-reported past grades, with d = 1.44. The local favorite of spaced practice is #12 at d = 0.71. The author explains effect size thus: "An effect size of d = 1.0 indicates an increase of one standard deviation on the outcome—in this case the outcome is improving school achievement. A one standard deviation increase is typically associated with advancing children's achievement by two to three years, improving the rate of learning by 50%, or a correlation between some variable (e.g., amount of homework) and achievement of approximately r = 0.50." Such an effect, he points out, would be blatantly obvious. In the other direction, seemingly nearly every intervention you can try has a positive effect size, with an average of d = 0.40 among those studied. 
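For readers who want the formula behind those numbers: Hattie's d is a standardized mean difference. His book doesn't spell out the computation in the quoted passage, so as a rough guide here is the conventional Cohen's-d form (individual meta-analyses may use slightly different variants, e.g. Hedges' g):

$$ d \;=\; \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}} $$

So d = 0.64 for self-verbalization/self-questioning means, roughly, that the average student in the intervention condition scored about 0.64 pooled standard deviations above the average control student.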

9. Haller et al. (1988). The authors unfortunately don't define their terminology, but instructional reinforcement in this context means giving praise, gold stars, and so on for the desired behavior. 

References

Griffin, Wiley, & Salas (2013). Supporting effective self-regulated learning: The critical role of monitoring. International Handbook of Metacognition and Learning Technologies, 19-34.

Haller, Child, & Walberg (1988). Can comprehension be taught? A quantitative synthesis of "metacognitive" studies. Educational Researcher, 17(9), 5-8.

Hattie (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.

Hertel & Mathews (2011). Cognitive bias modification: Past perspectives, current findings, and future applications. Perspectives on Psychological Science, 6, 521-536.

Larrick (2004). Debiasing. Blackwell Handbook of Judgment and Decision Making, 316-337.

Leddy, Anderson, & Schulkin (2013). Cognitive-behavioral therapy and decision science. New Ideas in Psychology, 31, 173-183.

Schraw (1998). Promoting general metacognitive awareness. Instructional Science, 26, 113-125.

Comments

Very good post. I'd love to hear more about this. In particular, the CBT-decision science discussion was interesting.

[anonymous]:

Metacognition can also be harmful: "Wells and Matthews' theory proposes that when faced with an undesired choice, an individual can operate in two distinct modes: 'object' and 'metacognitive.' Object mode interprets perceived stimuli as truth, whereas metacognitive mode understands thoughts as cues that have to be weighed and evaluated; they are not as easily trusted. There are targeted interventions unique to each patient, which gives rise to the belief that assistance in increasing metacognition in people diagnosed with schizophrenia is possible through tailored psychotherapy. With a customized therapy in place, clients then have the potential to develop a greater ability to engage in complex self-reflection.[41] This can ultimately be pivotal in the patient's recovery process. In the obsessive-compulsive disorder spectrum, cognitive formulations give greater attention to intrusive thoughts related to the disorder. 'Cognitive self-consciousness' is the tendency to focus attention on thought. Patients with OCD exemplify varying degrees of these 'intrusive thoughts.' Patients suffering from generalized anxiety disorder also show negative thought processes in their cognition.[42]

With any metacognition strategy, the general consensus is that they are good, but in actuality some may be very harmful. Cognitive-attentional syndrome (CAS) characterizes a metacognitive model of emotional disorder. CAS is consistent with the attention strategy of excessively focusing on the source of a threat. This ultimately develops through the client's own beliefs. Metacognitive therapy attempts to correct this change in the CAS. One of the techniques in this model is called attention training (ATT). It was designed to diminish worry and anxiety through a sense of control and cognitive awareness. ATT also trains clients to detect threats and test how controllable reality appears to be.[43]"
With any metacognition strategy, the general consensus is to believe that they are good. But in all actuality some may be very harmful. Cognitive-Attentional Syndrome (CAS) characterizes a Metacognitive model of emotion disorder. CAS is consistent with the constant with the attention strategy of excessively focusing on the source of a threat. This ultimately develops through the client’s own beliefs. Metacognitive therapy attempts to correct this change in the CAS. One of the techniques in this model is called Attention Training (ATT). It was designed to diminish the worry and anxiety by a sense of control and cognitive awareness. Also ATT trains clients to detect threats, test how controllable reality appears to be.[43]''