Shifting Load to Explicit Reasoning
Related to: Which Parts Are "Me"?, Making your explicit reasoning trustworthy, The 5-Second Level.
What is damaging about moralizing, what useful purpose does moralizing usually serve, and what allows us to avoid the damage while retaining the usefulness? It engages psychological adaptations that promote conflict (by playing on social status), which are unpleasant to experience and can lead to undesirable consequences in the long run (such as feeling systematically uncomfortable interacting with a person, and so not being able to live or work or be friends with them). It serves the purpose of imprinting your values, which you feel to be right, on the people you interact with. Consequentialist elucidation of reasons for approving or disapproving of a given policy (virtue) is an effective persuasion technique if your values are actually right (for the people you try to confer them on), and it doesn't engage the same parts of your brain that make moralizing undesirable.
What happens here is a transfer of responsibility for important tasks from the imperfect machinery that historically managed them (with systematic problems, in any given context, that humans but not evolution can notice) to explicit reasoning.
The 5-Second Level
To develop methods of teaching rationality skills, you need to learn to focus on mental events that occur in 5 seconds or less. Most of what you want to teach is directly on this level; the rest consists of chaining together skills on this level.
As our first example, let's take the vital rationalist skill, "Be specific."
Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"
A couple of formative childhood readings that taught me to be specific:
"What is meant by the word red?"
"It's a color."
"What's a color?"
"Why, it's a quality things have."
"What's a quality?"
"Say, what are you trying to do, anyway?"You have pushed him into the clouds. If, on the other hand, we habitually go down the abstraction ladder to lower levels of abstraction when we are asked the meaning of a word, we are less likely to get lost in verbal mazes; we will tend to "have our feet on the ground" and know what we are talking about. This habit displays itself in an answer such as this:
"What is meant by the word red?"
"Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them. Also, you might go to the fire department and see how their trucks are painted."-- S. I. Hayakawa, Language in Thought and Action
and:
"Beware, demon!" he intoned hollowly. "I am not without defenses."
"Oh yeah? Name three."-- Robert Asprin, Another Fine Myth
And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?" Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."
But the real subject of today's lesson is how to see skills like this on the 5-second level. And now that we have a specific example in hand, we can proceed to try to zoom in on the level of cognitive events that happen in 5 seconds or less.
Verifying Rationality via RationalPoker.com
Related to: Problem of verifying rationality
We're excited to announce the (soft) launch of RationalPoker.com! It's a new guide developed by me, Zvi, Kevin, and patrissimo detailing how to use online poker as rationality training to conquer your cognitive biases. We want our community to go from knowing a lot about cognitive biases to actually having a training method that allows us to integrate that knowledge into our habits -- truly reducing biases instead of just leaving us perpetually lamenting our flawed brain-ware. In the coming weeks, we'll be making the case that online poker is a useful rationalist pursuit along with developing introductory "How To" material that allows those who join us to play profitably.
We want to make sure we aren't wasting our time practicing an ungrounded art with methods that don't work. Poker gives us an objective way to test x-rationality. Once you have a small amount of domain-specific knowledge, the difference between winning and losing at poker comes down to differing levels of rationality. Our site will be presenting the case that a strong rationalist who can act on their knowledge of cognitive biases (a defining feature of x-rationality but not traditional rationality) should have a distinct advantage. We'll be offering the connecting material between the sequences and online poker to teach you how to apply knowledge of cognitive biases to poker in a way that verifies your current level of rationality and naturally teaches you to improve your rationality over time.
Incidentally, this also presents a solution for those of us looking to earn money from anywhere with a flexible schedule that leaves time for outside interests.
Make your training useful
As Tom slips on the ice puddle, his arm automatically pulls back to slap the ground. He’s been taking Jiu-Jitsu for only a month, but, already, he’s practiced falling hundreds of times. Tom’s training keeps him from getting hurt.
By contrast, Sandra is in her second year of university mathematics. She got an “A” in calculus and in several more advanced courses, and she can easily recite that “derivatives” are “rates of change”. But when she goes on her afternoon walk and stares at the local businesses, she doesn’t see derivatives.
For many of us, rationality is more like Sandra’s calculus than Tom’s martial arts. You may think “overconfidence” when you hear an explicit probability (“It’s 99% likely I’ll make it to Boston on Tuesday”). But when no probability is mentioned -- or, worse, when you act on a belief without noticing that belief at all -- your training has little impact.
Learn error patterns ahead of time
If you want to notice errors while you’re making them, think ahead of time about what your errors might look like. List the circumstances in which to watch out and the alternative action to try then.
Here's an example of what your lists might look like. A bunch of visiting fellows generated this list at one of our rationality trainings last summer; I’m including their list here (with some edits) because I found the specific suggestions useful, and because you may be able to use it as a model for your own lists.
How are critical thinking skills acquired? Five perspectives
Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking, Debate tools: an experience report
How are critical thinking skills acquired? Five perspectives: Tim van Gelder discusses acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.
In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis. This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice. We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems. To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach. In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students. (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)
LW has been introduced to argument mapping before.
Eric Drexler on Learning About Everything
Related to: The Simple Math of Everything, Your Strength as a Rationalist, Teaching the Unteachable.
Eric Drexler wrote a couple of articles on the importance and methods of obtaining interdisciplinary knowledge:
Note that the title above isn't "how to learn everything", but "how to learn about everything". The distinction I have in mind is between knowing the inside of a topic in deep detail — many facts and problem-solving skills — and knowing the structure and context of a topic: essential facts, what problems can be solved by the skilled, and how the topic fits with others.
This knowledge isn't superficial in a survey-course sense: It is about both deep structure and practical applications. Knowing about, in this sense, is crucial to understanding a new problem and what must be learned in more depth in order to solve it.
This topic was discussed intermittently on Overcoming Bias. A basic understanding of many fields allows one to recognize how well-understood by science a problem is and to see its place in the structure of scientific knowledge; to develop a better intuitive grasp of what's possible and what's not; and to adequately perceive the natural world.
The advice he gives for obtaining general knowledge feels right, even for studying the topics that you intend to eventually understand in depth:
Don't drop a subject because you know you'd fail a test — instead, read other half-understandable journals and textbooks to accumulate vocabulary, perspective, and context.
You Are A Brain
Here is a 2-hour slide presentation I made for college students and teens:
It's an introduction to realist thinking, a tour of all the good stuff people don't realize until they include a node for their own brain in their brain's map. All the concepts come from Eliezer's posts on Overcoming Bias.
I presented this to my old youth group while staffing one of their events. In addition to the slide show, I had a browser with various optical illusions open in tabs, and I brought in a bunch of lemons and miracle fruit tablets. They had a good time and stayed engaged.
I hope the slides will be of use to others trying to promote the public understanding of rationality.
Note: When you view the presentation, make sure you can see the speaker notes. They capture the gist of what I was saying while I was showing each slide.
Added 6 years later: I finally made a video of myself presenting this, except this time it was an adult audience. See this discussion post.
Secret Identities vs. Groupthink
From Marginal Revolution:
A new meta-analysis (pdf) of 72 studies, involving 4,795 groups and over 17,000 individuals has shown that groups tend to spend most of their time discussing the information shared by members, which is therefore redundant, rather than discussing information known only to one or a minority of members. This is important because those groups that do share unique information tend to make better decisions.
Another important factor is how much group members talk to each other. Ironically, Jessica Mesmer-Magnus and Leslie DeChurch found that groups that talked more tended to share less unique information.
A result that shouldn't surprise this group. I've noticed obvious attempts to avoid this tendency in Less Wrong (for instance, Yvain's avoiding further Christian-bashing). We've had at least one post asking specifically for information that was unique. And I don't know about the rest of you, but I've already had plenty of new food for thought on Less Wrong.
But are we tapping the full potential? Each of us has, or should have, a secret identity. The nice thing about those identities is that they give us access to unique knowledge. We've been asked (though I can't find the link) to avoid large posts applying learned rationality techniques to controversial topics, for fear of killing minds, which seems reasonable to me. Is there a better way to allow discipline-specific knowledge to be shared among Less Wrong readers without setting off our politicosensors? It seems beneficial not only for improved rationality training, but also to enhance our secret identities. For instance, I, as an economist-in-training, would like to know not just what an anthropologist can tell me, but what a Bayesian-trained anthropologist can tell me.

Mandatory Secret Identities
Previously in series: Whining-Based Communities
"But there is a reason why many of my students have achieved great things; and by that I do not mean high rank in the Bayesian Conspiracy. I expected much of them, and they came to expect much of themselves." —Jeffreyssai
Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student will dream of...
...becoming a teacher and having their own martial arts dojo someday.
To see what's wrong with this, imagine going to a class on literary criticism, falling in love with it, and dreaming of someday becoming a famous literary critic just like your professor, but never actually writing anything. Writers tend to look down on literary critics' understanding of the art form itself, for just this reason. (Orson Scott Card uses the analogy of a wine critic who listens to a wine-taster saying "This wine has a great bouquet", and goes off to tell their students "You've got to make sure your wine has a great bouquet". When the student asks, "How? Does it have anything to do with grapes?" the critic replies disdainfully, "That's for grape-growers! I teach wine.")
Similarly, I propose, no student of rationality should study with the purpose of becoming a rationality instructor in turn. You do that on Sundays, or full-time after you retire.
And to place a go stone blocking this failure mode, I propose a requirement that all rationality instructors must have secret identities. They must have a life outside the Bayesian Conspiracy, which would be worthy of respect even if they were not rationality instructors. And to enforce this, I suggest the rule:
Rationality_Respect1(Instructor) = min(Rationality_Respect0(Instructor), Non_Rationality_Respect0(Instructor))
That is, you can't respect someone as a rationality instructor more than you would respect them if they were not a rationality instructor.
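The rule above can be sketched as a tiny function. This is purely illustrative; the function name and the numeric "respect scores" are hypothetical, chosen just to show how the min-cap works:

```python
def rationality_respect(respect_as_instructor: float,
                        respect_outside_role: float) -> float:
    """Respect for a rationality instructor is capped by the respect
    they would earn for their life outside the Bayesian Conspiracy."""
    return min(respect_as_instructor, respect_outside_role)

# An acclaimed instructor with no accomplishments outside teaching:
print(rationality_respect(9.0, 2.0))  # -> 2.0

# Someone respected in their own field gets full credit as an instructor:
print(rationality_respect(6.0, 8.0))  # -> 6.0
```

The point of the min rather than, say, an average is that teaching skill cannot compensate at all for a missing secret identity: the outside-role score is a hard ceiling.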
The main danger for LW is that it could become rationalist-porn for daydreamers.
I suggest a pattern of counterattack:

- Find a nonrational aspect of your nature that is hindering you right now.
- Determine privately to fix it.
- Set a short deadline. Do the necessary work.
- Write it up on LW at the deadline. Whether or not it worked.

(This used to be a comment, here.)