## The most important meta-skill

27 May 2015 03:51PM

Note: This article underwent a significant revision on 5/28/2015. Thank you to estimator for all your feedback.

The most important meta-skill that anyone can learn is how to learn skills. With practice, you learn how to pick up new skills as they are needed, which is far more efficient than trying to learn each skill individually in advance, since the space of potentially useful skills is unbounded.

There are two basic premises that this method relies on:

1. A skill can eventually be broken down into a series of trivial sub-skills.

2. The skill and its sub-skills follow a Pareto distribution.

The Pareto principle states that, typically, 80% of a system's effects can be traced to 20% of its causes. In this case: learning 20% of the trivial sub-skills will make you 80% proficient at the overall skill. Empirically, many systems, both artificial and natural, have been observed to follow this distribution, and skills are no exception. This guide is intended to teach you how to identify that 20%.
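As a hedged numerical illustration of the 80/20 claim (not from the article itself): samples drawn from a Pareto distribution with shape alpha = log(5)/log(4), roughly 1.16, are the textbook case where the top 20% of causes carry about 80% of the total effect.

```python
import math
import random

random.seed(0)
alpha = math.log(5) / math.log(4)  # shape for which top 20% holds ~80%
samples = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                 reverse=True)

top_20pct = samples[: len(samples) // 5]
share = sum(top_20pct) / sum(samples)
# `share` lands near 0.8, though the heavy tail makes individual runs noisy
```

The heavy tail is exactly why the strategy in this article works: a minority of sub-skills carries the bulk of the proficiency.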

What lies below this is almost 1,000 words to describe something that's ultimately about condensing things and taking shortcuts. So, to be true to this attitude, I'll start with the "20% version", and those so inclined can continue to read the other 80%.

1. Break the skill you want to learn into several sub-skills.

2. For each sub-skill, ask "Is this trivial?" If so, add it to your "trivial list". If not, apply steps 1-2 to that sub-skill. Continue to iterate until all you have left is a list of trivial sub-skills.

3. For each trivial sub-skill, ask, "How can this go wrong, and what can I do if it does?" Add this to your list of back-up plans, unless it is redundant.

4. Sort your list of sub-skills by how easy they will be to learn, then start learning and practicing them. Any time something goes wrong or you encounter a situation you did not account for, use one of your back-up plans.

5. Repeat steps 1-4 for any sub-skills you encounter that you did not account for.
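The decomposition loop in steps 1-2 can be sketched as a short recursion. This is a minimal illustration, assuming a skill can be represented as a name plus a `decompose` function mapping it to sub-skills; all of the function names and the toy skill tree below are hypothetical, not from the article.

```python
def trivial_subskills(skill, decompose, is_trivial):
    """Recursively break `skill` down until only trivial sub-skills remain."""
    if is_trivial(skill):
        return [skill]
    result = []
    for sub in decompose(skill):
        result.extend(trivial_subskills(sub, decompose, is_trivial))
    return result

# Toy example: "learn a language" decomposed into a small tree.
tree = {
    "learn a language": ["converse", "read"],
    "converse": ["20 basic sentences", "100 basic words"],
    "read": ["common articles and prepositions"],
}

decompose = lambda s: tree[s]
is_trivial = lambda s: s not in tree          # leaves count as trivial
ease = {"100 basic words": 1, "20 basic sentences": 2,
        "common articles and prepositions": 3}

# Step 4: sort the trivial list so the easiest sub-skills come first.
plan = sorted(trivial_subskills("learn a language", decompose, is_trivial),
              key=ease.get)
```

In practice `decompose` and `is_trivial` are the System 1 judgments described below, not functions you can write down; the code only makes the control flow explicit.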

So, that was the short version. If you find you need more context, here goes. Note that the first premise uses the word "trivial", which raises the question: "What makes a sub-skill trivial?" A convenient answer: you personally feel sufficiently confident that you can do it. Or, in other words: can you look up how to do it on the internet? This means that if the overall skill is itself trivial, you don't need to bother with this method. Just look up a guide online.

Most skills are too complicated for someone to sit down and analyze every possible sub-skill needed to accomplish them. Fortunately, you don't have to. Your goal isn't to learn all the sub-skills, it's to learn the important 20%. The overall efficiency of a sub-skill is a function of two things: how integral it is to the overall skill, and how easy it is to learn. You're going to let System 1 do most of the heavy lifting here.

Fortunately, our brains are pretty good at pattern-matching. Goals are high-level concepts whose meanings are derived from the combination of several patterns and archetypes that you've got stored away somewhere. When you say, "I want to learn a foreign language", your brain immediately starts filling in the patterns of what exactly that means. It starts identifying the things that are integral to your idea of the concept.  Then it combines them into one coherent concept, and that's what you're left with. The trouble is, most people don't preserve these individual patterns before combining them, and thus they're left with something that's purely conceptual, rather than actionable. "I want to learn a foreign language" or "I want to learn to code" or "I want to learn social skills".

So just let your brain go to work doing what it already does, but pay attention during the process and identify the key components before they get mushed into a concept. Make System 1 tell you "You want to be able to converse, interact, and function in a society that speaks a different language," instead of just, "You want to learn a foreign language." Remember that you don't need to identify all the components. Just the ones that are important enough for System 1 to dredge up on a moment's notice. Most likely, these will be the 20% that you're looking for. Of course, chances are the initial output is going to be a high level concept unto itself. There's no "to-do list" for "being able to converse in a society that speaks a different language". So you put System 1 to work again. What exactly do I mean by that? "Oh, what you mean is: you want to be able to ask and understand both questions and answers, and be able to express your thoughts."

Eventually you'll reach the point of triviality. You'll have a sizable list of trivial tasks such as "You want to be able to say the following twenty basic sentences: XYZ", and "You want to know the following 100 basic vocabulary words: ABC." and "You want to be able to identify the most common articles, prepositions and conjunctions." Here's where System 2 goes to work: you look at this big list and ask yourself, which of these would be easiest for me to accomplish? And then you sort the list accordingly.

Now, all of this is fine and good, but at some point you will encounter a situation that doesn't fall under this convenient little roadmap you've followed. So you want to make a backup plan. System 2 needs to look over your roadmap and ask: "How can this go wrong, and what can I do if it does?" If you do this for each item on your list, chances are there will be a lot of duplicates and redundancies, which you can pare down. When all is said and done, you'll have a few plans of action in case things go wrong.
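The pare-down step above amounts to deduplicating backup plans by failure mode. A minimal sketch, assuming each trivial sub-skill maps to a failure mode and a response; every name and entry here is illustrative.

```python
# (sub-skill, how it can go wrong, what to do if it does)
failure_modes = [
    ("20 basic sentences", "freeze mid-sentence",
     "ask the other person to slow down"),
    ("100 basic words", "forget a word",
     "describe it with words you do know"),
    ("common prepositions", "forget a word",       # redundant failure mode
     "describe it with words you do know"),
]

backup_plans = {}
for _skill, problem, response in failure_modes:
    backup_plans.setdefault(problem, response)  # duplicates collapse by key
```

Keying on the failure mode rather than the sub-skill is what turns a long per-item list into the "few plans of action" the article describes.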

So, you have a roadmap to guide you through the 20%, and a generalized plan for the other 80%. What now?

Well, there's always room for improvement. If you do things right, you'll be pretty well immersed in the nitty-gritty of whatever skill you are trying to learn, which means you will be getting loads of first-hand experience of all the different ways things can go wrong, ways you probably never could have anticipated. And you'll run into scenarios that make you say, "I can't believe I didn't think about that."

Fortunately, you don't need to get things perfect on the first try. If you encounter a situation you didn't account for, then account for it. Ask yourself what happened, and let System 1 go to work breaking it down. If something goes wrong in a way you hadn't thought about, come up with a separate plan for that. Your model will become more and more robust as you start to learn many of the fundamentals you probably skipped over when you made your roadmap.

There seem to be two different types of learning styles, the "academic" way of starting with the fundamentals and building from the ground up, and the "immersion" method of just throwing someone into the deep end of the pool and working from the top-down. This method combines both: you learn the fundamentals of the things that are necessary to immerse yourself. Instead of being "top-down" or "bottom-up", this is more like, "start at the bottom, skip to the top, then work your way back down through the middle."

## Skill: The Map is Not the Territory

06 October 2012 09:59AM

Followup to: The Useful Idea of Truth (minor post)

So far as I know, the first piece of rationalist fiction - one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler - is the Null-A series by A. E. van Vogt. In van Vogt's story, the protagonist, Gilbert Gosseyn, has mostly non-duplicable abilities that you can't pick up and use even if they're supposedly mental - e.g. the ability to use all of his muscular strength in emergencies, thanks to his alleged training. The main explicit-rationalist skill someone could actually pick up from Gosseyn's adventure is embodied in his slogan:

"The map is not the territory."

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century. I read van Vogt's story and absorbed that lesson when I was rather young, so to me this phrase sounds like a sheer background axiom of existence.

But as the Bayesian Conspiracy enters into its second stage of development, we must all accustom ourselves to translating mere insights into applied techniques. So:

Meditation: Under what circumstances is it helpful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly?  How exactly does it help, on what sort of problem?

## SotW: Check Consequentialism

29 March 2012 01:35AM

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil.  We offer prizes of \$50 for any suggestion we decide to test, and \$500 for any suggestion we decide to adopt.  This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired.  See here for details.)

Exercise Prize:  Check Consequentialism

In philosophy, "consequentialism" is the belief that doing the right thing makes the world a better place, i.e., that actions should be chosen on the basis of their probable outcomes.  It seems like the mental habit of checking consequentialism, asking "What positive future events does this action cause?", would catch numerous cognitive fallacies.

For example, the mental habit of consequentialism would counter the sunk cost fallacy - if a PhD wouldn't really lead to much in the way of desirable job opportunities or a higher income, and the only reason you're still pursuing your PhD is that otherwise all your previous years of work will have been wasted, you will find yourself encountering a blank screen at the point where you try to imagine a positive future outcome of spending another two years working toward your PhD - you will not be able to state what good future events happen as a result.

Or consider the problem of living in the should-universe; if you're thinking, I'm not going to talk to my boyfriend about X because he should know it already, you might be able to spot this as an instance of should-universe thinking (planning/choosing/acting/feeling as though within / by-comparison-to an image of an ideal perfect universe) by having done exercises specifically to sensitize you to should-ness.  Or, if you've practiced the more general skill of Checking Consequentialism, you might notice a problem on asking "What happens if I talk / don't talk to my boyfriend?" - providing that you're sufficiently adept to constrain your consequentialist visualization to what actually happens as opposed to what should happen.

Discussion:

The skill of Checking Consequentialism isn't quite as simple as telling people to ask, "What positive result do I get?"  By itself, this mental query is probably going to return any apparent justification - for example, in the sunk-cost-PhD example, asking "What good thing happens as a result?" will just return, "All my years of work won't have been wasted!  That's good!"  Any choice people are tempted by seems good for some reason, and executing a query about "good reasons" will just return this.

The novel part of Checking Consequentialism is the ability to discriminate "consequentialist reasons" from "non-consequentialist reasons" - being able to distinguish that "Because a PhD gets me a 50% higher salary" talks about future positive consequences, while "Because I don't want my years of work to have been wasted" doesn't.