(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills. The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil. We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt. This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired. See here for details.)
Exercise Prize: Be Specific
During YCombinator's Startup School 2011, Paul Graham and Harj Taggar did "office hours" onstage. One pair of entrepreneurs were doing a matchmaking (dating) startup, and Paul and Harj were trying to figure out what their startup did, exactly - for example, what their startup could do that the existing low-tech solution couldn't. (Video.)
Harj: Low-tech like, you know, just like word of mouth, telling someone "hey, you should like, meet up with my friend" or "we're getting drinks, why don't you come along?" Like, what can the software do that's specifically better than that?
Entrepreneur: I think that our software specifically is providing the better connections for people, um...
Paul: Providing the better connections for people...?
Entrepreneur: I mean, one way you can think about it, I don't know if this is the right answer, but... there's a lot of things that are happening in real life that they're trying to mimic online, maybe that's not the correct way to... Look at it like this: to give them an online tool to also do this, like they're already doing in real life, maybe they could reach, uh expand their reach through the online website.
This had been happening with most of the startups Paul and Harj were interrogating - they just could not seem to provide a customer use-case - and I couldn't stand it any more; which is why at this point I whispered audibly enough for a few nearby people to hear, "Be specific! Be specific!"
A moment later, on stage:
Paul: Hm. Not very specific.
I got some strange looks from the people sitting next to me.
I hope this provides some background for my guess that around half of Paul Graham's advantage is based on years of incubator experience, and the other half is unusual rationality skills of the sort that the Center for Modern Rationality is trying to figure out how to teach. Obviously this is only a very rough conjecture. But you can see the basis for the hope that - after a fair amount more work - we'll be able to offer a 2-day course for YCombinator entrepreneurs that eliminates 50% of the overhead from their conversations with Paul Graham.
(Also, note how this post starts off with a specific example - an instance of the concrete-abstract writing pattern in which you state the example first and the generalization afterward. This is one of the most common bits of nonfiction writing advice I dispense: "Open with the concrete example, not the abstract explanation!")
Theoretical background:
S. I. Hayakawa once gave this illustration of the "ladder of abstraction", and in particular, the difference between going up or down:
"What is meant by the word red?"
"It's a color."
"What's a color?"
"Why, it's a quality things have."
"What's a quality?"
vs.
"What is meant by the word red?"
"Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them. Also, you might go to the fire department and see how their trucks are painted."
"Red is a color" is moving up the ladder; "color" is a supercategory of red. All things which are red, have colors; but not all things which have colors, are red. And similarly, if you look at a specific firetruck, that firetruck is a red thing, but there are also many other red things which are not that firetruck.
What is true of one apple may not be true of another apple; suppose apple1 weighs 100 grams and is slightly green in some places, and apple2 weighs 200 grams and is entirely dark red. You can say more truths about apple2, like "apple2 is dark red", than you can say about all apples. (For more on this point see The Virtue of Narrowness.)
Thus, it may be easier to mentally picture "a firetruck" than "something red" - "firetruck" describes a narrower section of Thingspace, so you're less likely to get lost along the way.
S. I. Hayakawa called this the ladder of abstraction. I'm not sure if understanding the following section will really help with the skill of Being Specific, or help anyone construct exercises for the skill of being specific. But a better theoretical understanding does sometimes prove useful. So I will now digress to explain that abstraction isn't really a ladder, but a lattice.
Let's illustrate this using a classic example from the field of machine learning. Suppose that Days have three properties:
- Weather: {Sunny, Cloudy, Rainy}
- Temperature: {Cool, Hot}
- Timing: {Weekday, Weekend}
And suppose that we've been given some examples of Days on which it was good, or alternatively bad, to play tennis. For example, the Day {Sunny, Cool, Weekend} was good for playing tennis, but the day {Rainy, Hot, Weekday} was bad for playing tennis. A classic task in machine learning is to induct, from a set of pre-classified examples like these, a rule describing when it is good to play tennis.
Any proposed rule which can classify all days as good or bad is a concept, in the lingo of machine learning. "Sunny Days" is a concept; likewise "Sunny Cool Days", and "Days which are either Cool or Sunny". Each of these is a concept which classifies all 12 possible days either positively or negatively - instances or non-instances of the concept.
There are 2^12 possible concepts over the 12 possible Days. Why so many? Because - for example - there's a concept which only includes the two Days {Sunny+Cool+Weekday} and {Cloudy+Cool+Weekend}, but classifies all other Days as noninstances. This is a way of classifying all Days into instances or noninstances, hence a possible concept. It's not a compact concept, but it's a concept. Each Day can be classified either positively or negatively - one binary decision per Day - so 2^12 possible concepts. (That's why induction is a difficult problem in machine learning.)
The concept "Sunny" is a superconcept of "Sunny and Cool"; it lies above it in the lattice of abstraction, since all days which are "Sunny and Cool" are "Sunny". "Sunny or Hot" is a supercategory of "Sunny". "Weekend" is neither a superconcept nor a subconcept of "Sunny".
Concepts form a directed lattice from most general to most specific, with "all Days" at the top (every Day classified as an instance) and "no Days" at the bottom (the concept which classifies every Day as a noninstance).
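The lattice above is small enough to check by direct enumeration. Here's a minimal sketch in Python (the variable names and set-based encoding of concepts are my own, not standard to any particular machine-learning library): a concept is just the set of Days it classifies positively, and "superconcept" is just the subset relation.

```python
from itertools import product

# Enumerate the 12 possible Days as (weather, temperature, timing) triples.
WEATHER = ["Sunny", "Cloudy", "Rainy"]
TEMPERATURE = ["Cool", "Hot"]
TIMING = ["Weekday", "Weekend"]
DAYS = list(product(WEATHER, TEMPERATURE, TIMING))

# A concept is any subset of Days (the instances it classifies positively),
# so there are 2^12 = 4096 possible concepts.
num_concepts = 2 ** len(DAYS)

# Some particular concepts, written as sets of instances:
sunny = frozenset(d for d in DAYS if d[0] == "Sunny")
sunny_and_cool = frozenset(d for d in DAYS if d[0] == "Sunny" and d[1] == "Cool")
sunny_or_hot = frozenset(d for d in DAYS if d[0] == "Sunny" or d[1] == "Hot")
weekend = frozenset(d for d in DAYS if d[2] == "Weekend")

# "A is a superconcept of B" just means B's instances are a subset of A's.
def is_superconcept(a, b):
    return b <= a

print(len(DAYS))                               # 12
print(num_concepts)                            # 4096
print(is_superconcept(sunny, sunny_and_cool))  # True: "Sunny" lies above "Sunny and Cool"
print(is_superconcept(sunny_or_hot, sunny))    # True: "Sunny or Hot" lies above "Sunny"
print(is_superconcept(weekend, sunny))         # False: "Weekend" and "Sunny" are incomparable
```

Moving down the lattice ("Sunny" to "Sunny and Cool") shrinks the instance set; moving up grows it; and pairs like "Weekend" and "Sunny" sit on different branches, related in neither direction.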
If you now go back to the problem of telling someone what "red" means, when you say "red is a color", then, even if the listener does happen to know what "color" means, you're still moving upward in the lattice of abstraction. When you said "color", you were talking about a concept that included all red things, but also many other things that were not red.
"Our software is providing the better connections for people" - the entrepreneur who said that might have had something specific in mind, or they might have just been bluffing or succumbing to wishful thinking. But they described it using an abstract statement so broad that it included Facebook, or Western Union back when they were sending telegrams. They might - though this is somewhat optimistic - have known themselves what they had in mind; but they weren't thinking of Facebook, so they didn't realize how many other possibilities fit their words. This is a classic manifestation of the Illusion of Transparency, and it's why we have to keep telling people to navigate the lattice downward.
The skill of Being Specific is the skill of understanding how to navigate the lattice of abstraction. You can see why this would be a key element of cognition on a par with Bayes's Theorem or consequentialism.
And this is true in practice as well as theory. When I'm talking to anyone outside the local LW community, I find that a very large amount of my conversation involves repeatedly asking them to be more specific - and if you think that's just me being annoying, watch Paul Graham in the video.
A closely related skill is concreteness, which has to do with nearness-to-sensory-experience or actionability.
According to David Allen's "Getting Things Done", for your brain to stop thinking about an unfinished task, you must (1) know and trust that an external system will remind you to perform that task when it is time to perform it, and (2) have chosen the next action taken at a sufficiently concrete level that your brain is no longer trying to plan it out in the background. "Contact Luke about dispersing prize awards" is not a sufficiently concrete to-do; it leaves open the question of whether to phone or email, and what exactly to say. "Read through the comments, gather the LessWrong usernames of everyone who made a suggestion we tried or adopted, and email the list to Luke" is an action item I know how to perform straightforwardly, without my brain trying to plan it in the background. When you have a trustworthy external system to remind you of what to do, at the time you need to do it - so that the back of your mind isn't worrying about remembering to check the to-do list - and all to-do items have been concretized to the point of being executable without further background planning - then you have, in GTD parlance, "gotten to zero", a state of pure mental blissfulness in which your brain is not worrying about anything except what you're doing right now.
Similarly, for a statement like "Wulky Wilkinsen is a post-utopian" or "Earth gravity pulls at 9.8 meters per second squared" to be falsifiable, it must be concretized - rendered near-to-experience - to a sufficient degree that you can potentially see something and say "Oh, guess the hypothesis was wrong"; you must be able to have an experience which the concretized statement constrains, and which falsifies the theory if the experience is out-of-bounds.
Theoretically: If you imagine the universe as a huge directed graph of causes and effects - the Great Web of Causality - then "concreteness" is being near enough in the Web to either your sensory inputs or motor outputs that you can directly see the prediction unfold, or directly implement the plan, without much further thought.
"Be Specific" and "Be Concrete" could easily end up being the same unit - they're closely related - and we're happy to entertain exercises for Being Concrete, as well as Being Specific. Visualizing what your customer literally sees or does after navigating to your site, would've been a good first step toward being able to answer many of Paul Graham's questions.
A possible success criterion:
One question that we spent a lot of time discussing at CMR was how to translate our sense of "specific enough" or "concrete enough" into a describable criterion, instead of just a wordless intuition for when something is "too abstract".
There was an exchange in Paul Graham's office hours that went like this, while interviewing a startup that did metrics - analyzing pageviews, roughly - and the entrepreneur was having great trouble describing what they did that MixPanel didn't. It went on for a while. It was painful to watch.
Paul: I don't get what the difference is. I still don't get what the difference is. What's the difference between you and MixPanel?
Entrepreneur: The difference is - when you have to supplement - they're a view company and we're a platform. That's what it comes down to. They're like a view, a reporting company. If you need something they don't have, a feature -
Harj: So what's an example of somewhere you'd use your thing over MixPanel? Can you give a use-case?
Entrepreneur: Yeah, I mean, we had revenue on day zero. There's a good reason for um... it's a start up, it's a series A company in the daily deals space. One we've signed a social game company to -
Harj: And why do they prefer your thing?
Paul: That wasn't what Harj was asking.
The problem (from the perspective of our present discussion) is that the Entrepreneur did not understand that Paul and Harj were repeatedly asking him to move downward on the ladder of abstraction. When the Entrepreneur said "We had revenue on day zero", he was trying to offer confirmation of the abstract statement "We can do things MixPanel can't", but Paul and Harj still had no idea what his startup actually did.[1]
A quick bit of theoretical background: There's an important difference, in the field of mathematical logic, between models and axioms. An axiom is something like "All kittens are cute", i.e. "All x: kitten(x)->cute(x)". A model is a particular universe of objects that includes {Obj #19834, kitten: T, cute: T, color: grey} and {Obj #19835, kitten: F, cute: F, color: striped}, and so on.
Correspondingly, in logical inference, there's a distinction between model-checking and deduction. Suppose you want to know whether it's true that all positive integers less than 5, when multiplied by 7, are less than 50. If you prove the general truth that all integers less than 5, times 7, are less than 35, by manipulating the axioms of multiplication and inequality, that's deduction. If you notice that the only positive integers less than 5 are just {1, 2, 3, 4} and enumerate their products {7, 14, 21, 28}, which are all less than 50, that's model-checking.
My hypothesis about what it means to be "specific enough" or "concrete enough" is that the picture painted is detailed enough to use in model-checking whatever points are being debated. Paul and Harj don't want to trust you when you state the abstract generalization, "We're better than MixPanel". They aren't even content with deducing support for this generalization from the further generalization, "We already have customers." They want a picture of something you do that MixPanel doesn't, which is detailed enough that they can model-check whether you have a competitive advantage.
Not to mention that Paul Graham is probably thinking about a number of other questions:
- How much would I pay for this product?
- Is this startup exciting enough that I would tweet about using it?
- How much resources will it take to develop these features further?
Paul Graham doesn't want you to say, "$50, yes, and twenty engineer-months". He wants a sufficiently specific picture of (a customer using) your product that he can arrive at his own answers by model-checking.
If Paul Graham is reading this, he's welcome to contradict my interpretation of what was going on in that particular session - but it did seem like a very nice concrete illustration.
That's my guess for what often constitutes "specific enough" - though I'm not sure that's the only thing that ever determines specific-enoughness.
[1]: The strange part was, near the end of that session, it started to look like this might be an interesting startup; that the Entrepreneur wasn't just bluffing. Their actual use-case was to let customers easily roll their own code to measure, e.g., the page-viewing behavior of only customers who'd bought more than $200 worth of stuff, which allegedly MixPanel wouldn't let you do. Which would've been a perfectly good answer if the Entrepreneur had given it at the start of the session, instead of the whole session being about Paul and Harj trying to get at that information.
Five-second-level skill:
The 5SL skill for this problem requires:
- Trigger: Recognizing when your words or thoughts are too abstract.
- Action: Moving downward in the abstraction lattice, or moving nearer to sense input or motor output; being able to render your thoughts more specific or more concrete.
Both of these are targetable for exercises.
Pain points & Pluses:
• You want Paul Graham to believe your startup is better than MixPanel. So you say, "My startup is better than MixPanel" - just produce the pure abstract conclusion you want Paul Graham to arrive at. You keep trying to convince Paul Graham of this statement, saying that you have customers or that you have venture capital, but never actually move downward to the level where Paul Graham could arrive at this conclusion by model-checking.
• You want to describe what your software does, so you say it makes connections between people. You have something specific in mind, but the words coming out of your mouth are so general that - although you're not thinking of those other cases - they could apply equally well to Facebook or telegraph lines. Paul Graham has no idea at all what you're trying to describe and is giving you blank looks.
• The worse version - and the reason why Paul Graham doesn't just trust you, even if he thinks you're honest - is the case where you yourself want to believe your startup is better than Facebook, but you can't think of any specific thing your startup does better than Facebook, so you think of other abstract generalizations that seem to support the conclusion, like "We have smarter people" or "We got more funding earlier." Where fuzzy thinking is motivated, overly abstract thinking is motivated.
• Abstract words can also avoid emotion. George Orwell: "Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification." Or contrast "Humanity is awful, it'd be better for the planet if we all died" to "Everyone including my little sister is awful, we'd be better off if everyone died including her." To feel sympathy, we need enough concrete detail that our emotions can model-check the picture and be activated.
• Cognitive-behavioral therapy is the big experimentally supported version of therapy, for anyone not aware of this, bearing very little resemblance to anything Freudian. CBT talks about using requests for specific details to interrupt thoughts looping around vague but affectively laden centers, like "I am a good husband", "I am a bad husband", or "my roommate is a slob". How are you a good husband? How are you a bad husband? Which specific feature of your roommate are you objecting to? Taboo the emotionally valent word at the center, like "slob", and replace it with something that's specific enough to be testable, or concrete enough to be acted upon.
•• Contrast also "It bothers me when you leave soda cans on the table" vs. "You're such a slob, stop being such a slob." Or contrast: "I'm upset" -> "I'm upset because I think the other person is looking down on me" -> "I'm upset because the person's tone of voice sounds like people who looked down on me in high school". This is related to the incredibly important skill, search for the historical causes of your thoughts, rather than their justifications.
• Focusing on the specific details of a concrete example, instead of repeating a word or arguing about a category, can interrupt Sneaking in Connotations and Arguing By Definition.
• All the failures of concreteness warned against in the Mysterious Answers sequence, where you go on and on about how Wulky Wilkinsen is a post-utopian without ever once asking or imagining how the world ought to look, and what you yourself should experience, if that were true or alternatively false.
• Visualizing specific examples often improves quality of thought in general - we're often smarter when we're using both model-checking and deduction, visualizing a picture of what we're supposed to be reasoning about, constantly checking our deductive steps against some specific model those deductions are supposed to be true about. Saith Richard Feynman:
I had a scheme, which I still use today when somebody is explaining something that I'm trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they're all excited. As they're telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball) - disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn't true for my hairy green ball thing, so I say, "False!"
If it's true, they get all excited, and I let them go on for a while. Then I point out my counterexample.
"Oh. We forgot to tell you that it's Class 2 Hausdorff homomorphic."
"Well, then," I say, "It's trivial! It's trivial!"
• Being specific helps notice and call bluffs, should you be mischievously inclined.
"Beware, demon!" he intoned hollowly. "I am not without defenses."
"Oh yeah? Name three."
-- Robert Asprin, Another Fine Myth
Wannabe executive: "I will improve communications between employees and management."
Me: "Can you give me a specific example of how you would do that?"
Known exercises for this skill:
In our previous Rationality Camps, Anna found that her attempt to teach a unit on "Being Specific" didn't seem to work. Her central exercise was picking a category and asking people to name examples.
This isn't to say that the Camps were unsuccessful at teaching the skill. Attendees picked it up, not from the explicit unit, but from all the instructors having to repeatedly ask the attendees to be more specific, and then having to ask them again, while being specific themselves, until the attendees picked up the rhythm by example and feedback.
Given our present teaching technology, this skill seems transmissible from master to apprentice, but not yet replicable by exercises. That's why we're turning it over to you.
Why is "be specific" a hard skill to teach?
I think it is because being specific is not really the problem, and by labeling it as such we force ourselves into a dead-end which does not contain a solution to the real problem. The real problem is achieving communication. By 'achieving communication', I mean that concepts in one mind are reproduced with good fidelity in another. By good fidelity, I mean that 90% (arbitrary threshold) of assertions based on my model will be confirmed as true by yours.
There are many different ways that the fidelity can be low between my model and yours:
- specific vs abstract
- mismatched entity-relationship semantic models
- ambiguous words
- vague concepts
Surely there are many more.
Examples of what I mean by these few:
specific vs abstract: dog vs toy chihuahua puppy
model mismatch: A contract lawyer, a reservoir modeler, and a mud-logger are trying to share the concept "well". Their models of what a "well" is have some attributes with similar names, but different meanings and uses, like "name" or "location". To the mud-logger, a well is a stack of physical measurements of the drilling mud sampled at different drilling depths. To the lawyer, a well is a feature of land-use contracts, service contracts, etc.
Another kind of model mismatch: I think of two entities as having a "has-a" relationship. A house "has" 0 or 1 garages (detached). But you think of the same two entities using a mixin pattern: a house can have or not have garage attributes (not detached). "I put my car in the house" makes no sense to me, because a car goes in a garage but not in a house, but might make sense to you for a house with a built-in garage. We may go a long time before figuring out that my "house" isn't precisely the same as yours.
ambiguity: I'm a cowboy, you're an artist. We are trying to share the concept "draw". We can't because the concept doesn't equate.
vagueness: I say my decision theory "one-boxes". You have no idea what that means, but you create a place-holder for it in your model. So on some level you feel like you understand, but if you drill down, you can get to a point where something important is not defined well enough to use.
It is difficult to know when something that is transparent to you is being misrepresented in my head based on how you explain it to me. "I know you think you understand what you thought I said, but I'm not sure you're aware that what I said was not what I meant."
I suggest an exercise/game to train someone to detect and avoid these pitfalls: combine malicious misunderstanding (you tell me to stick the pencil in the sharpener and I insert the eraser end) and fidelity checking.
- You make an assertion about your model.
- I generate a challenge that is in logical agreement with your assertions, but which I expect will fail to match your actual model. If I succeed, I get a point.
- Repeat until I am unable to create a successful challenge.
- The longer it takes you to create an airtight set of assertions, the more points I get.
- Then we switch roles.
So I am looking for all the ways your model might be ill-defined, and all the ways your description might be ambiguous or overly abstract. You are trying to cement all of those gaps as parsimoniously as possible.
I've left the hardest part for last: the players need to be supplied with a metaphoric tinkertoy set of model parts. The parts need to support all of the kinds of fidelity-failure we can think of. And the set should be extensible, for when we think of more.
I suspect the Socratic method (the old one, not the bland one) fits under this heading - "put forth a proposition, and I'll demolish you with your own statements."