When you are trying to understand something by yourself, a useful skill for checking your grasp of the subject is to try out the moving parts of your model and see whether you can simulate the resulting changes.
Suppose you want to learn how a rocket works. At the bare minimum, you should be able to calculate the speed of the rocket given the time elapsed since launch. But can you tell what happens if Earth's gravity were stronger? Weaker? What if the atmosphere had no oxygen? What if we replaced the fuel with Diet Coke and Mentos?
To really understand something, it's not enough to be able to predict the future in a normal, expected, ceteris paribus scenario. You should also be able to predict what happens when several variables are changed in several ways, or, at least, point to which calculations need to be run to arrive at such a prediction.
Douglas Hofstadter and Daniel Dennett call that "turning the knobs". Imagine your model as a box with several knobs, where each knob controls one aspect of the modeled system. You don't have to be able to turn all the possible knobs to all possible values and still get a sensible, testable and correct answer, but the more, the better.
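To make the knob metaphor concrete, here is a minimal sketch of the rocket example as a box of knobs. The physics (Tsiolkovsky's equation for a vertical ascent, no drag) and all the numbers are my own toy assumptions, not anything from Hofstadter and Dennett; the point is just that every parameter is a knob you can turn and re-run to test your predictions.

```python
import math

def rocket_speed(t, *, exhaust_velocity=2500.0, initial_mass=500_000.0,
                 dry_mass=100_000.0, burn_rate=2000.0, gravity=9.81):
    """Speed (m/s) of a toy vertically ascending rocket t seconds after launch.

    Tsiolkovsky's equation minus gravity losses, ignoring drag. Every keyword
    argument is a knob: turn one, re-run, and compare the result with what
    your mental model predicted.
    """
    burn_time = (initial_mass - dry_mass) / burn_rate
    burned_for = min(t, burn_time)              # the engine stops when the fuel runs out
    mass_now = initial_mass - burn_rate * burned_for
    delta_v = exhaust_velocity * math.log(initial_mass / mass_now)
    return delta_v - gravity * t                # negative: this crude model says it never left the pad

print(rocket_speed(60))                         # baseline scenario
print(rocket_speed(60, gravity=2 * 9.81))       # knob: stronger gravity
print(rocket_speed(60, exhaust_velocity=20.0))  # knob: Diet Coke and Mentos "fuel"
```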
Doug and Dan apply this approach to thought experiments and intuition pumps, as a way to explore possible answers to philosophical questions. In my experience, this skill is also effective when applied to real world problems, notably when trying to understand something that is being explained by someone else.
In this case, you can run this knob-turning check interactively with the other person, which makes it way more powerful. If someone says “X+Y = Z” and “X+W = Z+A”, it’s not enough to mentally turn the knobs and calculate “X+Y+W = Z+A+B”. You should do that, and then actually ask the explainer: “Hey, let me see if I get what you mean: for example, X+Y+W would be Z+A+B?”
This interactive model knob-turning has been useful to me in many walks of life, but the most common and mundane application is helping out people at work. In that context, I identify six effects which make it helpful:
1) Communication check: maybe you misunderstood and actually X+W = Z-A
This is useful overall, but especially important if someone uses a metaphor. Some metaphors are clearly vague, and people will know that and avoid them in technical explanations. But some metaphors seem really crisp to some people but hazy to others, or, worse, very crisp to both people, but with different meanings! So take every metaphor as an invitation to interactive knob-turning.
To focus on the communication check, try rephrasing their statements using different words or, if necessary, very different metaphors. You can also apply the theory in different contexts, to see whether the metaphor still holds.
For example, if a person talks about a computer system as if it were a person, I might try to explain the same thing in terms of a group of trained animals, or a board of directors, or dominoes falling.
2) Self-check: correct your own reasoning (maybe you understood the correct premises, but made a logical mistake during knob turning)
This is useful because humans are fallible, and two (competent) heads are less likely to miss a step in the reasoning dance than one.
Also, when someone comes up and asks something, you’ll probably be doing a context switch, and will be more likely to get confused along the way. The person asking usually has more local context than you on the specific problem they are trying to solve, even if you have more context on the surrounding matters, so they might be able to spot your error more quickly than you can.
Focusing on self-check means double-checking any intuitive leaps or tricky reasoning you used. Parts of your model that do not have a clear step-by-step explanation have priority and should be tested against another brain. Try to phrase the question in a way that makes your intuitive answer look less obvious.
For example: “I’m not sure if this could happen, and it looks like all these messages should arrive in order, but do you know how we can guarantee that?”
3) Other-check: help the other person to correct inferential errors they might have made
The converse of self-checking. Sometimes fresh eyes with some global context can see reasoning errors that are hidden to people who are very focused on a task for too long.
To focus on other-check, ask about conclusions that follow from your model of the situation but seem unintuitive to you, or that required tricky reasoning. It’s possible that your friend also found them unintuitive, and that might have led them to jump in the opposite direction.
For example, I could ask: “For this system to work correctly, it seems that the clocks have to be closely synchronized, right? If the clocks are off by much, we could have a difference around midnight.”
Perhaps you successfully understood what was said, and the model you built in your head fits the communicated data. But that doesn’t mean it is the same model that the other person has in mind! In that case, your knob-turning will get you a result that’s inconsistent with what they expect.
4) Alternative hypothesis generation: If they cannot refute your conclusions, you have shown them a possible model they had not yet considered, which also points in the direction of more research to be done
This doesn’t happen that much when someone is looking for help with something. Usually the context they are trying to explain is the pre-existing system they will build upon, and if they’ve done their homework (i.e. read the docs and/or code) they should have a very good understanding of that already. One exception here is with people who are very new to the job, who are learning while doing.
On the other hand, this is incredibly relevant when someone asks for help debugging. If they can’t find the root cause of a bug, it must be because they are missing something. Either they have derived a mistaken conclusion from the data, or they’ve made an inferential error from those conclusions. The first case is where proposing a new model helps (the second is solved by other-checking).
Maybe they read the logs, saw that a request was sent, and assumed it was received, but perhaps it wasn’t. In that case, you can tell them to check for a log on the receiver system, or the absence of such a log.
To boost this effect, look for data that you strongly expect to exist and to confirm your model, where the absence of such data might be caused by the other person's relative lack of global context, skill, or experience.
For example: “Ok, so if the database went down, we should’ve seen all requests failing in that time range; but if it was a network instability, we should have random requests failing and others succeeding. Which one was it?”
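As a sketch of what that question is probing, here is a small example with made-up log entries (the log format, timestamps, and time window are hypothetical). It turns the two hypotheses into predictions that can be checked against the data:

```python
from datetime import datetime

# Hypothetical parsed log entries: (timestamp, request succeeded?)
requests = [
    (datetime(2024, 3, 7, 0, 58), True),
    (datetime(2024, 3, 7, 1, 2), False),
    (datetime(2024, 3, 7, 1, 5), False),
    (datetime(2024, 3, 7, 1, 9), True),
    (datetime(2024, 3, 7, 1, 15), False),
]

def diagnose(requests, window_start, window_end):
    """Check which hypothesis fits the outcomes inside the suspect time range.

    "database down"       -> predicts every request in the window failed
    "network instability" -> predicts a mix of failures and successes
    """
    in_window = [ok for ts, ok in requests if window_start <= ts <= window_end]
    if not in_window:
        return "no data in the window - go collect more logs"
    if not any(in_window):
        return "consistent with: database down"
    if not all(in_window):
        return "consistent with: network instability"
    return "everything succeeded - both hypotheses look wrong"

print(diagnose(requests, datetime(2024, 3, 7, 1, 0), datetime(2024, 3, 7, 1, 20)))
# -> consistent with: network instability
```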
5) Filling gaps in context: If they show you data that contradicts your model, well, you get more data and improve your understanding
This is very important when you have much less context than the other person. The larger the difference in context, the more likely that there’s some important piece of information that you don’t have, but that they take for granted.
The point here isn’t that there’s something you don’t know. There are lots and lots of things you don’t know, and neither does your colleague. And if there’s something they know that you don’t, they’ll probably fill you in when asking the question.
The point is that they will tell you something only if they realize you don’t know it yet. But people will expect short inferential distances, underestimate the difference in context, and forget to tell you stuff because it’s just obvious to them that you know.
Focusing on filling gaps means asking about the parts of your model that you are most uncertain about, to find out if they can help you build a clearer picture. You can also extrapolate and make a wild guess, which you don’t really expect to be right.
For example: “How does the network work in this datacenter? Do we have a single switch so that, if it fails, all connections go down? Or are those network interfaces all virtualized anyway?”
6) Finding new ideas: If everybody understands one another, and the models are correct, knob-turning will lead to new conclusions (if they hadn’t turned those specific knobs on the problem yet)
This is the whole point of having the conversation: to help someone figure out something they haven’t already. But even if the specific new conclusion you arrive at when knob-turning isn’t directly relevant to the current question, it may end up shining light on some part of the other person’s model that they couldn’t see yet.
This effect is general and will happen gradually as both your and the other person's models improve and converge. The goal is to get all obstacles out of the way so you can just move forward and find new ideas and solutions.
The more global context and skill your colleague has, the lower the chance that they missed some crucial piece of data and have a mistaken model (or, if they do, you probably won't be able to figure that out without putting in serious effort). So when talking to more skilled or experienced people, you can focus more on replicating the model from their mind to yours (communication check and self-check).
Conversely, when talking to less skilled people, you should focus more on errors they might have made, or models they might not have considered, or data they may need to collect (other-check and alternative hypothesis generation).
Filling gaps depends more on differences of communication style and local context, so I don't have a person-based heuristic.
(Please discuss on LessWrong 2.0)
(Cross-posted from my Medium channel)