We've learned not to expect short inferential distances when explaining ideas we understand. We've also learned that leaping too far ahead when explaining ideas like transhumanism can freak people out.
I want to be really really good at explaining ideas. Does anyone have recommendations about how to figure out what the next inferential step is in another person's mind?
Categories which are not answers themselves but are areas in which I expect to find answers:
- Asking filter questions
- Social contexts
- Verbal cues
- Body language
Wow, how did I miss this? This topic is of great interest to me, since I seem to be much better at explaining things to others than at getting a sufficient explanation out of others. It's a routine occurrence for me to train people up to my level in a fraction of the time it took me to get there myself.
I strongly endorse fiddlemath's post, as it basically matches my approach, and I appreciate the link too. Thanks to jsalvati for mentioning my latest remarks on the matter.
So, here's my general strategy:
First, you have to want to explain it. (This doesn't seem like a problem in your case, but it's important in general.) It's tempting to maintain one's monopoly on knowledge, and you can end up torn between pursuing that and actually trying to convey an understanding.
Second, you need to have a deep understanding yourself. In this context, that means a Level 2 understanding, which means you can not only come up with the right answers in a domain, but have a model for that domain that deeply connects to other domains so that you can see how they're related, and what each implies for the other.
Third, you have to find the nearest point of common understanding ("nepocu"), which identifies the extent of your inferential distance. So, your filter questions should be aimed at identifying what common understanding you can draw from, so that you can take what you both know and guide the listener stepwise to the parts that only you know. Build up every prerequisite concept (sometimes several layers out), starting from this nepocu.
With all that in mind, here are some guidelines to follow, using a recent non-interactive explanation for examples:
Always be ready to "fall back a level" and explain the grounding concepts and prerequisites for what you're currently having trouble with. This is where the Level 2 understanding comes in: if your model for this domain is well-connected to your model for the rest of reality, there are arbitrarily many "inferential paths" you can take, and so you're always able to start from a domain closer to what the listener is already familiar with.
Note: for many people, the very idea that the body of human knowledge is mutually connected ("consilient") is novel, and their experience with the education system may have steered them away from even thinking like that, so they're used to looking at any topic as a bunch of random facts to remember. Connecting different areas will make things really "click".
Motivate each step in your explanation. An explanation is easier to follow if the listener knows why you're explaining it, and therefore what important things to look for. (This is why, in the linked blog post, I first explained what a signature must accomplish before describing the mechanics of public key signatures.) When explaining a process or method, it's confusing for someone to hear all the numerous steps at once, so it helps to start with, say, a naive, straightforward (but wrong) method and say something like, "But if you do it that way, it has this problem, so we do this instead" ... and gradually build up to the full thing.
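To make the signature example concrete (the linked post isn't reproduced here, so this is my own hedged sketch, not the author's code): the naive method is to just send a message and trust the claimed sender, which fails because anyone can forge it; a signature fixes that by letting only the private-key holder produce a value that anyone can check with the public key. A toy RSA-style sketch with deliberately tiny, insecure keys:

```python
# Toy RSA-style signature, for illustration only.
# The key pair below is a textbook example (p=61, q=53, e=17) and is
# utterly insecure; real code should use a vetted cryptography library.
import hashlib

p, q = 61, 53
n = p * q                 # public modulus, 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse), 2753

def digest(msg: bytes) -> int:
    # Hash the message, reduced mod n so it fits the toy key space
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Only the holder of the private exponent d can compute this
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Anyone with the public key (n, e) can check the signature
    return pow(sig, e, n) == digest(msg)

sig = sign(b"hello")
print(verify(b"hello", sig))  # True
```

Note how the sketch follows the guideline: first state what a signature must accomplish (unforgeability by anyone without the private key), and only then show the mechanics.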
Recall what it was like before you understood the topic and imagine what sort of things you wish someone would have told you. If something seems counterintuitive to a newcomer, acknowledge it from the beginning so they're not stuck wondering about it.
Hope that helps.
(An article I've been writing on this has been in development hell for a while now...)
Do you have tips for "recalling what it was like before you understood"? I frequently notice that I don't know how to do that.
I super endorse "motivate each step," especially when it comes to math. I find I have a lot of trouble with advanced math textbooks that don't do this well (and that's most of them).