We've learned not to expect short inferential distances when explaining ideas we understand. We've also learned that leaping too far ahead when explaining ideas like transhumanism can freak people out.
I want to be really really good at explaining ideas. Does anyone have recommendations about how to figure out what the next inferential step is in another person's mind?
These categories aren't answers in themselves, but they're areas in which I expect to find answers:
- Asking filter questions
- Social contexts
- Verbal cues
- Body language
I'll assume that, by "explain", you want to communicate a graph of inferences.
In fact, the way you've framed the question, you want to have a specific notional dependency DAG in mind. That may be Step 0: even before you try to explain an idea to somebody, break it down into its constituent ideas. Make sure you understand which ideas you expect to follow from which other ideas, all the way down to ideas that either your audience already understands or that you understand deeply enough to explain on the fly. (Probably a level 2 or level 3 understanding.)
I bet actually drawing this sort of diagram is a really good idea before trying to explain tricky things. I spend a fair amount of my time writing down explanations; I should try this myself.
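If it helps to make that concrete: an idea-dependency DAG is easy to represent in code. Here's a minimal Python sketch, where the specific ideas and dependencies are invented for illustration; a topological sort then gives one valid order in which to explain things, prerequisites first.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A made-up dependency DAG for explaining Bayes' theorem:
# each idea maps to the set of ideas it depends on.
dependencies = {
    "conditional probability": {"probability basics"},
    "joint probability": {"probability basics"},
    "Bayes' theorem": {"conditional probability", "joint probability"},
}

# A topological order is a sequence in which every prerequisite
# comes before the ideas that build on it.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
# e.g. ['probability basics', 'conditional probability',
#       'joint probability', "Bayes' theorem"]
```

Anything your audience already understands can be pruned from the graph before sorting, which is exactly the "all the way down" condition above.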
Given that you have such a diagram, either in front of you or internalized, you can at least isolate one idea at a time. For each idea, things that you might try:
Ask your interlocutor to explain specific parts: to put into their own words why a thing must be so, given the assumptions in play. If you worry that they might be guessing at passwords, ask them to explain those passwords as well. Or:
Ask why you don't get a slightly different result. When explaining a math theorem, you can ask for a counterexample to a stronger or slightly different theorem. For a physical phenomenon, you could instead ask what happens when the initial arrangement is varied. To test their understanding of an inference, you could ask what happens when the assumptions change a little. This takes more thought and more time, but might also be instructive to both parties as a side effect.
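For instance (my own example, not from the question): when explaining the intermediate value theorem, you might ask why the continuity assumption can't be dropped. The standard counterexample is a step function,

$$
f(x) = \begin{cases} -1 & x < 0 \\ 1 & x \ge 0 \end{cases}
\qquad \text{on } [-1, 1],
$$

which satisfies $f(-1) < 0 < f(1)$ yet never takes the value $0$, because it jumps right past it.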
For certain kinds of ideas, you can ask your interlocutor to solve an example problem. Programming, math, and physics lend themselves to this beautifully, though I'm straining to articulate just what they have in common. In these cases, it's almost immediately obvious whether or not your interlocutor understands the idea. If they understand, they'll start to think; if they don't, they'll start to panic.
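To make that concrete, here's the sort of tiny exercise I have in mind, with an invented problem: after explaining recursion, hand over an empty function and ask for the body. One correct answer looks like this.

```python
# A hypothetical exercise to test understanding of recursion:
# ask your interlocutor to complete this function without using loops.
def total(numbers):
    """Return the sum of a list of numbers, recursively."""
    if not numbers:                         # base case: empty list sums to 0
        return 0
    return numbers[0] + total(numbers[1:])  # head plus the sum of the tail

assert total([1, 2, 3, 4]) == 10
```

Whether they reach for the base case first, or flounder looking for a loop, tells you immediately which constituent idea to back up and re-explain.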