Cyan comments on Book Review: The Root of Thought - Less Wrong
It's entirely possible.
I suppose I'd dispute that, then. It seems to me that to explain skillfully, you need to have not just a grasp of your internal ontology, but also a reasonably accurate map of your conversant's internal ontology.
One could, in their own head, recognize far-reaching inferential flows between their field of expertise and the rest of their knowledge, and yet fail to recognize that the task of explaining essentially lies in seeking the nepocu and going from there. Level 2 understanding is a property of one individual's internal ontology; seeking the nepocu is in the same class as understanding the typical mind fallacy and the problem of expecting short inferential distances, these being concerned with the relationship between two distinct internal ontologies.
But it seems premature to go on with this discussion until you've made the post. I'm happy to continue if you want to (there's no shortage of electrons, after all), but if the post is near completion, it probably makes more sense to wait until it's done.
Okay, point taken. In any case, it would be hard for me to simultaneously claim that understanding necessarily enables you to explain, and that I have advice that would enable you to explain if you have only an understanding.
On the other hand, the advice I'm giving is derided as "obvious", but, if it's so obvious, why aren't people following it?
But someone doesn't really need to recognize the difference between their own internal ontology and someone else's. In the worst case, they can simply abandon attempts to link to the listener's ontology and "overwrite" it with their own, and this would be the obvious next step. In my (admittedly biased) opinion, the reason people don't take this route is not that it would take too long, but that the domain knowledge isn't even well connected to the rest of their own internal ontology.
(Also, this is distinct from the "expecting short inferential distances" problem in that people don't simply expect the distance to be short; rather, they wouldn't know what to do even if they knew it were very long.)
I still think advice would be helpful at this stage. I'll send you what I have so far, up to the understanding / nepocu points.