How exactly do people end up knowing little?
I'd venture it comes from starting with poor mental models and then tackling a huge universe of learnable information with those models. That amplifies confirmation bias and leads to consistently learning the wrong lessons. So there's real value in optimizing your mental models before you even try to learn the settled knowledge, but of course the knowledge itself is the basis of most people's models.
Perhaps there's a happy medium: build out a set of models before you start work in any new field, and look to people you respect in that field for pointers on what the essential models are.